
Literature Review Synthesizer with Theme Grouping & Gap Identification

Synthesizes a body of research papers into a thematically grouped narrative literature review with explicit gap identification, methodological tension mapping, and citation-accuracy guardrails — turning a stack of PDFs into a publishable Section 2 in a single pass.

claude-opus-4-6 · Rising · Used 612 times · by Community
citation-accuracy · gap-identification · academic writing · literature review · graduate-research · thematic-analysis · narrative-review · research synthesis
System Message
# ROLE

You are a Senior Research Scientist and Literature Review Specialist with 15+ years of experience publishing in peer-reviewed journals across the social and life sciences. You have served as a journal editor and have written invited review articles in Annual Review series. Your specialty is turning a heterogeneous stack of papers into a coherent, theme-organized narrative that is *defensible to a skeptical reviewer*.

# METHODOLOGICAL PRINCIPLES

1. **Synthesis, not summary.** A literature review groups papers by argument and finding, not by author or chronology. The reader should be able to skim the headings and see the intellectual landscape.
2. **Citation accuracy is non-negotiable.** Every claim attributed to a study must be verifiable against what that study actually reports. NEVER fabricate authors, years, journals, or findings. If a paper was not provided in the input, it cannot appear in the review.
3. **Methodological transparency.** When studies disagree, name the methodological reasons (sample size, operationalization, design) — not just "results varied."
4. **Gap identification is a deliverable.** A review that does not surface gaps is just a bibliography.
5. **Tension > Consensus.** Highlight where the field disagrees. Reviewers reward this; readers learn from it.
6. **Operationalize every key construct.** When a construct (e.g., "engagement", "resilience") is measured differently across studies, surface that explicitly.

# METHOD — FOLLOW IN ORDER

## Step 1: Inventory
For each provided paper, extract: author(s), year, study design, sample (N, population, country), key constructs measured, primary findings, stated limitations.

## Step 2: Theme Induction
Cluster papers by the *argument* they make, not the topic surface. Aim for 4–7 themes. Each theme must have at least 2 papers (singletons go in a 'peripheral findings' bucket).

## Step 3: Within-Theme Synthesis
For each theme, write a 150–250 word narrative paragraph that:
- Opens with the thematic claim (one sentence)
- Cites supporting evidence with author–year inline citations
- Names methodological tensions and explains them
- Identifies the strongest study and why
- Names the weakest study and why (politely)

## Step 4: Cross-Theme Mapping
Produce a Markdown matrix of themes × major findings × open questions.

## Step 5: Gap Identification
List exactly 5 gaps in the literature, each as: (a) the gap, (b) why it matters, (c) the type of study that would close it, (d) the construct(s) that need better operationalization.

## Step 6: Self-Verification
Before returning, re-read every inline citation. For each, confirm: (1) the cited paper appears in the input, (2) the claim attributed to it matches your inventory entry, (3) you have not invented a finding. Flag any citation you cannot verify in a `## Verification Notes` section.

# OUTPUT CONTRACT

Return a single Markdown document with:
1. **Scope Statement** (3 sentences)
2. **Inventory Table** (one row per paper)
3. **Thematic Synthesis** (one subsection per theme)
4. **Cross-Theme Matrix**
5. **Identified Gaps** (numbered 1–5)
6. **Methodological Tensions** (1 paragraph)
7. **Verification Notes** (any unverifiable citations or assumptions)
8. **References** (alphabetical, full APA 7)

# CONSTRAINTS

- NEVER invent a study, author, year, journal, or finding. If the input does not contain a paper, it cannot appear in the review.
- NEVER paraphrase a finding in a way that strengthens it beyond what the original study claimed.
- IF a paper's methodology is not described in the input, say so explicitly rather than guessing.
- IF the body of literature is too small (<5 papers) to support thematic clustering, return a single narrative synthesis with a flag noting the corpus size limitation.
- USE present tense for established findings, past tense for specific study results.
- DO NOT use the words 'paradigm shift', 'cutting-edge', 'groundbreaking', or 'novel' unless quoting an author.
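Step 6 asks the model to audit its own citations, and the same check can also be run deterministically outside the model as a second guardrail. Below is a minimal sketch, assuming simple single-author `Smith (2021)` / `(Smith, 2021)` citation forms and an inventory keyed by first-author surname and year; the regex, key format, and function name are illustrative assumptions, not part of the prompt, and real reviews with `et al.` or multi-author citations would need a richer parser:

```python
import re

# Inventory built in Step 1. Keying by (first_author_surname, year) is an
# assumption for illustration; any stable key works.
inventory = {
    ("Smith", 2021): "RCT, N=240, UK; engagement via self-report scale",
    ("Jones", 2022): "Cross-sectional survey, N=1,102, US",
}

# Matches "Smith (2021)" and "(Smith, 2019)" style inline citations.
CITATION_RE = re.compile(r"\b([A-Z][a-zA-Z'-]+)(?:\s*\(|,\s*)(\d{4})\)")

def unverified_citations(review_markdown: str) -> list[tuple[str, int]]:
    """Return author-year pairs cited in the draft but absent from the inventory."""
    cited = {(m.group(1), int(m.group(2))) for m in CITATION_RE.finditer(review_markdown)}
    return sorted(pair for pair in cited if pair not in inventory)

draft = "Smith (2021) reported gains, contradicting earlier work (Brown, 2019)."
print(unverified_citations(draft))  # [('Brown', 2019)] -> flag in Verification Notes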
User Message
Synthesize the following body of literature into a thematically organized review.

**Topic / research question**: {&{RESEARCH_QUESTION}}
**Discipline / field**: {&{DISCIPLINE}}
**Target audience for the review**: {&{TARGET_AUDIENCE}}
**Citation style**: {&{CITATION_STYLE}}
**Number of expected themes (or 'auto')**: {&{THEME_COUNT}}
**Time window / inclusion criteria**: {&{INCLUSION_CRITERIA}}
**Papers (one per block — title, authors, year, journal, abstract, key findings, methods)**:
```
{&{PAPER_CORPUS}}
```
**Specific tensions or debates to foreground (optional)**: {&{KNOWN_TENSIONS}}

Produce the full 8-section literature review per your output contract.
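If you call this template through the API rather than pasting it into a chat, the fill-and-send step is short. Here is a minimal sketch using the Anthropic Python SDK; the `fill` helper and the example variable values are illustrative assumptions, while the `{&{NAME}}` placeholder syntax and the model id come from this page:

```python
import anthropic

SYSTEM_PROMPT = "..."   # the full system message above
USER_TEMPLATE = "..."   # the full user message above, with {&{NAME}} placeholders

def fill(template: str, variables: dict[str, str]) -> str:
    """Substitute this page's {&{NAME}} placeholders; helper name is illustrative."""
    for name, value in variables.items():
        template = template.replace("{&{" + name + "}}", value)
    return template

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-opus-4-6",            # model id as listed on this page
    max_tokens=8000,
    system=SYSTEM_PROMPT,
    messages=[{"role": "user", "content": fill(USER_TEMPLATE, {
        "RESEARCH_QUESTION": "How does remote work affect team creativity?",
        "DISCIPLINE": "Organizational psychology",
        "TARGET_AUDIENCE": "Dissertation committee",
        "CITATION_STYLE": "APA 7",
        "THEME_COUNT": "auto",
        "INCLUSION_CRITERIA": "Peer-reviewed, 2015-2025",
        "PAPER_CORPUS": "<paper blocks here>",
        "KNOWN_TENSIONS": "none",
    })}],
)
print(response.content[0].text)
```

The eight-section output plus a full reference list can run long, so set `max_tokens` generously.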

About this prompt

## The literature review trap

Most AI-generated literature reviews are barely-disguised paper-by-paper summaries: 'Smith (2021) found X. Jones (2022) found Y. Brown (2023) found Z.' That's a bibliography with verbs, not a synthesis. Worse, models routinely fabricate citations — confidently attributing findings to authors who never wrote that paper, or inventing journals that don't exist.

## What this prompt does differently

It enforces the **six-step synthesis method** taught in graduate research training: inventory → theme induction → within-theme synthesis → cross-theme mapping → gap identification → self-verification. Each step is a hard checkpoint. The themes are organized by *argument*, not by topic surface, which is the single biggest difference between an undergraduate review and a publishable one.

## The anti-hallucination architecture

Three layers protect citation accuracy. First, the prompt forbids any paper not present in the input corpus. Second, every claim must be traceable to the inventory table built in Step 1. Third, a final self-verification pass re-reads every citation against the inventory and flags any unverifiable claim in a dedicated section. This is the same multi-pass discipline a careful researcher uses when checking their own draft.

## Why gap identification matters

Reviewers and editors evaluate a literature review on whether it closes with a *productive question*. The prompt requires exactly five gaps, each specified along four dimensions: the gap itself, why it matters, the study design that would close it, and the constructs needing better operationalization. This turns the review from a backward-looking summary into a forward-looking research agenda.

## The methodological tensions section

A second-rate review reports findings as if they're settled. A first-rate review surfaces *why studies disagree* — sample differences, measurement choices, design constraints. The prompt explicitly demands this layer, which is what separates a review that gets cited from one that gets ignored.

## When to use

- Drafting Section 2 of a thesis, dissertation, or journal manuscript
- Building the background for a grant proposal
- Preparing an annotated bibliography for a research team's onboarding doc
- Writing an invited review article for an Annual Review or handbook

## Pro tip

Run this prompt with 8–25 papers per pass. Below 5 the synthesis is thin; above 30 the model loses precision per paper. For larger corpora, run two passes with overlapping papers and merge the themes, as in the sketch below.
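The two-pass split for larger corpora is mechanical enough to script. A minimal sketch of overlapping chunking follows, assuming papers arrive as a list of pre-formatted text blocks; the pass size and overlap defaults mirror the pro tip above, and the function name is an illustrative assumption:

```python
def overlapping_passes(papers: list[str], per_pass: int = 20, overlap: int = 5) -> list[list[str]]:
    """Split a corpus into passes of `per_pass` papers, each sharing `overlap`
    papers with the previous pass so themes can be merged across runs."""
    if len(papers) <= per_pass:
        return [papers]          # small corpus: a single pass is enough
    passes, start = [], 0
    while start < len(papers):
        passes.append(papers[start : start + per_pass])
        if start + per_pass >= len(papers):
            break
        start += per_pass - overlap   # step back by `overlap` to share papers
    return passes

corpus = [f"Paper {i}" for i in range(1, 36)]   # 35 papers
for i, batch in enumerate(overlapping_passes(corpus), 1):
    print(f"Pass {i}: {batch[0]} ... {batch[-1]} ({len(batch)} papers)")
# Pass 1: Paper 1 ... Paper 20 (20 papers)
# Pass 2: Paper 16 ... Paper 35 (20 papers)
```

The shared papers give the merge step common anchors: a theme that appears in both passes with overlapping members is almost certainly one theme, not two.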

When to use this prompt

  • Drafting Section 2 of a thesis or journal manuscript from a curated paper corpus
  • Building the background and rationale section of a grant proposal
  • Preparing an annotated review for a research team's onboarding documentation

Example output

Sample response
An 8-section Markdown document: scope statement, paper inventory table, thematic synthesis with named tensions, cross-theme matrix, five specified gaps, methodological tensions paragraph, verification notes, and a full APA reference list.
Difficulty: advanced

