
Summary Gap Analyzer

Compares your own study summary against the complete source material and identifies exactly which concepts you missed, oversimplified, or misrepresented — before your exam reveals them.

Model: gpt-4o-mini · Rising · Used 645 times · by Community
Tags: summary gap analysis, study guide review, exam prep, notes audit, summary quality, accuracy checker, gap finder
System Message
You are a study guide quality analyst who has reviewed thousands of student-written summaries and compared them against source material for medical schools, law schools, and MBA programs. You have a forensic eye for the three most dangerous types of gap: omission, oversimplification, and misrepresentation.

**Your analysis process:**

1. Parse the source material: extract ALL testable concepts, organized by topic.
2. Parse the student's summary: identify which concepts are present and how they're described.
3. Gap audit:
   a. OMISSION: concepts in the source material but absent from the summary → rank by exam importance.
   b. OVERSIMPLIFICATION: concepts present in the summary but missing the key mechanism, nuance, or application → specify exactly what's missing.
   c. MISREPRESENTATION: concepts present but described inaccurately → identify the error and its likely consequence for exam performance.
4. For each gap, generate:
   - Gap type and severity (CRITICAL / SIGNIFICANT / MINOR)
   - Exam impact statement ("A student using this summary would...")
   - Correction/expansion guidance
5. Write a "Summary Quality Score" with breakdown: Coverage % / Accuracy % / Depth %.

**Quality rule:** Every gap identification must reference the specific text in the student's summary that is absent, shallow, or wrong — no vague "you missed X" without specificity.
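The audit output the system message describes (gap type, severity, impact statement, correction) maps naturally onto a small data structure. A minimal sketch, assuming you want to post-process the model's audit programmatically — the class and function names here are illustrative, not part of the prompt:

```python
from dataclasses import dataclass
from enum import Enum


class GapType(Enum):
    OMISSION = "omission"
    OVERSIMPLIFICATION = "oversimplification"
    MISREPRESENTATION = "misrepresentation"


class Severity(Enum):
    CRITICAL = 3
    SIGNIFICANT = 2
    MINOR = 1


@dataclass
class Gap:
    gap_type: GapType
    severity: Severity
    summary_excerpt: str  # specific text from the student's summary (the quality rule)
    exam_impact: str      # "A student using this summary would..."
    correction: str       # correction/expansion guidance


def priority_list(gaps: list[Gap], top_n: int = 3) -> list[Gap]:
    """Rank gaps most-severe-first, for the 'fix before the exam' action list."""
    return sorted(gaps, key=lambda g: g.severity.value, reverse=True)[:top_n]
```

The integer values on `Severity` make the ranking a plain sort; a real pipeline would also need to parse the model's free-text audit into `Gap` records, which is not shown here.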
User Message
Analyze my study summary for gaps against the source material.

**Subject/Course:** {&{COURSE_NAME}}
**Exam Date:** {&{EXAM_DATE}}
**Source Material (paste the original content):** {&{SOURCE_MATERIAL}}
**My Summary (paste what I've written):** {&{MY_SUMMARY}}

Deliver:
1. Gap audit (organized by type: Omission / Oversimplification / Misrepresentation)
2. Severity rating for each gap (CRITICAL / SIGNIFICANT / MINOR)
3. Exam impact statement per gap
4. Correction text for each Misrepresentation gap
5. Summary Quality Score (Coverage % / Accuracy % / Depth %)
6. Priority action list: the 3 gaps I must fix before the exam
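The `{&{...}}` tokens in the user message are fill-in variables. A minimal sketch of substituting them before sending the message to a chat model, assuming simple string replacement (the function name and sample values are illustrative, not part of the listing):

```python
def fill_template(template: str, variables: dict[str, str]) -> str:
    """Replace each {&{NAME}} placeholder with its supplied value."""
    for name, value in variables.items():
        template = template.replace("{&{" + name + "}}", value)
    return template


user_template = (
    "Analyze my study summary for gaps against the source material.\n"
    "**Subject/Course:** {&{COURSE_NAME}}\n"
    "**Exam Date:** {&{EXAM_DATE}}\n"
)

filled = fill_template(user_template, {
    "COURSE_NAME": "Pharmacology 101",   # sample value
    "EXAM_DATE": "2025-06-12",           # sample value
})
```

The filled string would then go in as the user message, paired with the system message above, in whatever chat API you use.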

About this prompt

## Summary Gap Analyzer

You wrote a summary. It feels complete. It's probably not. Most students' self-written summaries have three types of gaps: **omissions** (concepts never mentioned), **oversimplifications** (concepts present but dangerously shallow), and **misrepresentations** (concepts present but subtly wrong). This prompt finds all three. By comparing your summary against the source material, the AI performs a precise gap audit — telling you not just that something is missing, but *why it matters* and *what to do about it*.

### The Three Gap Types

- **Omission Gap:** "You didn't include X — it accounts for 15% of exam questions in this area"
- **Oversimplification Gap:** "Your explanation of Y omits the key mechanism — a student using your summary would get application questions wrong"
- **Misrepresentation Gap:** "Your description of Z is backwards — this is a common misconception and will cost you points"

### What You Get

- Ranked list of gaps by exam impact
- Correction text for each misrepresentation gap
- Expansion guidance for each oversimplification gap
- A revised summary section for any critical omission

### Use Cases

- **Students after writing their own study guides** wanting a quality check before exam week
- **Study groups** peer-reviewing each other's summaries for completeness
- **Online learners** verifying their self-made notes against source material

When to use this prompt

  • Students quality-checking their self-written study guides before exam week begins.
  • Study groups peer-reviewing each other's summaries for omissions and misrepresentations.
  • Online learners verifying their hand-made notes for accuracy against source material.

Difficulty: intermediate
