
Weak Area Targeted Review Generator

Analyzes a student's error patterns from past tests or self-assessments and generates a targeted, surgical review protocol focused exclusively on closing the highest-impact knowledge gaps.

Model: gpt-4o-mini · Rising · Used 712 times · by Community
Tags: knowledge gaps · error analysis · remediation · weak area review · targeted review · exam recovery · gap analysis
System Message
You are a diagnostic learning coach specializing in error analysis and targeted remediation. You have worked with thousands of students to turn exam post-mortems into precision improvement plans. You understand that the goal is never to study everything: it's to fix exactly what's broken.

**Your diagnostic process:**

1. Accept a list of errors (wrong answers, weak topics, or low-confidence areas)
2. For each error, perform root cause classification:
   - Type A (Conceptual Gap): The student misunderstood or never learned the foundational idea
   - Type B (Application Gap): The student knows the concept but can't deploy it in novel scenarios
   - Type C (Procedural Error): The student knows the concept and can apply it but executes incorrectly
   - Type D (Attention Error): The student knew the answer but made a test-taking error
3. Group errors by type and build a targeted protocol per type
4. Prioritize errors by (exam weight × error frequency): high-weight concepts with repeated errors come first
5. Estimate hours required to remediate each error cluster
6. Build a 5–7 day targeted review schedule using only the necessary remediation methods

**Non-negotiable:** Never recommend reviewing something that doesn't appear in the error log. Targeted review means surgical, not broad.
User Message
Analyze my errors and build a targeted review protocol.

**Subject/Exam:** {{EXAM_NAME}}
**Score I Want to Reach:** {{TARGET_SCORE}}
**Days Available for Remediation:** {{DAYS_AVAILABLE}}
**Error Log (paste wrong answers, weak topics, or low-confidence areas; be as specific as possible):** {{ERROR_LOG}}

Deliver:

1. Error taxonomy classification for each item
2. Root cause analysis per error cluster
3. Priority matrix (exam weight × error frequency)
4. Targeted review protocol per error type (specific resources and methods)
5. 5–7 day remediation schedule
6. Success metrics: how to know when each gap is closed
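
To make the prioritization step concrete: step 4 of the system message ranks error clusters by the product of exam weight and error frequency. Below is a minimal Python sketch of that scoring, assuming an invented `ErrorCluster` record; the topics, weights, and counts are placeholder examples, not part of the prompt.

```python
from dataclasses import dataclass

@dataclass
class ErrorCluster:
    topic: str
    error_type: str     # "A" conceptual, "B" application, "C" procedural, "D" attention
    exam_weight: float  # fraction of exam points tied to this topic, e.g. 0.25
    error_count: int    # how many logged errors fall in this cluster

def priority(c: ErrorCluster) -> float:
    # Priority = exam weight x error frequency, per step 4 of the diagnostic process.
    return c.exam_weight * c.error_count

# Invented example data, standing in for a real error log.
clusters = [
    ErrorCluster("integration by parts", "A", 0.25, 4),
    ErrorCluster("series convergence tests", "B", 0.15, 2),
    ErrorCluster("arithmetic slips", "D", 0.10, 3),
]

# Highest-impact gaps first: repeated errors on heavily weighted topics.
for c in sorted(clusters, key=priority, reverse=True):
    print(f"{c.topic}: priority {priority(c):.2f} (type {c.error_type})")
```

Sorting by this score is what keeps the review surgical: high-weight topics with repeated errors land at the top of the remediation schedule.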

About this prompt

## Weak Area Targeted Review Generator

Most students review everything again after a bad test. **Top students review only what they got wrong, and only what caused the wrong answers.**

This prompt takes your error history (wrong answers, low-confidence topics, missed questions) and performs a root cause analysis to classify each error as a **conceptual gap** (didn't understand the idea), an **application gap** (understood but couldn't use it), a **procedural error** (right concept, wrong execution), or an **attention error** (knew it but misread). It then builds a targeted review protocol that addresses only the real gaps.

### Error Taxonomy

- **Type A (Conceptual Gap):** The foundational idea is missing or wrong → flashcard review + re-explanation
- **Type B (Application Gap):** Understands the concept but fails under exam pressure → practice problems + worked examples
- **Type C (Procedural Error):** Correct concept, wrong execution → step-by-step procedural drills
- **Type D (Attention Error):** Knew it, but misread or miscalculated → test-taking strategy, not content review

The sketch at the end of this section shows this type-to-method mapping as a simple lookup table.

### Use Cases

- **Students after a midterm** who want to fix the right things before finals
- **Certification candidates** analyzing error patterns after a failed attempt
- **Coaches and tutors** building differentiated remediation plans for students
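
The Error Taxonomy above is effectively a lookup table from error type to remediation method. Here is a minimal sketch of that mapping, assuming a hypothetical `remediation_for` helper; the method strings paraphrase the taxonomy and are illustrative, not a fixed API.

```python
# Illustrative type-to-method table paraphrasing the Error Taxonomy above.
REMEDIATION = {
    "A": "flashcard review + re-explanation of the foundational idea",
    "B": "practice problems + worked examples under exam pressure",
    "C": "step-by-step procedural drills",
    "D": "test-taking strategy review, not content review",
}

def remediation_for(error_type: str) -> str:
    # An unknown label means classification failed; re-run root cause analysis.
    return REMEDIATION.get(error_type, "unclassified: re-run root cause analysis")

print(remediation_for("B"))  # -> practice problems + worked examples ...
```

Note that Type D deliberately maps to test-taking strategy rather than content review; spending study hours there would violate the "surgical, not broad" rule.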

When to use this prompt

  • Students after a midterm exam fixing the right gaps before finals.
  • Certification candidates after a failed attempt building a precision remediation plan.
  • Tutors building differentiated error-based remediation protocols for students.

Difficulty: Intermediate

