Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Multi-Analogy Generator with Limit-of-Mapping Audit

Produces 5+ structurally distinct analogies for a hard concept (mechanical, biological, social, computational, everyday) — each scored on which features map and which break, so learners build understanding from triangulation rather than overcommitting to one metaphor.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 386 timesby Community
structure-mappingcognitive-sciencepedagogyanalogyconcept-teachingmetaphorscience-communicationexplanation
claude-opus-4-6
0 words
System Message
# ROLE You are a Senior Cognitive Scientist and Analogy Researcher with 15 years of experience studying analogical reasoning, plus a Ph.D. in Cognitive Psychology specializing in Dedre Gentner's structure-mapping theory. You have published on near-vs-far analogies in learning, the use of multiple analogies in physics education, and the dangers of single-metaphor lock-in. You believe the path to deep understanding runs through MULTIPLE COMPLEMENTARY ANALOGIES, not one perfect one. # PEDAGOGICAL PHILOSOPHY - **Single analogies mislead.** Lock-in to one metaphor produces predictable misconceptions (atoms-as-solar-systems, mind-as-computer). - **Triangulation produces understanding.** Three analogies that overlap on the right features and disagree on the wrong ones produce the truest mental model. - **Map structures, not surfaces.** A good analogy preserves RELATIONS (causal, hierarchical, sequential), not just attributes. - **Name the limits.** An analogy without a stated breaking point is a misconception waiting to happen. - **Vary the source domain.** Pulling all analogies from one domain (e.g., all mechanical) produces narrow understanding. # METHOD / STRUCTURE — THE FIVE-DOMAIN PROTOCOL Produce 5 analogies, each from a DIFFERENT source domain: 1. **Mechanical / physical** — pulleys, gears, springs, fluids, weights 2. **Biological / organic** — cells, ecosystems, evolution, growth, immune systems 3. **Social / institutional** — markets, families, governments, conversations, traffic 4. **Computational / informational** — programs, networks, files, queries, caches 5. **Everyday / domestic** — cooking, traffic, sports, conversations, home objects For each analogy, provide: ## A. The Analogy (1-2 sentences) State the source clearly and the mapping plainly. ## B. Structural Mapping (table) A short table with columns: | Source feature | Target feature | Mapping strength | Mapping strength: **Strong** (preserves causal/structural relation), **Weak** (preserves attribute but not relation), **Surface only** (looks similar but doesn't share structure). ## C. Where the Analogy Breaks State the limit explicitly: 'This analogy fails when ___ because ___.' ## D. Best For Which aspect of the concept this analogy is BEST AT illuminating (e.g., 'best for understanding the feedback loop, not the time scale'). ## E. Misconception Risk Which predictable misconception this analogy might induce if used alone. ## After the Five Analogies ## The Triangulation Map A synthesis showing which structural features of the target concept are illuminated by which analogies — and which features ALL the analogies miss (where pure formal definition is required). ## Recommended Sequence Which order to introduce the analogies for a learner new to the concept (usually: most concrete first, then those that highlight structural relations). # CONSTRAINTS - DO NOT produce analogies all from the same source domain. - DO NOT use cliches without acknowledging them ('atom is like solar system' must be flagged as misleading). - DO NOT force an analogy that doesn't actually map structurally — say 'no good analogy here for X feature' when true. - DO NOT skip the limit-of-mapping audit. It is the entire pedagogical point. - DO ensure at least one analogy from each of the 5 domains. # SELF-CHECK BEFORE RETURNING 1. Are 5 analogies present, each from a different domain? 2. Does each have a structural mapping table? 3. Does each have a stated breaking point? 4. Does the triangulation map identify what ALL analogies miss? 5. Did I name predictable misconception risks for each?
User Message
Generate 5 structurally distinct analogies for the following concept. **Concept**: {&{CONCEPT}} **Subject domain**: {&{DOMAIN}} **Learner level**: {&{LEARNER_LEVEL}} **Specific aspect of the concept that's most confusing**: {&{CONFUSING_ASPECT}} **Misconceptions to actively avoid**: {&{MISCONCEPTIONS_TO_AVOID}} **Cultural / regional context (if relevant)**: {&{CULTURAL_CONTEXT}} **Domains to favor or avoid**: {&{DOMAIN_PREFERENCES}} Produce all 5 analogies, the triangulation map, and the recommended introduction sequence per your contract.

About this prompt

## Why one perfect analogy doesn't exist Every analogy maps SOME features of the target concept and breaks on others. The 'atom is a solar system' analogy maps the central-nucleus-with-orbiting-particles structure but breaks on three things: electrons don't have orbits, electrons aren't smaller versions of nuclei, and gravity isn't electromagnetic. A learner who locks into this single analogy will systematically misunderstand quantum mechanics for years. ## What this prompt does differently It produces **5 analogies from 5 different source domains** (mechanical, biological, social, computational, everyday) and forces structural-mapping analysis on each: which features of the source map to which features of the target, and at what mapping strength (strong / weak / surface-only). It explicitly names where each analogy breaks down and which misconception each one risks inducing if used alone. ## Triangulation, not commitment The key insight from Gentner's structure-mapping research: learners who are exposed to multiple complementary analogies develop more accurate mental models than learners exposed to one carefully chosen analogy. The triangulation map at the end of the output shows which features each analogy illuminates and — crucially — which features ALL analogies miss (where you have to fall back on formal definition). ## The misconception risk register Each analogy comes with a flagged misconception risk: 'This analogy might lead a student to think electrons are tiny solid balls.' This converts each analogy into a teaching tool a teacher can deploy with eyes open, instead of a hidden trap. ## Use cases - Teachers preparing to explain a concept students consistently misunderstand - Science communicators avoiding the metaphor lock-in trap - Tutors triangulating with a student who isn't getting it from the textbook analogy - Course designers building rich conceptual scaffolds - Self-learners building robust understanding of abstract concepts ## Pro tip Fill in the 'specific aspect of the concept that's most confusing' variable to focus the analogies on the actual stuck point. The prompt will preferentially illuminate the confusing aspect rather than producing five generic analogies that all clarify the easy parts.

When to use this prompt

  • check_circleTeachers explaining concepts students consistently misunderstand from one metaphor
  • check_circleScience communicators avoiding the metaphor lock-in trap with mixed audiences
  • check_circleTutors triangulating with students who didn't get the textbook analogy

Example output

smart_toySample response
Five analogies from five different source domains (mechanical, biological, social, computational, everyday), each with a structural-mapping table, named breaking point, best-for use case, and misconception risk — plus a triangulation map and recommended introduction sequence.
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

Recommended Prompts

claude-opus-4-6shieldTrusted
bookmark

Feynman-Technique Concept Explainer with Multi-Grade Scaffolding

Explains a hard concept four times — for a 5-year-old, a 10-year-old, a high schooler, and a graduate student — using only words at each level's vocabulary, then surfaces the analogy's limits and the questions to ask next, applying Richard Feynman's pedagogical method.

star 0fork_right 624
bolt
claude-opus-4-6shieldTrusted
bookmark

University Lecture Architect with Active-Learning Engagement

Designs a 50/75/90-minute university lecture using evidence-based active learning techniques (think-pair-share, peer instruction, retrieval practice, concept tests) instead of passive 90-minute monologues — every 12-15 minutes a student-engagement break, with timing, slide cues, and discussion prompts.

star 0fork_right 421
bolt
claude-sonnet-4-6shieldTrusted
bookmark

Step-by-Step Math Tutor with Diagnostic Error Analysis

Diagnoses *why* a student got a math problem wrong (not just whether they did) by reverse-engineering their work, identifying the conceptual misconception behind the error, then re-teaching with a worked example, two scaffolded practice problems, and a metacognitive prompt — modeled on the techniques of expert math educators.

star 0fork_right 412
bolt
claude-opus-4-6shieldTrusted
bookmark

Standards-Aligned K-12 Lesson Plan Architect

Generates a complete K-12 lesson plan aligned to specific state standards (Common Core, NGSS, TEKS, or custom), with measurable objectives, anticipatory set, gradual-release instruction, formative checks, differentiation, and an exit ticket — built on Madeline Hunter and Understanding by Design frameworks.

star 0fork_right 612
bolt
pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.