Anatomy of a prompt: the four elements that build every great one
Every effective prompt has four building blocks: instruction, context, input data, and output indicator. Learn the anatomy, when each element earns its keep, and how to combine them.
Take any production prompt and dissect it. The ones that work consistently — across customers, across edge cases, across model upgrades — share a structure. Four building blocks, in roughly the same order, doing roughly the same jobs. The ones that fail tend to be missing one or blending two together.
The four elements: instruction (what to do), context (what to use), input data (what to act on), output indicator (what to produce). Not every prompt needs all four. But knowing which element you're missing — and where to put each one — separates prompts that ship from prompts that drift.
The whole idea in one line
The mental model: a prompt is a structured request#
Think about how you'd brief a smart but unfamiliar contractor on a one-shot task. You wouldn't hand them a wall of text and hope. You'd:
- Tell them exactly what you need (instruction)
- Give them background they can't infer (context)
- Hand them the materials to work on (input data)
- Specify what to deliver and in what format (output indicator)
Prompts work the same way. The model is your contractor. The structure is the brief.
Element 1: Instruction#
The verb. What you want the model to do. Required for almost every prompt — the rare exception is pure-completion patterns where the input shape itself implies the task (like few-shot examples that show input → output and end with a fresh input).
Two rules:
- Lead with the action verb. Summarize. Classify. Translate. Extract. Generate. The first words the model reads set the task frame.
- Be specific. "Summarize" is fuzzy. "Summarize the email below in 3 bullets, each starting with a verb" is concrete.
Element 2: Context#
Background information the model needs but can't infer from the input alone. Things like: who the audience is, what tone to use, what constraints apply, who the model is pretending to be (persona), what previous turns established.
Optional. Skip when the task is self-evident (e.g., "translate to Spanish" needs no context). Required when the task depends on knowledge the model can't guess (your brand voice, your audience, your specific domain).
Element 3: Input data#
The actual content the model is operating on — the email being summarized, the code being reviewed, the document being analyzed. Distinct from instructions because it's data, not directions.
Two patterns matter here:
- Wrap in delimiters. XML tags (
<email>...</email>), triple quotes ("""), or markdown headers (## Email). Without delimiters, the model can confuse user-supplied content with your instructions — a vector for prompt injection. - Place after instruction + context. The model reads top-down; instructions first means the model knows what to do before it sees the data.
Element 4: Output indicator#
A signal at the end of the prompt that anchors the output format. Examples:
Summary:at the very end → model continues with a summaryJSON:→ model continues with JSONReply:→ model continues with a reply
Optional but high-leverage. The output indicator is what stops the model from saying "Here is the summary you requested:" before the actual content. It primes the format directly.
A full example with all four elements#
Reply to the customer email below in a warm, specific tone. [INSTRUCTION]
You are a senior support specialist at Acme. Replies should be [CONTEXT]
under 80 words. If the customer mentions a refund, billing, or
a bug, acknowledge it explicitly in the first sentence.
<email> [INPUT DATA]
{{customer_email}}
</email>
Reply: [OUTPUT INDICATOR]Each element does one job. Removing any one degrades reliability in a predictable way: no instruction → unclear what to do; no context → wrong tone; no delimiters → injection risk; no output indicator → preamble pollution.
Element ordering: top-down, instruction-first#
The conventional order — instruction → context → input data → output indicator — isn't arbitrary. Models pay slightly more attention to the start of the prompt (and to the very end). Putting instructions first means the model knows the task before it processes the data.
There's one notable exception: very long input data. If your prompt has 50KB of document and a 100-token instruction, putting instruction at the top alone risks the model getting "lost in the middle" on long contexts. Repeat the instruction at the end too — once at top, once at bottom — so it bookends the long content.
Which elements does your prompt actually need?#
Element requirements by task
| If your situation is… | Reach for… | Why |
|---|---|---|
| Classification (sentiment, intent, category) | Instruction + input + output indicator | Skip context unless rules are non-obvious |
| Translation, format conversion | Instruction + input | Output format is implied by the task; no indicator needed |
| Customer-facing writing | All four | Voice / persona / constraints all matter |
| Code generation | Instruction + context (style, language) + input + output indicator | Context anchors language version, style, idioms |
| Pure few-shot pattern (input → output examples) | Examples + new input | Examples ARE the instruction; no separate verb needed |
| Open-ended brainstorming | Instruction + context | Often no fixed input data; output indicator may hurt creativity |
Going further: advanced patterns#
System message vs. user message placement#
Modern chat APIs separate system and user messages. Best-practice mapping:
- System message: persistent context (persona, brand voice, hard rules). Things that should apply to every reply.
- User message: the instruction + per-turn input + output indicator. Things that vary by request.
Putting the persona in the system message instead of the user message produces more reliable adherence on Claude (which weights system prompts heavily) and somewhat better results on GPT-4o.
Output indicator + prefilling on Claude#
On Anthropic's API you can prefill the assistant's response. Combining an explicit output indicator at the end of the user message (JSON:) with a prefilled assistant turn ({) effectively guarantees the output starts as JSON. See prompting Claude.
When elements stack (sub-instructions)#
Complex tasks sometimes have nested instructions: "summarize the email; then classify the urgency; then suggest a reply." Resist the urge to cram all three into one instruction. Either decompose into a chain (preferred — see prompt chaining) or use numbered sub-instructions with explicit output sections for each.
Common mistakes#
- Burying the instruction. Putting the verb at the bottom after a wall of context means the model spends half the prompt unsure what task it's even doing.
- No delimiters around input. User content blends with instructions — inconsistent outputs and a security surface for prompt injection.
- Skipping the output indicator on format-sensitive tasks. Without
JSON:at the end, the model preambles. WithoutReply:, the model says "Here is your reply:" first. - Using all four elements when two would do. Translation doesn't need context or output indicator. Bloat hurts.
- Mixing instruction and context in the same paragraph. Use clear visual separation (line breaks, headers, tags) so the model can distinguish them.
Quick reference#
The 60-second summary
Four elements: instruction (what to do), context (what to use), input data (what to act on), output indicator (what format).
The order: top-down, instruction first. Bookend with instruction repeated at the end on long contexts.
The non-negotiables: wrap user input in delimiters; lead with a verb-driven instruction; end with an output indicator on format-sensitive tasks.
The trim rule: not every prompt needs all four. Strip elements that don't earn their tokens.
What to read next#
Now that you know what goes in a prompt, learn the universal habits that lift output quality: general prompting tips. For tuning the model parameters that affect output style, LLM settings. And for the techniques that build on this foundation, zero-shot prompting and few-shot prompting.
Put this guide to work
Save your prompts, version every change, and share them with your team — free for up to 200 prompts.