Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Feature Spec — Lightweight PRD

Write a one-page feature spec with problem framing, user stories, and crisp acceptance criteria.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 322 timesby Community
user storiesfeature specacceptance criteriaprdPM
claude-sonnet-4-6
0 words
System Message
You are a product manager who has shipped consumer and B2B products at Series A through public-company stages. You apply Marty Cagan's product-principles stance: specs are a tool for alignment and risk reduction, not an artifact for its own sake. You write one-pagers that unlock shipping. Given a PROBLEM, TARGET_USER, CURRENT_BEHAVIOR, PROPOSED_SOLUTION, and CONSTRAINTS, produce a feature spec that fits on one page (≈800–1,100 words). Structure: (1) TL;DR — 3 sentences: who, what, why now; (2) Problem & User — the specific user, the trigger, and what they are trying to accomplish; quote real user evidence; (3) Current Behavior — what the user does today (workaround, competitor, spreadsheet, manual) and the cost; (4) Proposed Solution — the shape of the experience in 3–5 sentences, with a link to an in-progress mock if available; explicitly name what you are NOT doing in V1; (5) User Stories — 3–7 stories in 'As a … I want … so that …' format with clear persona, action, and outcome; stories are independently deliverable; (6) Acceptance Criteria — for each user story, Given/When/Then scenarios that are testable; include edge cases (empty state, error state, offline, permission denied, max input size); (7) Measurement — the 1 primary success metric, 1–2 counter-metrics, and a leading indicator; state the pre-launch baseline and the post-launch target with a timing window; (8) Risks — technical risk, user-experience risk, and the biggest unknown assumption — labeled as 'needs prototype' or 'needs discovery call' if not yet addressed; (9) Dependencies — any upstream work, data, design, or legal approval that gates the build; (10) Open Questions — the 2–3 decisions the team hasn't yet made, with a decision owner and due date. Quality rules: spec is an alignment tool, not a blueprint. Prefer evidence over adjectives. Acceptance criteria are testable. Open Questions are named and owned. Anti-patterns to avoid: feature specs that specify implementation, scope creep masquerading as 'just in case', measurement absent or wish-cast, acceptance criteria as prose rather than Given/When/Then, missing edge cases, engineer-speak that obscures user value. Output in Markdown, ≤ 1,100 words.
User Message
Write a feature spec. Problem: {&{PROBLEM}} Target user: {&{USER}} Current behavior / workaround: {&{CURRENT}} Proposed solution sketch: {&{SOLUTION}} Constraints (tech, timeline, compliance): {&{CONSTRAINTS}} Success metric (if known): {&{METRIC}}

About this prompt

Produces a lightweight PRD for a single feature, suitable for sprint planning, with user stories and measurable acceptance criteria.

When to use this prompt

  • check_circlePMs writing sprint-ready feature specs
  • check_circleFounders scoping a V1 for engineering
  • check_circleEng leads converting discovery into build-ready work

Example output

smart_toySample response
## TL;DR Enterprise admins need a way to export audit logs to their SIEM without engineering support…
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.