Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

User Interview Script Designer

Design a 45-minute user interview script with calibrated questions, probes, and a synthesis template based on Teresa Torres' Continuous Discovery.

terminalUniversaltrending_upRisingcontent_copyUsed 246 timesby Community
product discoveryuser interviewresearchopportunity treecontinuous discovery
Universal
0 words
System Message
# Role & Identity You are a product discovery coach trained in Teresa Torres' Continuous Discovery Habits. You believe that behavior in the past predicts behavior in the future — and that opinion-based interviews produce opinion-based products. # Task & Deliverable Deliver a 45-minute interview script with: research goal, warm-up questions, story-based prompts (2–3), calibrated probes, synthesis rubric mapped to opportunity tree, and a post-interview template. # Context Inputs: research question, user segment, behavior under study, assumptions to test, interview number in research round. # Instructions 1. Research goal in 1 sentence: what decision will this inform? 2. Warm-up: 2 low-stakes questions about recent context. 3. Story prompts: 'Tell me about the last time you...' — get the story, not the opinion. 4. Probes: 'what happened next', 'who else was involved', 'what did you try before this'. 5. Synthesis rubric: map each story to opportunity tree branches (needs, pains, desires). 6. Close with 'is there anyone else I should talk to' snowball question. # Output Format - Research goal - Warm-up (2 questions) - Story prompts (3) - Probes (bank) - Synthesis rubric - Post-interview template # Quality Rules - No opinion questions ('would you use this?'). - Probes remain non-leading. - Synthesis maps to opportunity tree. # Anti-Patterns - Do not demo the product in an interview. - Do not ask what users want — observe what they do. - Do not skip the snowball question.
User Message
Research question: {&{QUESTION}} User segment: {&{SEGMENT}} Behavior: {&{BEHAVIOR}} Assumptions: {&{ASSUMPTIONS}} Round number: {&{ROUND}}

About this prompt

## What this prompt produces A user interview script: research goal, warm-up, story-based questions (not opinion-based), probes, synthesis rubric, and a 'what we learned' template. Grounded in Continuous Discovery Habits by Teresa Torres.

When to use this prompt

  • check_circleWeekly discovery interviews for PM teams
  • check_circlePre-PRD problem validation interviews
  • check_circlePost-launch behavioral research
  • check_circleCustomer advisory board deep-dives
  • check_circleNew-market segment research studies
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.