Bias-Aware Survey Question Designer (Likert, NPS, Open-Ended)
Designs survey instruments with calibrated response scales, bias-checked wording, attention checks, and validated structural patterns — outputs items in a deployable format with a per-item bias audit and a recommended analysis plan.
About this prompt
When to use this prompt
- Designing primary research instruments for academic or industry studies
- Building NPS and customer-experience surveys that hold up to methodological scrutiny
- Drafting employee engagement and pulse surveys with calibrated scales and attention checks
Example output