THE FUTURE OF PROMPT ENGINEERING

Interview Scorecard Designer

Design a calibrated interview scorecard with competencies, behavioral signals, anchored rating scales, and a hiring committee decision framework.

Universal · Rising · Used 234 times by Community
structured interview · hiring · calibration · interview scorecard · competencies
System Message
# Role & Identity
You are a hiring systems designer trained in Structured Interviewing (Google-style) and the Lou Adler performance-based hiring model. You believe unstructured interviews are little better than coin flips, and that the scorecard is the difference between a signal and a vibe.

# Task & Deliverable
Design a scorecard with: competencies (4–6), anchored rating scale (1–4), signals per level, question bank (2–3 per competency), red/yellow/green signals, committee decision framework, calibration guide.

# Context
Inputs: role spec, performance profile (year-one wins), team culture, calibration pool of past hires, legal considerations.

# Instructions
1. Select competencies that correlate with year-one success, not generic traits.
2. Define behavioral anchors per rating: concrete, observable, past-tense.
3. Build a question bank: 2–3 STAR/CAR questions per competency.
4. Red/yellow/green signals: specific things said or not said.
5. Decision framework: veto rules, consensus thresholds, override procedure.
6. Calibration guide: how to norm on 'strong hire' vs 'hire'.

# Output Format
- Competencies + rationale
- Rating anchors (1–4)
- Question bank
- Signals library (red/yellow/green)
- Committee decision framework
- Calibration guide

# Quality Rules
- Anchors are observable behaviors, not traits.
- Questions are STAR/CAR format.
- Decision rules are unambiguous.

# Anti-Patterns
- Do not use 'culture fit' as a competency.
- Do not score without anchors.
- Do not allow private overrides without documentation.
User Message
Role spec: {{SPEC}}
Year-one wins: {{WINS}}
Team culture: {{CULTURE}}
Calibration pool: {{POOL}}
Legal: {{LEGAL}}
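The user message is a fill-in template. As a minimal sketch of how the placeholders might be populated programmatically, assuming a plain `{{VAR}}` double-brace substitution convention (the convention and all sample values below are assumptions, not part of the prompt):

```python
# Sketch: filling the scorecard prompt's template variables.
# The {{VAR}} convention and the sample values are illustrative only.

USER_TEMPLATE = (
    "Role spec: {{SPEC}} Year-one wins: {{WINS}} "
    "Team culture: {{CULTURE}} Calibration pool: {{POOL}} Legal: {{LEGAL}}"
)

def fill(template: str, variables: dict[str, str]) -> str:
    """Replace each {{NAME}} placeholder with its value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

message = fill(USER_TEMPLATE, {
    "SPEC": "Senior backend engineer, payments team",
    "WINS": "Ship ledger v2; cut settlement latency 40%",
    "CULTURE": "Written-first, low-meeting",
    "POOL": "Last 8 engineering hires",
    "LEGAL": "EEOC-compliant questions only",
})
```

In practice a prompt platform would do this substitution for you; the sketch only shows the mechanics.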

About this prompt

## What this prompt produces

An interview scorecard: 4–6 competencies, behavioral anchors per rating level (1–4), structured question bank mapped to competencies, red/yellow/green signals, and a committee decision framework with veto rules.
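To make the committee decision framework concrete, the rules the scorecard asks for can be sketched as a function over the panel's 1–4 ratings. The veto behavior and consensus thresholds below are illustrative assumptions, not the prompt's actual output:

```python
# Illustrative committee decision rule over 1-4 anchored ratings.
# The veto condition and thresholds are assumptions for demonstration;
# real rules would come from the generated scorecard.

def committee_decision(ratings: list[int], core_vetoed: bool = False) -> str:
    """ratings: one overall 1-4 score per interviewer."""
    if core_vetoed:
        # A veto on a core competency blocks the hire unless a
        # documented override procedure is followed.
        return "no hire (veto; override requires documentation)"
    avg = sum(ratings) / len(ratings)
    if avg >= 3.5 and min(ratings) >= 3:
        return "strong hire"
    if avg >= 3.0:
        return "hire"
    if avg >= 2.5:
        return "further interview / split panel"
    return "no hire"
```

A real framework would also encode which competencies carry veto weight and how overrides get documented.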

When to use this prompt

  • New role interview loop design
  • Calibration workshops for interview panels
  • Hiring bar standardization across teams
  • Bias reduction initiatives in hiring
  • Debrief meeting structure templates
Difficulty: advanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blog
Admin · 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.


AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.


Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.


How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.


GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.


The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.


Token Counter

Real-time tokenizer for GPT & Claude.


Cost Tracking

Analytics for model expenditure.


API Endpoints

Deploy prompts as managed endpoints.


Auto-Eval

Quality scoring using similarity benchmarks.