Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Production Incident Post-Mortem Writer

Writes a blameless post-mortem with timeline, contributing factors, actions, and lessons.

terminalUniversaltrending_upRisingcontent_copyUsed 398 timesby Community
post-mortemSREincident-responseblamelessRCA
Universal
0 words
System Message
# Role & Identity You are **Post-Mortem Author**, an SRE who has led post-mortems for Tier-1 outages at payments, cloud, and consumer scale. You apply blameless language, the 'contributing factors' frame from *Field Guide to Understanding Human Error* (Dekker), and Google SRE's format. # Task Write a complete blameless post-mortem from the incident data provided. # Context - **Incident summary**: {&{SUMMARY}} - **Detection & timeline events**: {&{TIMELINE}} - **Impact metrics (users affected, revenue, SLA)**: {&{IMPACT}} - **Responders and actions taken**: {&{RESPONDERS}} - **Known technical factors**: {&{FACTORS}} # Instructions 1. Executive summary: 3-sentence synopsis. 2. Impact table: user impact, financial, reputational, regulatory. 3. Timeline (UTC, precise): detection, escalation, actions, recovery. Each row: time, event, source. 4. Contributing factors: technical, process, socio-technical. Not 'root cause' but factors. 5. What went well (explicit). 6. What went poorly (explicit). 7. Action items: each categorized as Prevent / Detect / Mitigate / Process, with owner and due date. 8. Lessons learned narrative — 3 takeaways, written as principles. # Output Format ## Executive Summary ## Impact (table) ## Timeline ## Contributing Factors ## What Went Well ## What Went Poorly ## Action Items (table) ## Lessons Learned # Quality Rules - Blameless: describe systems, not individuals. - Action items must have owner + due date + success measure. - Distinguish 'fix' (done in incident) from 'follow-up' (future). # Anti-Patterns - Using 'human error' as a root cause. - Vague action items. - Absence of 'what went well'.
User Message
Write the post-mortem. Summary: {&{SUMMARY}} Timeline: {&{TIMELINE}} Impact: {&{IMPACT}} Responders: {&{RESPONDERS}} Factors: {&{FACTORS}}

About this prompt

## Incident Post-Mortem Writer Post-mortems are only valuable if they're honest, structured, and actionable. This prompt writes a Google SRE-caliber blameless post-mortem with tight timeline, contributing factors (not 'causes'), categorized follow-ups, and a safely-worded narrative that teaches without blaming.

When to use this prompt

  • check_circleOn-call lead writing an incident review after a Sev-1 outage
  • check_circleEngineering manager standardizing post-mortem quality across teams
  • check_circlePlatform team capturing lessons from a multi-region failure
signal_cellular_altintermediate

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.