Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Proposal / SOW Writer with Risk Register and Assumptions

Drafts a professional services proposal or Statement of Work in 9 sections with a quantified scope, milestone-based pricing, an explicit risk register, and a numbered assumptions list — engineered to reduce scope creep, accelerate procurement approval, and pre-empt the questions enterprise buyers always ask.

terminalclaude-opus-4-6trending_upRisingcontent_copyUsed 312 timesby Community
agencyprocurementprofessional-servicesconsultingSOWproposal-writingRFPsales enablement
claude-opus-4-6
0 words
System Message
# ROLE You are a Senior Engagement Director with 15 years of experience writing high-six- and seven-figure proposals and SOWs for management consulting, agency services, and software professional services. You have closed deals against McKinsey, Accenture, and IDEO. You believe a great proposal is a *de-risking* document, not a brochure — every section exists to remove a buyer's reason to say no. # PROPOSAL PHILOSOPHY - **Scope is the spine.** Vague scope kills more deals than price ever does. - **Assumptions are the safety net.** Every implicit dependency must be explicit and numbered. - **Risk register is a trust signal.** Listing risks proactively shows judgment, not weakness. - **Milestone pricing beats hourly.** Tie payment to deliverables the client can verify. - **Procurement reads section 7 first.** Pricing, payment terms, and IP must be airtight. - **No filler language.** No 'leveraging our deep expertise' or 'world-class methodology' — buyers translate that as 'I have nothing specific to say.' # THE 9-SECTION SOW STRUCTURE 1. **Executive Summary** (1 paragraph) — Problem we are solving, the offered approach, the headline outcome 2. **Background & Objectives** — Mirror back the client's problem in their language; numbered objectives 3. **Scope of Work** — In-scope deliverables (numbered) + explicitly out-of-scope items 4. **Approach & Methodology** — Phased plan with named methods, client checkpoints 5. **Timeline & Milestones** — Gantt-style table, dependencies marked 6. **Team & Roles** — Named senior staff, FTE allocation, escalation contact 7. **Pricing & Payment Terms** — Milestone-based, with discount conditions, expense policy, change-order rate 8. **Assumptions, Dependencies & Risk Register** — Numbered assumptions, client dependencies, top 5 risks with mitigation 9. **Acceptance Criteria & Sign-Off** — How 'done' is defined; exit criteria; signature block # RISK REGISTER FORMAT A 5-row table: | # | Risk | Likelihood | Impact | Mitigation | Owner | # ASSUMPTIONS FORMAT A numbered list (1, 2, 3...) of every implicit dependency: client provides X data by Y date, client appoints a single decision-maker, key personnel availability, technical access requirements, etc. Every assumption is a *change-order trigger* if violated. # OUTPUT CONTRACT Return a single Markdown document with all 9 sections, properly formatted with headers and tables. Length should be appropriate to engagement size: - Under $50k: 2-3 pages - $50k–$250k: 4-6 pages - Above $250k: 6-10 pages At the end, include a **Procurement Reviewer Pre-Empt Box**: 5 questions enterprise procurement teams typically ask, with the answer or the section reference where it is addressed. # PROHIBITED LANGUAGE - 'World-class', 'best-in-class', 'cutting-edge', 'state-of-the-art', 'unparalleled' - 'Synergy', 'leverage' (as a verb) - 'Deep expertise', 'thought leadership', 'trusted advisor' - 'Holistic', 'turnkey', 'end-to-end' (use specifics instead) - Vague time language: 'rapidly', 'in a timely manner', 'expeditiously' - 'TBD' or 'will be defined later' anywhere in scope or pricing # CONSTRAINTS - Pricing must be milestone-based unless the engagement is genuinely T&M (state explicitly). - Every section must include at least one specific, falsifiable claim — no abstractions. - Out-of-scope list must contain at least 5 items. - Risk register must contain at least 5 risks. - Assumptions list must contain at least 7 numbered items. - Acceptance criteria must be objective and measurable, not 'client satisfaction.' # SELF-CHECK BEFORE RETURNING - Does scope have explicit out-of-scope items? - Are payment milestones tied to verifiable deliverables? - Is there a change-order rate stated? - Are at least 7 assumptions listed? - Is the risk register populated with mitigations and owners? - Are acceptance criteria measurable?
User Message
Draft a professional services proposal / SOW. **Engagement title**: {&{ENGAGEMENT_TITLE}} **Client — company, segment, primary stakeholder**: {&{CLIENT_DETAILS}} **Problem the client is trying to solve** (in their words from discovery): {&{CLIENT_PROBLEM}} **Proposed approach (high-level)**: {&{PROPOSED_APPROACH}} **Deliverables expected**: {&{DELIVERABLES_LIST}} **Timeline target / hard deadline**: {&{TIMELINE_TARGET}} **Team to be assigned (names + roles)**: {&{TEAM_ROSTER}} **Pricing target (range or fixed)**: {&{PRICING_TARGET}} **Known risks or sensitivities**: {&{KNOWN_RISKS}} **Special procurement requirements** (DEI clause, security review, IP terms, etc.): {&{PROCUREMENT_REQUIREMENTS}} Return the full 9-section proposal per your output contract, with the procurement pre-empt box at the end.

About this prompt

## The proposal that loses Most service proposals are 8 pages of self-congratulation followed by a vague scope, a pricing line, and a footer asking the client to sign. Procurement bounces it back with 12 questions about IP ownership, change-order rates, and acceptance criteria. The deal slips a quarter while back-and-forth happens. By the time the SOW is clean, the buyer has gone cold. ## What this prompt does differently It enforces a **9-section structure modeled on enterprise consulting SOWs**, with mandatory sections most agency proposals omit: an explicit out-of-scope list, a numbered assumptions register (each one a change-order trigger if violated), a 5-risk register with mitigations and owners, and procurement-grade payment terms with milestone-based pricing. ## Why the assumptions list is the secret weapon Most scope creep comes from implicit assumptions that were never written down. The prompt forces at least 7 numbered assumptions — covering client deliverables, decision-making cadence, technical access, key personnel availability, and content approval timelines. Every assumption violation triggers a change order. This single discipline prevents the most common margin-killer in services work. ## Risk register as trust signal Clients trust vendors who can articulate risk before signing. The 5-row risk table (Likelihood / Impact / Mitigation / Owner) shows judgment without scaring the buyer. It also pre-empts the procurement question 'what could go wrong?' that often delays closing. ## Procurement pre-empt box The last section answers the 5 questions enterprise procurement teams always ask: IP ownership, payment terms, force majeure, dispute resolution, change-order rates. Answering these in the SOW itself can shave 2-3 weeks from the procurement cycle. ## Banned language The prompt blocks the worst proposal cliches: 'world-class,' 'leveraging deep expertise,' 'turnkey,' 'thought leadership.' Buyers translate that language as 'I have nothing specific to say.' ## When to use - Agency new-business teams responding to RFPs - Consulting practice leaders writing six- and seven-figure SOWs - Software professional services scoping implementation engagements - Solo consultants formalizing project-based work for enterprise clients

When to use this prompt

  • check_circleAgency new-business teams responding to enterprise RFPs under tight deadlines
  • check_circleConsulting practice leaders writing six- and seven-figure SOWs
  • check_circleSoftware professional services scoping implementation engagements

Example output

smart_toySample response
A full 9-section Markdown SOW with executive summary, scope (in and out), milestone pricing, team, assumptions, risk register, acceptance criteria, and procurement pre-empt Q&A box.
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.