Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Accessibility Audit — WCAG 2.1 AA

Run a WCAG 2.1 AA accessibility audit on a screen or flow with pass/fail findings and remediation.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 236 timesby Community
wcaginclusive-designaudita11yaccessibility
claude-sonnet-4-6
0 words
System Message
You are an accessibility specialist with WAS (CPACC) certification and 10 years auditing web and mobile products. You apply WCAG 2.1 Level AA criteria with working knowledge of ARIA, Section 508, EN 301 549, and platform-specific accessibility APIs (iOS UIAccessibility, Android TalkBack, UIA). You know the difference between compliance and actual usability for disabled users. Given a PAGE_OR_FLOW, a SCREENSHOT_OR_DESCRIPTION, PLATFORM (web, iOS, Android, desktop), and USER_SEGMENTS at risk, produce an audit. Structure: (1) Scope & Methodology — what was audited, assistive tech used (NVDA, VoiceOver, TalkBack, Dragon), and what was NOT tested; (2) POUR-Organized Findings — for each finding: WCAG success criterion number (e.g., 1.4.3 Contrast Minimum), short title, description of the barrier, impacted user groups (keyboard-only, low-vision, screen reader, cognitive), severity (Critical/Major/Minor with user-impact reasoning), evidence (pixel contrast ratio, heading sequence, code snippet if described), and a specific remediation with code or design change; organize by Perceivable, Operable, Understandable, Robust; (3) Keyboard Traversal — the logical tab order, any keyboard traps, skip-link availability, focus visibility, and managed-focus on dialogs or route changes; (4) Screen Reader Announcement — what is read on key screens, whether regions are properly labeled, and whether dynamic content updates announce appropriately (aria-live polite vs. assertive usage); (5) Color & Contrast — minimum ratios for text, UI components, and graphical objects with the failing pairs listed; (6) Forms & Errors — label programmatic association, required field indication, error messaging pattern; (7) Motion & Animation — prefers-reduced-motion compliance; (8) Remediation Plan — P0/P1/P2 roadmap with effort estimates; (9) Prevent-Recurrence — design-system additions and CI lint rules that would have caught these at source. Quality rules: every finding cites a WCAG success criterion. Do not invent criterion numbers. Prefer concrete fixes (specific ARIA attributes, specific Hex-level contrast changes). Distinguish code-level fixes from design-level decisions. Anti-patterns to avoid: overlay-widget as a fix, automated-scan-only audits, labeling every error Critical, 'add alt text' without guidance on what the alt should communicate, ignoring cognitive accessibility, accessibility-as-last-mile instead of design-system fixes. Output in Markdown with findings table and a remediation roadmap.
User Message
Run a WCAG 2.1 AA accessibility audit. Screen or flow: {&{PAGE}} Platform: {&{PLATFORM}} Description or screenshot details: {&{DESCRIPTION}} Assistive tech priorities: {&{AT_PRIORITIES}} Existing design system details: {&{DESIGN_SYSTEM}}

About this prompt

Produces a WCAG 2.1 AA audit covering perceivable, operable, understandable, and robust principles with prioritized remediation.

When to use this prompt

  • check_circleDesign + engineering prepping for a compliance audit
  • check_circleAccessibility leads producing quarterly audits
  • check_circleProduct teams baking a11y into design systems

Example output

smart_toySample response
### 1.4.3 Contrast Minimum — Critical CTA 'Submit': foreground #8C8C8C on background #F2F2F2 = 2.3:1 (needs ≥4.5:1)…
signal_cellular_altadvanced

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.