Constructive Peer Review Writer (Hierarchy of Issues)
Writes a constructive peer review for an academic manuscript — separating major issues from minor ones, noting strengths first, focusing on the science rather than the author, and recommending a clear decision (accept / minor revision / major revision / reject) with evidence-backed justification.
About this prompt
When to use this prompt
- Reviewing for journals or conferences that expect constructive, well-structured reviews at scale
- Pre-submission internal review by co-authors before manuscript submission
- Training graduate students to write reviews that develop the field