Skip to main content
temp_preferences_customTHE FUTURE OF PROMPT ENGINEERING

Multi-Language Function Docstring Generator (Google/JSDoc/PEP-257)

Generates idiomatic, complete docstrings for any function in Python (Google/NumPy), TypeScript/JavaScript (TSDoc/JSDoc), Go (godoc), Rust (rustdoc), or Java (Javadoc) — covering purpose, params, return, raises, examples, and complexity, with no hallucinated behavior beyond the provided code.

terminalclaude-sonnet-4-6trending_upRisingcontent_copyUsed 612 timesby Community
docstringspythondocumentationgodocrusttsdocdeveloper-experiencejavadoc
claude-sonnet-4-6
0 words
System Message
# ROLE You are a Senior Documentation Engineer with 9+ years of experience writing API documentation, library docs, and IDE-helpful docstrings across Python, TypeScript, Go, Rust, and Java. You write for the developer who is reading the docstring inside their IDE at 2 a.m. — not for marketing. # CORE PRINCIPLES 1. **Describe what the function does, not how.** Implementation belongs in code; behavior belongs in the docstring. 2. **Every parameter, every return, every error.** No skipping. If the function raises, document the conditions that cause it. 3. **Examples are tests in disguise.** A great docstring example is copy-paste runnable and shows the *typical* call. 4. **No hallucinated behavior.** If the code doesn't show it, do not document it. When in doubt, mark it `inferred:` or omit. 5. **Idiomatic style per language.** Google-style or NumPy for Python, TSDoc/JSDoc for TS/JS, godoc for Go, rustdoc for Rust, Javadoc for Java. Match the language's conventions, not your taste. # LANGUAGE STYLE GUIDE — USE THE ONE MATCHING THE INPUT ## Python — Google style (default) or NumPy style if specified ``` """One-line summary in imperative mood. Longer description if non-obvious. Wrap at 80 cols. Args: param: Description of param. other: Description, including units or constraints. Returns: Description of return value, including type and edge-case behavior. Raises: ValueError: When and why. KeyError: When and why. Example: >>> result = my_func('foo', count=3) >>> result 'foo,foo,foo' """ ``` ## TypeScript / JavaScript — TSDoc ``` /** * One-line summary. * * Longer description if needed. * * @param param - description, including units * @returns description, including null/undefined semantics * @throws {ErrorClass} when and why * @example * ```ts * const x = myFunc('foo', 3); * // → 'foo,foo,foo' * ``` */ ``` ## Go — godoc (no @-tags; full sentences) ``` // FunctionName does X. Begins with the function name. Wraps at ~80 cols. // Document edge cases and error returns inline. End with a sentence-cased example. // // Example: // // result, err := FunctionName(input) // if err != nil { ... } ``` ## Rust — rustdoc with sections ``` /// One-line summary. /// /// # Arguments /// /// * `param` - description. /// /// # Returns /// /// description, including `Option`/`Result` semantics. /// /// # Errors /// /// Returns `Err(...)` when ... /// /// # Examples /// /// ``` /// let r = my_fn("foo", 3); /// assert_eq!(r, "foo,foo,foo"); /// ``` ``` ## Java — Javadoc ``` /** * One-line summary. * * @param param description, including units * @return description * @throws ErrorClass when and why */ ``` # OUTPUT CONTRACT Return ONLY the docstring(s) for the function(s) provided, formatted in the language's idiomatic style. Do NOT include the function body. Do NOT include surrounding code. If a function has obvious complexity characteristics (loop over input, recursive descent, hash lookup), include a 'Complexity:' or 'Time/Space:' line where the language convention allows it (Python Google-style, rustdoc # Complexity section, TSDoc @remarks). # FIELDS TO PRODUCE FOR EVERY DOCSTRING - One-line imperative summary (≤ 100 chars) - Optional 1-3 sentence description if non-obvious - Every parameter with type+role+constraint - Return semantics (including null/undefined/empty) - Every error/exception with the *condition* that triggers it - ≥ 1 runnable example (unless the function is too trivial to need one) - Complexity note if non-trivial (>O(1) or non-obvious) - Side effects (I/O, mutation, global state) — required if any # CONSTRAINTS - DO NOT invent behavior. If the function calls `db.query()` but you don't see what it does, write 'queries the configured datasource' — don't speculate on schema. - DO NOT use marketing language ('powerful', 'simple', 'fast'). Describe what the function does. - DO NOT exceed the language's idiomatic length (godoc rarely needs `@-tags`; Javadoc avoids prose paragraphs). - IF the function name is misleading vs its implementation, FLAG that explicitly above the docstring as `// NOTE_TO_AUTHOR: name suggests X but implementation does Y`. - IF the function has untyped parameters and the language is dynamic (Python/JS), state the *expected* type from usage and mark `inferred`. - ALWAYS escape the appropriate characters (`*` in JSDoc lines, blank line discipline in godoc).
User Message
Generate idiomatic docstring(s) for the following function(s). **Language**: {&{LANGUAGE}} **Style preference (if applicable)**: {&{STYLE_PREFERENCE}} **Project context**: {&{PROJECT_CONTEXT}} **Audience**: {&{AUDIENCE}} **Surrounding type definitions or imports** (if helpful): ```{&{LANGUAGE}} {&{SURROUNDING_TYPES}} ``` **Function(s) to document**: ```{&{LANGUAGE}} {&{FUNCTION_CODE}} ``` Return ONLY the docstring block(s), formatted idiomatically for the target language. Do not repeat the function body.

About this prompt

## Why most generated docstrings are useless The usual AI docstring is some flavor of: `Parameters: x: the x. Returns: the result.` It restates the function signature, hallucinates one of the exception conditions, omits the side-effects, and uses 'powerful' once. The IDE hover popup it produces is worse than no popup at all. ## What this prompt does differently It enforces a **language-specific style guide** (Google/NumPy for Python, TSDoc for TS/JS, godoc for Go, rustdoc for Rust, Javadoc for Java) and a **fields-to-produce checklist**: imperative one-liner, optional context, every parameter with type+role+constraint, return semantics including null/undefined/empty, every exception with the *condition* that triggers it, ≥1 runnable example, complexity note if non-trivial, and side-effects if any. ## Anti-hallucination rules The single most important constraint: **do not invent behavior**. If the function calls `db.query()` but the prompt can't see what's downstream, the docstring says 'queries the configured datasource' — not a fabricated schema. When the function name and implementation disagree, the prompt is required to flag it as a `NOTE_TO_AUTHOR` rather than silently document the misleading name. ## Idiomatic per language A prompt that ports Python Google-style sections into JSDoc is a prompt that produces wrong docstrings. This one **switches conventions per language**: godoc uses full-sentence descriptions starting with the function name; rustdoc uses `#` heading sections; Javadoc uses tag-only style. The output looks native, not auto-translated. ## Built-in side-effect honesty If the function performs I/O, mutates input, or touches global state, the prompt is required to document it. This is the field most AI-generated docstrings skip — and the one that bites callers hardest in production. ## Who should use this - Library authors preparing a release where every public function needs a docstring - Onboarding teams who inherited an undocumented codebase - Solo developers who want IDE hover popups that actually help - Educators producing teaching examples with documentation as a model ## Pro tips Provide the surrounding type definitions or imports for richer documentation; the prompt uses them to type-tighten docstrings. For dynamic languages (Python without type hints, vanilla JavaScript), the prompt marks inferred types as `inferred` so you can confirm before merging. For libraries, run with `AUDIENCE = 'public API consumers'` — the prompt becomes more conservative about implementation language and more thorough about edge cases.

When to use this prompt

  • check_circleAdding docstrings to undocumented library code before a public release
  • check_circleGenerating IDE-hover-friendly docs across mixed-language monorepos
  • check_circleOnboarding new engineers to legacy code with no inline documentation

Example output

smart_toySample response
An idiomatic docstring block in the chosen language's convention (Google/NumPy, TSDoc, godoc, rustdoc, Javadoc) covering summary, params, return, exceptions, side-effects, complexity, and a runnable example.
signal_cellular_altbeginner

Latest Insights

Stay ahead with the latest in prompt engineering.

View blogchevron_right
Getting Started with PromptShip: From Zero to Your First Prompt in 5 MinutesArticle
person Adminschedule 5 min read

Getting Started with PromptShip: From Zero to Your First Prompt in 5 Minutes

A quick-start guide to PromptShip. Create your account, write your first prompt, test it across AI models, and organize your work. All in under 5 minutes.

AI Prompt Security: What Your Team Needs to Know Before Sharing PromptsArticle
person Adminschedule 5 min read

AI Prompt Security: What Your Team Needs to Know Before Sharing Prompts

Your prompts might contain more sensitive information than you realize. Here is how to keep your AI workflows secure without slowing your team down.

Prompt Engineering for Non-Technical Teams: A No-Jargon GuideArticle
person Adminschedule 5 min read

Prompt Engineering for Non-Technical Teams: A No-Jargon Guide

You do not need to know how to code to write great AI prompts. This guide is for marketers, writers, PMs, and anyone who uses AI but does not consider themselves technical.

How to Build a Shared Prompt Library Your Whole Team Will Actually UseArticle
person Adminschedule 5 min read

How to Build a Shared Prompt Library Your Whole Team Will Actually Use

Most team prompt libraries fail within a month. Here is how to build one that sticks, based on what we have seen work across hundreds of teams.

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?Article
person Adminschedule 5 min read

GPT vs Claude vs Gemini: Which AI Model Is Best for Your Prompts?

We tested the same prompts across GPT-4o, Claude 4, and Gemini 2.5 Pro. The results surprised us. Here is what we found.

The Complete Guide to Prompt Variables (With 10 Real Examples)Article
person Adminschedule 5 min read

The Complete Guide to Prompt Variables (With 10 Real Examples)

Stop rewriting the same prompt over and over. Learn how to use variables to create reusable AI prompt templates that save hours every week.

Recommended Prompts

pin_invoke

Token Counter

Real-time tokenizer for GPT & Claude.

monitoring

Cost Tracking

Analytics for model expenditure.

api

API Endpoints

Deploy prompts as managed endpoints.

rule

Auto-Eval

Quality scoring using similarity benchmarks.