Use when crafting prompts for AI models, reviewing existing prompts for quality, or designing system prompts for agents and tools. Provides expert guidance on structure, clarity, common patterns, and model-specific optimizations to create effective, token-efficient prompts.
/plugin marketplace add mthalman/superpowers
/plugin install superpowers@claude-marketplace

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Effective prompts are specific, well-structured, and match task complexity to response needs.
This skill provides expert knowledge for:
- Crafting prompts for AI models
- Reviewing existing prompts for quality
- Designing system prompts for agents and tools
Core principle: Specificity beats generality. Structure aids parsing. Examples clarify intent.
Use when:
- Writing a new prompt and deciding how to structure it
- Reviewing an existing prompt for clarity, specificity, or token efficiency
- Designing system prompts or tool descriptions for agents
Don't use for:
- Tasks that don't involve writing, reviewing, or structuring prompts
XML tags (especially effective with Claude):
<instructions>
Task description here
</instructions>
<examples>
Example 1: input → output
Example 2: input → output
</examples>
<context>
Background information
</context>
Markdown structure (universal):
# Main Task
Brief overview
## Requirements
- Requirement 1
- Requirement 2
## Output Format
Specify exact structure here
## Examples
Show 1-2 examples
Hierarchical organization:
Overview (what you're asking)
↓
Details (specifics, constraints)
↓
Examples (desired output)
↓
Context (background info if needed)
ALWAYS specify output format. This is the most common gap in weak prompts.
Bad:
"Review this code and tell me what's wrong"
Good:
"Review this code. Output format:
- Issue description
- Severity (Critical/High/Medium/Low)
- Line number
- Suggested fix"
Even better with structure:
{
"issues": [
{
"description": "string",
"severity": "Critical|High|Medium|Low",
"line": number,
"fix": "string"
}
]
}
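A schema like this becomes much more useful when you validate the model's reply before consuming it. A minimal Python sketch (stdlib only; `reply` stands in for whatever your API call returns, and the field names mirror the schema above):

```python
import json

ALLOWED_SEVERITIES = {"Critical", "High", "Medium", "Low"}

def parse_review(reply: str) -> list[dict]:
    """Parse and validate a code-review reply against the schema above."""
    data = json.loads(reply)  # raises json.JSONDecodeError on malformed output
    for issue in data["issues"]:
        assert isinstance(issue["description"], str)
        assert issue["severity"] in ALLOWED_SEVERITIES
        assert isinstance(issue["line"], int)
        assert isinstance(issue["fix"], str)
    return data["issues"]
```

If validation fails, re-prompting with the error message is often enough to get a corrected reply.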
❌ "Analyze this function"
✅ "Analyze this Python function. Assume Python 3.10+. Focus on type safety and error handling."
❌ "Write clean code"
✅ "Write code following: single responsibility per function, descriptive names, <100 lines per function"
❌ "Parse this input"
✅ "Parse this input. If empty: return {}. If malformed: return error with specific line. If valid: return parsed object."
❌ "Don't forget error handling"
✅ "Include error handling for: network failures, invalid input, timeout"
❌ "Brief summary"
✅ "Summary in 2-3 sentences, max 50 words"
❌ "Several examples"
✅ "3-5 examples showing common cases"
<instructions>
Extract all IP references from the document below.
</instructions>
<context>
Legal documents typically reference IP in sections about licensing, warranties, and indemnification...
</context>
<document>
[Document here]
</document>
| Pattern | When to Use | Structure |
|---|---|---|
| Chain-of-thought | Complex reasoning | "Think step-by-step: 1) Analyze X, 2) Consider Y, 3) Conclude Z" |
| Few-shot learning | Demonstrating format | "Example 1: input → output\nExample 2: input → output\nNow: [new input]" |
| Role-based prompting | Need specific expertise | "You are a [expert role] with expertise in [domain]..." |
| Constrained generation | Specific format required | "Output must be valid JSON matching: {schema}" |
| Decomposition | Multi-step tasks | "First X, then Y, finally Z. Show work for each step." |
| Self-critique | Quality checking | "Generate answer, review for [criteria], provide final version" |
Chain-of-thought:
❌ Without: "Is this function optimized?"
✅ With: "Analyze this function step-by-step:
1. Identify time complexity
2. Find performance bottlenecks
3. Suggest specific optimizations
4. Estimate improvement impact"
Few-shot learning:
"Extract action items from meeting notes.
Example 1:
Input: 'John will send the report by Friday. Sarah needs to review before Monday.'
Output:
- [ ] John: Send report (Due: Friday)
- [ ] Sarah: Review report (Due: Monday)
Example 2:
Input: 'Team agreed to refactor auth module. Mike volunteered.'
Output:
- [ ] Mike: Refactor auth module (Due: Not specified)
Now extract from: [your text here]"
Role-based:
"You are a security engineer specializing in web application security with 10+ years experience in OWASP Top 10 vulnerabilities.
Review the authentication code below for security issues..."
Key elements:
- Role and primary purpose
- Core responsibilities
- Decision-making framework
- Tool usage triggers
- Error handling
- Output format
Example structure:
# [Agent Name] System Prompt
You are a [role] that [primary purpose]. Your capabilities include [list].
## Core Responsibilities
1. [Responsibility 1]
2. [Responsibility 2]
## Decision-Making Framework
When faced with [situation], you should [approach].
## Tool Usage
- [Tool 1]: Use when [trigger condition]
- [Tool 2]: Use when [trigger condition]
## Error Handling
If [error type], then [recovery approach].
## Output Format
Always structure responses as:
[Format specification]
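Keeping each section as data makes a template like this easy to maintain and reuse. A hedged builder sketch in Python (the function and parameter names are placeholders, not any particular SDK's API):

```python
def build_system_prompt(name: str, role: str, purpose: str,
                        sections: dict[str, list[str]]) -> str:
    """Assemble a system prompt from the named sections of the template above."""
    lines = [f"# {name} System Prompt", "", f"You are a {role} that {purpose}."]
    for heading, items in sections.items():
        lines.append(f"\n## {heading}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```

Pass the assembled string as the system prompt in whichever API you use.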
Effective tool descriptions need:
tool-name: Brief one-line summary
Use when: [Specific trigger conditions - when this tool applies]
Parameters:
- param1 (required): type, format, constraints
- param2 (optional): type, default value
Returns: [What the tool outputs]
Examples:
- [Common use case 1]
- [Common use case 2]
When NOT to use:
- [Alternative tool for different case]
- [What this tool doesn't do]
Bad tool description:
search-code: searches for code
Good tool description:
search-code: Find functions, classes, or patterns in codebase using regex
Use when:
- Finding where a function is defined
- Locating all uses of a specific API
- Identifying code patterns across files
Parameters:
- pattern (required): regex pattern to match
- file_type (optional): filter by extension (js, py, etc.)
- context_lines (optional): lines before/after match (default: 0)
Returns: List of matches with file path, line number, and context
When NOT to use:
- Reading entire files (use read-file)
- Broad exploration (use explore-codebase)
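If the tool is exposed through a model's function-calling interface, the same description maps onto a JSON tool definition. A sketch of one common shape (exact field names vary by provider; treat this as illustrative rather than any specific API):

```json
{
  "name": "search-code",
  "description": "Find functions, classes, or patterns in codebase using regex. Not for reading entire files (use read-file) or broad exploration (use explore-codebase).",
  "input_schema": {
    "type": "object",
    "properties": {
      "pattern": {"type": "string", "description": "Regex pattern to match"},
      "file_type": {"type": "string", "description": "Filter by extension (js, py, etc.)"},
      "context_lines": {"type": "integer", "default": 0, "description": "Lines before/after each match"}
    },
    "required": ["pattern"]
  }
}
```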
Problem: Agents deal with limited context windows and need to manage information flow.
Techniques:
1. Compression - Extract structured data:
❌ Pass forward: "The codebase uses Express with PostgreSQL. Authentication is handled
by passport.js. There are 47 routes spread across 8 files. The database has 12 tables..."
✅ Extract structure:
{
"framework": "Express",
"database": "PostgreSQL",
"auth": "passport.js",
"routes": {"count": 47, "files": 8},
"db_tables": 12
}
2. Reference strategies:
❌ "The User model at src/models/User.js line 15-47 has fields: id, name, email, password, created_at..."
✅ "User model: src/models/User.js:15-47 (5 fields: id, name, email, password, created_at)"
3. State tracking (a minimal code sketch follows the lists below):
What to persist:
- Decisions made
- Validations passed
- Errors encountered
- Files modified
What to discard:
- Exploration paths not taken
- Detailed logs from successful operations
- Redundant context
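A minimal sketch of such a state object in Python (the field names are assumptions that mirror the persist list above):

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Compact state carried between agent steps; everything else is discarded."""
    decisions: list[str] = field(default_factory=list)
    validations_passed: list[str] = field(default_factory=list)
    errors: list[str] = field(default_factory=list)
    files_modified: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render a token-cheap summary to carry into the next prompt."""
        return (f"Decisions: {'; '.join(self.decisions) or 'none'}\n"
                f"Validated: {'; '.join(self.validations_passed) or 'none'}\n"
                f"Errors: {'; '.join(self.errors) or 'none'}\n"
                f"Modified: {', '.join(self.files_modified) or 'none'}")
```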
Validation gates:
After [step], check:
✓ [Criterion 1] - if fail: [recovery]
✓ [Criterion 2] - if fail: [recovery]
✓ [Criterion 3] - if fail: [recovery]
If all pass → Proceed to [next step]
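In agent code, a gate is just a (check, recover) pair evaluated in order. A hedged Python sketch (`check` and `recover` are hypothetical callables you supply per criterion):

```python
def run_gates(gates) -> None:
    """gates: list of (name, check, recover) tuples.
    check() -> bool; recover() attempts a fix and returns bool."""
    for name, check, recover in gates:
        if check():
            continue  # criterion met
        if recover():
            continue  # criterion met after one recovery attempt
        raise RuntimeError(f"Gate failed after recovery: {name}")
    # All gates passed: the caller proceeds to the next step.
```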
Error recovery:
If [error type]:
1. Log error details
2. Attempt [recovery approach]
3. If recovery fails: [fallback]
4. Report to user: [what information]
Human checkpoints:
Pause for human decision when:
- Multiple valid approaches with tradeoffs
- Architectural decisions
- Scope clarification needed
- Risk acceptance required
Format:
"I've identified [N] approaches:
A) [Approach] - Pros: [X], Cons: [Y]
B) [Approach] - Pros: [X], Cons: [Y]
Which fits your needs?"
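In an interactive agent loop, the checkpoint can be as simple as formatting the options and blocking on input. A toy sketch (structure only; a real agent would surface this through its own UI):

```python
def human_checkpoint(approaches) -> str:
    """approaches: list of (label, pros, cons) tuples."""
    print(f"I've identified {len(approaches)} approaches:")
    for label, pros, cons in approaches:
        print(f"{label}) Pros: {pros}; Cons: {cons}")
    return input("Which fits your needs? ")
```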
Principle: Precision reduces tokens more than brevity.
❌ "I would like you to please review the code and let me know what you think..." (17 words)
✅ "Review this code for security issues." (6 words)
❌ Prose explanation (200 tokens)
✅ JSON object (50 tokens)
❌ "The function should accept a string parameter called name and return..."
✅
Parameters:
- name: string
Returns: string
❌ "The output should be formatted as a JSON object with a 'summary' field containing
a brief description, an 'issues' array with objects that have 'description' and 'severity'
fields..." (30 words)
✅ "Output format:
{
"summary": "brief description",
"issues": [{"description": "...", "severity": "High"}]
}" (15 words + example)
❌ "Check for bugs, errors, issues, or problems in the code"
✅ "Check for bugs and logical errors"
When reviewing prompts, check:
- Is the task specific, with vague verbs replaced by measurable goals?
- Is the output format specified exactly?
- Are 1-2 examples of the desired output shown?
- Are edge cases and scope boundaries defined?
- Is every word earning its tokens?

A toy linter automating a few of these checks follows the table below.
| Mistake | Why It's Bad | Fix |
|---|---|---|
| Vague verbs ("improve", "optimize") | Unclear what to optimize for | "Reduce time complexity from O(n²) to O(n log n)" |
| Assumed context | Model doesn't know your codebase | Provide relevant context explicitly |
| Multiple tasks | Dilutes focus, harder to evaluate | One clear task per prompt |
| No output format | Gets variable/unexpected responses | Specify JSON, markdown, bullets, etc. |
| Overly verbose | Wastes tokens, buries key info | Remove filler, use structure |
| Missing examples | Ambiguous intent | Show 1-2 examples of desired output |
| Negative instructions | Less effective than positive | "Do Y instead of don't do X" |
| Undefined scope | Endless or mismatched responses | Set clear boundaries |
| Generic role | Lack of expertise framing | "Security expert" not just "helpful assistant" |
| Skipping edge cases | Breaks on unusual input | "If empty, return X. If invalid, return Y." |
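Some of these checks are mechanical enough to lint for automatically. A toy Python sketch (the word lists are illustrative, not exhaustive):

```python
VAGUE_VERBS = ("improve", "optimize", "enhance", "clean up")

def lint_prompt(prompt: str) -> list[str]:
    """Flag mechanical prompt smells from the table above."""
    lowered = prompt.lower()
    warnings = []
    if any(verb in lowered for verb in VAGUE_VERBS):
        warnings.append("Vague verb: say what to optimize for, and by how much")
    if "format" not in lowered and "json" not in lowered:
        warnings.append("No output format specified")
    if "example" not in lowered:
        warnings.append("No examples shown")
    return warnings
```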
Universal principles above apply to all models. Different models have specific quirks and optimizations.
See model-notes.md for detailed guidance on each model's quirks and optimizations.
Quick model selection guide:
| Task Type | Recommended Model | Why |
|---|---|---|
| Long document analysis (100K+ tokens) | Claude Opus/Sonnet | 200K context window |
| Code generation | GPT-4, Claude Sonnet | Strong code capabilities |
| Quick simple tasks | Claude Haiku, GPT-3.5 | Cost-effective, fast |
| Multimodal (text + images) | Gemini, GPT-4 Vision | Native multimodal support |
| Structured output | GPT-4 (function calling), Claude | Strong structure following |
| Complex reasoning | Claude Opus, GPT-4 | Superior reasoning capabilities |
When crafting any prompt, ask yourself:
1. What exactly do I want? (specific task)
2. What should the response look like? (output format)
3. What does the model need to know? (context)
4. Would an example make the intent clearer? (demonstration)
Template for quick prompts:
[Role]: You are a [expert type]
[Task]: [Specific action on specific target]
[Constraints]:
- [Constraint 1]
- [Constraint 2]
[Output Format]:
[Exact structure specification]
[Example]:
Input: [example input]
Output: [example output]
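The template is also easy to fill programmatically. A minimal sketch (all parameter names are placeholders for your own values):

```python
def quick_prompt(role: str, task: str, constraints: list[str],
                 output_format: str, example_in: str, example_out: str) -> str:
    """Fill the quick-prompt template above."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (f"You are a {role}.\n"
            f"Task: {task}\n"
            f"Constraints:\n{constraint_lines}\n"
            f"Output format:\n{output_format}\n"
            f"Example:\nInput: {example_in}\nOutput: {example_out}")
```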
If your prompt has ANY of these, revise before using:
- Vague verbs with no success criteria
- No specified output format
- Multiple unrelated tasks in one prompt
- Context the model can't know, left unstated
- Edge cases unaddressed
Effective prompts have three elements:
1. Specificity - exact task, constraints, and edge cases
2. Structure - formatting that aids parsing (XML tags, markdown, JSON)
3. Examples - demonstrations that clarify intent
Start with these questions:
- What exactly do I want?
- What format should the answer take?
- What context is needed?

Then add:
- Structure (tags, headers, or a schema)
- Examples (1-2 demonstrations)
- Constraints (scope, length, edge cases)

Finally, remove:
- Politeness padding and filler words
- Redundant synonyms
- Context the model doesn't need
The goal: Minimal tokens for maximum clarity.