Use when creating new skills, editing existing skills, or verifying skills work before deployment
Creates skills using test-driven development by first observing agent failures without documentation.
Install:

```
npx claudepluginhub harmaalbers/claude-requirements-framework
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Reference files:

- references/persuasion-principles.md
- references/skill-authoring-best-practices.md
- references/skill-testing-guide.md
- references/testing-skills-with-subagents.md

Writing skills IS Test-Driven Development applied to process documentation.
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
REQUIRED BACKGROUND: You MUST understand requirements-framework:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
Official guidance: For Anthropic's official skill authoring best practices, see references/skill-authoring-best-practices.md. That document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations, plug, re-verify |
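The mapping above can be sketched in code. This is a hypothetical harness, not a real subagent API: `run_scenario` stands in for dispatching a pressure scenario, and compliance is modeled as the skill text countering each known rationalization.

```python
# Hypothetical sketch of the skill-TDD cycle. run_scenario stands in for
# dispatching a real pressure scenario to a subagent; compliance is modeled
# as the skill text countering each known rationalization.

def run_scenario(scenario, skill_text=None):
    """Return the rationalizations the agent produced (empty = compliant)."""
    if skill_text is None:
        return list(scenario["loopholes"])           # RED: baseline failure
    return [l for l in scenario["loopholes"] if l not in skill_text]

def skill_tdd_cycle(scenario):
    baseline = run_scenario(scenario)                # RED: watch it fail
    assert baseline, "No baseline failure: the skill may not be needed"
    skill_text = "Counters: " + "; ".join(baseline)  # GREEN: minimal skill
    remaining = run_scenario(scenario, skill_text)
    while remaining:                                 # REFACTOR: close loopholes
        skill_text += "; " + "; ".join(remaining)
        remaining = run_scenario(scenario, skill_text)
    return skill_text

skill = skill_tdd_cycle({"loopholes": ["it's just a reference", "testing is overkill"]})
print(skill)
```

The `assert baseline` line encodes the Iron Law: if the scenario never failed without the skill, there is nothing to teach.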
Create when:

- Technique: concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
- Pattern: way of thinking about problems (flatten-with-flags, test-invariants)

Don't create for:

- API docs, syntax guides, tool documentation
```
plugins/requirements-framework/skills/
  skill-name/
    SKILL.md              # Main reference (required)
    references/           # Supporting files subdirectory
      supporting-file.md  # Only if needed
```
Key conventions for this framework:
- `references/` subdirectory (not flat alongside SKILL.md)
- `git_hash` field for version tracking

Separate files for:
Keep inline:
Frontmatter (YAML) requires three fields: `name`, `description`, and `git_hash`.

- `name`: Use letters, numbers, and hyphens only (no parentheses, special chars)
- `description`: Third-person, describes ONLY when to use (NOT what it does)
- `git_hash`: Set to `uncommitted` for new files; updated by `./update-plugin-versions.sh`

```yaml
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
git_hash: uncommitted
---
```
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Requirements Integration (if applicable)
How this skill interacts with the requirements system:
- Which requirement it satisfies (satisfied_by_skill mapping)
- Auto-satisfy behavior
- Chaining to next skill
## Common Mistakes
What goes wrong + fixes
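The frontmatter conventions above can be checked mechanically. A minimal sketch, assuming the simple `key: value` frontmatter shown in the template (the checker itself is illustrative, not framework code):

```python
import re

# Minimal checker for the frontmatter conventions in this skill: hyphenated
# name, "Use when..." description, git_hash present. Illustrative sketch only.

def check_frontmatter(text):
    m = re.match(r"---\n(.*?)\n---", text, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter block"]
    fields = dict(
        line.split(": ", 1) for line in m.group(1).splitlines() if ": " in line
    )
    errors = []
    if not re.fullmatch(r"[A-Za-z0-9-]+", fields.get("name", "")):
        errors.append("name must use letters, numbers, and hyphens only")
    if not fields.get("description", "").startswith("Use when"):
        errors.append('description should start with "Use when..."')
    if "git_hash" not in fields:
        errors.append("git_hash field is required")
    return errors

good = "---\nname: creating-skills\ndescription: Use when writing skills\ngit_hash: uncommitted\n---\n# Body"
assert check_frontmatter(good) == []
```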
Critical for discovery: Future Claude needs to FIND your skill.
Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
Format: Start with "Use when..." to focus on triggering conditions.
CRITICAL: Description = When to Use, NOT What the Skill Does
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow in the description.
Why this matters: Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content. A description saying "code review between tasks" caused Claude to do ONE review, even though the skill's flowchart clearly showed TWO reviews (spec compliance then code quality).
When the description was changed to just "Use when executing implementation plans with independent tasks" (no workflow summary), Claude correctly read the flowchart and followed the two-stage review process.
The trap: Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
```yaml
# BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use when executing plans - dispatches subagent per task with code review between tasks

# BAD: Too much process detail
description: Use for TDD - write test first, watch it fail, write minimal code, refactor

# GOOD: Just triggering conditions, no workflow summary
description: Use when executing implementation plans with independent tasks in the current session

# GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
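A rough lint for these rules might look like the following. The workflow-word list is an illustrative assumption, not an official check:

```python
# Heuristic lint for description rules: must start with "Use when" and must
# not summarize workflow. The hint-word list is an illustrative assumption.

WORKFLOW_HINTS = ("dispatches", "write test first", "watch it fail",
                  "between tasks", "refactor")

def lint_description(description):
    problems = []
    if not description.startswith("Use when"):
        problems.append('should start with "Use when..."')
    hits = [w for w in WORKFLOW_HINTS if w in description.lower()]
    if hits:
        problems.append("may summarize workflow (found: %s)" % ", ".join(hits))
    return problems

bad = "Use when executing plans - dispatches subagent per task with code review between tasks"
good = "Use when executing implementation plans with independent tasks in the current session"
assert lint_description(bad)        # flagged
assert not lint_description(good)   # clean
```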
Content:
Use words Claude would search for:
Use active voice, verb-first:

- `creating-skills` not `skill-creation`
- `condition-based-waiting` not `async-test-helpers`

Gerunds (-ing) work well for processes: `creating-skills`, `testing-skills`, `debugging-with-logs`

Problem: Bootstrap and frequently-referenced skills load into EVERY conversation. Every token counts.
Target word counts:
Techniques:
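A word budget can be enforced with a trivial count. The 500-word default below is a placeholder assumption; substitute the framework's actual target for the skill type:

```python
# Trivial word-budget check for SKILL.md bodies. The 500-word default is a
# placeholder assumption, not an official framework target.

def check_word_count(skill_text, limit=500):
    count = len(skill_text.split())
    return count, count <= limit

count, ok = check_word_count("word " * 120)
assert (count, ok) == (120, True)
```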
Move details to supporting files:
```
# BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

# GOOD: Reference help or separate file
search-conversations supports multiple modes and filters. Run --help for details.
```
Use cross-references:
```
# BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]

# GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use requirements-framework:dispatching-parallel-agents for workflow.
```
Compress examples:
```
# BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]

# GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent -> synthesis]
```
When writing documentation that references other skills:
Use skill name only, with explicit requirement markers:
Good:

- **REQUIRED SUB-SKILL:** Use requirements-framework:test-driven-development
- **REQUIRED BACKGROUND:** You MUST understand requirements-framework:systematic-debugging

Bad:

- See skills/testing/test-driven-development (unclear if required)
- @skills/testing/test-driven-development/SKILL.md (force-loads, burns context)

Why no @ links: @ syntax force-loads files immediately, consuming 200k+ context before you need them.
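These reference rules can be linted with simple patterns. A sketch with illustrative regexes (the exact patterns are assumptions, not framework code):

```python
import re

# Sketch of a cross-reference lint: @ links force-load files and are flagged;
# bare skills/ paths are ambiguous. The regexes are illustrative assumptions.

def lint_references(markdown):
    problems = []
    for link in re.findall(r"@\S+SKILL\.md", markdown):
        problems.append(f"{link}: @ link force-loads the file, burning context")
    for path in re.findall(r"(?<!@)\bskills/\S+", markdown):
        problems.append(f"{path}: bare path is unclear - mark REQUIRED or drop")
    return problems

doc = ("**REQUIRED SUB-SKILL:** Use requirements-framework:test-driven-development\n"
       "@skills/testing/test-driven-development/SKILL.md")
assert len(lint_references(doc)) == 1   # flags only the @ link
```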
```dot
digraph when_flowchart {
    "Need to show information?" [shape=diamond];
    "Decision where I might go wrong?" [shape=diamond];
    "Use markdown" [shape=box];
    "Small inline flowchart" [shape=box];

    "Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
    "Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
    "Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
Use flowcharts ONLY for:
Never use flowcharts for:
One excellent example beats many mediocre ones.
Choose most relevant language:
Good example:
Don't:
You're good at porting — one great example is enough.
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.
No exceptions:
REQUIRED BACKGROUND: The requirements-framework:test-driven-development skill explains why this matters. Same principles apply to documentation.
Testing different skill types and bulletproofing against rationalization: See references/skill-testing-guide.md
Follow the TDD cycle:
Run pressure scenario with subagent WITHOUT the skill. Document exact behavior:
This is "watch the test fail" — you must see what agents naturally do before writing the skill.
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
Testing methodology: See references/testing-skills-with-subagents.md for the complete process:
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
All of these mean: Test before deploying. No exceptions.
"In session 2025-10-03, we found empty projectDir caused..." Why bad: Too specific, not reusable
example-js.js, example-py.py, example-go.go Why bad: Mediocre quality, maintenance burden
step1 [label="import fs"];
step2 [label="read file"];
Why bad: Can't copy-paste, hard to read
helper1, helper2, step3, pattern4 Why bad: Labels should have semantic meaning
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT:
The deployment checklist below is MANDATORY for EACH skill.
RED Phase — Write Failing Test:

GREEN Phase — Write Minimal Skill:

- Put supporting files in references/

REFACTOR Phase — Close Loopholes:

Quality Checks:

- Supporting files live in the references/ subdirectory

Requirements Integration (if applicable):

- Add the satisfied_by_skill mapping to auto-satisfy-skills.py
- Add a message template to ~/.claude/messages/ (if new requirement)
- Update examples/global-requirements.yaml

Deployment:

- Run ./update-plugin-versions.sh to update git_hash
- Run ./sync.sh deploy to deploy to runtime
- Update plugin.json if needed

This skill has no direct requirement mapping — it's a meta-skill for creating other skills. However, skills you create using this methodology may need their own requirement integrations (see checklist above).
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.