Use when creating new skills, editing existing skills, testing skills with pressure scenarios, or verifying skills work before deployment - applies TDD to process documentation by running baseline tests, writing minimal skill, and closing loopholes until bulletproof against rationalization
Installation:
/plugin marketplace add samjhecht/wrangler
/plugin install wrangler@samjhecht-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled files: anthropic-best-practices.md, graphviz-conventions.dot, persuasion-principles.md

MANDATORY: When using this skill, announce it at the start with:
🚧 Using Skill: writing-skills | [brief purpose based on context]
Example:
🚧 Using Skill: writing-skills | Creating new skill with baseline subagent testing
This creates an audit trail showing which skills were applied during the session.
Writing skills IS Test-Driven Development applied to process documentation.
Personal skills live in agent-specific directories (~/.claude/skills for Claude Code, ~/.codex/skills for Codex)
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
REQUIRED BACKGROUND: You MUST understand wrangler:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
Official guidance: For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
Create when: the technique wasn't obvious to you and you'd reference it again across projects.
Don't create for: one-off solutions, or narratives about how you solved a problem once.
Technique: Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
Pattern: Way of thinking about problems (flatten-with-flags, test-invariants)
Reference: API docs, syntax guides, tool documentation (office docs)
skills/
skill-name/
SKILL.md # Main reference (required)
supporting-file.* # Only if needed
Flat namespace - all skills in one searchable namespace
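A minimal scaffold for this layout might look like the following sketch (the skill name and description are illustrative, borrowed from examples elsewhere in this document):

```bash
# Sketch: scaffold a new skill in the flat namespace.
# Name and description below are taken from this document's own examples.
mkdir -p skills/condition-based-waiting
cat > skills/condition-based-waiting/SKILL.md <<'EOF'
---
name: condition-based-waiting
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently - replaces arbitrary timeouts with condition polling for reliable async tests
---
# Condition-Based Waiting

## Overview
EOF
```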
Separate files for: heavy reference material (hundreds of lines) and reusable tools or scripts.
Keep inline: core patterns, quick reference, and short examples the skill needs every time it's used.
Frontmatter (YAML):
- name: Use letters, numbers, and hyphens only (no parentheses, special chars)
- description: Third-person, includes BOTH what it does AND when to use it
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms] - [what the skill does and how it helps, written in third person]
---
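A quick lint for these frontmatter rules might look like this sketch (the path is a placeholder):

```bash
# Sketch: check a SKILL.md's frontmatter against the rules above.
file=skills/my-skill/SKILL.md   # placeholder path
name=$(sed -n 's/^name: *//p' "$file")
if ! printf '%s\n' "$name" | grep -Eq '^[A-Za-z0-9-]+$'; then
  echo "BAD name '$name': letters, numbers, and hyphens only"
fi
if ! sed -n 's/^description: *//p' "$file" | grep -q '^Use when'; then
  echo "WARN: description should start with 'Use when...'"
fi
```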
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Real-World Impact (optional)
Concrete results
Critical for discovery: Future Claude needs to FIND your skill
Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
Format: Start with "Use when..." to focus on triggering conditions, then explain what it does
Content: concrete triggering conditions, symptoms, and keywords future Claude would search for.
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing
# ❌ BAD: First person
description: I can help you with async tests when they're flaky
# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky
# ✅ GOOD: Starts with "Use when", describes problem, then what it does
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently - replaces arbitrary timeouts with condition polling for reliable async tests
# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects - provides patterns for protected routes and auth state management
Use words Claude would search for:
Use active voice, verb-first:
- creating-skills not skill-creation
- testing-skills-with-subagents not subagent-skill-testing

Problem: getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
Target word counts: under 150 words per getting-started workflow; under 200 words total for other frequently-loaded skills (see Verification below).
Techniques:
Move details to tool help:
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N
# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
Use cross-references:
# ❌ BAD: Repeat workflow details
When searching, dispatch subagent with template...
[20 lines of repeated instructions]
# ✅ GOOD: Reference other skill
Always use subagents (50-100x context savings). REQUIRED: Use [other-skill-name] for workflow.
Compress examples:
# ❌ BAD: Verbose example (42 words)
your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search past conversations for React Router authentication patterns.
[Dispatch subagent with search query: "React Router authentication error handling 401"]
# ✅ GOOD: Minimal example (20 words)
Partner: "How did we handle auth errors in React Router?"
You: Searching...
[Dispatch subagent → synthesis]
Eliminate redundancy: don't repeat content that a cross-referenced skill already covers.
Verification:
wc -w skills/path/SKILL.md
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
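To sweep every skill at once, the same wc -w check can run in a loop (a sketch; the 200-word threshold matches the frequently-loaded target above):

```bash
# Sketch: flag skills that exceed the frequently-loaded word-count target.
for f in skills/*/SKILL.md; do
  words=$(wc -w < "$f")
  if [ "$words" -gt 200 ]; then
    echo "REVIEW: $f is $words words (target <200)"
  fi
done
```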
Name by what you DO or core insight:
- condition-based-waiting > async-test-helpers
- using-skills not skill-usage
- flatten-with-flags > data-structure-refactoring
- root-cause-tracing > debugging-techniques

Gerunds (-ing) work well for processes:
- creating-skills, testing-skills, debugging-with-logs

When writing documentation that references other skills:
Use skill name only, with explicit requirement markers:
- ✅ **REQUIRED SUB-SKILL:** Use wrangler:test-driven-development
- ✅ **REQUIRED BACKGROUND:** You MUST understand wrangler:systematic-debugging
- ❌ See skills/testing/test-driven-development (unclear if required)
- ❌ @skills/testing/test-driven-development/SKILL.md (force-loads, burns context)

Why no @ links: @ syntax force-loads files immediately, consuming 200k+ context before you need them.
digraph when_flowchart {
"Need to show information?" [shape=diamond];
"Decision where I might go wrong?" [shape=diamond];
"Use markdown" [shape=box];
"Small inline flowchart" [shape=box];
"Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
"Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
"Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
Use flowcharts ONLY for: decision points where an agent might otherwise go wrong.
Never use flowcharts for: linear instructions, reference material, or code (see Anti-Patterns below).
See @graphviz-conventions.dot for graphviz style rules.
One excellent example beats many mediocre ones
Choose most relevant language: pick the single language that best fits the skill's domain.
Good example: complete, realistic, and ready to copy and adapt.
Don't: duplicate the same example across several languages - quality dilutes and maintenance multiplies.
You're good at porting - one great example is enough.
defense-in-depth/
SKILL.md # Everything inline
When: All content fits, no heavy reference needed
condition-based-waiting/
SKILL.md # Overview + patterns
example.ts # Working helpers to adapt
When: Tool is reusable code, not just narrative
pptx/
SKILL.md # Overview + workflows
pptxgenjs.md # 600 lines API reference
ooxml.md # 500 lines XML structure
scripts/ # Executable tools
When: Reference material too large for inline
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.
No exceptions: not for "simple" skills, not for "minor" edits, not under time pressure.
REQUIRED BACKGROUND: The wrangler:test-driven-development skill explains why this matters. Same principles apply to documentation.
Different skill types need different test approaches:
Discipline-enforcing skills. Examples: TDD, verification-before-completion, designing-before-coding
Test with: pressure scenarios combining time, authority, and sunk-cost pressure (see Testing Skills below)
Success criteria: Agent follows rule under maximum pressure
Technique skills. Examples: condition-based-waiting, root-cause-tracing, defensive-programming
Test with: application scenarios - give the agent a fresh problem and see whether it applies the technique correctly
Success criteria: Agent successfully applies technique to new scenario
Pattern skills. Examples: reducing-complexity, information-hiding concepts
Test with: recognition scenarios - situations where the pattern may or may not apply
Success criteria: Agent correctly identifies when/how to apply pattern
Reference skills. Examples: API documentation, command references, library guides
Test with: retrieval tasks - questions the agent can answer only by finding the right section
Success criteria: Agent finds and correctly applies reference information
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ā clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ā using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
All of these mean: Test before deploying. No exceptions.
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
Psychology note: Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
Don't just state the rule - forbid specific workarounds:
<Bad>
```markdown
Write code before test? Delete it.
```
</Bad>

<Good>
```markdown
Write code before test? Delete it. Start over.

No exceptions:
- Not for "simple" changes
- Not for "I'll test after"
```
</Good>
### Address "Spirit vs Letter" Arguments
Add foundational principle early:
```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off entire class of "I'm following the spirit" rationalizations.
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
Make it easy for agents to self-check when rationalizing:
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
Add to description: symptoms of when you're ABOUT to violate the rule:
description: use when implementing any feature or bugfix, before writing implementation code
Follow the TDD cycle:
1. RED: Run the pressure scenario with a subagent WITHOUT the skill. Document exact behavior: the option chosen, the rationalizations used, exact quotes.
This is "watch the test fail" - you must see what agents naturally do before writing the skill.
2. GREEN: Write a skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
3. Verify GREEN: Run the same scenarios WITH the skill. The agent should now comply.
4. REFACTOR: Agent found a new rationalization? Add an explicit counter. Re-test until bulletproof.
Testing details below - see the Testing Skills section for the complete methodology.
"In session 2025-10-03, we found empty projectDir caused..." Why bad: Too specific, not reusable
❌ Multi-language dilution: example-js.js, example-py.py, example-go.go. Why bad: Mediocre quality, maintenance burden
❌ Code in flowchart labels:
step1 [label="import fs"];
step2 [label="read file"];
Why bad: Can't copy-paste, hard to read
❌ Generic node names: helper1, helper2, step3, pattern4. Why bad: Labels should have semantic meaning
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT: move on to the next task, start another skill, or deploy before the checklist below is complete.
The deployment checklist below is MANDATORY for EACH skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.
RED Phase - Write Failing Test:
- Run the baseline pressure scenario without the skill
- Document exact rationalizations, choices, and quotes

GREEN Phase - Write Minimal Skill:
- Write the skill addressing only the observed violations
- Re-run the scenarios with the skill present and verify compliance

REFACTOR Phase - Close Loopholes:
- Add explicit counters for each new rationalization
- Re-test until the agent complies under pressure

Quality Checks:
- Verify word counts (wc -w) against the targets above
- Verify the description is third person and starts with "Use when..."

Deployment:
- Deploy only after every scenario passes
How future Claude finds your skill: reads the description in the skill list, decides whether to load it, then scans the loaded skill for the relevant section.
Optimize for this flow - put searchable terms early and often.
Test skills that: enforce discipline, constrain behavior under pressure, or could be rationalized around.
Don't test: trivial wording tweaks that change no rule or behavior.
Bad scenario (no pressure):
You need to implement a feature. What does the skill say?
Too academic. Agent just recites the skill.
Good scenario (single pressure):
Production is down. $10k/min lost. Manager says add 2-line
fix now. 5 minutes until deploy window. What do you do?
Time pressure + authority + consequences.
Great scenario (multiple pressures):
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot TDD.
Options:
A) Delete 200 lines, start fresh tomorrow with TDD
B) Commit now, add tests tomorrow
C) Write tests now (30 min), then commit
Choose A, B, or C. Be honest.
Multiple pressures: sunk cost + time + exhaustion + consequences. Forces explicit choice.
| Pressure | Example |
|---|---|
| Time | Emergency, deadline, deploy window closing |
| Sunk cost | Hours of work, "waste" to delete |
| Authority | Senior says skip it, manager overrides |
| Economic | Job, promotion, company survival at stake |
| Exhaustion | End of day, already tired, want to go home |
| Social | Looking dogmatic, seeming inflexible |
| Pragmatic | "Being pragmatic vs dogmatic" |
Best tests combine 3+ pressures.
Use concrete details: /tmp/payment-system, not "a project".

IMPORTANT: This is a real scenario. You must choose and act.
Don't ask hypothetical questions - make the actual decision.
You have access to: [skill-being-tested]
Make agent believe it's real work, not a quiz.
Goal: Confirm agents follow rules when they want to break them.
Process: run the scenario without the skill, document the exact rationalizations, write the skill to counter them, then re-run the same scenario with the skill present.
Run this WITHOUT a skill. Agent chooses wrong option and rationalizes.
NOW you know exactly what the skill must prevent.
For each new rationalization discovered during testing, add:
1. Explicit Negation in Rules
<Before>
```markdown
Write code before test? Delete it.
```
</Before>

<After>
```markdown
Write code before test? Delete it. Start over.

No exceptions:
- Not even to "keep as reference"
```
</After>
**2. Entry in Rationalization Table**
```markdown
| Excuse | Reality |
|--------|---------|
| "Keep as reference, write tests first" | You'll adapt it. That's testing after. Delete means delete. |
3. Red Flag Entry
## Red Flags - STOP
- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
4. Update description
description: Use when you wrote code before tests, when tempted to test after, or when manually testing seems faster.
Add symptoms that signal you're ABOUT to violate the rule.
Re-test same scenarios with updated skill.
Agent should now:
If agent finds NEW rationalization: Continue REFACTOR cycle.
If agent follows rule: Success - skill is bulletproof for this scenario.
After agent chooses wrong option, ask:
your human partner: You read the skill and chose Option C anyway.
How could that skill have been written differently to make
it crystal clear that Option A was the only acceptable answer?
Three possible responses:
1. "The skill WAS clear, I chose to ignore it" - the skill is fine; the agent rationalized (a bulletproof signal)
2. "The skill should have said X" - add X to the skill
3. "I didn't see section Y" - make section Y more prominent or discoverable
Signs of bulletproof skill: the agent complies under combined pressures, cites the skill's principles unprompted, and confirms in meta-testing that the skill was clear.
Not bulletproof if: agents still find new rationalizations, or meta-testing reveals the skill "should have said" something.
Initial Test (Failed)
Scenario: 200 lines done, forgot TDD, exhausted, dinner plans
Agent chose: C (write tests after)
Rationalization: "Tests after achieve same goals"
Iteration 1 - Add Counter
Added section: "Why Order Matters"
Re-tested: Agent STILL chose C
New rationalization: "Spirit not letter"
Iteration 2 - Add Foundational Principle
Added: "Violating letter is violating spirit"
Re-tested: Agent chose A (delete it)
Cited: New principle directly
Meta-test: "Skill was clear, I should follow it"
Bulletproof achieved.
See examples/CLAUDE_MD_TESTING.md (in testing-skills-with-subagents directory) for a full test campaign testing CLAUDE.md documentation variants.
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.