TDD for process documentation - write test cases (pressure scenarios), watch baseline fail, write skill, iterate until bulletproof against rationalization.
Creates process documentation using test-driven development principles with pressure scenario validation.
Install: `npx claudepluginhub lerianstudio/ring`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Supporting files: anthropic-best-practices.md, graphviz-conventions.dot, persuasion-principles.md

Writing skills IS Test-Driven Development applied to process documentation.
Personal skills live in agent-specific directories (e.g., ~/.claude/skills for Claude Code, ~/.codex/skills for Codex, or custom agent directories)
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
REQUIRED BACKGROUND: You MUST understand ring:test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
Official guidance: For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future agent instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
Create when:
Don't create for:
- Technique: concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
- Pattern: way of thinking about problems (flatten-with-flags, test-invariants)
- Reference: API docs, syntax guides, tool documentation (office docs)
skills/skill-name/SKILL.md (required) + optional supporting files. Flat namespace.
Separate files for: Heavy reference (100+ lines), reusable tools. Keep inline: Principles, code patterns (<50 lines).
Frontmatter (YAML):
Two fields, `name` and `description`:
- `name`: letters, numbers, and hyphens only (no parentheses, special chars)
- `description`: third-person, includes BOTH what it does AND when to use it
```yaml
---
name: ring:Skill-Name-With-Hyphens
description: Use when [triggers/symptoms] - [what it does, third person]
---
```
Body template:
- `# Skill Name`
- `## Overview` (1-2 sentences)
- `## When to Use` (symptoms, and when NOT to use)
- `## Core Pattern` (before/after code)
- `## Quick Reference` (table for scanning)
- `## Implementation` (inline or link)
- `## Common Mistakes`
- `## Real-World Impact` (optional)
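The frontmatter rules above can be sanity-checked mechanically. This is a sketch, not an official schema: the optional `ring:` prefix allowance and the combined 1024-character budget are assumptions drawn from this document's own checklist.

```python
import re

# Sketch of a frontmatter linter for the rules above. The optional "ring:"
# prefix and the 1024-character budget are assumptions, not an official schema.
NAME_RE = re.compile(r"^(ring:)?[A-Za-z0-9-]+$")

def check_frontmatter(name: str, description: str) -> list[str]:
    """Return guideline violations for a skill's YAML frontmatter."""
    problems = []
    if not NAME_RE.match(name):
        problems.append("name: letters, numbers, and hyphens only")
    if not description.startswith("Use when"):
        problems.append("description: start with 'Use when...'")
    if re.search(r"\bI\b", description):
        problems.append("description: third person, not first")
    if len(name) + len(description) > 1024:
        problems.append("frontmatter: keep it under 1024 characters")
    return problems
```

An empty return value means the frontmatter clears these heuristics; anything else names the rule it broke.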
Critical for discovery: Future agents need to FIND your skill
Purpose: Agents read description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
Format: Start with "Use when..." to focus on triggering conditions, then explain what it does
Content:
| Quality | Example |
|---|---|
| BAD | "For async testing" (vague); "I can help..." (first person); "setTimeout/sleep" (tech-specific but skill isn't) |
| GOOD | "Use when tests have race conditions - replaces timeouts with condition polling" (problem + solution) |
Use words agents would search for:
Use active voice, verb-first:
- creating-skills, not skill-creation
- testing-skills-with-subagents, not subagent-skill-testing

Problem: getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
Target word counts by skill type:
Rationale: Complex skills need extensive rationalization prevention and complete templates. Don't artificially compress at the cost of effectiveness.
Techniques: Reference --help instead of documenting flags. Cross-reference other skills instead of repeating. Compress examples (42 words → 20 words). Don't repeat cross-referenced content.
Verify: wc -w skills/path/SKILL.md (check against word counts above)
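The `wc -w` check can be scripted across a whole skills tree. This sketch uses a single placeholder budget of 1,500 words, since the real target depends on skill type; substitute your own ceiling.

```python
from pathlib import Path

WORD_BUDGET = 1500  # placeholder; substitute the target for your skill type

def over_budget(skills_root: str, budget: int = WORD_BUDGET) -> list[tuple[str, int]]:
    """Return (path, word_count) for every SKILL.md exceeding the budget."""
    offenders = []
    for path in sorted(Path(skills_root).rglob("SKILL.md")):
        count = len(path.read_text(encoding="utf-8").split())
        if count > budget:
            offenders.append((str(path), count))
    return offenders
```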
Name by what you DO or core insight:
- ring:condition-based-waiting > async-test-helpers
- using-skills, not skill-usage
- flatten-with-flags > data-structure-refactoring
- ring:root-cause-tracing > debugging-techniques

Gerunds (-ing) work well for processes: creating-skills, testing-skills, debugging-with-logs.

When writing documentation that references other skills:
Use skill name only, with explicit requirement markers:
- GOOD: **REQUIRED SUB-SKILL:** Use ring:test-driven-development
- GOOD: **REQUIRED BACKGROUND:** You MUST understand systematic-debugging
- BAD: See skills/testing/test-driven-development (unclear if required)
- BAD: @skills/testing/test-driven-development/SKILL.md (force-loads, burns context)

Why no @ links: @ syntax force-loads files immediately, consuming 200k+ context before you need them.
Only for: Non-obvious decisions, process loops, "A vs B" choices. Never for: Reference (→tables), code (→blocks), linear steps (→lists). See graphviz-conventions.dot for conventions.
One excellent example in most relevant language. Complete, well-commented WHY, real scenario, ready to adapt. Don't: multi-language, fill-in-blank templates, contrived examples.
| Type | Structure | When |
|---|---|---|
| Self-Contained | skill/SKILL.md only | All content fits inline |
| With Tool | SKILL.md + example.ts | Reusable code, not narrative |
| Heavy Reference | SKILL.md + *.md refs + scripts/ | Reference >100 lines |
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.
No exceptions:
REQUIRED BACKGROUND: The ring:test-driven-development skill explains why this matters. Same principles apply to documentation.
| Skill Type | Examples | Test With | Success Criteria |
|---|---|---|---|
| Discipline (rules) | TDD, verification | Pressure scenarios (time + sunk cost + exhaustion), academic questions | Agent follows rule under maximum pressure |
| Technique (how-to) | condition-based-waiting, root-cause-tracing | Application + variation + gap testing | Agent applies technique to new scenario |
| Pattern (mental model) | reducing-complexity | Recognition + application + counter-examples | Agent identifies when/how to apply |
| Reference (docs/APIs) | API docs, command refs | Retrieval + application + gap testing | Agent finds and applies info correctly |
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps, unclear sections. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. 15 min testing saves hours. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "Academic review is enough" | Reading ≠ using. Test application scenarios. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
All of these mean: Test before deploying. No exceptions.
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
Psychology note: Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
Don't just state rule - forbid specific workarounds:
- "Write code before test? Delete it." Delete it. Start over.
- Add an explicit "No exceptions" list: don't keep as reference, don't adapt, don't look, delete means delete.

Add foundational principle early:
**Violating the letter of the rules is violating the spirit of the rules.**
This cuts off entire class of "I'm following the spirit" rationalizations.
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
| Excuse | Reality |
|--------|---------|
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
Make it easy for agents to self-check when rationalizing:
## Red Flags - STOP and Start Over
- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."
**All of these mean: Delete code. Start over with TDD.**
Add to description: symptoms of when you're ABOUT to violate the rule:
description: use when implementing any feature or bugfix, before writing implementation code
| Phase | Action |
|---|---|
| RED | Run pressure scenario WITHOUT skill → document choices/rationalizations verbatim |
| GREEN | Write skill addressing specific failures → verify agent complies |
| REFACTOR | Find new rationalizations → add counters → re-test until bulletproof |
REQUIRED SUB-SKILL: Use testing-skills-with-subagents for pressure scenarios, pressure types, hole-plugging, meta-testing.
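The RED-GREEN-REFACTOR table above can be sketched as a driver loop. Every name here is hypothetical: `run_pressure_scenario` stands in for whatever subagent harness you use (the real mechanics live in testing-skills-with-subagents), and it is assumed to return a `(complied, rationalizations)` pair.

```python
def harden_skill(scenarios, write_skill, revise_skill, run_pressure_scenario):
    """Sketch of RED-GREEN-REFACTOR for a skill document.

    `run_pressure_scenario(scenario, skill)` is a stand-in for your subagent
    harness; it is assumed to return (complied, rationalizations).
    """
    # RED: every scenario must fail without the skill, or it tests nothing
    baseline_failures = []
    for scenario in scenarios:
        complied, rationalizations = run_pressure_scenario(scenario, skill=None)
        assert not complied, "baseline passed -- scenario exerts no real pressure"
        baseline_failures.extend(rationalizations)

    # GREEN: write the skill against the documented failures
    skill = write_skill(baseline_failures)

    # REFACTOR: plug new rationalizations until every scenario passes
    while True:
        new_rationalizations = []
        for scenario in scenarios:
            complied, rationalizations = run_pressure_scenario(scenario, skill)
            if not complied:
                new_rationalizations.extend(rationalizations)
        if not new_rationalizations:
            return skill
        skill = revise_skill(skill, new_rationalizations)
```

The `assert` in the RED phase encodes the Iron Law: if the baseline already passes, the scenario proves nothing about the skill.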
| Pattern | Example | Why Bad |
|---|---|---|
| Narrative | "In session 2025-10-03, we found..." | Too specific, not reusable |
| Multi-language | example-js.js, example-py.py | Mediocre quality, maintenance burden |
| Code in flowcharts | step1 [label="import fs"] | Can't copy-paste, hard to read |
| Generic labels | helper1, step3, pattern4 | Labels need semantic meaning |
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT:
The deployment checklist below is MANDATORY for EACH skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
Use TodoWrite for each phase.
| Phase | Requirements |
|---|---|
| RED | 3+ pressure scenarios, run WITHOUT skill, document rationalizations verbatim |
| GREEN | Name (letters/numbers/hyphens), YAML frontmatter (<1024 chars), description starts "Use when...", third person, keywords, address baseline failures, one excellent example, verify compliance |
| REFACTOR | New rationalizations → add counters, build rationalization table, create red flags, re-test |
| Quality | Flowchart only if non-obvious, quick ref table, common mistakes, no narrative |
| Deploy | Commit and push, consider contributing PR |
How future agents find your skill:
Optimize for this flow - put searchable terms early and often.
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.
STOP and report if:
| Decision Type | Blocker Condition | Required Action |
|---|---|---|
| Missing baseline test | No failing test exists before writing skill content | STOP and run baseline scenario first |
| Untested skill deployment | Skill created without pressure scenario testing | STOP and delete skill - start over with TDD |
| Missing rationalization coverage | Skill enforces rules but lacks anti-rationalization table | STOP and add rationalization prevention |
| No red-green verification | Wrote skill without watching agent fail then pass | STOP and revert - follow RED-GREEN-REFACTOR |
The following requirements CANNOT be waived:
| Severity | Condition | Required Action |
|---|---|---|
| CRITICAL | Skill deployed without any testing | MUST delete skill and start over with TDD |
| CRITICAL | Discipline skill lacks anti-rationalization table | MUST add table before deployment |
| HIGH | Skill written before baseline test | MUST revert and run baseline first |
| HIGH | No pressure scenario tested | MUST test with subagent before deployment |
| MEDIUM | Skill missing ASO optimization | Should add searchable terms and improve description |
| LOW | Minor structure improvements needed | Fix in next iteration |
| User Says | Your Response |
|---|---|
| "Just write the skill quickly, we can test it later" | "CANNOT write skill before baseline test - TDD applies to documentation too" |
| "This skill is simple, no need for testing" | "Simple skills still need verification - 15 minutes testing saves hours of debugging" |
| "Skip the rationalization table, it's obvious" | "Obvious to me ≠ obvious to agents - MUST include rationalization prevention" |
| "We're in a hurry, deploy now and iterate" | "CANNOT deploy untested skill - will run pressure scenarios first" |
| "The skill is just documentation, not code" | "Documentation IS code for agent behavior - same TDD discipline applies" |
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "I know what the agent needs, no baseline test required" | Assumptions about agent behavior are unreliable without evidence. | MUST run baseline scenario and document actual agent behavior. |
| "The skill is straightforward, testing is overhead" | Simple-looking skills often have subtle failure modes under pressure. | MUST test with pressure scenarios before deployment. |
| "I'll add the rationalization table after shipping" | After = never. Agents will find loopholes immediately. | MUST include anti-rationalization table before deployment. |
| "One test pass is enough verification" | Single pass misses edge cases and pressure-induced failures. | MUST run multiple scenarios including adversarial pressure. |
| "Existing skills don't follow TDD, why should mine?" | Legacy skills are not the standard. New skills MUST follow TDD. | MUST follow RED-GREEN-REFACTOR regardless of existing skills. |