Use when creating or editing rules/skills in .claude/rules/, whether context-specific skills (alwaysApply: false) or always-merged rules (alwaysApply: true) - applies TDD by identifying failure patterns (RED), writing rule/skill (GREEN), then closing loopholes (REFACTOR). Supports globs for file patterns. NEVER use "skill-creator" skill.
Install:

```
/plugin marketplace add udecode/dotai
/plugin install skills@dotai
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Writing rules (skills + always-apply rules) IS Test-Driven Development applied to process documentation.
Rules are maintained in .claude/rules/ as MDC files with frontmatter. Ruler processes rules based on alwaysApply:
- `alwaysApply: false` (or omitted) → generates `.claude/skills/` (context-loaded)
- `alwaysApply: true` → merged into AGENTS.md (always present)

How it works:

- `.mdc` file in `.claude/rules/` with frontmatter (`alwaysApply: true/false`)
- Run `npx skiller@latest apply` after ANY rule creation or update
- `alwaysApply: false` → generates `.claude/skills/` (context-loaded by Claude Code); `alwaysApply: true` → merges into AGENTS.md (always present)
- `globs` for file pattern matching

You identify common failure patterns (baseline behavior), write the skill (documentation) addressing those patterns, verify through application scenarios, then refactor (close loopholes).
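To make the pipeline concrete, here's a minimal end-to-end sketch. The rule name, description, and globs value are hypothetical, and the exact globs syntax may vary by ruler version:

```bash
# Hypothetical example: create a context-loaded rule, then regenerate skills
cat > .claude/rules/api-error-shapes.mdc <<'EOF'
---
name: api-error-shapes
description: Use when editing API route handlers - enforces the shared error-response shape
alwaysApply: false
globs: src/api/**/*.ts
---
# API Error Shapes
...
EOF

# Regenerate .claude/skills/ from .claude/rules/
npx skiller@latest apply
```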
Core principle: If you didn't identify what agents naturally do wrong, you don't know if the skill prevents the right failures.
REQUIRED BACKGROUND: You MUST understand test-driven-development before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
Official guidance: For Anthropic's official skill authoring best practices, see anthropic-best-practices.md. This document provides additional patterns and guidelines that complement the TDD-focused approach in this skill.
Complete worked example: See examples/CLAUDE_MD_TESTING.md for a full verification campaign testing CLAUDE.md documentation variants.
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
| TDD Concept | Skill Creation |
|---|---|
| Test case | Anticipated failure pattern from experience |
| Production code | Skill document (.mdc file in .claude/rules/) |
| Test fails (RED) | Identify common mistakes without skill |
| Test passes (GREEN) | Skill addresses those specific mistakes |
| Refactor | Close loopholes while maintaining clarity |
| Write test first | Identify failure patterns BEFORE writing skill |
| Watch it fail | Document exact rationalizations from experience |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify skill clarity through application |
| Refactor cycle | Find new rationalizations → plug → re-verify |
The entire skill creation process follows RED-GREEN-REFACTOR.
Create when:
Don't create for:
Skill types:

- **Technique**: Concrete method with steps to follow (condition-based-waiting, root-cause-tracing)
- **Pattern**: Way of thinking about problems (flatten-with-flags, test-invariants)
- **Reference**: API docs, syntax guides, tool documentation (office docs)
Single-file skills (default):

```
.claude/rules/
  skill-name.mdc            # MDC file with frontmatter
```

Multi-file skills (only if >1 file needed):

```
.claude/rules/
  skill-name/
    skill-name.mdc          # Main skill (same basename as folder)
    supporting-file.*       # Additional files
```
Ruler auto-generates from rules: running `npx skiller@latest apply` creates `.claude/skills/` from `.claude/rules/`.
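For instance, a typical edit-regenerate loop (the listed output is illustrative):

```bash
# After any rule change, regenerate and inspect the output
npx skiller@latest apply
ls .claude/skills/
# e.g. api-error-shapes/  condition-based-waiting/  ...
```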
When to use folders:
Single .mdc file when:
MDC format (.mdc files) with frontmatter:
Frontmatter fields:
- `name`: Skill identifier (letters, numbers, hyphens only)
- `description`: Discovery text (max 1024 chars total for frontmatter)
- `alwaysApply`: Must be `false` or omitted (ruler only generates skills from non-always rules)

Template:

```markdown
---
name: skill-name-with-hyphens
description: Use when [specific triggering conditions and symptoms] - [what the skill does and how it helps, written in third person]
alwaysApply: false
---
# Skill Name
## Overview
What is this? Core principle in 1-2 sentences.
## When to Use
[Small inline flowchart IF decision non-obvious]
Bullet list with SYMPTOMS and use cases
When NOT to use
## Core Pattern (for techniques/patterns)
Before/after code comparison
## Quick Reference
Table or bullets for scanning common operations
## Implementation
Inline code for simple patterns
Link to file for heavy reference or reusable tools
## Common Mistakes
What goes wrong + fixes
## Real-World Impact (optional)
Concrete results
```
Critical for discovery: Future Claude needs to FIND your skill
Purpose: Claude reads description to decide which skills to load for a given task. Make it answer: "Should I read this skill right now?"
Format: Start with "Use when..." to focus on triggering conditions, then explain what it does
Content:
```yaml
# ❌ BAD: Too abstract, vague, doesn't include when to use
description: For async testing

# ❌ BAD: First person
description: I can help you with async tests when they're flaky

# ❌ BAD: Mentions technology but skill isn't specific to it
description: Use when tests use setTimeout/sleep and are flaky

# ✅ GOOD: Starts with "Use when", describes problem, then what it does
description: Use when tests have race conditions, timing dependencies, or pass/fail inconsistently - replaces arbitrary timeouts with condition polling for reliable async tests

# ✅ GOOD: Technology-specific skill with explicit trigger
description: Use when using React Router and handling authentication redirects - provides patterns for protected routes and auth state management
```
Use words Claude would search for:
Use active voice, verb-first:

- `creating-skills` not `skill-creation`
- `writing-rules` not `rule-writing`

Problem: getting-started and frequently-referenced skills load into EVERY conversation. Every token counts.
Target word counts:

- getting-started workflows: aim for under 150 words each
- Other frequently-loaded skills: aim for under 200 words total

Techniques:
Move details to tool help:
```markdown
# ❌ BAD: Document all flags in SKILL.md
search-conversations supports --text, --both, --after DATE, --before DATE, --limit N

# ✅ GOOD: Reference --help
search-conversations supports multiple modes and filters. Run --help for details.
```
Use cross-references:
```markdown
# ❌ BAD: Repeat workflow details
When implementing feature, follow these 20 steps...
[20 lines of repeated instructions from another skill]

# ✅ GOOD: Reference other skill
For implementation workflow, REQUIRED: Use [other-skill-name] for complete process.
```
Compress examples:
```markdown
# ❌ BAD: Verbose example (42 words)
Your human partner: "How did we handle authentication errors in React Router before?"
You: I'll search for React Router authentication patterns in our codebase and documentation to find previous implementations.
[Detailed explanation of search process...]

# ✅ GOOD: Minimal example (15 words)
Partner: "How did we handle auth errors in React Router?"
You: [Search codebase → provide solution]
```
Eliminate redundancy:
Verification:
```bash
wc -w .claude/rules/skill-name.mdc
# getting-started workflows: aim for <150 each
# Other frequently-loaded: aim for <200 total
```
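To audit everything at once, a small helper (assuming single-file rules sit at the top level of `.claude/rules/`):

```bash
# Word counts for all top-level rules, largest first
wc -w .claude/rules/*.mdc | sort -rn | head
```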
Name by what you DO or core insight:
- `condition-based-waiting` > `async-test-helpers`
- `using-skills` not `skill-usage`
- `flatten-with-flags` > `data-structure-refactoring`
- `root-cause-tracing` > `debugging-techniques`

Gerunds (-ing) work well for processes:
`creating-skills`, `testing-skills`, `debugging-with-logs`

When writing documentation that references other skills:
Use skill name only, with explicit requirement markers:
✅ Good:

- **REQUIRED SUB-SKILL:** Use superpowers test-driven-development
- **REQUIRED BACKGROUND:** You MUST understand superpowers systematic-debugging

❌ Bad:

- See .claude/rules/test-driven-development.mdc (unclear if required)
- @.claude/rules/test-driven-development.mdc (force-loads, burns context)

Why no @ links: @ syntax force-loads files immediately, consuming 200k+ context before you need them.
```dot
digraph when_flowchart {
    "Need to show information?" [shape=diamond];
    "Decision where I might go wrong?" [shape=diamond];
    "Use markdown" [shape=box];
    "Small inline flowchart" [shape=box];

    "Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
    "Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
    "Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
Use flowcharts ONLY for:
Never use flowcharts for:
See @graphviz-conventions.dot for graphviz style rules.
One excellent example beats many mediocre ones
Choose most relevant language:
Good example:
Don't:
You're good at porting - one great example is enough.
```
.claude/rules/
  defense-in-depth.mdc              # Everything inline
```

When: All content fits, no heavy reference needed

```
.claude/rules/
  condition-based-waiting/
    condition-based-waiting.mdc     # Overview + patterns
    example.ts                      # Working helpers to adapt
```

When: Tool is reusable code, not just narrative

```
.claude/rules/
  pptx/
    pptx.mdc                        # Overview + workflows
    pptxgenjs.md                    # 600 lines API reference
    ooxml.md                        # 500 lines XML structure
    scripts/                        # Executable tools
```

When: Reference material too large for inline
Note: Ruler copies all files from the skill folder to `.claude/skills/` when the `.mdc` basename matches the folder name.
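A sketch of that behavior, with illustrative file names:

```bash
# Folder skill whose .mdc basename matches the folder name
ls .claude/rules/condition-based-waiting/
# condition-based-waiting.mdc  example.ts

npx skiller@latest apply

# Supporting files travel with the generated skill
ls .claude/skills/condition-based-waiting/
# generated skill file plus example.ts (exact generated name may differ)
```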
NO SKILL WITHOUT IDENTIFYING FAILURE PATTERNS FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before identifying what it prevents? Delete it. Start over. Edit skill without identifying new failure patterns? Same violation.
No exceptions.
REQUIRED BACKGROUND: The test-driven-development skill explains why this matters. Same principles apply to documentation.
Different skill types need different verification approaches:

**Discipline skills** (enforce rules under pressure):

- Verify with: pressure scenarios (see Testing section below)
- Success criteria: skill prevents violations under anticipated pressures

**Technique skills** (concrete methods):

- Examples: condition-based-waiting, root-cause-tracing, defensive-programming
- Verify with: application scenarios
- Success criteria: skill enables successful technique application

**Pattern skills** (ways of thinking):

- Examples: reducing-complexity, information-hiding concepts
- Verify with: recognition scenarios
- Success criteria: skill enables correct pattern recognition and application

**Reference skills** (documentation):

- Examples: API documentation, command references, library guides
- Verify with: retrieval scenarios
- Success criteria: skill enables finding and correctly applying information
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Verify it. |
| "It's just a reference" | References can have gaps, unclear sections. Verify information access. |
| "Verification is overkill" | Unverified skills have issues. Always. 15 min verification saves hours. |
| "I'll verify if problems arise" | Problems = agents can't use skill. Verify BEFORE deploying. |
| "Too tedious to verify" | Verification is less tedious than debugging bad skill in production. |
| "I'm confident it's good" | Overconfidence guarantees issues. Verify anyway. |
| "Academic review is enough" | Reading ≠ using. Verify through application. |
| "No time to verify" | Deploying unverified skill wastes more time fixing it later. |
All of these mean: Verify before deploying. No exceptions.
Skills that enforce discipline (like TDD) need to resist rationalization. Agents are smart and will find loopholes when under pressure.
Psychology note: Understanding WHY persuasion techniques work helps you apply them systematically. See persuasion-principles.md for research foundation (Cialdini, 2021; Meincke et al., 2025) on authority, commitment, scarcity, social proof, and unity principles.
After applying a skill and finding it unclear, ask yourself:

"I read the skill and still chose the wrong approach. How could that skill have been written differently to make it crystal clear what the correct choice was?"
Three possible answers:

1. "The skill WAS clear, I chose to ignore it" → compliance problem; add counters for that rationalization
2. "The skill should have said X" → add X
3. "I didn't see section Y" → make section Y more prominent
Signs of bulletproof skill:
Not bulletproof if:
Initial Version (Failed):
Scenario: 200 lines done, forgot rule, exhausted, dinner plans
Applied skill: Still chose wrong approach
Rationalization: "Already achieve same goals differently"
Iteration 1 - Add Counter:
Added section: "Why This Specific Approach Matters"
Re-applied: STILL chose wrong approach
New rationalization: "Spirit not letter"
Iteration 2 - Add Foundational Principle:
Added: "Violating letter is violating spirit"
Re-applied: Chose correct approach
Cited: New principle directly
Meta-verification: "Skill was clear, I should follow it"
Bulletproof achieved.
Don't just state the rule - forbid specific workarounds:

<Bad>

```markdown
Write code before test? Delete it.
```

</Bad>

<Good>

```markdown
Write code before test? Delete it. Start over.

No exceptions:
- Don't keep it as "reference"
- Don't "adapt" it while writing
- Don't look at it
- Delete means delete
```

</Good>

### Address "Spirit vs Letter" Arguments

Add foundational principle early:

```markdown
**Violating the letter of the rules is violating the spirit of the rules.**
```

This cuts off an entire class of "I'm following the spirit" rationalizations.
Capture rationalizations from baseline testing (see Testing section below). Every excuse agents make goes in the table:
| Excuse | Reality |
| -------------------------------- | ----------------------------------------------------------------------- |
| "Too simple to test" | Simple code breaks. Test takes 30 seconds. |
| "I'll test after" | Tests passing immediately prove nothing. |
| "Tests after achieve same goals" | Tests-after = "what does this do?" Tests-first = "what should this do?" |
Make it easy for agents to self-check when rationalizing:
```markdown
## Red Flags - STOP and Start Over

- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "It's about spirit not ritual"
- "This is different because..."

**All of these mean: Delete code. Start over with TDD.**
```
Add to description: symptoms of when you're ABOUT to violate the rule:
```yaml
description: use when implementing any feature or bugfix, before writing implementation code
```
Follow the TDD cycle:
Before writing skill, identify what goes wrong without it:
Process:
Anticipate pressure scenarios - Think through realistic situations with multiple pressures:
Example pressure scenario (3+ combined pressures):
```
You spent 3 hours, 200 lines, manually tested. It works.
It's 6pm, dinner at 6:30pm. Code review tomorrow 9am.
Just realized you forgot [rule from skill].

Options:
A) Delete 200 lines, start fresh tomorrow following rule
B) Commit now, address rule tomorrow
C) Apply rule now (30 min delay)
```
Without skill: Agent likely chooses B or C
Rationalizations: "Already working", "Tests after achieve same goals", "Deleting is wasteful"
Pressure Types to Consider:
| Pressure | Example |
|---|---|
| Time | Emergency, deadline, deploy window closing |
| Sunk cost | Hours of work, "waste" to delete |
| Authority | Senior says skip it, manager overrides |
| Economic | Job, promotion, company survival at stake |
| Exhaustion | End of day, already tired, want to go home |
| Social | Looking dogmatic, seeming inflexible |
| Pragmatic | "Being pragmatic vs dogmatic" |
Best scenarios combine 3+ pressures.
Write skill that addresses those specific rationalizations. Don't add extra content for hypothetical cases.
Verify through application: Apply skill to real scenarios in this session. Does it make the correct choice obvious? Does it address the rationalizations you identified?
Found new rationalizations during application? Add explicit counter. Re-verify until bulletproof.
For each new rationalization, add:
```markdown
Rule before test? Delete it. Start over.

**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing
- Don't look at it
- Delete means delete
```
| Excuse | Reality |
| ------------------- | ---------------------------------------------------------------- |
| "Keep as reference" | You'll adapt it. That's violating the rule. Delete means delete. |
```markdown
## Red Flags - STOP

- "Keep as reference" or "adapt existing code"
- "I'm following the spirit not the letter"
```
```yaml
description: Use when [about to violate], when tempted to [rationalization]...
```
"In session 2025-10-03, we found empty projectDir caused..." Why bad: Too specific, not reusable
example-js.js, example-py.py, example-go.go Why bad: Mediocre quality, maintenance burden
step1 [label="import fs"];
step2 [label="read file"];
Why bad: Can't copy-paste, hard to read
helper1, helper2, step3, pattern4 Why bad: Labels should have semantic meaning
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT:
The deployment checklist below is MANDATORY for EACH skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
IMPORTANT: Use TodoWrite to create todos for EACH checklist item below.
RED Phase - Identify Failure Patterns:
GREEN Phase - Write Minimal Skill:
- Run `npx skiller@latest apply` to generate `.claude/skills/`

REFACTOR Phase - Close Loopholes:

- Run `npx skiller@latest apply` after ANY changes

Quality Checks:
Deployment:
How future Claude finds your skill:
Optimize for this flow - put searchable terms early and often.
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without identifying failure patterns first. Same cycle: RED (identify patterns) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.