Guide to effective Claude Code skill authoring using TDD methodology and persuasion principles.

Triggers: skill authoring, skill writing, new skill, TDD skills, skill creation, skill best practices, skill validation, skill deployment, skill compliance

Use when: creating new skills from scratch, improving existing skills with low compliance rates, learning skill authoring best practices, validating skill quality before deployment, understanding what makes skills effective

DO NOT use when:
- evaluating existing skills (use skills-eval instead)
- analyzing skill architecture (use modular-skills instead)
- writing general documentation for humans

YOU MUST write a failing test before writing any skill. This is the Iron Law.
/plugin marketplace add athola/claude-night-market
/plugin install abstract@claude-night-market

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:

README.md
modules/anti-rationalization.md
modules/deployment-checklist.md
modules/description-writing.md
modules/graphviz-conventions.md
modules/persuasion-principles.md
modules/progressive-disclosure.md
modules/tdd-methodology.md
modules/testing-with-subagents.md
scripts/skill_validator.py

This skill teaches how to write effective Claude Code skills using Test-Driven Development (TDD) methodology, persuasion principles from compliance research, and official Anthropic best practices. It treats skill writing as process documentation requiring empirical validation rather than theoretical instruction writing.
Skills are not essays or documentation—they are behavioral interventions designed to change Claude's behavior in specific, measurable ways. Like software, they must be tested against real failure modes before deployment.
NO SKILL WITHOUT A FAILING TEST FIRST
Every skill must begin with documented evidence of Claude failing without it. This ensures you're solving real problems, not building imaginary solutions.
Technique skills
Purpose: Teach specific methods or approaches
Examples: TDD workflow, commit message style, API design patterns
Structure: Step-by-step procedures with decision points

Pattern skills
Purpose: Document recurring solutions to common problems
Examples: Error handling patterns, module organization, testing strategies
Structure: Problem-solution pairs with variations

Reference skills
Purpose: Provide quick lookup information
Examples: Command reference, configuration options, best practice checklists
Structure: Tables, lists, and indexed information
1. Document Baseline Failure (RED)
2. Write Minimal Skill (GREEN)
3. Bulletproof Against Rationalization (REFACTOR)
Required:
SKILL.md - Main skill file with YAML frontmatter

Optional (use only when needed):

modules/ - Supporting content for progressive disclosure
scripts/ - Executable tools and validators
README.md - Plugin-level overview (not skill-level)
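Put together, an illustrative on-disk layout looks like this (only SKILL.md is required; the module and script names are just examples taken from this plugin):

```
skill-name/
├── SKILL.md              # main instructions + YAML frontmatter
├── modules/              # progressive-disclosure content, loaded on demand
│   └── tdd-methodology.md
└── scripts/
    └── skill_validator.py
```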
Naming Convention: skill-name

Skill descriptions are critical for Claude's discovery process. They must be optimized for both semantic search and explicit triggering.
[What it does] + [When to use it] + [Key triggers]
Always:
Never:
Good:
description: Guides API design using RESTful principles and best practices. Use when designing new APIs, reviewing API proposals, or standardizing endpoint patterns. Covers resource modeling, HTTP method selection, and versioning strategies.
Bad:
description: This skill helps you design better APIs.
Goal: Establish empirical evidence that intervention is needed
Process:
Create 3+ pressure scenarios combining:
Run scenarios in fresh Claude instances WITHOUT the skill (see the runner sketch after this list)
Document failures verbatim:
## Baseline Scenario 1: Simple API endpoint
**Prompt**: "Quickly add a user registration endpoint"
**Claude Response** (actual, unedited):
[paste exact response]
**Failures Observed**:
- Skipped error handling
- No input validation
- Missing rate limiting
- Didn't consider security
Identify patterns across failures
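One way to capture these baselines reproducibly is to script the runs. The sketch below assumes the claude CLI's headless -p mode is available; the scenario prompts and the baselines/ output directory are placeholders for your own.

```python
#!/usr/bin/env python3
"""Illustrative baseline runner (assumes the `claude` CLI's headless -p mode)."""
import subprocess
from pathlib import Path

# Hypothetical pressure scenarios; replace with your own prompts.
scenarios = {
    "simple-endpoint": "Quickly add a user registration endpoint",
    "time-pressure": "We ship in an hour - just get the API working",
}

out_dir = Path("baselines")
out_dir.mkdir(exist_ok=True)

for name, prompt in scenarios.items():
    # Each invocation is a fresh instance, so no prior context leaks into the baseline.
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True,
        text=True,
        check=False,
    )
    # Save the verbatim response for the RED-phase failure log.
    (out_dir / f"{name}.md").write_text(
        f"## Baseline Scenario: {name}\n\n**Prompt**: {prompt}\n\n"
        f"**Claude Response** (actual, unedited):\n\n{result.stdout}\n",
        encoding="utf-8",
    )
```

Re-running the same script after the GREEN phase, with the skill installed, lets you diff transcripts and confirm the documented failures actually disappear.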
Goal: Create smallest intervention that addresses documented failures
Process:
Write SKILL.md with required frontmatter:
---
name: skill-name
description: [optimized description]
version: 1.0.0
category: [appropriate category]
tags: [discovery, keywords, here]
dependencies: []
estimated_tokens: [realistic estimate]
---
Add content that directly counters baseline failures
Include ONE example showing correct behavior
Test with the skill present: re-run the baseline scenarios and confirm the documented failures no longer occur
Goal: Eliminate Claude's ability to explain away requirements
Process:
Run new pressure scenarios with skill active
Document rationalizations:
**Scenario**: Add authentication to API
**Claude's Rationalization**:
"Since this is a simple internal API, basic authentication
is sufficient for now. We can add OAuth later if needed."
**What Should Happen**:
Security requirements apply regardless of API scope.
Internal APIs need proper authentication.
Add explicit counters:
Iterate until rationalizations stop
Claude is sophisticated at finding ways to bypass requirements while appearing compliant. Skills must explicitly counter common rationalization patterns.
| Excuse | Counter |
|---|---|
| "This is just a simple task" | Complexity doesn't exempt you from core practices. Use skills anyway. |
| "I remember the key points" | Skills evolve. Always run current version. |
| "Spirit vs letter of the law" | Foundational principles come first. No shortcuts. |
| "User just wants quick answer" | Quality and speed aren't exclusive. Both matter. |
| "Context is different here" | Skills include context handling. Follow the process. |
| "I'll add it in next iteration" | Skills apply to current work. No deferral. |
Skills should include explicit red flag lists:
## Red Flags That You're Rationalizing
Stop immediately if you think:
- "This is too simple for the full process"
- "I already know this, no need to review"
- "The user wouldn't want me to do all this"
- "We can skip this step just this once"
- "The principle doesn't apply here because..."
When exceptions truly exist, document them explicitly:
## When NOT to Use This Skill
**Don't use when:**
- User explicitly requests prototype/draft quality
- Exploring multiple approaches quickly (note for follow-up)
- Working in non-production environment (document shortcut)
**Still use for:**
- "Quick" production changes
- "Simple" fixes to production code
- Internal tools and scripts
For detailed implementation guidance:
modules/tdd-methodology.md for RED-GREEN-REFACTOR cycle details
modules/persuasion-principles.md for compliance research and techniques
modules/description-writing.md for discovery optimization
modules/progressive-disclosure.md for file structure patterns
modules/anti-rationalization.md for bulletproofing techniques
modules/graphviz-conventions.md for process diagram standards
modules/testing-with-subagents.md for pressure testing methodology
modules/deployment-checklist.md for final validation

Before deploying a new skill:
python scripts/skill_validator.py
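The checks below are a minimal sketch of the kind of validation such a script can perform, based on the requirements in this guide (required frontmatter fields, a main file under 500 lines); it is not the actual scripts/skill_validator.py, and the field list is an assumed subset.

```python
#!/usr/bin/env python3
"""Illustrative SKILL.md checks (not the bundled skill_validator.py)."""
import re
import sys
from pathlib import Path

# Assumed subset of required frontmatter fields, taken from this guide's example.
REQUIRED_FIELDS = {"name", "description", "version", "category", "tags"}
MAX_LINES = 500  # keep the main file small; push detail into modules/


def validate(skill_path: Path) -> int:
    text = skill_path.read_text(encoding="utf-8")
    errors, warnings = [], []

    # Frontmatter must be the first block, delimited by --- lines.
    match = re.match(r"^---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        errors.append("missing YAML frontmatter")
    else:
        keys = {
            line.split(":", 1)[0].strip()
            for line in match.group(1).splitlines()
            if ":" in line
        }
        missing = REQUIRED_FIELDS - keys
        if missing:
            errors.append(f"missing frontmatter fields: {sorted(missing)}")

    if len(text.splitlines()) > MAX_LINES:
        warnings.append(f"SKILL.md exceeds {MAX_LINES} lines; use progressive disclosure")

    for msg in errors:
        print(f"ERROR: {msg}")
    for msg in warnings:
        print(f"WARNING: {msg}")
    return 2 if errors else 1 if warnings else 0


if __name__ == "__main__":
    target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("SKILL.md")
    sys.exit(validate(target))
```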
Exit codes:
0 = Success, ready to deploy
1 = Warnings, should fix but can deploy
2 = Errors, must fix before deploying

Problem: Creating skills based on what "should" work
Solution: Always start with documented failures (RED phase)
Problem: "Helps with testing" - not discoverable or actionable
Solution: "Guides TDD workflow with RED-GREEN-REFACTOR cycle. Use when writing new tests, refactoring existing code, or ensuring test coverage."
Problem: Everything in SKILL.md, 1000+ lines
Solution: Keep main file under 500 lines, use progressive disclosure with modules
Problem: Claude finds creative ways to bypass requirements
Solution: Add explicit exception tables, red flags, and commitment statements
Problem: Examples show ideal scenarios, not real challenges
Solution: Use actual failure cases from RED phase as examples
Effective skill authoring requires:
- Documented baseline failures before any skill is written (RED)
- The smallest intervention that addresses those failures (GREEN)
- Explicit counters to rationalization, tested under pressure (REFACTOR)
- Validation with scripts/skill_validator.py before deployment
Remember: Skills are behavioral interventions, not documentation. If you haven't tested it against real failure modes, you haven't validated that it works.
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.