Guides ArcForge maintainers in creating, editing, and verifying skills using TDD cycle: baseline agent failures, document fixes, confirm compliance.
This is a **project-level meta skill** for maintaining ArcForge's own composable skill system. It is not a general promoted/user-facing core skill for ordinary product work.
Writing skills IS Test-Driven Development applied to process documentation.
You write test cases (pressure scenarios with subagents), watch them fail (baseline behavior), write the skill (documentation), watch tests pass (agents comply), and refactor (close loopholes).
Core principle: If you didn't watch an agent fail without the skill, you don't know if the skill teaches the right thing.
REQUIRED BACKGROUND: You MUST understand arc-tdd before using this skill. That skill defines the fundamental RED-GREEN-REFACTOR cycle. This skill adapts TDD to documentation.
A skill is a reference guide for proven techniques, patterns, or tools. Skills help future Claude instances find and apply effective approaches.
Skills are: Reusable techniques, patterns, tools, reference guides
Skills are NOT: Narratives about how you solved a problem once
| TDD Concept | Skill Creation |
|---|---|
| Test case | Pressure scenario with subagent |
| Production code | Skill document (SKILL.md) |
| Test fails (RED) | Agent violates rule without skill (baseline) |
| Test passes (GREEN) | Agent complies with skill present |
| Refactor | Close loopholes while maintaining compliance |
| Write test first | Run baseline scenario BEFORE writing skill |
| Watch it fail | Document exact rationalizations agent uses |
| Minimal code | Write skill addressing those specific violations |
| Watch it pass | Verify agent now complies |
| Refactor cycle | Find new rationalizations → plug → re-verify |
Use this skill for ArcForge maintainer work: changing skills/, skill tests, pressure fixtures, evals, and skill distribution behavior.
Do not use it as a default workflow for non-ArcForge product implementation. For ordinary project work, route to the smallest useful product-facing skill instead.
Create when:
Don't create for:
Two orthogonal axes are useful when designing a skill — composition (how it gets triggered) and content (what it teaches). Pick deliberately on each.
| Type | Trigger Mechanism | Composition | Example |
|---|---|---|---|
| Workflow | Handoff from previous step | "After This Skill" section defines next step | arc-brainstorming → arc-writing-tasks |
| Discipline | Conditional — fires during ANY workflow when condition is met | Listed in arc-using routing table | arc-tdd, arc-verifying |
| Meta | Independent — user, maintainer, or project-level task invokes directly | No routing needed | arc-writing-skills, arc-evaluating |
When creating a new skill:
- Register discipline skills in arc-using's "Discipline Skills — Conditional Triggers" table, and keep the routing condition concrete (no global "always invoke" language; preserve harness/eval isolation).

Discovered through eval — don't repeat:
| Platform | Skills Directory |
|---|---|
| Claude Code | ~/.claude/skills/ |
| Codex | ~/.codex/skills/ |
| Cursor | ~/.cursor/skills/ |
| Gemini | ~/.gemini/skills/ |
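Distributing one skill across these directories can be sketched as a symlink loop. The skill name `arc-example` and the symlink approach are assumptions for illustration, not a documented installer:

```bash
# Sketch: link one built skill into each platform directory from the table.
# "arc-example" is a hypothetical skill name.
SRC="$PWD/skills/arc-example"
for dir in "$HOME/.claude/skills" "$HOME/.codex/skills" \
           "$HOME/.cursor/skills" "$HOME/.gemini/skills"; do
  mkdir -p "$dir"
  ln -sfn "$SRC" "$dir/arc-example"
done
echo "linked into 4 platform directories"
```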
```
skills/
  skill-name/
    SKILL.md           # Core logic and decisions (required)
    references/        # Detailed material, loaded on-demand
      patterns.md      # Detailed patterns, examples
      api.md           # API docs, syntax reference
    scripts/           # Executable utilities (run, not loaded)
    agents/            # Subagent templates
```
Flat namespace - all skills in one searchable namespace
What stays in SKILL.md: Iron law, decision logic, routing, red flags, checklists — anything the agent needs to make the right choice.
What moves to references/: Detailed examples, API docs, comprehensive syntax, lengthy tables, extended rationale. Reference from SKILL.md so the agent knows when to load them.
Keep inline: Principles, concepts, code patterns (< 50 lines)
arcforge ships as a plugin. At runtime the LLM works in a user's project — cwd is the user's project, NOT the plugin install. Any reference to plugin internal files from skill prose must be absolute, derived from ${ARCFORGE_ROOT} — never bare cwd-relative.
${ARCFORGE_ROOT} is set by the SessionStart hook (inject-skills) and points at the plugin install root.
| Reference target | Prefix | Example |
|---|---|---|
| Plugin shared library (${ARCFORGE_ROOT}/scripts/lib/, ${ARCFORGE_ROOT}/scripts/cli.js) | ${ARCFORGE_ROOT}/ | ${ARCFORGE_ROOT}/scripts/lib/print-schema.js |
| Skill's own files (skills/<name>/scripts/, references/) | ${SKILL_ROOT}/ | ${SKILL_ROOT}/scripts/planner.js |
| Plugin templates / agents referenced from a skill | ${ARCFORGE_ROOT}/ | ${ARCFORGE_ROOT}/templates/<name>.md |
| User's project files (not plugin) | (none — bare is correct) | specs/<spec-id>/spec.xml |
${SKILL_ROOT} is set via the skill loader header. Use this idiom at the top of any Bash block that needs it:
```bash
: "${SKILL_ROOT:=${ARCFORGE_ROOT:-}/skills/<your-skill-name>}"
```
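A minimal sketch of the idiom in context; the skill name and the echoed path are hypothetical:

```bash
# The loader normally sets SKILL_ROOT; this defaults it from ARCFORGE_ROOT when absent.
# "arc-example" is a hypothetical skill name.
: "${SKILL_ROOT:=${ARCFORGE_ROOT:-}/skills/arc-example}"
echo "skill root: ${SKILL_ROOT}"
```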
```bash
# WRONG — cwd-relative require breaks when cwd ≠ plugin root
node -e "require('./scripts/lib/sdd-utils')"

# WRONG — bare prose path; LLM follows literally and fails in user's cwd
"Read scripts/lib/sdd-schemas/spec.md for the schema."

# CORRECT — direct read with prefix (preferred for LLM consumption)
"Read ${ARCFORGE_ROOT}/scripts/lib/sdd-schemas/spec.md for the schema."

# CORRECT — Bash invocation with prefix
node "${ARCFORGE_ROOT}/scripts/lib/print-schema.js" spec --markdown
```
CI lint scans skills/**/SKILL.md, skills/**/references/**/*.md, templates/**/*.md, and agents/**/*.md for ${ARCFORGE_ROOT}/scripts/lib/ discipline: any plugin shared-library reference must use that exact prefix. Failures block merge. The only exception is this fenced Anti-patterns teaching block. Skill-local relative paths (using ${SKILL_ROOT}/, or cd ${SKILL_ROOT} then bare) are author's judgment — not enforced.
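The lint idea can be approximated with grep. This is a simplified sketch with made-up fixture paths, not the actual CI implementation, which is stricter:

```bash
# Simplified sketch of the path-discipline lint.
# Fixture under /tmp is invented for demonstration.
mkdir -p /tmp/lint-demo/skills/arc-example
printf 'Read scripts/lib/sdd-schemas/spec.md for the schema.\n' \
  > /tmp/lint-demo/skills/arc-example/SKILL.md

# Flag scripts/lib references that lack the ${ARCFORGE_ROOT} prefix.
grep -rn 'scripts/lib' /tmp/lint-demo/skills \
  | grep -v 'ARCFORGE_ROOT' \
  && echo "LINT FAIL: bare scripts/lib reference found"
```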
Frontmatter (YAML):
Required fields:
- name: Letters, numbers, and hyphens only
- description: Third-person, describes ONLY when to use (NOT what it does)
Combined name + description must stay under 1024 characters.
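The budget is easy to check mechanically. A sketch; the name and description values are invented examples:

```bash
# Check the combined name+description character budget.
name="arc-example"
description="Use when maintaining ArcForge skills or their tests"
total=$(( ${#name} + ${#description} ))
if [ "$total" -le 1024 ]; then
  echo "ok: $total chars"
else
  echo "over budget: $total chars"
fi
```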
Optional fields (use only when they earn their place):
| Field | Use when |
|---|---|
argument-hint | Skill takes CLI-style arguments and you want them surfaced in the slash-command palette (e.g., arc-maintaining-obsidian). Pure UX — no triggering effect. |
allowed-tools | You want to constrain which tools the skill may use. Encouraged for skills that don't need full tool access — defense in depth at the skill layer rather than relying on the harness default. |
disable-model-invocation | Skill must be user-invocable only, never auto-triggered by the model. |
user-invocable | Skill should appear in the slash command list. |
Avoid model, context, agent, hooks in skill frontmatter unless you have a concrete reason — those couple the skill to runtime/harness concerns better managed at the plugin or settings level.
```markdown
---
name: Skill-Name-With-Hyphens
description: Use when [specific triggering conditions and symptoms]
---

# Skill Name

## Overview
Core principle in 1-2 sentences.

## When to Use
Bullet list with SYMPTOMS and use cases. When NOT to use.

## Core Pattern
Before/after code comparison (for techniques/patterns)

## Quick Reference
Table or bullets for scanning

## Common Mistakes
What goes wrong + fixes
```
Critical for discovery: Future Claude needs to FIND your skill
CRITICAL: Description = When to Use, NOT What the Skill Does
The description should ONLY describe triggering conditions. Do NOT summarize the skill's process or workflow.
Why this matters: Testing revealed that when a description summarizes the skill's workflow, Claude may follow the description instead of reading the full skill content.
The trap: Descriptions that summarize workflow create a shortcut Claude will take. The skill body becomes documentation Claude skips.
```yaml
# BAD: Summarizes workflow - Claude may follow this instead of reading skill
description: Use for TDD - write test first, watch it fail, write minimal code

# GOOD: Triggering conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
Use words Claude would search for:
| Rule | Details |
|---|---|
| Prefix | arc- required |
| Case | kebab-case |
| Voice | Verb-first, active |
| Form | Gerund (-ing) for process skills |
| Structure | arc-<action>[-<object>[-<scope>]] |
Patterns:
| Pattern | When | Example |
|---|---|---|
| arc-<gerund> | Single action | arc-brainstorming, arc-debugging |
| arc-<gerund>-<object> | Action + target | arc-writing-tasks, arc-requesting-review |
| arc-using-<tool> | Tool usage | arc-using-worktrees |
| arc-<acronym> | Well-known abbreviation | arc-tdd |
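These rules compress into a small validation sketch. The regex is an approximation of the conventions, not an official check, and it cannot catch semantic issues like non-gerund verbs:

```bash
# Validate a candidate skill name: arc- prefix, kebab-case, lowercase.
valid_name() {
  printf '%s\n' "$1" | grep -Eq '^arc-[a-z0-9]+(-[a-z0-9]+)*$'
}

valid_name arc-writing-tasks && echo "arc-writing-tasks: ok"
valid_name arc_debug         || echo "arc_debug: rejected"
```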
Avoid:
- arc-coordinator → arc-coordinating
- arc-debug → arc-debugging
- arc-task-writer → arc-writing-tasks

Skills use progressive disclosure — not everything loads at once:
| Level | What loads | When | Token cost |
|---|---|---|---|
| 1. Description | name + description frontmatter | Always in context | ~100 tokens per skill |
| 2. SKILL.md body | Full markdown content | On skill invocation | 500–4,000 tokens |
| 3. References | Files in references/, agents/, etc. | On-demand when agent reads them | Zero until needed |
Keep SKILL.md lean and high-signal. Move detail to references.
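The always-resident cost of level 1 is simple arithmetic; for example, using the table's per-skill estimate and a hypothetical 40-skill library:

```bash
# Back-of-envelope level-1 cost: descriptions are always in context.
per_description=100   # tokens per name+description (table's estimate)
num_skills=40         # hypothetical library size
echo "always-resident: $(( per_description * num_skills )) tokens"
# prints: always-resident: 4000 tokens
```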
| Tier | Limit | Use for |
|---|---|---|
| Lean | <500w | Simple triggers, thin wrappers |
| Standard | <1000w | Most workflow skills |
| Comprehensive | <1800w | Complex multi-path skills |
| Meta | <2500w | Self-referential teaching skills |
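The tiers can be checked with a small helper. A sketch: the thresholds are the table's, and the `wc` usage in the trailing comment is a suggestion, not a mandated tool:

```bash
# Map a SKILL.md word count to the size tiers above.
tier() {
  if   [ "$1" -lt 500 ];  then echo "Lean"
  elif [ "$1" -lt 1000 ]; then echo "Standard"
  elif [ "$1" -lt 1800 ]; then echo "Comprehensive"
  elif [ "$1" -lt 2500 ]; then echo "Meta"
  else echo "Over budget: move detail into references/"
  fi
}

tier 750   # in practice: tier "$(wc -w < SKILL.md)"
```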
Split when any of these are true:
Point to reference files from SKILL.md with clear loading guidance:
**Testing methodology:** See `testing-skills-with-subagents.md` for the complete methodology.
For large reference files (300+ lines), include a table of contents at the top so the agent can navigate efficiently.
Use explicit requirement markers:
**REQUIRED SUB-SKILL:** Use arc-debugging when encountering failures
**REQUIRED BACKGROUND:** You MUST understand arc-using first
Never use at-sign file syntax - it force-loads files immediately, consuming context before needed.
```dot
digraph when_flowchart {
    "Need to show information?" [shape=diamond];
    "Decision where I might go wrong?" [shape=diamond];
    "Use markdown" [shape=box];
    "Small inline flowchart" [shape=box];

    "Need to show information?" -> "Decision where I might go wrong?" [label="yes"];
    "Decision where I might go wrong?" -> "Small inline flowchart" [label="yes"];
    "Decision where I might go wrong?" -> "Use markdown" [label="no"];
}
```
Use flowcharts ONLY for:
Never use flowcharts for:
See graphviz-conventions.dot for graphviz style rules.
Visualizing for your human partner: Use render-graphs.js to render a skill's flowcharts to SVG:
```bash
./render-graphs.js ../some-skill            # Each diagram separately
./render-graphs.js ../some-skill --combine  # All diagrams in one SVG
```
Description (good vs bad):
```yaml
# BAD: Summarizes workflow
description: Use for TDD - write tests first and refactor after

# GOOD: Trigger conditions only
description: Use when implementing any feature or bugfix, before writing implementation code
```
Structure (good vs bad):
BAD: Long narrative with no headings, no checklist, no red flags
GOOD: Overview → When to Use → Core Pattern → Common Mistakes → Checklist
NO SKILL WITHOUT A FAILING TEST FIRST
This applies to NEW skills AND EDITS to existing skills.
Write skill before testing? Delete it. Start over. Edit skill without testing? Same violation.
No exceptions:
Test with:
Success criteria: Agent follows rule under maximum pressure
Test with:
Success criteria: Agent successfully applies technique
Test with:
Success criteria: Agent correctly identifies when/how to apply pattern
Test with:
Success criteria: Agent finds and correctly applies reference
| Excuse | Reality |
|---|---|
| "Skill is obviously clear" | Clear to you ≠ clear to other agents. Test it. |
| "It's just a reference" | References can have gaps. Test retrieval. |
| "Testing is overkill" | Untested skills have issues. Always. |
| "I'll test if problems emerge" | Problems = agents can't use skill. Test BEFORE deploying. |
| "Too tedious to test" | Testing is less tedious than debugging bad skill. |
| "I'm confident it's good" | Overconfidence guarantees issues. Test anyway. |
| "No time to test" | Deploying untested skill wastes more time fixing it later. |
Don't just state the rule - forbid specific workarounds:
```markdown
# BAD
Write code before test? Delete it.

# GOOD
Write code before test? Delete it. Start over.

**No exceptions:**
- Don't keep it as "reference"
- Don't "adapt" it while writing tests
- Delete means delete

**Violating the letter of the rules is violating the spirit of the rules.**
```
This cuts off an entire class of "I'm following the spirit" rationalizations. Every excuse agents make goes in the table with a counter.
```markdown
## Red Flags - STOP and Start Over

- Code before test
- "I already manually tested it"
- "Tests after achieve the same purpose"
- "This is different because..."

**All of these mean: Delete. Start over.**
```
Run pressure scenario with subagent WITHOUT the skill. Document:
Write skill addressing those specific rationalizations. Don't add extra content for hypothetical cases.
Run same scenarios WITH skill. Agent should now comply.
Agent found new rationalization? Add explicit counter. Re-test until bulletproof.
Testing methodology: See testing-skills-with-subagents.md for the complete methodology, including pressure scenarios.
Structured evaluation: Use agents/ templates for structured grading (skill-grader.md), blind comparison (skill-comparator.md), pattern analysis (skill-analyzer.md), and description testing (description-tester.md). See references/eval-schemas.md for data formats.
- "In session 2025-10-03, we found..." Why bad: Too specific, not reusable
- example-js.js, example-py.py, example-go.go Why bad: Mediocre quality, maintenance burden
- Why bad: Can't copy-paste, hard to read
- helper1, helper2, step3 Why bad: Labels should have semantic meaning
After writing ANY skill, you MUST STOP and complete the deployment process.
Do NOT:
The deployment checklist below is MANDATORY for EACH skill.
Deploying untested skills = deploying untested code. It's a violation of quality standards.
IMPORTANT: Use TodoWrite to create todos for EACH checklist item.
RED Phase - Write Failing Test:
GREEN Phase - Write Minimal Skill:
- name + description (combined under 1024 chars); any optional fields used per the Frontmatter section

REFACTOR Phase - Close Loopholes:
Deployment:
This skill includes supporting files for comprehensive skill development:
Methodology:
- testing-skills-with-subagents.md - Complete testing methodology with pressure scenarios
- anthropic-best-practices.md - Official skill authoring guidance (conciseness, structure, evaluation)

Psychology:
- persuasion-principles.md - Research on persuasion techniques for skill design (authority, commitment, scarcity, social proof, unity)

Flowcharts:
- graphviz-conventions.dot - Style guide for graphviz flowcharts (node shapes, edge labels, naming patterns)
- render-graphs.js - Utility to render SKILL.md flowcharts to SVG

Evaluation Agents (subagent templates):
- agents/skill-grader.md - Grades behavioral compliance under pressure, extracts rationalizations
- agents/skill-comparator.md - Blind A/B comparison of skill versions for REFACTOR phase
- agents/skill-analyzer.md - Assertion discrimination, benchmark pattern analysis, pressure scenario quality
- agents/description-tester.md - Automated description trigger testing with train/test split

Evaluation Reference:
- references/eval-schemas.md - JSON schemas for evals, grading, benchmarks, and comparisons

Examples:
- examples/CLAUDE_MD_TESTING.md - Example of testing documentation variants with pressure scenarios

How future Claude finds your skill:
Optimize for this flow - put searchable terms early and often.
Creating skills IS TDD for process documentation.
Same Iron Law: No skill without failing test first. Same cycle: RED (baseline) → GREEN (write skill) → REFACTOR (close loopholes). Same benefits: Better quality, fewer surprises, bulletproof results.
If you follow TDD for code, follow it for skills. It's the same discipline applied to documentation.