Design and iterate Claude Code skills: description triggers activation, instructions shape behavior. Invoke whenever task involves any interaction with Claude Code skills — creating, evaluating, debugging, or understanding how they work.
`npx claudepluginhub xobotyi/cc-foundry`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- references/advanced-patterns.md
- references/creation.md
- references/evaluation.md
- references/iteration.md
- references/prompt-techniques.md
- references/spec.md
- references/troubleshooting.md

Skills are prompt templates that extend Claude with domain expertise.
They load on-demand: Claude sees only name and description at startup,
then loads full SKILL.md content when triggered.
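A sketch of what loads when (skill name and description are hypothetical):

```yaml
# Visible at startup — used only for trigger matching:
name: pdf-extraction
description: Extract text and tables from PDFs. Invoke for any PDF task.
# Everything below the frontmatter (the SKILL.md body) loads
# only after the skill triggers.
```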
<prerequisite>
Load the prompt-engineering skill before working on skills: `Skill(ai-helpers:prompt-engineering)`.
Skip only for trivial edits (typos, formatting).
</prerequisite>
| Situation | Reference | Contents |
|---|---|---|
| SKILL.md format, frontmatter rules, directory layout | spec.md | Frontmatter fields, name rules, string substitutions, progressive disclosure |
| Creating a skill from scratch | creation.md | Step-by-step process, scope sizing, description examples, archetype templates |
| Evaluating skill quality (review, audit) | evaluation.md | Scoring rubric (5 dimensions), testing protocol, common issues by score range |
| Skill not triggering, wrong output, refinement | iteration.md | Activation fixes, output fixes, restructuring, splitting guidance |
| Multi-file skills, scripts, subagents, hooks | advanced-patterns.md | Fork pattern, workflow skills, composable skills, permission scoping |
| Debugging activation failures, script errors | troubleshooting.md | Diagnostic steps for structure, activation, output, script, reference issues |
| Writing persuasive instructions, reasoning | prompt-techniques.md | XML tags, chain of thought, format control, instruction strengthening |
Read the relevant reference before proceeding.
The description determines when Claude activates your skill. It's the highest-leverage field — poor descriptions cause missed activations.
[What it does] + [When to invoke — broad domain claim with trigger examples]
Good — functional description + broad claim:
description: >-
Go language conventions, idioms, and toolchain. Invoke when task
involves any interaction with Go code — writing, reviewing,
refactoring, debugging, or understanding Go projects.
Good — what it does + when with trigger keywords:
description: >-
Design and iterate Claude Code skills: description triggers
activation, instructions shape behavior. Invoke whenever task
involves any interaction with Claude Code skills — creating,
evaluating, debugging, or understanding how they work.
Bad — vague, no trigger surface:
description: Helps with documents
Bad — slogan instead of functional description:
description: >-
Speed and simplicity over compatibility layers: Bun runtime
conventions, APIs, and toolchain.
Bad — narrow verb list instead of domain claim:
description: >-
Skills for Claude Code. Invoke when creating, editing, debugging,
or asking questions about skills.
SKILL.md must be behaviorally self-sufficient. An agent reading only SKILL.md — without loading any references — must be able to do the job correctly. References provide depth, not breadth.
This applies to all skill types, not just coding disciplines.
| Content Type | Location | Rationale |
|---|---|---|
| Behavioral rules | SKILL.md body | Agent must follow these during work — can't afford a missed reference read |
| Catalog/lookup content | references/ | Agent reads on-demand for specific lookups (API tables, comparison charts) |
| Situational content | references/ | Only needed in specific phases (migration guides, deployment patterns) |
| Voluminous structural content | references/ | Too large to inline, inherently lookup-oriented (schemas, 50+ entry tables) |
Behavioral rules are directives an agent must follow to do the work correctly. If an agent skipping a reference would produce wrong output, that content is behavioral and belongs in SKILL.md.
When a reference contains both rules and depth, use a two-resolution split: keep working-resolution rules in SKILL.md and move full-resolution depth (detailed rubrics, tables, edge cases) into the reference. The agent works correctly at working resolution; references let it zoom in.
Example: A quality assessment skill puts a 6-row checklist with criteria and weights in SKILL.md. The reference provides detailed 0-20 scoring rubrics for each criterion with examples at each level.
Example: A Node.js skill puts 17 module system rules in SKILL.md. The reference provides ESM/CJS comparison tables, file extension edge cases, and interop patterns.
When a skill has references, include a route table with a Contents column describing what type of depth the reference provides:
| Topic | Reference | Contents |
|-------|-----------|----------|
| Modules | `references/modules.md` | ESM/CJS comparison tables, file extension rules |
| Streams | `references/streams.md` | Stream types table, pipeline patterns, backpressure |
The Contents column tells the agent what's inside, enabling informed read decisions. Without it, agents either over-read (wasting context) or skip (missing content).
Place rules in the body section where they're contextually relevant. State
them as positive directives: "Use pipeline() for stream composition" — not
"Don't use .pipe()" in a separate anti-pattern table. Separate anti-pattern
tables duplicate body content and waste tokens.
Keep an anti-pattern table only when the "don't" side is genuinely non-obvious from the positive rule (e.g., common migration pitfalls in a version upgrade skill where users carry muscle memory from the old version).
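For example, a hypothetical v1-to-v2 migration skill might keep a small table for habits the positive rules alone don't obviously forbid:

| v1 habit (don't) | v2 rule (do) |
|---|---|
| `client.connect(url, callback)` | `await client.connect(url)` |
| Global config singleton | Pass config per call |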
skill-name/
├── SKILL.md # Required: complete behavioral specification
└── references/ # Optional: deepening material
└── *.md
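The layout above can be scaffolded with a short shell sketch (skill name and file names are placeholders):

```shell
# Create the minimal skill layout: SKILL.md plus an optional references/ dir.
mkdir -p my-skill/references

# Write a stub SKILL.md with the required frontmatter fields.
cat > my-skill/SKILL.md <<'EOF'
---
name: my-skill
description: >-
  [What it does]. Invoke whenever task involves [domain].
---
# My Skill
EOF

# Optional deepening material lives alongside it.
touch my-skill/references/notes.md
```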
Locations:
- `~/.claude/skills/<name>/SKILL.md`
- `.claude/skills/<name>/SKILL.md`
- `<plugin>/skills/<name>/SKILL.md`

Skills are prompts. Apply prompt engineering fundamentals:
Be clear and direct. If a colleague reading your instructions would be confused, Claude will be too.
Use imperative voice:
1. Read the input file
2. Extract entities matching pattern X
3. Return as JSON with keys: name, type, value
Not:
You should read the file and then you might want to
extract some entities from it...
Use XML tags for structure:
<instructions>
1. Parse the input
2. Validate against schema
3. Return results
</instructions>
<output_format>
Return as JSON: {"status": "ok|error", "data": [...]}
</output_format>
Add examples. Few-shot prompting is the most reliable way to communicate expected behavior. Include input/output pairs.
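For instance, a single pair like this (hypothetical entity-extraction skill; names assumed) anchors the expected format:

**Input:** "web-01 returned 503 at 14:02"
**Output:** `{"entities": [{"name": "web-01", "type": "host"}, {"name": "503", "type": "status"}]}`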
Place critical rules at the end. Instructions near the context boundary are followed more reliably.
Numbered lists vs bullet lists:
Use numbered lists for steps that must run in sequence. If the items can be reordered without changing meaning, use bullets.
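A minimal illustration (contents assumed):

1. Back up the database
2. Run the migration
3. Verify row counts

Order matters, so the steps are numbered. Order-free facts get bullets:

- Supports JSON input
- Supports YAML input
- Runs on Linux and macOS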
**Workflow skill:** Sequential phases with clear inputs/outputs and checkpoints. The SKILL.md contains the complete workflow; references provide detailed rubrics, templates, or extended checklists. Example: a CLAUDE.md auditor with discover → assess → report → update phases.

**Spec skill:** Complete specification for a tool, format, or API. Everything inline — the agent needs the full spec to do the work. References are rare; when present, they hold example collections. Example: a hookify rule-writing skill containing the entire rule syntax (~300 lines, no references).

**Convention skill:** Conventions and rules for a language, framework, or platform. Structure:
# [Technology]
[Philosophy statement — one line]
## References (route table with Contents column)
## [Topic sections with declarative rules as bullet lists]
## Application (writing mode vs reviewing mode)
## Integration (relationship to other skills)
[Closing maxim]
Key patterns:
Simple skill (no references):
---
name: my-skill
description: >-
[What it does]. Invoke whenever task involves any interaction
with [domain] — [specific triggers].
---
# My Skill
## Instructions
[Clear, imperative steps or declarative rules]
## Examples
**Input:** [request]
**Output:** [expected result]
Skill with references:
---
name: my-skill
description: >-
[What it does]. Invoke whenever task involves any interaction
with [domain] — [specific triggers].
---
# My Skill
[Philosophy or purpose statement]
## References
| Topic | Reference | Contents |
|-------|-----------|----------|
| [topic] | `references/[file].md` | [type of depth: tables, examples, patterns] |
## [Topic Sections]
[Working-resolution rules — complete behavioral spec]
[Pointers to references for extended examples, lookup tables, edge cases]
Skills are prompts. Every prompt engineering principle applies. Use clear structure, examples, XML tags, and explicit format specifications.
Description is the trigger. Claude activates based solely on matching request to description. Vague descriptions → missed activations.
SKILL.md is the discipline. An agent reading only SKILL.md must be able to do the job correctly. References deepen — they don't complete.
References are optional depth. Detailed rubrics, extended examples, full catalogs, niche how-tos. Never core behavioral rules.
One skill, one purpose. Broad skills produce mediocre results. If scope creeps, split.
Before deploying:

- The description states what the skill does and claims its domain broadly, with trigger examples.
- SKILL.md alone is behaviorally self-sufficient; references only deepen.
- Every reference appears in a route table with a Contents column.
- The skill has one purpose; if scope has crept, split it.
Example descriptions:

Expert guidance for Next.js Cache Components and Partial Prerendering (PPR). **PROACTIVE ACTIVATION**: Use this skill automatically when working in Next.js projects that have `cacheComponents: true` in their next.config.ts/next.config.js. When this config is detected, proactively apply Cache Components patterns and best practices to all React Server Component implementations. **DETECTION**: At the start of a session in a Next.js project, check for `cacheComponents: true` in next.config. If enabled, this skill's patterns should guide all component authoring, data fetching, and caching decisions. **USE CASES**: Implementing 'use cache' directive, configuring cache lifetimes with cacheLife(), tagging cached data with cacheTag(), invalidating caches with updateTag()/revalidateTag(), optimizing static vs dynamic content boundaries, debugging cache issues, and reviewing Cache Component implementations.
Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.