Design and iterate Claude Code skills: SKILL.md structure, description formulas, content architecture, and quality evaluation. Invoke whenever task involves any interaction with Claude Code skills — creating, reviewing, evaluating, debugging, or improving skills.
Install: `npx claudepluginhub xobotyi/cc-foundry --plugin ai-helpers`. This skill uses the workspace's default tool permissions.
Skills are prompt templates that extend Claude with domain expertise. A skill lives in `skill-name/SKILL.md` with an
optional `references/` directory for deepening material. SKILL.md must be behaviorally self-sufficient — an agent
reading only SKILL.md, without loading any references, must be able to do the job correctly. References provide depth,
not breadth. Description triggers activation; instructions shape behavior. Claude sees only name and description at
startup, then loads full SKILL.md content when triggered.
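A sketch of the resulting on-disk layout (file names hypothetical):

```
skill-name/
├── SKILL.md          # behaviorally self-sufficient; loaded in full on trigger
└── references/
    ├── modules.md    # depth only: tables, edge cases, extended examples
    └── streams.md
```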
Load `Skill(ai-helpers:prompt-engineering)` before working on a skill. Skip only for trivial edits (typos, formatting).
- **spec** — `${CLAUDE_SKILL_DIR}/references/spec.md`
  Frontmatter fields, name rules, string substitutions, progressive disclosure mechanics, discovery/precedence, instruction budget
- **creation** — `${CLAUDE_SKILL_DIR}/references/creation.md`
  Step-by-step creation workflow, scope sizing guidance, evaluation-driven development process, archetype deep dives with extended structural patterns
- **evaluation** — `${CLAUDE_SKILL_DIR}/references/evaluation.md`
  Scoring rubric (5 dimensions), evaluation-driven development, testing protocol, common issues by score range
- **iteration** — `${CLAUDE_SKILL_DIR}/references/iteration.md`
  Activation fixes, output fixes, restructuring, splitting guidance
- **advanced-patterns** — `${CLAUDE_SKILL_DIR}/references/advanced-patterns.md`
  Fork pattern, workflow skills, composable skills, verifiable intermediate outputs, permission scoping
- **troubleshooting** — `${CLAUDE_SKILL_DIR}/references/troubleshooting.md`
  Diagnostic steps for structure, activation reliability, output, script, reference issues
- **prompt-techniques** — `${CLAUDE_SKILL_DIR}/references/prompt-techniques.md`
  CoT trade-off research and decision rules, instruction strengthening escalation patterns, format control techniques, security blocks, debugging instruction failures

Read the relevant reference before proceeding.
The description determines when Claude activates your skill. It's the highest-leverage field — poor descriptions cause missed activations.
[What it does] + [When to invoke — broad domain claim with trigger examples]
Good — functional description + broad claim:

```yaml
description: >-
  Go language conventions, idioms, and toolchain. Invoke when task
  involves any interaction with Go code — writing, reviewing,
  refactoring, debugging, or understanding Go projects.
```

Good — what it does + when with trigger keywords:

```yaml
description: >-
  Design and iterate Claude Code skills: SKILL.md structure,
  description formulas, content architecture, and quality evaluation.
  Invoke whenever task involves any interaction with Claude Code
  skills — creating, reviewing, evaluating, debugging, or improving
  skills.
```

Bad — vague, no trigger surface:

```yaml
description: Helps with documents
```

Bad — slogan instead of functional description:

```yaml
description: >-
  Speed and simplicity over compatibility layers: Bun runtime
  conventions, APIs, and toolchain.
```

Bad — narrow verb list instead of domain claim:

```yaml
description: >-
  Skills for Claude Code. Invoke when creating, editing, debugging,
  or asking questions about skills.
```
SKILL.md must be behaviorally self-sufficient. An agent reading only SKILL.md — without loading any references — must be able to do the job correctly. References provide depth, not breadth.
This applies to all skill types, not just coding disciplines.
Behavioral rules are directives an agent must follow to do the work correctly. If an agent skipping a reference would produce wrong output, that content is behavioral and belongs in SKILL.md.
When a reference contains both rules and depth, use a two-resolution split: working-resolution rules stay in SKILL.md, full-resolution depth moves to the reference. The agent works correctly at working resolution. References let it zoom in.
Example: A quality assessment skill puts a 6-row checklist with criteria and weights in SKILL.md. The reference provides detailed 0-20 scoring rubrics for each criterion with examples at each level.
Example: A Node.js skill puts 17 module system rules in SKILL.md. The reference provides ESM/CJS comparison tables, file extension edge cases, and interop patterns.
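A minimal sketch of the split, with hypothetical rule text: working resolution in SKILL.md, zoom-in depth in the reference.

```markdown
<!-- SKILL.md: working resolution -->
- Use ESM (`import`/`export`); never mix module systems in one package.
- Import builtins with the `node:` prefix.

<!-- references/modules.md: zoom-in depth -->
| Concern | ESM | CJS |
| --- | --- | --- |
| File extension | `.mjs`, or `.js` with `"type": "module"` | `.cjs`, or `.js` by default |
| Loading the other system | static `import` can load CJS | load ESM via dynamic `import()` |
```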
Format choice measurably affects LLM comprehension — up to 16pp between formats on identical content. Choose format by data type:
KV lists outperform tables for lookup tasks (+8.8pp accuracy) because explicit key-value pairing eliminates column-header-to-cell inference. Tables outperform KV for comparison tasks because grid structure enables cross-row scanning.
Default to KV lists. Use tables only when removing a column would lose comparative meaning.
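For illustration, the same decision applied to hypothetical content: a KV list for lookup facts, a table only where rows are compared.

```markdown
<!-- KV list: lookup task -->
- **Default timeout:** 30 s
- **Max retries:** 3
- **Backoff:** exponential

<!-- Table: comparison task (dropping a column loses the contrast) -->
| Strategy | Write latency | Durability |
| --- | --- | --- |
| Write-through | higher | strong |
| Write-back | lower | weaker |
```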
When a skill has references, include a route list describing what depth each reference provides. Use
`${CLAUDE_SKILL_DIR}` for all reference paths — it resolves to the skill's absolute directory at load time, so the
agent sees unambiguous paths it can pass directly to the Read tool.

- **Modules** — `${CLAUDE_SKILL_DIR}/references/modules.md`
  ESM/CJS comparison tables, file extension rules, interop patterns
- **Streams** — `${CLAUDE_SKILL_DIR}/references/streams.md`
  Stream types table, pipeline patterns, backpressure handling
Each entry names the topic, provides the path, and describes the contents — enabling informed read decisions. Without content descriptions, agents either over-read (wasting context) or skip (missing depth).
Skills are prompts. Apply prompt engineering fundamentals.
Match instruction specificity to task fragility:
Think of it as a bridge vs. an open field: a narrow bridge with cliffs needs exact guardrails (low freedom); an open field needs only general direction (high freedom).
Choose instruction style based on what the content demands:
Default to declarative. Research shows declarative knowledge provides greater performance benefits than procedural in the majority of tasks. Reserve numbered steps for workflows where order genuinely matters.
Models follow a U-shaped attention curve: instructions at the beginning and end of a document are followed most reliably; middle content suffers from attention decay.
Dual-placement strategy: For rules that absolutely must be followed, state them near the top AND reinforce at the end. Use different phrasing — frame as a principle at the top, as a checklist item at the bottom.
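A sketch of dual placement for one must-follow rule, with hypothetical wording:

```markdown
<!-- Near the top: framed as a principle -->
Credentials never touch disk; secrets stay in the keychain.

<!-- At the end: reinforced as a checklist item -->
## Before finishing
- [ ] No secret values written to files, logs, or test fixtures
```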
Research shows unnecessary requirements reduce task success even when the model can follow them. Every instruction competes for attention. Before adding a rule, verify the model's default behavior is insufficient — if deleting the rule doesn't change output quality, remove it.
This does not mean minimize everything — skills exist to add rules the model doesn't know. It means: don't add rules for things the model already does well. When auditing a skill, apply the deletion test: "if I remove this rule, does output quality measurably change?"
Example of a well-phrased declarative rule: "Import builtins with the `node:` prefix…" If the items can be reordered without changing meaning, use bullets.
Place rules in the body section where they're contextually relevant. State them as positive directives: "Use
pipeline() for stream composition" — not "Don't use .pipe()" in a separate anti-pattern table. Separate anti-pattern
tables duplicate body content and waste tokens.
Keep an anti-pattern table only when the "don't" side is genuinely non-obvious from the positive rule (e.g., common migration pitfalls in a version upgrade skill where users carry muscle memory from the old version).
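As a sketch of that one justified case, a migration-pitfalls table with hypothetical rules:

```markdown
| Muscle memory (v1) | Do instead (v2) |
| --- | --- |
| Chained `.pipe()` calls | `pipeline()` with built-in error propagation |
| Callback-style `fs` APIs | `fs/promises` equivalents |
```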
Sequential phases with clear inputs/outputs and checkpoints. The SKILL.md contains the complete workflow; references provide detailed rubrics, templates, or extended checklists. Example: a CLAUDE.md auditor with discover → assess → report → update phases.
Complete specification for a tool, format, or API. Everything inline — the agent needs the full spec to do the work. References are rare; when present, they hold example collections. Example: a hookify rule-writing skill containing the entire rule syntax (~300 lines, no references).
Conventions and rules for a language, framework, or platform. Structure:

```markdown
# [Technology]
[Philosophy statement — one line]
## References (route list with content descriptions)
## [Topic sections with declarative rules as bullet lists]
## Application (writing mode vs reviewing mode)
## Integration (relationship to other skills)
[Closing maxim]
```
Key patterns:
Simple skill (no references):

```markdown
---
name: my-skill
description: >-
  [What it does]. Invoke whenever task involves any interaction
  with [domain] — [specific triggers].
---

# My Skill

## Instructions

[Clear, imperative steps or declarative rules]

## Examples

**Input:** [request]
**Output:** [expected result]
```
Skill with references:

```markdown
---
name: my-skill
description: >-
  [What it does]. Invoke whenever task involves any interaction
  with [domain] — [specific triggers].
---

# My Skill

[Philosophy or purpose statement]

## References

- **[topic]** — `${CLAUDE_SKILL_DIR}/references/[file].md`
  [type of depth: tables, examples, patterns]

## [Topic Sections]

[Working-resolution rules — complete behavioral spec]
[Pointers to references for extended examples, lookup tables, edge cases]
```
Deletion test before adding. Every rule competes for attention. Before adding a rule to a skill, verify the model's default behavior is insufficient. If removing the rule doesn't change output quality, it shouldn't exist.
References must not duplicate SKILL.md. If a reference restates rules already in SKILL.md body, it wastes tokens and creates maintenance burden. References provide genuinely different depth: detailed rubrics, extended examples, full catalogs, comparison tables, edge case coverage.
Description is activation, not documentation. Every token in the description must increase activation probability. Slogans, philosophy, cross-skill dependencies, and filler verbs ("understanding", "assisting") have zero activation value.
Declarative by default. Use numbered steps only for workflows with strict ordering. Bullet-list rules for everything else. If the items can be reordered without changing meaning, use bullets.
One skill, one purpose. If scope creeps, split. Broad skills produce mediocre results because instructions compete for attention.
Before deploying, check related skills:

- **prompt-engineering** — load first for instruction design techniques (skills are prompts)
- **subagent-engineering** — skills and subagents complement each other; skills run inline, subagents run in isolation
- **output-style-engineering** — output styles replace the system prompt; skills extend it
- **claude-code-sdk** — consult for SKILL.md frontmatter fields, plugin layout, and invocation control details