Create or revise Claude Code-compatible Agent Skills (SKILL.md with optional references/, scripts/, and assets/). Use when designing a new skill, improving an existing skill, or updating/refactoring an existing skill while preserving the original author's intent (avoid semantic drift unless explicitly requested/approved by the author). Also use when integrating skills with subagents (context fork, agent).
This skill uses the workspace's default tool permissions.
This skill helps you author high-signal, maintainable Skills that reliably improve agent performance.
It is intentionally procedural where the platform has hard constraints (frontmatter validity, invocation controls, safety) and guiding where multiple viable design strategies exist (structure, tone, degree of strictness).
A skill you write should be usable by a different agent (or a human) with no prior context—it should stand on its own.
Follow the detailed steps below. In practice, the shortest correct path is:
| Request type | Default approach | Load (only when needed) | Output expectation |
|---|---|---|---|
| Create a new skill | Follow Steps 0–10 | references/frontmatter-and-invocation.md, references/structure-patterns.md, references/content-patterns.md | Folder tree + full contents of each file |
| Refactor/harden a skill | Follow Steps 0–10 with a safety lens | Also load references/security-and-governance.md for scripts/tool access/high-stakes | Updated files + brief "what changed and why" |
| Update/refactor (intent-preserving) | Use the update playbook | Load: references/updating-existing-skills.md | Update Report + updated files (all changes confirmed by user before applying) |
| Integrate with subagents | Decide composition (fork vs preload) | Load: references/frontmatter-and-invocation.md (subagent composition section) | Working skill structure + correct frontmatter/task prompt |
| Testing strategy (only if asked) | Add a minimal test plan | Load: references/testing-and-iteration.md | Test prompts + pass criteria (tests live outside the skill) |
This skill includes optional supporting material in references/ and templates/.
When a step says Load: path/to/file.md, open that file before continuing. Be efficient with the context window, but gravitate toward correctness and completeness.
Move deep reference material to references/ and load it only when needed, but do not exile critical behavioral guidance to reference files just to keep SKILL.md short. If the agent needs it to execute the workflow correctly, it belongs in SKILL.md.
Default to one strong path, with escape hatches.
Optimize for reliability, not elegance.
Write for execution, but keep it scannable.
Calibrate language assertiveness to the instruction's confidence scope.
Make outputs standalone, self-enclosed, and non-redundant by default.
Make cross-skill dependencies explicit (Load: pointers, skills: frontmatter, or Agent+Skill composition). Implicit cross-skill references ("follow the approach from /research", "as described in the spec skill") create silent dependencies that break when the other skill changes, isn't present, or lives in a different plugin.
Match the user's framing and nuance.
Treat unstructured inputs as fallible evidence, not directives.
Follow these steps in order. Skip only if you have a concrete reason.
Before starting any work, create a task for each step using TaskCreate with addBlockedBy to enforce ordering. Derive descriptions and completion criteria from each step's own workflow text.
Mark each task in_progress when starting and completed when its step's exit criteria are met. On re-entry, check TaskList first and resume from the first non-completed task.
Update requests: If Step 0 routes to the update playbook, the update playbook creates its own task set (see references/updating-existing-skills.md). Mark this task list's remaining tasks as deleted — the update workflow's tasks replace them.
Determine which you're doing:
- Creating a new skill: follow the steps below in order.
- Integrating with subagents (context: fork, agent: Explore): see the subagent composition step.

If updating/refactoring an existing skill:
Load: references/updating-existing-skills.md
Critical: Before proceeding with any update work, you must complete Step 0: Full context loading from that file. This means reading:
- the entire write-skill/ folder (SKILL.md + all references/ + all templates/ + all scripts/)

Do not skip this step. Partial context loading is the primary cause of semantic drift during updates.
Default to fidelity-preserving changes only; treat substantive/routing/tool changes as "requires author consent." When requesting author decisions, use the Decision Support Protocol in that file.
If the user request is ambiguous, ask 2–4 targeted questions by default, then proceed with reasonable assumptions and make those assumptions explicit in your output.
If the ambiguity is high or the consequences are high-stakes (routing, tool power, destructive ops), you may ask up to 5–10 questions—but keep them sharply scoped and easy to answer. If you find yourself needing many questions, prefer progressive disclosure (see below) to avoid decision fatigue.
Use this decision table to reduce both under-asking and over-asking:
| Situation | Do |
|---|---|
| Missing info that affects routing (skill triggers), tool power, side effects, or compatibility (name/invocation/arguments) | Ask targeted questions before drafting (or draft a skeleton but do not "finalize" choices). |
| User says "whatever you think is best" / signals indifference | Provide a specific recommendation, plus 1–2 alternatives, then ask for explicit confirmation on the non-trivial choice. |
| Details are low-stakes and reversible (section names, minor formatting, example wording) | Use sensible defaults; list assumptions briefly so the user can correct if needed. |
| You anticipate >4 questions | Start with a mode selector (Quick/Custom/Guided), then ask only what that mode requires. |
When you need input from a human, make it easy to answer:
At the end of your output, include a Quick Reference summary that recaps all pending decisions in a scannable format (question + options + your recommendation). This lets the human respond quickly without re-reading everything.
If you need more than a few clarifications, start with a single question like:
Then tailor follow-up questions to the chosen mode.
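An illustrative mode-selector opener (the wording is a sketch, not required phrasing; adapt it to the skill's domain):

```markdown
How would you like to proceed?

- **Quick**: I pick sensible defaults and list my assumptions; you correct anything afterward.
- **Custom**: I ask 3–4 targeted questions about routing, tool power, and output format first.
- **Guided**: we walk through each design decision together, one at a time.
```

Whichever mode is chosen then bounds how many follow-up questions are appropriate.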
Suggested minimal questions:
Use a skill when you need portable procedural knowledge that should load on demand.
Prefer alternatives when they fit better:
If the user explicitly asked for a skill, proceed.
Before you draft structure, capture an intent snapshot (for you, not necessarily to paste verbatim):
Then write down (briefly):
If the request is high-stakes (security, production deploys, destructive ops), require a validation step.
Default output assumptions (unless the user specifies otherwise):
Copy/paste template (optional but recommended):
**Intent Snapshot**
- Goal:
- Audience:
- In-scope:
- Out-of-scope:
- Constraints (tools/safety/runtime):
- Operating assumptions (tools, workflow model, platform, user intent):
- Success criteria (what "done" means):
- Primary failure modes to prevent:
- Usage breadth (contexts/domains/situations where this skill will be used):
- Output format expectations (if any):
- Safety posture (invocation + side-effects policy):
- Assumptions (if any):
- Open questions / pending decisions (if any):
Usage breadth: mandatory thinking, conditional context negotiation.
After capturing the intent snapshot, enumerate the usage contexts for this skill: the distinct contexts, domains, workflows, or situations where it will be invoked. Write these in the "Usage breadth" field. This thinking is mandatory — do not skip it even for seemingly simple skills.
Then check: do any concepts in the skill (depth labels, defaults, workflows, output formats) mean different things or need different treatment across these contexts? Signs a concept likely varies: subjective labels ("deep", "thorough", "simple"), behavioral defaults, approach choices ("use X vs Y"). Signs it likely doesn't vary: factual statements, structural conventions, platform constraints.
If yes — negotiate context with the user:
The user knows about usage contexts the agent can't infer from the goal description alone. Present your enumeration as a structured comparison (table or list) showing what varies across contexts:
"I've identified these usage contexts and where key concepts vary:

| Context | [Concept X] means... | [Concept Y] means... |
|---|---|---|
| [Context A] | [treatment 1] | [treatment 3] |
| [Context B] | [treatment 2] | [treatment 3] |
| [Context C] | [treatment 1] | [treatment 4] |

This means [Concept X] should be illustrative examples rather than a single default. If I'm missing a context, what guidance in this skill would need to be written differently?"
If the user adds contexts or flags new variation, trace the implications: what other design decisions are affected? Surface those and confirm. Continue until the usage model stabilizes.
If no — proceed. If all contexts get the same treatment for all concepts, note this in the intent snapshot and move to Step 3. No interaction needed.
Load: references/frontmatter-and-invocation.md
If the skill includes scripts, tool access, external fetching, or high-stakes domains:
Load: references/security-and-governance.md
Decide how the skill is invoked:
- disable-model-invocation: true: the model never auto-invokes the skill; only explicit user invocation (e.g., a slash command) runs it.
- user-invocable: false: the skill is not exposed to the user as a slash command; only the model can load it.
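As a sketch, here is how one of these controls sits in frontmatter (the name and description are invented for illustration):

```markdown
---
name: deploy-checklist
description: Walks a production deploy checklist step by step. Use before any production deploy.
disable-model-invocation: true
---
```

Here the high-stakes skill is deliberately restricted to explicit user invocation.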
Guidance:
Continue using: references/frontmatter-and-invocation.md
There are four "skills + subagents" compositions. A–C use the in-process Agent/Task tool (subagents within a session). D uses subprocess spawning (full child claude -p processes):
A) Skill runs as an isolated subagent (context: fork)
Built-in subagent types (Explore, Plan, etc.) can serve as the isolation target. Known issue: context: fork frequently fails to actually fork, so the skill runs inline instead. This particularly affects plugin skills. See references/frontmatter-and-invocation.md for details and workarounds.

B) Subagent preloads one or more skills (subagent frontmatter skills:)
C) Parent spawns a subagent that loads a skill at runtime (Agent tool + Skill load)
Use this when context: fork is unreliable in your environment (e.g., plugin skills). Spawn a general-purpose subagent via the Agent tool. Start its prompt with "Before doing anything, load /skill-name", then provide context and the task. The subagent uses its Skill tool to load the skill, executes the workflow, and returns results.

D) Skill orchestrates via subprocess spawning (claude -p child processes)
See /nest-claude for the full subprocess spawning reference (env guard, parallel patterns, monitoring, result collection).

See also: templates/SKILL.fork-task.template.md for the forked execution pattern.
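For composition C, the Agent-tool prompt might be shaped like this (the skill name and task details are placeholders invented for illustration):

```markdown
Before doing anything, load /review-checklist.

Context: You are reviewing a pull request in <repo>. The diff is provided below.
Task: Execute the loaded skill's workflow end to end on this diff.
Return: The completed checklist plus any blocking findings, as markdown.
```

Leading with the load instruction matters: the subagent starts with an empty context and will otherwise improvise without the skill.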
Load: references/structure-patterns.md
Choose the simplest structure that can still be high-quality:
Pattern 1: Single-file skill
Pattern 2: SKILL.md + references/
Pattern 3: SKILL.md + scripts/
Pattern 4: Index skill + rules/
Pattern 5: Forked execution skill. SKILL.md with context: fork runs as an isolated subagent task.
Pattern 6: Multi-variant. Separate reference files per provider, language, or framework.
Pattern 7: Path-scoped. SKILL.md with a paths field auto-activates only for matching files.
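As a sketch, a Pattern 2 layout might look like this (file names invented for illustration):

```
my-skill/
├── SKILL.md            # workflow + critical behavioral guidance
└── references/
    ├── edge-cases.md   # loaded only when an edge case appears
    └── api-details.md  # deep reference, loaded on demand
```

SKILL.md stays lean and points at each reference file with a Load: line at the moment it becomes relevant.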
Continue using: references/frontmatter-and-invocation.md
Write frontmatter that includes:
- name: short, hyphen-case, matches folder name
- description: what it does + when to use + key trigger terms (file types, platforms, domain nouns)
- argument-hint (if command-like)

Verify these are present before moving to Step 7.
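Putting those fields together, a minimal frontmatter sketch (all values invented for illustration):

```markdown
---
name: csv-cleaner
description: Cleans and normalizes CSV files (delimiters, encodings, header rows). Use when the user mentions CSV, TSV, or tabular data cleanup.
argument-hint: <path-to-csv>
---
```

Note how the description packs in trigger terms (CSV, TSV, tabular data) rather than just restating the name.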
Load: references/content-patterns.md
If the skill depends on external tools, platform features, a specific workflow, or assumes user intent beyond its explicit inputs, also Load: references/assumptions-and-adaptability.md
If the skill includes interactive gates (AskUserQuestion, confirmation prompts, routing selectors) and might be invoked by automated callers, also apply content pattern #20: Design for headless execution.
If the skill manages persistent state across turns or sessions (e.g., it writes artifacts that evolve, accumulates evidence, or needs to support session resumption), also Load: references/stateful-skill-patterns.md
For such skills, before writing workflow steps, decide:
Use imperative language and a structure that makes the model's job easy.
Include everything the agent needs to execute the workflow correctly. Move deep reference material to supporting files, but keep critical behavioral guidance, nuances, and decision criteria in SKILL.md itself.
When drafting, keep these defaults in mind:
Common high-signal sections (mix and match):
See content-patterns.md for the full section menu (including #1b).

Before moving on, verify each instruction passes the interpretation test: Could it be read two ways? Does it assume context the reader won't have? (See content-patterns.md #14 for the full test.)
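To make the interpretation test concrete, an illustrative before/after (the API details are invented):

```markdown
Fails:  "Handle errors appropriately."
Passes: "If the API returns 429, wait for the Retry-After duration and retry once.
         If it fails again, stop and report the failing endpoint to the user."
```

The failing version can be read many ways; the passing version names the condition, the action, and the stop rule.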
Use supporting files for two reasons:
Recommended:
- references/*.md
- scripts/
- assets/ (or templates/ if they're authoring aids)

Avoid:
If you add files under references/ or templates/, follow the "Use when / Priority / Impact" header standard (see Appendix) so routing stays reliable.
If and only if asked to come up with a testing strategy for the skill:
Load: references/testing-and-iteration.md
Key notes:
When asked to "write a skill," output the folder tree plus the full contents of each file (per the routing table above).
If revising an existing skill, provide the updated files plus a brief "what changed and why."
Keep the resulting skill(s) stateless and standalone:
Optional validation (recommended when you can):
If bun is available, run: `bun scripts/validate_skill_dir.ts path/to/skill-folder`

Final self-check (copy/paste):
- Frontmatter has name + description, and name is hyphen-case.
- Cross-skill dependencies are explicit (Load: pointers, skills: frontmatter, or Agent+Skill composition).
- Interactive gates support --headless for non-interactive callers (content pattern #20).

This skill describes multiple valid skill structures. Skills do not need to follow a single format.
However, within this write-skill/ skill, we use a few house conventions to keep our own supporting files discoverable and reliable. See the appendices for the conventions and the reference index.
If you decide to split a skill into multiple files (e.g., references/, templates/, rules/), it often helps to:
- add an explicit Load: pointer ("Load: references/x.md") at the moment the file becomes relevant
- give each supporting file a short routing header so agents can judge relevance quickly

This is not required for all skills. Many skills should remain single-file. Use this pattern when you notice agents aren't consulting supporting material reliably.
For files under references/ and templates/ in this skill, we add a short routing header at the top so an agent can quickly decide relevance.
Header format:
Use when: <1–3 trigger conditions>
Priority: P0 | P1 | P2
Impact: <what goes wrong if skipped>
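A filled-in header, for illustration (the contents are an example, not a required wording):

```markdown
Use when: Writing or editing frontmatter; choosing invocation controls
Priority: P0
Impact: Skill may not trigger, or runs with the wrong execution model
```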
This index exists to help you quickly find the right deep-dive. In the main workflow, prefer the Load: pointers.
Priority legend:
| Path | Priority | Use when | Impact if skipped |
|---|---|---|---|
| references/frontmatter-and-invocation.md | P0 | Writing/editing frontmatter, invocation controls, model/effort overrides, path scoping, preprocessing, or subagent composition | Skill may not trigger correctly; wrong execution model; missed safety constraints; missing preprocessing and string substitution capabilities |
| references/structure-patterns.md | P0 | Deciding folder structure (single-file vs references/ vs scripts/ vs rules/ vs path-scoped) | Poor progressive disclosure; bloated SKILL.md; orphaned supporting files |
| references/content-patterns.md | P0 | Drafting SKILL.md body sections (workflow, decision tables, examples, output format, dynamic context, parallel agents, iterative improvement) | Missing high-signal patterns; increased hallucination risk; inconsistent outputs |
| references/updating-existing-skills.md | P0 | "Update" or "refactor" requests for existing skills | Intent drift; accidental semantic changes |
| references/testing-and-iteration.md | P1 | Adding test prompts, defining pass criteria, iterating on skill behavior, stress-testing descriptions, applying improvement principles | Regressions go unnoticed; no baseline for quality; hard to debug failures |
| references/security-and-governance.md | P1 | Skills with scripts, tool access, external fetching, or high-stakes domains; deciding whether scripts are justified | Security vulnerabilities; unsafe defaults; unreviewed destructive operations |
| references/assumptions-and-adaptability.md | P1 | Skill depends on external tools, platform capabilities, or specific execution context | Implicit assumptions baked in; silent failures in different contexts |
| references/stateful-skill-patterns.md | P1 | Skills that manage persistent state across turns or sessions (spec, research, feature dev) | No guidance for artifact lifecycle, session resumption, evidence/synthesis separation |
| templates/SKILL.minimal.template.md | P0 | Starting a new single-file skill | Slow start; inconsistent structure |
| templates/SKILL.guidelines.template.md | P1 | Skills that are primarily guidance/rules | Missing quality bar or examples |
| templates/SKILL.fork-task.template.md | P1 | Skills that run as isolated subagents (context: fork) | Incomplete task prompt; empty output |
| templates/SKILL.rules-index.template.md | P1 | Skills with many discrete rules | Poor rule organization; hard to navigate |
| templates/rule.template.md | P2 | Adding individual rules to a rules/ folder | Inconsistent rule format |
| scripts/validate_skill_dir.ts | P2 | Automated validation of skill directories | Manual verification only |
When adding a new file under references/ or templates/, we either add a Load: pointer for it in the main workflow or list it in the reference index above.
Every file under references/ and templates/ includes a "Use when / Priority / Impact" header block.