
creating-agents

Install

Install the plugin:

$ npx claudepluginhub nilpath/nilpath-marketplace --plugin claude-code-tools

Want just this skill? Install it directly:

$ npx claudepluginhub u/[userId]/[slug]

Description

Expert guidance for creating Claude Code subagents and multi-agent workflows. Use when designing new subagents, configuring agent tools/permissions, implementing orchestration patterns, or troubleshooting agent delegation.

Tool Access

This skill uses the workspace's default tool permissions.

Supporting Assets
examples/real-world-agents.md
references/anti-patterns.md
references/official-spec.md
references/orchestration-patterns.md
references/tool-permissions.md
templates/code-reviewer.md
templates/debugger.md
templates/domain-expert.md
templates/researcher.md
workflows/audit-existing-agent.md
workflows/create-code-writer-agent.md
workflows/create-read-only-agent.md
workflows/create-research-agent.md
Skill Content

Creating Claude Code Agents

Expert guidance for designing and implementing Claude Code subagents based on Anthropic's official specification and industry best practices.

Core principles

Subagents solve three fundamental problems:

1. Context Preservation

Main conversation context is precious. Subagents isolate verbose operations (test runs, documentation fetches, log analysis) and return only summaries.

Without subagents: running tests consumes 50K+ tokens in your main context.
With subagents: test output stays in the subagent's context; you get a 500-token summary.

2. Parallelization

Launch multiple subagents simultaneously for independent tasks:

Research the authentication, database, and API modules in parallel using separate subagents

Each explores its area independently, then Claude synthesizes findings.

3. Specialization

A single agent handling everything becomes a "jack of all trades, master of none." As instruction complexity increases, reliability decreases. Subagents enable focused expertise with minimal tool access.

How Delegation Works

Claude automatically delegates based on each subagent's description field. Write clear descriptions that include:

  • What it does: "Reviews code for quality and security"
  • When to use it: "Use proactively after code changes"
  • Trigger keywords: Include terms users might say

Good description:

description: Reviews code for quality, security, and best practices. Use proactively after code changes or when user mentions review, audit, or code quality.

Bad description:

description: Helps with code

Agent Design Principles

1. Single Responsibility

Each subagent should excel at ONE specific task. Don't create a "helper" agent that does everything.

Good: code-reviewer, test-runner, doc-researcher
Bad: general-helper, code-assistant, utility-agent

2. Minimal Tool Access

Grant only the tools necessary for the task:

| Role | Recommended Tools |
| --- | --- |
| Reviewer/Auditor | Read, Grep, Glob |
| Researcher | Read, Grep, Glob, WebFetch, WebSearch |
| Implementer | Read, Write, Edit, Bash, Glob, Grep |
| Domain Expert | Read, Grep, Glob + domain-specific |

See references/tool-permissions.md for detailed guidance.
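As a sketch, a reviewer/auditor role from the table above would declare only the read-only tools in its frontmatter (the agent name and wording here are illustrative):

```yaml
# Frontmatter sketch for a read-only reviewer agent
name: security-auditor
description: Audits code for security issues. Use when the user mentions audit, security, or vulnerabilities.
tools: Read, Grep, Glob
```

Because no write or Bash tools are granted, the agent can inspect the codebase but cannot modify it, which matches the reviewer/auditor role.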

3. Clear System Prompts

The markdown body becomes the subagent's system prompt. Be specific about:

  • When to use which approach
  • Output format expectations
  • What NOT to do (constraints)

4. Model Selection

  • haiku: Fast, cheap - ideal for read-only exploration
  • sonnet: Balanced - good for most tasks
  • opus: Most capable - use for complex reasoning
  • inherit: Uses main conversation model (default)

YAML Frontmatter Reference

| Field | Required | Description |
| --- | --- | --- |
| name | Yes | Unique identifier (lowercase, hyphens) |
| description | Yes | When Claude should delegate |
| tools | No | Tools the agent can use (inherits all if omitted) |
| disallowedTools | No | Tools to explicitly deny |
| model | No | Model to use (default: inherit) |
| permissionMode | No | Permission handling mode |
| skills | No | Skills to preload into context |
| hooks | No | Lifecycle hooks for this agent |

See references/official-spec.md for complete specification.
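Putting the fields together, a complete agent file might look like the following (a minimal sketch: the specific agent, its description text, and its prompt are illustrative; the markdown body below the frontmatter becomes the subagent's system prompt):

```markdown
---
name: doc-researcher
description: Researches library documentation and summarizes findings. Use when the user asks about external APIs, frameworks, or docs.
tools: Read, Grep, Glob, WebFetch, WebSearch
model: haiku
---

You are a documentation researcher.

- Fetch only the pages needed to answer the question.
- Return a summary under 500 tokens, with links to sources.
- Do NOT modify any files.
```

Note the constraints in the body ("Do NOT modify any files", a token budget for the summary): these follow the system-prompt guidance above, and the explicit step-by-step style suits the haiku model chosen here.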

Orchestration Patterns

Fan-Out (Parallel Research)

Multiple agents explore different areas simultaneously:

Use subagents to research authentication patterns, database schema, and API design in parallel

Best for: Independent research tasks, codebase exploration, documentation gathering.

Pipeline (Sequential Processing)

Chain agents where each builds on previous results:

First use code-reviewer to find issues, then use debugger to fix them

Best for: Review-then-fix workflows, multi-stage processing.

Orchestrator-Worker

A lead agent decomposes tasks and delegates to specialists:

Analyze this feature request and delegate implementation to appropriate specialists

Best for: Complex features, large-scale refactoring.

See references/orchestration-patterns.md for detailed patterns.

Common Anti-Patterns

Avoid these mistakes when creating agents:

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Vague description | Claude doesn't know when to delegate | Include specific trigger keywords |
| Over-broad tools | Security risk, unfocused behavior | Grant minimal necessary permissions |
| No verification | Can't tell if agent succeeded | Include verification steps in prompt |
| Premature complexity | Multi-agent when single suffices | Start simple, add agents as needed |
| Generic naming | Hard to discover and delegate | Use specific, task-focused names |

See references/anti-patterns.md for detailed guidance.

What Would You Like To Do?

  1. Create a new agent - Step-by-step workflow guides
  2. Use a template - Copy and customize ready-made agents
  3. Audit an existing agent - Check against best practices
  4. Learn design patterns - Understand orchestration and anti-patterns

1. Create a New Agent

Choose by agent type:

  • Read-only agent: workflows/create-read-only-agent.md
  • Research agent: workflows/create-research-agent.md
  • Code-writing agent: workflows/create-code-writer-agent.md

2. Use a Template

Ready-to-use agents you can customize:

  • Code reviewer: templates/code-reviewer.md
  • Debugger: templates/debugger.md
  • Domain expert: templates/domain-expert.md
  • Researcher: templates/researcher.md

3. Audit an Existing Agent

Check an existing agent against best practices: workflows/audit-existing-agent.md

4. Learn Design Patterns

Understand orchestration and anti-patterns: references/orchestration-patterns.md and references/anti-patterns.md

Testing Your Agent

1. Manual Testing

# Test delegation
Use the [agent-name] to [task description]

# Test automatic delegation (if description says "use proactively")
[Task that should trigger the agent]

2. Verify Tool Access

Check the agent has exactly the tools it needs, no more:

Use [agent-name] to describe what tools you have access to

3. Test with Different Models

If you set model: haiku for speed, ensure instructions are clear enough:

  • Haiku needs explicit, step-by-step instructions
  • Sonnet/Opus can infer more from context

4. Check Edge Cases

  • What happens if the agent can't find what it needs?
  • Does it fail gracefully or loop indefinitely?
  • Are error messages helpful?

Storage Locations

| Location | Scope | Use Case |
| --- | --- | --- |
| .claude/agents/ | Current project | Team-shared, version-controlled |
| ~/.claude/agents/ | All your projects | Personal agents |
| --agents CLI flag | Current session | Testing, automation |
| Plugin agents/ | Where plugin enabled | Distributed via plugin |
Higher-priority locations override lower when names conflict.
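Assuming the standard layout from the table above, project and personal agents live side by side (the agent file names here are illustrative):

```text
my-project/
  .claude/
    agents/
      code-reviewer.md    # project scope: team-shared, version-controlled

~/.claude/
  agents/
    code-reviewer.md      # personal scope: available in all your projects
```

If both locations define an agent with the same name, as sketched here, only one copy is used; going by the table's ordering, the project-level file would take precedence.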

Real-World Examples

See examples/real-world-agents.md for battle-tested agents including:

  • Security auditor
  • Performance analyzer
  • Documentation generator
  • Test writer
  • Migration assistant

Success Criteria

A well-designed agent:

  • Has a single responsibility - one job, minimal tools
  • Has a specific description with trigger keywords (what AND when)
  • Uses minimal tool access appropriate for its role
  • Includes verification steps in its system prompt
  • Handles failures gracefully with useful error messages
  • Has been tested with explicit invocation and automatic delegation


Stats
Stars: 0
Forks: 0
Last commit: Jan 30, 2026
