# Creating Claude Code Agents
When creating custom agent files in `.claude/agents/`, the YAML frontmatter format matters.
## Valid Frontmatter Format
```yaml
---
name: agent-name
# prettier-ignore
description: "Use when reviewing for X, checking Y, or verifying Z - include all semantic triggers"
model: opus
---
```
Critical constraints:

- **Single line only** - Claude Code doesn't parse block scalars (`>` or `|`) correctly
- **Use `# prettier-ignore`** - Add it before the description to allow longer, richer triggers
- **Use quotes** - Always quote descriptions to handle special characters like colons
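These constraints are easy to check mechanically. Here is a minimal stdlib sketch of such a linter; the `check_agent_frontmatter` helper and its exact rules are illustrative assumptions, not part of Claude Code:

```python
import re

def check_agent_frontmatter(text: str) -> list[str]:
    """Lint the YAML frontmatter of an agent file against the constraints above.

    Returns a list of problems; an empty list means the frontmatter passes.
    """
    problems = []
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["missing frontmatter block"]
    lines = match.group(1).split("\n")
    for i, line in enumerate(lines):
        if line.startswith("description:"):
            value = line[len("description:"):].strip()
            # Block scalars (> or |) are not parsed correctly by Claude Code
            if value.startswith((">", "|")):
                problems.append("description uses a block scalar; must be a single line")
            # Unquoted descriptions break on special characters like colons
            elif not (value.startswith('"') and value.endswith('"')):
                problems.append("description should be quoted")
            # prettier-ignore should precede the description line
            if i == 0 or lines[i - 1].strip() != "# prettier-ignore":
                problems.append("add '# prettier-ignore' on the line before description")
    return problems
```

Running it on a valid agent file returns an empty list; a file using `description: >` reports the block-scalar problem.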
## Description Philosophy
Agents are LLM-triggered. Descriptions should match against user requests to enable Claude Code to auto-select the right agent. Use "Use when..." format with rich semantic triggers.
### Do: Match user language
Think about what users will say:
- "review the code"
- "check if this is production ready"
- "debug this error"
- "test in the browser"
Include those exact phrases in your descriptions.
### Do: Include variations
```yaml
# prettier-ignore
description: "Use when reviewing for production readiness, fragile code, error handling, resilience, reliability, or catching bugs before deployment"
```
This triggers on: "review for production", "check fragile code", "error handling review", "catch bugs", etc.
### Don't: Describe what it does
Bad: "A code reviewer that analyzes production readiness and error handling patterns"
This is technical documentation, not a semantic trigger.
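Side by side, the two styles look like this (descriptions taken from the examples above):

```yaml
# Bad: technical documentation - describes the agent, not the request
description: "A code reviewer that analyzes production readiness and error handling patterns"

# Good: semantic trigger - matches what users actually type
# prettier-ignore
description: "Use when reviewing for production readiness, fragile code, error handling, resilience, reliability, or catching bugs before deployment"
```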
## Example Agent
```markdown
---
name: test-runner
# prettier-ignore
description: "Use when running tests, checking test results, or verifying tests pass before committing"
model: haiku
---

I run tests using the specified test runner (bun, pnpm, pytest, etc.) and return a terse
summary with pass count and failure details only. This preserves your context by
filtering verbose test output to just what's needed for fixes.

[Rest of agent prompt...]
```
## Reviewer Agent Structure
Reviewer agents (agents that analyze code and report findings) should include two additional sections:
### Review Signals
Replace prose "What I Look For" sections with scannable bullet lists. These prime the LLM to pattern-match against specific signals.
```markdown
## Review Signals

These patterns warrant investigation:

**Category name**
- Specific pattern to look for
- Another specific pattern
- Concrete example of the smell

**Another category**
- Pattern
- Pattern
```
Each bullet becomes a micro-prompt. The list doesn't need to be exhaustive; representative signals are enough to train attention in the right direction.
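For instance, a filled-in Review Signals section for a hypothetical production-readiness reviewer might look like this (the categories and bullets are illustrative, not prescribed):

```markdown
## Review Signals

These patterns warrant investigation:

**Error handling**
- Empty catch blocks that swallow exceptions
- Network calls with no timeout or retry
- Errors logged but never surfaced to the caller

**Fragile assumptions**
- Hardcoded paths, ports, or credentials
- Parsing that assumes a fixed input shape
```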
### Handoff
Reviewer agents typically run as subagents called by an orchestrator (like multi-review). Add context so the agent understands its output goes to another LLM, not a human.
```markdown
## Handoff

You're a subagent reporting to an orchestrating LLM (typically multi-review). The
orchestrator will synthesize findings from multiple parallel reviewers, deduplicate
across agents, and decide what to fix immediately vs. decline vs. defer.

Optimize your output for that receiver. It needs to act on your findings, not read a
report.
```
Do NOT prescribe output format in the Handoff section. Given context about who receives the output and what they'll do with it, the agent can determine the most effective format itself. Avoid over-specification.