Build rigorous evaluations for AI agents using Anthropic's proven patterns.
You MUST read the reference files for detailed guidance:
- YAML Templates
- Annotated Examples
| Term | Definition |
|---|---|
| Task | Single test with defined inputs and success criteria |
| Trial | One attempt at a task (run multiple for consistency) |
| Grader | Logic that scores agent performance; tasks can have multiple |
| Transcript | Complete record of a trial (outputs, tool calls, reasoning) |
| Outcome | Final state in environment (not just what agent said) |
| Evaluation harness | Infrastructure that runs evals end-to-end |
| Agent harness | System enabling model to act as agent (scaffold) |
| Evaluation suite | Collection of tasks measuring specific capabilities |
| Type | Methods | Best For |
|---|---|---|
| Code-based | String match, unit tests, static analysis, state checks | Fast, cheap, objective verification |
| Model-based | Rubric scoring, assertions, pairwise comparison | Nuanced, open-ended tasks |
| Human | SME review, A/B testing, spot-check sampling | Gold standard calibration |
See Grader Types for a detailed comparison.
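For instance, a code-based grader can combine a state check with a structural match. The sketch below is illustrative only; the file name, expected key, and return shape are hypothetical, not part of this skill's actual harness:

```python
import json
from pathlib import Path

def grade_config_task(workspace: Path) -> dict:
    """Code-based grader: verify the outcome in the environment,
    not just what the agent said it did."""
    target = workspace / "config.json"  # hypothetical task artifact
    if not target.exists():
        return {"score": 0.0, "reason": "config.json was never created"}
    try:
        config = json.loads(target.read_text())
    except json.JSONDecodeError:
        return {"score": 0.0, "reason": "config.json is not valid JSON"}
    # Structural check: the task asked for retries to be set to 3.
    if config.get("retries") == 3:
        return {"score": 1.0, "reason": "environment state matches spec"}
    return {"score": 0.0, "reason": "file exists but retries != 3"}
```

Checks like this are fast and deterministic, which is what the "fast, cheap, objective verification" row above is pointing at.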
| Type | Question | Target Pass Rate |
|---|---|---|
| Capability | "What can this agent do well?" | Start low, hill-climb |
| Regression | "Does it still handle what it used to?" | Near 100% |
Capability evals with high pass rates "graduate" to regression suites.
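One way to mechanize that graduation step, with an invented threshold and task names for illustration:

```python
# Observed per-task pass rates after the latest run (illustrative data).
capability = {"parse-invoice": 0.97, "plan-trip": 0.60, "fix-lint": 1.00}
regression = {}

THRESHOLD = 0.95  # assumed cutoff; pick one that fits your suite

for task_id in list(capability):
    if capability[task_id] >= THRESHOLD:
        # Reliable tasks move to the regression suite, where the
        # target pass rate is near 100%.
        regression[task_id] = capability.pop(task_id)

print(sorted(regression))  # ['fix-lint', 'parse-invoice']
```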
| Metric | Measures | Use When |
|---|---|---|
| pass@k | At least 1 success in k attempts | One success matters (coding) |
| pass^k | All k attempts succeed | Consistency essential (customer-facing) |
Example: with a 75% per-trial success rate and k = 3, pass@3 = 1 − 0.25³ ≈ 98.4%, while pass^3 = 0.75³ ≈ 42.2%. The same agent looks either strong or fragile depending on which metric you report.
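Both metrics fall directly out of per-trial results; a minimal sketch:

```python
def pass_at_k(trials: list[bool]) -> bool:
    """pass@k: at least one of the k trials succeeded."""
    return any(trials)

def pass_hat_k(trials: list[bool]) -> bool:
    """pass^k: all k trials succeeded."""
    return all(trials)

trials = [True, True, False]  # three trials of one task
print(pass_at_k(trials))      # True: one success is enough
print(pass_hat_k(trials))     # False: consistency was required
```

Pass rates alone say little about how an agent succeeds, so also track per-trial operational metrics: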
```yaml
tracked_metrics:
  - type: transcript
    metrics: [n_turns, n_toolcalls, n_total_tokens]
  - type: latency
    metrics: [time_to_first_token, output_tokens_per_sec, time_to_last_token]
```
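Transcript counts catch efficiency drift, such as an agent that still passes but takes twice the turns or tokens it used to, while the latency block captures user-facing responsiveness.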
Based on *Demystifying evals for AI agents* by Anthropic (January 2026).