Guides analysis of LLM pipeline traces to identify, categorize, and prioritize failure modes. Use for new eval projects, pipeline changes, metric drops, or incidents.
`npx claudepluginhub hamelsmu/evals-skills --plugin evals-skills`

This skill uses the workspace's default tool permissions.
Guide the user through reading LLM pipeline traces and building a catalog of how the system fails.
Capture the full trace: input, all intermediate LLM calls, tool uses, retrieved documents, reasoning steps, and final output.
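As a concrete starting point, a trace record can be as simple as a dataclass; this is a minimal sketch, and the field names are illustrative rather than a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    """One end-to-end pipeline run, captured for review."""
    trace_id: str
    user_input: str
    llm_calls: list[dict] = field(default_factory=list)       # prompt/response for each intermediate call
    tool_uses: list[dict] = field(default_factory=list)       # tool name, arguments, result
    retrieved_docs: list[str] = field(default_factory=list)   # documents pulled in by retrieval
    reasoning_steps: list[str] = field(default_factory=list)  # intermediate reasoning, if logged
    final_output: str = ""
```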
Target: ~100 traces. This is roughly where new traces stop revealing new kinds of failures. The number depends on system complexity.
From real user data (preferred):
From synthetic data (when real data is sparse):
Present each trace to the user. For each one, ask: did the system produce a good result? Pass or Fail.
For failures, note what went wrong. Focus on the first thing that went wrong in the trace — errors cascade, so downstream symptoms disappear when the root cause is fixed. Don't chase every issue in a single trace.
Write observations, not explanations. "SQL missed the budget constraint" not "The model probably didn't understand the budget."
Template:
| Trace ID | Trace | What went wrong | Pass/Fail |
|----------|-------|-----------------|-----------|
| 001 | [full trace] | Missing filter: pet-friendly requirement ignored in SQL | Fail |
| 002 | [full trace] | Proposed unavailable times despite calendar conflicts | Fail |
| 003 | [full trace] | Used casual tone for luxury client; wrong property type | Fail |
| 004 | [full trace] | - | Pass |
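If the user prefers a simple script over a spreadsheet, the same template can live in a CSV; a minimal sketch, where the file layout is an assumption rather than part of the skill:

```python
import csv

FIELDS = ["trace_id", "trace", "what_went_wrong", "pass_fail"]

def append_annotation(path: str, trace_id: str, trace: str, note: str, passed: bool) -> None:
    """Append one reviewed trace to the annotation log, writing the header for a new file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow({
            "trace_id": trace_id,
            "trace": trace,
            "what_went_wrong": "-" if passed else note,
            "pass_fail": "Pass" if passed else "Fail",
        })
```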
Heuristics:
After reviewing 30-50 traces, start grouping similar notes into categories. Don't wait until all 100 are done — grouping early helps sharpen what to look for in the remaining traces. The categories will evolve. The goal is names that are specific and actionable, not perfect.
When to split vs. group:
Split these (different root causes):
Group these (same root cause):
LLM-assisted clustering (use only after the user has reviewed 30-50 traces):
Here are failure annotations from reviewing LLM pipeline traces.
Group similar failures into 5-10 distinct categories.
For each category, provide:
- A clear name
- A one-sentence definition
- Which annotations belong to it
Annotations:
[paste annotations]
Always review LLM-suggested groupings with the user. LLMs cluster by surface similarity (e.g., grouping "app crashes on login" and "login is slow" because both mention login, even though the root causes differ).
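One way to run the clustering prompt above programmatically; a hedged sketch using the Anthropic Python SDK, where the model name is a placeholder and any LLM client would work the same way:

```python
import anthropic

CLUSTERING_PROMPT = """Here are failure annotations from reviewing LLM pipeline traces.
Group similar failures into 5-10 distinct categories.
For each category, provide:
- A clear name
- A one-sentence definition
- Which annotations belong to it

Annotations:
{annotations}"""

def suggest_clusters(annotations: list[str], model: str = "claude-sonnet-4-5") -> str:
    """Ask an LLM for a first-pass grouping; the user still reviews and corrects the result."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model=model,  # placeholder model name; use whatever is available
        max_tokens=2000,
        messages=[{"role": "user", "content": CLUSTERING_PROMPT.format(annotations="\n".join(annotations))}],
    )
    return response.content[0].text
```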
Aim for 5-10 categories that are:
Go back through all traces and apply binary labels (pass/fail) for each failure category. Each trace gets a column per category. Use whatever tool the user prefers — spreadsheet, annotation app (see build-review-interface), or a simple script.
# Fraction of traces exhibiting each failure category, sorted most frequent first
failure_rates = labeled_df[failure_columns].sum() / len(labeled_df)
failure_rates.sort_values(ascending=False)
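For this snippet to run, labeled_df and failure_columns have to come from somewhere; a minimal sketch, assuming the labels were exported to a labels.csv with a trace_id column plus one 0/1 column per failure category (file and column names are illustrative):

```python
import pandas as pd

# One row per trace; every column except trace_id is a binary failure label
labeled_df = pd.read_csv("labels.csv")
failure_columns = [c for c in labeled_df.columns if c != "trace_id"]
```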
The most frequent failure category is where to focus first.
Work through each category with the user in this order:
Can we just fix it? Many failures have obvious fixes that don't need an evaluator at all:
If a clear fix resolves the failure, do that first. Only consider an evaluator for failures that persist after fixing.
Is an evaluator worth the effort? Not every remaining failure needs one. Building and maintaining evaluators has real cost. Ask the user:
Reserve evaluators for failures the user will iterate on repeatedly. Start with the highest-frequency, highest-impact category.
For failures that warrant an evaluator: prefer code-based checks (regex, parsing, schema validation) for anything objective. Use write-judge-prompt only for failures that require judgment. Critical requirements (safety, compliance) may warrant an evaluator even after fixing the prompt, as a guardrail.
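For example, the "missing filter" failure from the template above is objective enough for a code-based check; a hedged sketch, assuming the schema exposes a pet_friendly column (both the column name and the query shape are assumptions):

```python
import re

def sql_applies_pet_friendly_filter(sql: str) -> bool:
    """Code-based evaluator: does the generated SQL constrain on the pet_friendly column?"""
    where_clause = re.search(r"\bWHERE\b(.*)", sql, flags=re.IGNORECASE | re.DOTALL)
    if not where_clause:
        return False
    return bool(re.search(r"\bpet_friendly\b", where_clause.group(1), flags=re.IGNORECASE))

# The trace that ignored the pet-friendly requirement now fails this check
assert sql_applies_pet_friendly_filter("SELECT * FROM listings WHERE pet_friendly = TRUE AND price < 2000")
assert not sql_applies_pet_friendly_filter("SELECT * FROM listings WHERE price < 2000")
```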
Expect 2-3 rounds of reviewing and refining categories. After each round:
Stop reviewing when new traces aren't revealing new kinds of failures. Roughly: ~100 traces reviewed with no new failure types appearing in the last 20. The exact number depends on system complexity.
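That stopping rule can be checked mechanically; a minimal sketch, assuming each reviewed trace's note has been mapped to a failure-category name in review order:

```python
def saturated(categories_in_review_order: list[str], window: int = 20) -> bool:
    """True when the last `window` reviewed traces introduced no failure category not seen earlier."""
    if len(categories_in_review_order) <= window:
        return False
    recent = set(categories_in_review_order[-window:])
    earlier = set(categories_in_review_order[:-window])
    return recent <= earlier
```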
When production volume is high, use a mix:
| Strategy | When to Use | Method |
|---|---|---|
| Random | Default starting point | Sample uniformly from recent traces |
| Outlier | Surface unusual behavior | Sort by response length, latency, tool call count; review extremes |
| Failure-driven | After guardrail violations or user complaints | Prioritize flagged traces |
| Uncertainty | When automated judges exist | Focus on traces where judges disagree or have low confidence |
| Stratified | Ensure coverage across user segments | Sample within each dimension |
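A hedged sketch of mixing two of these strategies with pandas, assuming each trace row carries trace_id, response_length, and latency columns (all names are illustrative):

```python
import pandas as pd

def sample_for_review(traces: pd.DataFrame, n_random: int = 60, n_outliers: int = 20) -> pd.DataFrame:
    """Mix uniform random coverage with outliers surfaced by extreme length and latency."""
    random_part = traces.sample(n=min(n_random, len(traces)), random_state=0)
    outlier_part = pd.concat([
        traces.nlargest(n_outliers // 2, "response_length"),
        traces.nlargest(n_outliers // 2, "latency"),
    ])
    return pd.concat([random_part, outlier_part]).drop_duplicates(subset="trace_id")
```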