# nutmeg

Reviews football data code and visualisations for correctness, conventions, visuals, and edge cases. Dispatches data and chart reviewers after you build charts, pipelines, or analyses.

Install with `npx claudepluginhub withqwerty/plugins --plugin nutmeg`.
Dispatch specialised reviewers to check football data code and visualisations for correctness, convention compliance, and edge cases.
- Brainstorms football data visualisations, chart types, and designs for match reports, player profiles, and team dashboards. Uses web search for examples; adapts to user preferences (Python/R/JS, mplsoccer/d3/campos).
- Reviews code for quality issues: architecture conformance, anti-patterns, performance, maintainability. Read-only analysis; never modifies code.
- Verifies the visual output of slides, charts, documents, and UI via a render-vision-fix loop: renders PNG/PDF, scores it with Gemini vision (0-10), and iterates on defect fixes until the score is >= 9.5.
Read and follow `docs/accuracy-guardrail.md` before answering any question about provider-specific facts.
Read `.nutmeg.user.md`. If it doesn't exist, tell the user to run `/nutmeg` first.
Look at what the user wants reviewed. Read the relevant files. Then decide which reviewers to dispatch:
| Signal | Dispatch |
|---|---|
| Code processes football data (fetching, filtering, transforming, computing metrics) | data-reviewer agent |
| Code renders a chart or visualisation | chart-reviewer agent (Mode 1: Code Review) |
| User provides a URL or says "check how it looks" | chart-reviewer agent (Mode 2: Visual Inspection) |
| Chart has filters, tooltips, state, or dynamic data | chart-reviewer agent (Mode 3: Interactive Edge Cases) |
| Code imports `@withqwerty/campos-*` (React + campos) | chart-reviewer agent (Mode 4: React + Campos) — pass `skills/_shared/campos-bridge.md` in context |
| Code does both data processing AND chart rendering | Both agents in parallel |
Always dispatch at least one. If unclear, dispatch both — redundant findings are better than missed issues.
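As a rough sketch, the dispatch table above reduces to a small decision function. The parameter names and return shape here are invented for illustration; they are not part of the skill's actual interface:

```python
# Illustrative sketch of the dispatch table; signal names are made up for
# this example, not part of the skill itself.
def choose_reviewers(processes_data, renders_chart, has_url, is_interactive, uses_campos):
    agents = set()
    chart_modes = []

    if processes_data:
        agents.add("data-reviewer")
    if renders_chart:
        chart_modes.append("Code Review")
    if has_url:
        chart_modes.append("Visual Inspection")
    if is_interactive:
        chart_modes.append("Interactive Edge Cases")
    if uses_campos:
        # Mode 4 always runs alongside Mode 1
        if "Code Review" not in chart_modes:
            chart_modes.insert(0, "Code Review")
        chart_modes.append("React + Campos")
    if chart_modes:
        agents.add("chart-reviewer")

    if not agents:
        # If unclear, dispatch both -- redundant findings beat missed issues
        agents = {"data-reviewer", "chart-reviewer"}
    return agents, chart_modes
```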
Detection for Mode 4: grep the reviewed files for `@withqwerty/campos-` or `from "@withqwerty/campos`. Any match activates Mode 4 alongside Mode 1.
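A minimal sketch of that detection step; the project layout and file contents are hypothetical:

```shell
# Set up a throwaway file containing a campos import (hypothetical layout)
mkdir -p /tmp/nutmeg-demo/src
cat > /tmp/nutmeg-demo/src/PitchMap.tsx <<'EOF'
import { Pitch } from "@withqwerty/campos-react";
EOF

# Either pattern from the detection rule activates Mode 4
if grep -rqE '@withqwerty/campos-|from "@withqwerty/campos' /tmp/nutmeg-demo/src; then
  echo "Mode 4 activated (alongside Mode 1)"
fi
```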
Spawn agents in parallel when dispatching multiple. Each agent receives:
Review the football data code in [FILE_PATHS].
The user is working with [PROVIDER] data in [LANGUAGE].
They built: [DESCRIPTION]
Their concern: [WHAT_THEY_SAID]
Follow the full review checklist in your agent prompt. Use search_docs to verify
provider-specific facts (coordinate systems, qualifier IDs, event types).
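For illustration only, here is how the bracketed placeholders in the prompt above might be filled programmatically; the file name and values are made up:

```python
# Hypothetical values; the template text mirrors the placeholder prompt above.
DATA_REVIEWER_PROMPT = """\
Review the football data code in {file_paths}.
The user is working with {provider} data in {language}.
They built: {description}
Their concern: {concern}
Follow the full review checklist in your agent prompt. Use search_docs to verify
provider-specific facts (coordinate systems, qualifier IDs, event types)."""

prompt = DATA_REVIEWER_PROMPT.format(
    file_paths="src/xg_model.py",
    provider="Opta",
    language="Python",
    description="an xG model built from shot events",
    concern="penalty shots might be double-counted",
)
```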
Review the chart code in [FILE_PATHS].
Mode(s): [Code Review / Visual Inspection / Interactive Edge Cases]
The user is building: [DESCRIPTION]
Their concern: [WHAT_THEY_SAID]
Stack: [LANGUAGE + LIBRARIES from profile]
[If visual inspection: URL or instructions to render]
Load `skills/brainstorm/references/chart-canon.md` for convention checking.
After both agents report back:
If the chart-reviewer's code review finds potential rendering issues but can't confirm without seeing the output, suggest:
"The code review found [N] potential rendering issues. Want me to visually inspect the chart? I'll need a URL or you can run it locally."
Don't require visual inspection — many users can't easily serve their chart locally. Code review alone catches most issues.
If there are findings:
If no findings: