# code-review
This skill should be used when the user asks to review code, check a PR, audit a file or module, look at code quality, assess a codebase, or says things like "what do you think of this code", "review my changes", "look at this file", "check this PR", or "does this look right". Trigger on any request to evaluate, critique, or assess code — even casually phrased ones. Performs thorough, senior-engineer-quality code reviews that go beyond bug detection.
To install:

```
npx claudepluginhub dimagi/dimagi-claude-workflows --plugin code-review
```

This skill uses the workspace's default tool permissions.
Orchestrate a thorough code review by spawning **5 parallel specialist reviewers**, each deeply focused on one dimension, then synthesise their findings into a coherent, prioritised review.
Before spawning reviewers, establish the code location, the language/framework in use, and a brief sense of what the code is meant to do.
If the user just pastes code or says "review this", infer context and proceed immediately. Only ask if something critical is missing.
Read the code first. Use Read to open files and Glob to scan the directory structure before spawning agents; you need enough context to give each agent useful starting information and concrete file paths.
Create a temp working directory:

```
/tmp/code-review-{timestamp}/
```
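For example, a minimal shell sketch (the timestamp format here is an arbitrary choice; any unique suffix works):

```sh
# One unique working directory per review run
mkdir -p "/tmp/code-review-$(date +%Y%m%d-%H%M%S)"
```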
Before spawning agents, resolve ${CLAUDE_PLUGIN_ROOT} to its absolute path (it's available in your environment) and substitute it into the agent prompts below. Subagents don't have access to this variable.
Spawn all 5 agents simultaneously (in parallel, not sequentially). Each agent reads its own reviewer instructions, reviews the code along its single dimension, and writes its findings as JSON to the working directory.
Agents to spawn (see ${CLAUDE_PLUGIN_ROOT}/agents/ for full instructions for each):
| Agent file | Output file | Focus |
|---|---|---|
| `${CLAUDE_PLUGIN_ROOT}/agents/design-reviewer.md` | `design.json` | Architecture, separation of concerns, coupling, layering |
| `${CLAUDE_PLUGIN_ROOT}/agents/quality-reviewer.md` | `quality.json` | Clean code, naming, DRY, refactoring opportunities |
| `${CLAUDE_PLUGIN_ROOT}/agents/smells-reviewer.md` | `smells.json` | Code smells, hacks, workarounds, anti-patterns |
| `${CLAUDE_PLUGIN_ROOT}/agents/security-reviewer.md` | `security.json` | Vulnerabilities, input validation, auth, secrets, exposure |
| `${CLAUDE_PLUGIN_ROOT}/agents/maintainability-reviewer.md` | `maintainability.json` | Testability, error handling, dead code, documentation |
Prompt to give each agent (replace all bracketed values and ${CLAUDE_PLUGIN_ROOT} with absolute paths before sending):
```
Read your reviewer instructions from [/absolute/path/to/agents/agent-file.md].
Also consult [/absolute/path/to/skills/code-review/references/language-notes.md]
for the relevant [language/framework] section — it contains idiomatic patterns,
common pitfalls, and framework conventions to check.

Then review the following code:
- Code location: [path(s) to files]
- Language/framework: [detected]
- Purpose: [brief description of what the code does]
- Output path: /tmp/code-review-{timestamp}/[output-file]

Focus only on your assigned dimension. Be thorough within your domain.
```
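A filled-in prompt for the security reviewer might look like this (the plugin path, project files, and description below are all hypothetical):

```
Read your reviewer instructions from /home/user/.claude/plugins/code-review/agents/security-reviewer.md.
Also consult /home/user/.claude/plugins/code-review/skills/code-review/references/language-notes.md
for the relevant Python/Django section — it contains idiomatic patterns,
common pitfalls, and framework conventions to check.

Then review the following code:
- Code location: app/orders.py, app/order_manager.py
- Language/framework: Python/Django
- Purpose: Order lookup and pricing endpoints for an internal storefront
- Output path: /tmp/code-review-20260115-103000/security.json

Focus only on your assigned dimension. Be thorough within your domain.
```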
Once all agents complete, read all 5 JSON files from the working directory. Each contains an array of findings in a standard format (see agent files for schema).
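As an illustration only (the authoritative schema is defined in the agent files), a single finding entry might look roughly like this:

```json
{
  "severity": "major",
  "title": "Unparameterised SQL in order lookup",
  "file": "app/orders.py",
  "lines": "112-118",
  "detail": "User input is interpolated directly into the query string."
}
```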
Before writing the review, process the collected findings:
- **Deduplicate:** Multiple agents may flag the same issue from different angles (e.g., a god class flagged by both design and smells agents). Merge these into a single finding — don't list the same issue twice.
- **Calibrate severity:** Review the severity each agent assigned and apply engineering judgment. If 3+ agents flag the same root cause, that's a strong signal it's major or critical.
- **Find cross-cutting themes:** Look for a single root cause that explains multiple findings across different agents. Surface it as a design observation rather than listing all its symptoms separately.
- **Assess the overall picture:** Is this code fundamentally healthy with some rough edges? Or is there a deeper structural problem?
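To make the deduplication step concrete, here is a hypothetical merged finding (the class name, file, and `sources` field are invented for illustration): flagged independently by the design and smells reviewers, it is reported once, at the higher of the two severities.

```json
{
  "severity": "critical",
  "title": "OrderManager is a god class mixing persistence, pricing, and notification logic",
  "file": "app/order_manager.py",
  "sources": ["design.json", "smells.json"]
}
```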
Output the following blocks, with no individual finding detail yet:

**Summary.** 2–4 sentences on the overall state of the code. Be honest and direct. Praise what's genuinely good.

**Findings table.** A compact numbered table — one row per finding, no detail:
| # | Sev | Finding | Location |
|---|---|---|---|
| 1 | 🔴 | [title] | [file:line] |
| 2 | 🟠 | [title] | [file:line] |
| … | … | … | … |
Severity key: 🔴 Critical · 🟠 Major · 🟡 Minor · 💡 Suggestion
Order rows by severity (critical first). Number sequentially starting at 1.
**Themes.** Higher-level architectural concerns that span multiple findings — the "core issue" narrative.
**Strengths.** 2–4 things done well. Anchors the review and tells the author what to preserve.
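A hypothetical findings table, purely to show the shape (the rows continue the invented examples above):

| # | Sev | Finding | Location |
|---|---|---|---|
| 1 | 🔴 | OrderManager is a god class | app/order_manager.py:1 |
| 2 | 🟠 | Unparameterised SQL in order lookup | app/orders.py:112 |
| 3 | 🟡 | Duplicate date-parsing helpers | app/utils.py:40 |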
After outputting the triage, use AskUserQuestion to ask:

"Which findings do you want to expand? Enter numbers (e.g. `1,3`), a range (`1-4`), `all`, or `fix 2,5` to implement directly. Or just ask about any finding."
Parse the user's response from Step 5b:
| Input pattern | Action |
|---|---|
| `1,3,5` | Expand findings 1, 3, 5 with full detail |
| `1-4` | Expand findings 1 through 4 |
| `all` | Expand all findings |
| `fix 2,5` | Expand + implement findings 2 and 5 |
| `fix all` | Expand + implement all findings |
| Natural language (e.g. "tell me more about the SQL issue") | Map to the closest matching finding by title/topic; if no confident match, ask the user to clarify which finding they mean |
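Mixed inputs are not listed in the table; a reasonable reading (an assumption, not something the patterns above specify) is to take the union of the selections:

```
1-2, 5                   → expand findings 1, 2, and 5
fix 3, and explain 4     → implement finding 3, expand finding 4
```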
Expanding a finding means writing the full detail block:
```
**[emoji] Title** (`path/to/file.py`, line X–Y)

What the problem is and why it matters — the actual consequence or risk.

*Suggestion:* What to do instead, with a brief code snippet if it genuinely helps.
```
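Filled in with the hypothetical SQL finding from earlier, an expanded block might read:

```
**🟠 Unparameterised SQL in order lookup** (`app/orders.py`, line 112–118)

User input is interpolated directly into the query string, so a crafted
order ID can read or modify rows it should never touch.

*Suggestion:* Pass the value as a bound parameter, e.g.
cursor.execute("SELECT ... WHERE id = %s", [order_id]), instead of
building the query with string formatting.
```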
Use the severity guide from Step 5a.
Implementing a finding (`fix N`): after expanding, make the code change immediately. Summarise what was changed in 1–2 sentences.
After expanding/fixing, use AskUserQuestion to ask:
"Anything else to address? Pick more numbers, ask about a finding, or say 'done'."
If the user pushes back on a finding ("I disagree with #3"), engage with their reasoning directly — they may have context that changes the assessment. This is a conversation, not a report.
Exit the loop when the user says "done" or when their response cannot be interpreted as a finding selection, fix request, or pushback on a finding.
Remove the temp working directory once the review is delivered:
```
rm -rf /tmp/code-review-{timestamp}/
```
If running in an environment without sub-agent support (e.g., claude.ai web), review the code by working through each dimension in sequence using the agent files as checklists, and consult ${CLAUDE_PLUGIN_ROOT}/skills/code-review/references/language-notes.md for language-specific guidance. The output format is identical.
- `${CLAUDE_PLUGIN_ROOT}/agents/` — Full reviewer instructions and JSON output schema for each specialist dimension
- `${CLAUDE_PLUGIN_ROOT}/skills/code-review/references/language-notes.md` — Language-specific idioms, common pitfalls, and framework conventions (Python, Django, JS/TS, React, Node, Go, Java/Kotlin, SQL)