/review
You are reviewing code changes: find bugs, security issues, and quality problems before they ship.
Input
$ARGUMENTS
If $ARGUMENTS specifies a file or branch, review only those changes. Otherwise, review all uncommitted changes.
Step 1 — Gather changes
Use the helper script to collect review data in one pass:
${CLAUDE_SKILL_DIR}/scripts/gather-changes.sh [base-branch]
This collects: changed files, working tree status, debug statements, TODO/FIXME, possible secrets, and commit log. Review the output before proceeding.
If the script is not available, gather manually:
git diff
git diff --staged
git status
If reviewing a branch against main:
git diff main...HEAD
git log main..HEAD --oneline
Read every changed file in full to understand context, not just the diff.
Step 2 — Review checklist
For each changed file, check:
Correctness
- Does the logic match the intended behavior?
- Are there off-by-one errors, null/undefined access, or race conditions?
- Are error paths handled, not just happy paths?
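The correctness bullets above can be made concrete. Here is a hypothetical Python sketch (not from any particular codebase) showing the kind of off-by-one and missing error path a review should catch:

```python
# Hypothetical example: two classic correctness bugs a review should flag.

def last_n_items_buggy(items, n):
    # Off-by-one: slicing from len(items) - n - 1 returns n + 1 items.
    return items[len(items) - n - 1:]

def last_n_items_fixed(items, n):
    # Correct: a negative slice index returns exactly n items,
    # and invalid n is an explicit error path rather than silently ignored.
    if n <= 0:
        raise ValueError("n must be positive")
    return items[-n:]
```

The buggy version looks plausible in a diff; only checking the logic against the intended behavior ("return the last n items") exposes it.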
Security
- Input validation: is user input sanitized before use?
- No secrets, tokens, or credentials in code or config
- No SQL injection, XSS, command injection, or path traversal
- Auth/authz checks in place where needed
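As an illustration of the injection bullet, here is a minimal sketch using Python's stdlib sqlite3 module (table and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: string interpolation lets the input rewrite the query,
# so the WHERE clause matches every row.
injectable = f"SELECT name FROM users WHERE name = '{user_input}'"
rows_bad = conn.execute(injectable).fetchall()

# Safe: a parameterized query treats the input as a literal value,
# so the malicious string matches nothing.
rows_good = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
```

In a review, any query built with f-strings, `+`, or `%` formatting from user input deserves a Blocking or Major finding.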
Performance
- No N+1 queries or unbounded loops
- No unnecessary allocations in hot paths
- Are there missing indexes for new queries?
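The N+1 pattern in the first bullet often hides behind an innocent-looking loop. A minimal sketch (hypothetical schema, stdlib sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'a'), (2, 2, 'b');
""")

# N+1: one query for the list, then one additional query per row.
def titles_with_author_n_plus_one():
    out = []
    for _post_id, author_id, title in conn.execute("SELECT * FROM posts"):
        (name,) = conn.execute(
            "SELECT name FROM authors WHERE id = ?", (author_id,)
        ).fetchone()
        out.append((title, name))
    return out

# Fixed: a single JOIN fetches everything in one round trip.
def titles_with_author_join():
    return list(conn.execute(
        "SELECT p.title, a.name FROM posts p JOIN authors a ON a.id = p.author_id"
    ))
```

Both functions return the same rows; the difference only shows up as query count under load, which is why it slips through reviews that read the diff line by line.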
Tests
- Every new behavior has at least one test
- Every new validation rule has a positive test (valid input accepted) and a negative test (invalid input rejected)
- Test coverage for changed code paths is adequate — if a code path was modified, a test should exercise it
- Do tests verify edge cases and error paths, not just happy path?
- Are tests deterministic (no flaky timing, no test interdependence)?
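The positive/negative pairing above can be sketched like this (the validation rule is invented for illustration):

```python
import re

def is_valid_username(name):
    # Hypothetical rule: 3 to 16 characters, lowercase letters and digits only.
    return bool(re.fullmatch(r"[a-z0-9]{3,16}", name))

# Positive test: valid input is accepted.
assert is_valid_username("alice42")

# Negative tests: invalid input is rejected, including edge cases.
assert not is_valid_username("")            # empty
assert not is_valid_username("ab")          # too short
assert not is_valid_username("Alice")       # uppercase
assert not is_valid_username("a" * 17)      # too long
```

A change that adds only the positive test proves the rule can pass, not that it can fail; flag missing negative tests explicitly.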
Simplicity & Scope
- No over-abstraction: class hierarchies, strategy patterns, or factories for single-use code
- No speculative features: caching, validation, notification hooks, or config options nobody asked for
- No drive-by refactoring: unrelated style changes, added type hints, rewritten comments, reformatted code
- Every changed line traces directly to the task — if it doesn't, flag it
- Assumptions are stated, not silently baked in
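Over-abstraction in particular is easier to recognize from an example than a rule. A hypothetical sketch of the pattern the first bullet describes:

```python
# Over-abstracted: a strategy hierarchy for a single, fixed behavior.
class GreetingStrategy:
    def greet(self, name):
        raise NotImplementedError

class EnglishGreetingStrategy(GreetingStrategy):
    def greet(self, name):
        return f"Hello, {name}"

class Greeter:
    def __init__(self, strategy):
        self.strategy = strategy

    def greet(self, name):
        return self.strategy.greet(name)

# Simple: one function does the same job until a second strategy actually exists.
def greet(name):
    return f"Hello, {name}"
```

If the diff introduces the hierarchy but only one concrete strategy ever appears, flag it as speculative structure.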
Conventions
- Follows existing project patterns (naming, structure, error handling)
- No debug code left behind (console.log, print, debugger, TODO/FIXME)
- No commented-out code
- No unrelated changes bundled in
Step 3 — Run build and tests
A review that says "tests look correct" without running them is incomplete. Run the project's build and full test suite.
- Discover test suites. Look for all test directories, projects, or configurations — not just unit tests. Projects commonly have: unit tests, integration tests, functional tests, end-to-end tests.
- Run the build:
  # Use whatever build command the project uses
  # Examples: dotnet build, bun run build, go build ./..., cargo build, npm run build, make
- Run every test suite you can:
  # Use whatever test command the project uses
  # Examples: dotnet test, bun test, pytest, go test ./..., cargo test, npm test
- If a test suite requires infrastructure (database, Docker, external services) that is not available, inform the user explicitly and ask them to run those tests manually.
Record the results — you will include them in the review output.
If the build fails or tests fail, report them as blocking issues. Do NOT produce a clean review with failing tests.
Step 4 — Produce the review
Output the following structure:
Code Review
Scope: [files reviewed]
Build: Pass / Fail
Tests: [N passing, N failing — per suite]
Verdict: Clean / Minor Issues / Needs Changes / Blocking Issues
Issues Found
Numbered list, each with:
- Severity: Blocking / Major / Minor / Nit
- File: path/to/file:line
- Problem: What's wrong
- Fix: How to fix it
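A filled-in entry might look like this (the file path, line number, and finding are invented for illustration):

```
1. Severity: Major
   File: src/auth/login.ts:42
   Problem: User-supplied redirect URL is passed to the redirect call without
   validation, enabling open-redirect phishing.
   Fix: Validate the URL against an allowlist of known hosts before redirecting.
```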
What Looks Good
Brief list of things done well. Positive reinforcement for good patterns.
Summary
One paragraph: overall assessment and recommendation (ship as-is, fix and ship, or rework needed).
Gotchas
These are common failure modes during code review. Watch for them:
- Rubber-stamping. Claude tends to be too positive, writing "Looks great!" reviews with no real findings. If you review 500 lines and find zero issues, you probably missed something. Push harder on security and edge cases.
- Only reading the diff. The diff shows what changed, not what it interacts with. If a new function calls an existing function, read that existing function too. Bugs often live at the boundary between new and old code.
- Style nitpicking over substance. Spending review tokens on "consider renaming this variable" while missing that the function doesn't handle null input is a misallocation. Prioritize: security > correctness > performance > conventions > style.
- False positive blocking issues. Marking something as "Blocking" when it's actually a preference or theoretical concern erodes trust. Reserve "Blocking" for things that will cause bugs, security holes, or data loss in production.
- Missing the security forest for the trees. Claude often checks individual lines for injection but misses auth/authz gaps: "this endpoint is accessible without authentication" or "this query returns data from other tenants."
- Not running the tests. A review that says "tests look correct" without actually running them is incomplete. Run them — sometimes tests that look correct fail for environment or dependency reasons.
- Only running unit tests. Projects often have multiple test suites (unit, integration, functional, e2e). Discover and run ALL available suites. If integration or functional tests require infrastructure (database, Docker, external services), inform the user and ask them to run those tests before approving the review.
Important constraints
- Read the full file, not just the diff. Bugs often hide in context around the change.
- Be specific. "This could be a security issue" is not useful. "Line 42: user input is passed directly to exec() — use a whitelist or parameterized command instead" is.
- Don't nitpick style unless it violates project conventions. Focus on bugs, security, and correctness.
- Flag blocking issues clearly. If something must be fixed before shipping, say so unambiguously.
- Run tests if possible. If the project has a test command, run it and report results.