Standardized code review for Brite projects
Conducts comprehensive code reviews with security, correctness, and quality assessments following project-specific standards.
To install:

    /plugin marketplace add brite-nites/brite-claude-plugins
    /plugin install workflows@brite-claude-plugins

Perform a thorough code review following Brite standards. Reference the tech stack established in /workflows:tech-stack for technology-specific expectations.
First, determine what to review based on the user's input:
- If $ARGUMENTS specifies a PR number or URL: use `gh pr diff` to get the changes and `gh pr view` for context.
- If $ARGUMENTS specifies files or directories: review those files directly.
- If $ARGUMENTS is empty: check for uncommitted changes with `git diff` and `git diff --staged`. If there are none, ask the user what to review.
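The routing above can be sketched as a small shell dispatcher. The function name and the returned labels are illustrative only, not part of the command spec:

```shell
# Illustrative input routing for the review command.
# review_target and its output labels are assumptions for this sketch.
review_target() {
  arg="$1"
  case "$arg" in
    "")
      # No argument: fall back to uncommitted changes, else ask the user.
      if [ -n "$(git diff 2>/dev/null; git diff --staged 2>/dev/null)" ]; then
        echo "uncommitted-changes"
      else
        echo "ask-user"
      fi
      ;;
    [0-9]*|https://*) echo "pr" ;;  # PR number or URL -> gh pr diff / gh pr view
    *) echo "files" ;;              # paths -> review those files directly
  esac
}
```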
This command supports two modes:
- **Direct mode** ($ARGUMENTS with PR/files): perform a direct review yourself, working through the checklists below. Best for quick spot-checks and PR reviews.
- **Deep mode** ($ARGUMENTS contains "deep" or "--deep"): dispatch the three specialized review agents in parallel for comprehensive coverage. This is what /workflows:review does during the session loop. Use it for thorough pre-merge reviews.
To run deep mode: Launch the code-reviewer, security-reviewer, and typescript-reviewer agents in parallel via the Task tool, passing the diff context. Collect and merge their findings into a single P1/P2/P3 report.
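Merging the agents' findings can be sketched as a stable sort over severity-prefixed lines. The `P1|file:line|message` line format is an assumption for illustration, not the agents' actual output format:

```shell
# Hypothetical merge step: each reviewer emits findings as
# "P1|file:line|message" lines in its own file; merging is
# concatenation plus a stable sort on the severity prefix,
# so P1 findings surface first in the combined report.
merge_findings() {
  sort -t '|' -k1,1 -s "$@"
}
```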
Work through each section. Only report issues you actually find — skip sections with no findings.
**Frontend.** Only apply if reviewing frontend code. Defer to the react-best-practices skill for the full 45-rule audit (waterfalls, bundle size, re-renders, server components, etc.). In this review, focus on high-impact checks (e.g., no stray `any` types).

**Backend.** Only apply if reviewing backend code. Defer to the python-best-practices skill for the full 38-rule audit (async correctness, DI, database patterns, Pydantic, etc.).
**Data.** Only apply if reviewing data code. Focus on query efficiency (e.g., avoid SELECT *, use partitioning).

Before presenting findings, run the project's test suite if one exists: check package.json for test scripts, or look for common test runners (vitest, jest, pytest).

Present findings using P1/P2/P3 severity:
P1 — Must Fix (blocks merge: bugs, security issues, data loss risks)
P2 — Should Fix (user decides: code smells, missing tests, unclear naming)
P3 — Nit (report only: formatting, style preferences, minor polish)
For each finding, include: the severity, the file and line, a brief explanation of the problem, and a suggested fix.
End with a summary: total findings by severity, overall assessment (approve, request changes, or needs discussion), and any positive callouts for well-written code.
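The closing tally can be sketched as a count by severity, assuming findings are recorded one per line with a severity prefix such as `P1|file:line|message` (that format is an assumption for this sketch):

```shell
# Hypothetical summary step: count merged findings by severity.
# Input is a file of "P1|file:line|message" lines (an assumed format).
summary_counts() {
  cut -d '|' -f1 "$1" | sort | uniq -c | awk '{print $2 ": " $1}'
}
```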
Example: `/code-review` — code review a pull request.