npx claudepluginhub brite-nites/brite-claude-plugins --plugin workflows

# Code Review
Perform a thorough code review following Brite standards. Reference the tech stack established in `/workflows:tech-stack` for technology-specific expectations.
## Determine Review Scope

First, determine what to review based on the user's input:
**If `$ARGUMENTS` specifies a PR number or URL:** Use `gh pr diff` to get the changes and `gh pr view` for context.

**If `$ARGUMENTS` specifies files or directories:** Review those files directly.

**If `$ARGUMENTS` is empty:** Check for uncommitted changes with `git diff` and `git diff --staged`. If there are none, ask the user what to review.
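The fallback order above can be sketched as a small classifier. This is illustrative only: the function name and the return labels (`"pr"`, `"files"`, `"local"`) are assumptions, not part of the command.

```python
import re

def review_scope(arguments: str) -> str:
    """Classify the review target from the command's $ARGUMENTS string.

    Returns "pr", "files", or "local" (labels chosen for this sketch).
    """
    arguments = arguments.strip()
    if not arguments:
        return "local"  # fall back to `git diff` / `git diff --staged`
    # A bare PR number ("123") or a GitHub PR URL -> fetch via `gh pr diff` / `gh pr view`
    if re.fullmatch(r"\d+", arguments) or "/pull/" in arguments:
        return "pr"
    return "files"      # review the named files/directories directly
```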
This command supports two modes:

- **Direct mode** (default; `$ARGUMENTS` with PR/files): Perform a direct review yourself, working through the checklists below. Best for quick spot-checks and PR reviews.
- **Deep mode** (`$ARGUMENTS` contains "deep" or "--deep"): Launch the selected review agents in parallel for comprehensive coverage. This is what `/workflows:review` does during the session loop. Use it for thorough pre-merge reviews.
To run deep mode: Use the same dynamic agent selection algorithm as `/workflows:review` Step 4:

- **Tier 1** (always): code-reviewer, security-reviewer, performance-reviewer
- **Tier 2** (stack-detected): typescript-reviewer, python-reviewer, data-reviewer
- **Tier 3** (opt-in/conditional): architecture-reviewer, test-quality-reviewer, accessibility-reviewer

All review agents run on Opus (model specified in agent files). Deep mode always uses thorough depth (Tier 1 + Tier 2 + standard Tier 3 logic). To use fast or comprehensive depth, run `/workflows:review` directly with the depth keyword.

Launch all selected agents in parallel via the Task tool, passing the diff context. Agents include confidence scores (1-10) with each finding. Collect and merge their findings into a single P1/P2/P3 report. Apply cross-agent deduplication: when multiple agents flag the same file:line, keep the finding with the higher confidence score; on ties, use specialization order (security-reviewer > data-reviewer > performance-reviewer > architecture-reviewer > test-quality-reviewer > python-reviewer > typescript-reviewer > accessibility-reviewer > code-reviewer). Apply the same confidence threshold filtering as `/workflows:review` Step 5 (>= 7 included, low-confidence P2/P3 filtered, borderline P1s marked for human review).

Note: deep mode does NOT include Diff Triage or per-finding Validation — those are `/workflows:review`-only features.
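The merge, deduplication, and threshold rules can be sketched in Python. The `Finding` shape and the function name are assumptions for illustration; only the tie-break order and threshold semantics come from the spec above.

```python
from collections import namedtuple

Finding = namedtuple("Finding", "agent severity file line confidence message")

# Tie-break order from the spec: more specialized agents win on equal confidence.
SPECIALIZATION = [
    "security-reviewer", "data-reviewer", "performance-reviewer",
    "architecture-reviewer", "test-quality-reviewer", "python-reviewer",
    "typescript-reviewer", "accessibility-reviewer", "code-reviewer",
]

def merge_findings(findings, threshold=7):
    """Deduplicate per file:line, then apply confidence-threshold filtering."""
    best = {}
    for f in findings:
        key = (f.file, f.line)
        cur = best.get(key)
        # Higher confidence wins; on ties, the more specialized agent wins.
        if cur is None or (f.confidence, -SPECIALIZATION.index(f.agent)) > (
            cur.confidence, -SPECIALIZATION.index(cur.agent)
        ):
            best[key] = f
    report = {"P1": [], "P2": [], "P3": []}
    for f in best.values():
        if f.severity == "P1":
            # Borderline P1s are never dropped, only flagged for a human.
            report["P1"].append(
                f if f.confidence >= threshold
                else f._replace(message=f.message + " [needs human review]")
            )
        elif f.confidence >= threshold:
            report[f.severity].append(f)
    return report
```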
Work through each section. Only report issues you actually find — skip sections with no findings.
**Frontend (if any):** Only apply if reviewing frontend code. Defer to the react-best-practices skill for the full 45-rule audit (waterfalls, bundle size, re-renders, server components, etc.). In this review, focus on:

**Backend (if any):** Only apply if reviewing backend code. Defer to the python-best-practices skill for the full 38-rule audit (async correctness, DI, database patterns, Pydantic, etc.).
**Data (if any):** Only apply if reviewing data code (avoid `SELECT *`, use partitioning).

Before presenting findings, run the project's test suite if one exists: check `package.json` for test scripts, or look for common test runners (vitest, jest, pytest).

Present findings using P1/P2/P3 severity:
P1 — Must Fix (blocks merge: bugs, security issues, data loss risks)
P2 — Should Fix (user decides: code smells, missing tests, unclear naming)
P3 — Nit (report only: formatting, style preferences, minor polish)
For each finding, include: the file and line location, a description of the issue, and a suggested fix.
End with a summary: total findings by severity, overall assessment (approve, request changes, or needs discussion), and any positive callouts for well-written code.
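For illustration, the closing summary line might be rendered like this. The assessment rule here is a simplified assumption (any P1 means request changes); the real review may also conclude "needs discussion".

```python
def summarize(counts: dict) -> str:
    """Render the closing summary line from severity counts.

    Assumption: any P1 -> "request changes"; the actual command may also
    return "needs discussion" based on judgment, not just counts.
    """
    total = sum(counts.values())
    assessment = "request changes" if counts.get("P1") else "approve"
    return (f"{total} findings (P1: {counts.get('P1', 0)}, "
            f"P2: {counts.get('P2', 0)}, P3: {counts.get('P3', 0)}) -> {assessment}")
```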