Reviews git changes, PRs/MRs, or branch diffs against coding guidelines using 5-7 parallel agents for bugs, security/logic errors, violations, test coverage, and simplification. Deep mode iteratively auto-fixes.
`npx claudepluginhub oprogramadorreal/optimus-claude --plugin optimus`

This skill uses the workspace's default tool permissions.
Analyze local git changes (or a PR/MR) against the project's coding guidelines, using 5 to 7 parallel review agents for comprehensive coverage. High-signal findings only: bugs, logic errors, security issues, guideline violations. Excludes style concerns, subjective suggestions, and linter-catchable issues.
Extract from the user's arguments:
- deep flag (present/absent)
- harness keyword after deep (present/absent)

Examples:
- /optimus:code-review → local changes, normal mode
- /optimus:code-review src/auth → scope to path, normal mode
- /optimus:code-review --pr 42 or /optimus:code-review #42 → PR mode, normal
- /optimus:code-review deep → local changes, deep mode (8 iterations)
- /optimus:code-review deep "focus on src/auth" → scoped, deep mode
- /optimus:code-review deep harness → harness mode (present command and stop)
- /optimus:code-review deep harness "focus on src/auth" → harness mode, scoped

Read $CLAUDE_PLUGIN_ROOT/skills/init/references/multi-repo-detection.md for workspace detection. If a multi-repo workspace is detected:
- treat each repository separately (the workspace root has no .git/, so git commands must target individual repos)

Read $CLAUDE_PLUGIN_ROOT/skills/init/references/prerequisite-check.md and apply the prerequisite check (CLAUDE.md + coding-guidelines.md existence, fallback logic).
If the system prompt contains HARNESS_MODE_ACTIVE, read $CLAUDE_PLUGIN_ROOT/references/harness-mode.md and follow its single-iteration execution protocol. The reference covers progress file reading, state initialization, scope and file-list rules, Step 3 / Step 4 overrides under harness mode, and the Step 9 apply/output protocol. Then proceed through Step 3, Step 4, and Step 5 — skip only the Step 2 user confirmation.
If HARNESS_MODE_ACTIVE is NOT in the system prompt, continue with the standard interactive flow below.
If the harness keyword was detected in Step 1, read the Skill-Triggered Invocation section of $CLAUDE_PLUGIN_ROOT/references/harness-mode.md and follow its steps. Pass:
- skill_name = code-review
- scope = scope text from Step 1 argument parsing
- max_iterations = not specified (use harness default)

The reference protocol presents the command and stops. Do not proceed to Step 3 or any remaining steps.
If the deep flag was detected in Step 1 (without harness), activate deep mode. Deep mode loops review-fix cycles (Steps 5–9) until zero new findings remain or 8 iterations are reached, then presents a single consolidated report with all fixes already applied as local changes.
Before proceeding, check whether a test command is available (from .claude/CLAUDE.md). If no test command exists, deep mode's auto-apply loop has no safety net — fall back to normal mode and warn: "Deep mode requires a test command for safe auto-apply. Falling back to normal mode — re-run /optimus:init to set up test infrastructure first." Then continue with the standard single-pass flow.
If a test command is available, warn the user:
Deep mode runs up to 8 iterative review-fix passes. Each iteration is a full multi-agent review cycle — credit and time consumption multiply with iteration count. Fixes are applied automatically at each iteration without per-change approval. Low test coverage increases the chance of undetected breakage; consider running /optimus:unit-test first to strengthen the safety net. Each iteration also accumulates context — on large codebases, output quality may degrade in later iterations.

Test command:
[test command from CLAUDE.md]
Then use AskUserQuestion — header "Deep mode", question "Proceed with deep mode?" — offering options to confirm deep mode or fall back to Normal mode:
Tell the user: Tip: For large codebases or extended sessions, re-run with /optimus:code-review deep harness to launch the external harness with fresh context per iteration.
If the user did not invoke with deep, skip this step.
If the user selects Normal mode, continue with the standard single-pass flow. Record the user's choice as a deep-mode flag for subsequent steps. If deep mode is confirmed, initialize iteration-count to 1, total-fixed to 0, total-reverted to 0, and accumulated-findings to an empty list. Each entry in accumulated-findings tracks: file (with line), category (Bug, Security, Guideline Violation, Code Quality, Test Coverage Gap, Contract Quality), guideline (the specific project rule, or "General: bug/security/contract quality"), summary (one-sentence description of the issue), fix description (brief description of the fix applied or attempted), iteration (which iteration discovered it), and status (updated through apply/test phases).
Detect and gather the changes to review. Use the scope/focus instructions parsed in Step 1.
Run the following git commands to gather all local changes:
```shell
# Staged changes
git diff --cached --stat
git diff --cached

# Unstaged changes to tracked files
git diff --stat
git diff

# Untracked files
git status --short
```
Read $CLAUDE_PLUGIN_ROOT/skills/pr/references/platform-detection.md and use the Platform Detection Algorithm section to determine if the project is GitHub, GitLab, or unknown.
- GitHub: run gh pr view --json number,state,baseRefName 2>/dev/null — only use baseRefName if state equals "OPEN"; if state is not "OPEN", treat as "no open PR"
- GitLab: run glab mr view --output json 2>/dev/null — only use target_branch if state equals "opened"; if state is not "opened", treat as "no open MR". If the command fails, treat as no open MR — unless the failure appears to be an auth or connectivity error, in which case inform the user before falling back
- Otherwise, read $CLAUDE_PLUGIN_ROOT/skills/pr/references/default-branch-detection.md to determine <base-branch>, then run git log --oneline origin/<base-branch>..HEAD

When the user says "review PR #42", passes --pr, #123, or a PR URL:
Platform detection — read $CLAUDE_PLUGIN_ROOT/skills/pr/references/platform-detection.md and use the Platform Detection Algorithm section (including the Signal Conflict Resolution rule). If platform is unknown → inform the user and ask them to specify.
GitHub projects:
- Verify gh is available by running gh --version. If not available, inform the user that PR review requires the GitHub CLI (gh) and offer to review the branch diff instead
- Run gh pr view <N> --json state,isDraft,title,body,baseRefName,headRefName to get PR metadata
- Capture the title and body fields as pr-description for use in Steps 5 and 6 (author intent context)
- Run gh pr diff <N> to get the actual diff

GitLab projects:
- Verify glab is available by running glab --version. If not available, inform the user: "This project uses GitLab. PR/MR review requires the GitLab CLI (glab). You can use branch diff mode instead: /optimus:code-review changes since origin/main." Offer to review the branch diff as a fallback.
- Run glab mr view <N> --output json to get MR metadata
- Capture the title and description fields as pr-description for use in Steps 5 and 6 (author intent context)
- Run glab mr diff <N> to get the actual diff

When the user says "review changes since main" or a similar reference:
- git diff <ref>...HEAD for the diff
- git diff --name-only <ref>...HEAD for the file list

When the user specifies a path (e.g., "review src/auth"):
- git diff -- <path> and git diff --cached -- <path>

Present a brief summary before proceeding:
## Review Scope
- Mode: Local changes / PR #N / Branch diff since <ref>
- Files changed: [N]
- Lines: +[added] / -[removed]
If more than 50 files or 3000 lines are changed, warn the user and suggest narrowing the scope (e.g., specific path or directory).
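The scope summary and the 50-file warning above can be computed mechanically. A minimal sketch for the staged-local-changes case, assuming POSIX shell and git; `review_scope_summary` is an illustrative name, not part of the skill:

```shell
# Sketch: compute the "Review Scope" numbers for staged local changes.
# Assumes it runs inside a git work tree.
review_scope_summary() {
  files=$(git diff --cached --name-only | wc -l | tr -d ' ')
  # --numstat prints "added<TAB>removed<TAB>path" per file; sum the first two columns
  counts=$(git diff --cached --numstat | awk '{a+=$1; r+=$2} END {printf "%d %d", a+0, r+0}')
  added=${counts%% *}
  removed=${counts##* }
  echo "Files changed: $files, Lines: +$added / -$removed"
  if [ "$files" -gt 50 ]; then
    echo "Warning: scope exceeds 50 files; consider narrowing to a path"
  fi
}
```

The unstaged and branch-diff cases follow the same shape with different `git diff` arguments.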
If a multi-repo workspace was detected in Step 1, resolve prerequisites per-repo:
- resolve .claude/CLAUDE.md and .claude/docs/ independently per repository (not the workspace root)

Read $CLAUDE_PLUGIN_ROOT/skills/init/references/constraint-doc-loading.md for the full document loading procedure (single project and monorepo layouts, scoping rules).
These files define the review criteria. Every guideline-related finding must be justified by what these docs establish — never impose external preferences.
Apply the "Submodule Exclusion" rule from $CLAUDE_PLUGIN_ROOT/skills/init/references/constraint-doc-loading.md — exclude submodule directories from the review.
Before proceeding to the review, present a brief summary:
Proceed immediately to Step 5 — do not wait for user confirmation.
Launch every applicable agent as a general-purpose Agent tool call in a single message so they run in parallel. The full fan-out is the design — do not reduce the count to save tokens or time. See the agent overview below for which agents always run and which activate conditionally.
Each agent receives the list of changed file paths (from Step 3 in normal/interactive mode, or from scope_files.current in harness mode when pre-populated by the harness).
Read the agent prompt files from $CLAUDE_PLUGIN_ROOT/skills/code-review/agents/ for individual agent prompts. Read $CLAUDE_PLUGIN_ROOT/skills/code-review/agents/shared-constraints.md for the shared quality bar, exclusion rules, and false positive guidance applying to all agents.
If a pr-description was captured in Step 3 and its body is non-empty, prepend the PR/MR context block to every agent prompt before the file list. Read $CLAUDE_PLUGIN_ROOT/skills/code-review/agents/context-blocks.md for the template, truncation rule, and guardrail language.
If deep mode is active and iteration-count > 1, prepend the iteration context block to every agent prompt before the file list (after the PR/MR context block, if present). Read $CLAUDE_PLUGIN_ROOT/skills/code-review/agents/context-blocks.md for the template and format.
| Agent | Role | Prompt file |
|---|---|---|
| 1 — Bug Detector | Null access, off-by-one, race conditions, resource leaks, type mismatches | bug-detector.md |
| 2 — Security & Logic | SQL injection, XSS, hardcoded secrets, missing auth, security-relevant API violations | security-reviewer.md |
| 3 — Guideline Compliance A | Explicit violations of project docs with exact rule citations | guideline-reviewer.md |
| 4 — Guideline Compliance B | Same task as Agent 3 — independent review reduces false negatives | guideline-reviewer.md |
| 5 — Code Simplifier | Unnecessary complexity, naming, dead code, pattern violations | code-simplifier.md |
| 6 — Test Guardian | Test coverage gaps, structural barriers to testability | test-guardian.md |
| 7 — Contracts Reviewer | Backward compatibility, type safety, contract versioning, encapsulation | contracts-reviewer.md |
Agents 1–5 always run. Agent 6 (Test Guardian) runs when test infrastructure is detected (.claude/docs/testing.md or subproject docs/testing.md exists). Agent 7 (Contracts Reviewer) runs when changed files include contract-related paths (see activation rules below). Each agent returns a structured list of findings, bounded by the Finding Cap rule in $CLAUDE_PLUGIN_ROOT/references/shared-agent-constraints.md. Guideline agents (3–4) are constructed dynamically based on Step 4's doc loading results (single project vs monorepo paths).
Agent 7 activates when any changed file from Step 3 matches at least one of these patterns:
Directory patterns — file path contains any of: api/, routes/, controllers/, endpoints/, handlers/, graphql/, proto/, grpc/
File patterns — file name matches any of: *.dto.*, *.schema.*, *.contract.*, openapi.*, swagger.*, *.proto, *.graphql, *.gql
If no changed file matches, skip Agent 7 entirely (zero cost).
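One possible encoding of the activation rule above is a single extended-regex grep over the changed-file list; the pattern mirrors the directory and file patterns listed, and `contracts_agent_activates` is an illustrative name:

```shell
# Sketch: does any changed file path match a contract-related pattern?
contracts_agent_activates() {
  # Reads changed file paths on stdin, one per line; exits 0 if any path matches.
  grep -Eq '(^|/)(api|routes|controllers|endpoints|handlers|graphql|proto|grpc)/|\.(dto|schema|contract)\.|(^|/)(openapi|swagger)\.|\.(proto|graphql|gql)$'
}
```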
Wait for all launched agents to complete before proceeding to Step 6.
Independently verify each finding to filter false positives. Apply the verification protocol from $CLAUDE_PLUGIN_ROOT/skills/init/references/verification-protocol.md — treat agent-reported findings as claims that require independent evidence, not as ground truth.
For each finding from Step 5:
For each unique file that has findings, check recent git history for deliberate changes:
git log --no-merges --format="%h %s" -5 -- <file>
If a recent commit message clearly indicates deliberate code introduction (e.g., "fix null check", "add input validation", "harden auth flow") and a finding suggests removing or reverting that code → reduce the finding's confidence by one level (High → Medium, Medium → Low → drop).
For uninformative commit messages (fewer than 15 characters, or generic like "fix", "update", "changes"), run git show <sha> -- <file> to examine the actual diff for intent patterns: added null checks, validation logic, error handling, or security measures. Apply the same confidence reduction if the diff shows deliberate defensive code that a finding wants to remove.
If a pr-description was captured in Step 3 (PR/MR mode), use it as an additional intent signal during validation:
This is a soft adjustment only — it never hard-filters a finding. It reduces the chance of undoing deliberate previous work while still allowing genuinely problematic code to be flagged. The PR/MR description and git history are complementary signals — neither alone can suppress a finding.
Skip gracefully if git log fails or returns no results (e.g., shallow clone, newly created file, or file outside the repository).
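The commit-subject pass of this history check can be sketched as a single filter over recent messages. The pattern list is only an example of "deliberate" phrasing, the `git show` fallback for uninformative messages is omitted for brevity, and `recent_deliberate_commits` is an illustrative name:

```shell
# Sketch: surface recent commits whose messages suggest deliberate defensive
# changes to a file, so findings that would revert them get reduced confidence.
recent_deliberate_commits() {
  git log --no-merges --format='%h %s' -5 -- "$1" 2>/dev/null |
    grep -Ei 'fix null|input validation|harden|add .*(check|validation)' || true
}
```

The trailing `|| true` implements the "skip gracefully" rule: an empty or failed `git log` yields no matches rather than an error.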
Assign confidence:
Only findings with High or Medium confidence proceed to Step 7.
Merge validated findings from Steps 5–6. Deduplicate: if two agents flagged the same file and line range for the same category, keep the more detailed version. For guideline findings flagged by both Agents 3 and 4, merge into one finding and note "confirmed by independent review".
After deduplication, check for cross-agent contradictions — findings that target the same code region but recommend opposite directions (e.g., "add more validation" vs. "simplify this validation"). Keep the higher-severity finding and drop the other. When severities are equal, keep the security/correctness finding — security requirements justify proportionate complexity.
Before presenting findings, write a concise summary (2–4 sentences) of what the reviewed changes accomplish. Describe the intent and effect of the changes — what was added, modified, or removed and why. Base this on the diff and the agents' analysis. This lets the user verify the review understood their changes correctly.
Maximum 15 findings across all sources, prioritized by severity then confidence. If more issues exist, note the count (e.g., "15 of ~24 findings shown") and suggest re-running with a narrower scope or using /optimus:code-review deep for exhaustive review.
Deep mode: Instead of presenting the output format below, append this iteration's validated findings to accumulated-findings. For each appended finding, record the current iteration-count as the finding's iteration number, and preserve the agent's guideline citation and issue description as the finding's guideline and summary fields. Deduplicate against previous iterations: if a finding matches an existing entry by file + line range + category, skip it if the existing entry is marked "(fixed)". If the existing entry is marked "(persistent — fix failed)", annotate the new entry as "(persistent — fix failed)". If the existing entry is marked "(reverted — test failure)", keep the new entry as "(reverted — attempt 2)" so Step 9 retries the fix once more; only promote to "(persistent — fix failed)" if it is reverted again. Then proceed directly to Step 9.
Normal mode: Present findings using the output format below, then proceed to Step 8.
## Code Review
### Summary
- Scope: [local changes / PR #N / branch diff since X]
- Files reviewed: [N]
- Lines changed: +[A] / -[R]
- Findings: [N] (Critical: [N], Warning: [N], Suggestion: [N])
- Docs used: [list of docs loaded]
- Agents: bug-detector, security-reviewer, guideline-A, guideline-B, code-simplifier[, test-guardian][, contracts-reviewer]
- Verdict: CHANGES LOOK GOOD / ISSUES FOUND
### Change Summary
[2–4 sentences describing what the changes do — their intent, what was added/modified/removed, and the overall effect. Keep it factual and concise.]
### Findings
**[N]. [Finding title]** (Critical/Warning/Suggestion — [Bug/Security/Guideline/Quality/Test Gap/Contract])
- **File:** `file:line`
- **Category:** [Bug | Security | Guideline Violation | Code Quality | Test Coverage Gap | Contract Quality]
- **Guideline:** [which project guideline, or "General: bug/security/contract quality"]
- **Issue:** [concrete description]
- **Current:**
[code snippet — max 5 lines]
- **Suggested:**
[fix or recommendation — max 5 lines]
[Findings ordered: Critical → Warning → Suggestion, each sorted by file path]
### No Issues Found
[If applicable: "The changes follow project guidelines. No bugs, security issues, or guideline violations detected."]
For PR mode, include full-SHA code links:
- GitHub: https://github.com/owner/repo/blob/[full-sha]/path#L[start]-L[end]
- GitLab: determine the host from git remote get-url origin (e.g., https://gitlab.company.com), then use: https://[gitlab-host]/owner/repo/-/blob/[full-sha]/path#L[start]-L[end]

Deep mode: Skip this step — proceed directly to Step 9.
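Link construction can be sketched as below. Handling is simplified to HTTPS remote URLs (an assumption; SSH remotes would need rewriting first), and `code_link` is an illustrative name:

```shell
# Sketch: build a full-SHA permalink from a remote URL, commit, path, and line range.
code_link() {
  # usage: code_link <remote-url> <full-sha> <path> <start-line> <end-line>
  base=${1%.git}                                               # strip trailing .git if present
  case "$base" in
    https://github.com/*) echo "$base/blob/$2/$3#L$4-L$5" ;;   # GitHub format
    https://*)            echo "$base/-/blob/$2/$3#L$4-L$5" ;; # GitLab, incl. self-hosted
  esac
}
```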
If the verdict is CHANGES LOOK GOOD (no findings), skip this step — do not present any action prompt. Go directly to the recommendation in the "Important" section below.
If the verdict is ISSUES FOUND, use AskUserQuestion to present actions. The options depend on the review mode determined in Step 3:
For local changes or branch diff mode, use AskUserQuestion — header "Action", question "How would you like to proceed with the review findings?":

For PR/MR mode, use AskUserQuestion — header "Action", question "How would you like to proceed with the review findings?":
Write the review summary to a secure temp file: TMPFILE=$(mktemp "${TMPDIR:-/tmp}/review-summary-XXXXXX.md"). Always clean up after the posting attempt (whether it succeeds or fails): rm -f "$TMPFILE".
For GitHub PRs: gh pr comment <N> --body-file "$TMPFILE"
For GitLab MRs: glab api -X POST "projects/:id/merge_requests/<N>/notes" -F body=@"$TMPFILE" — this avoids shell metacharacter issues that glab mr note --message "$(cat ...)" would have with code snippets in the summary
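The temp-file lifecycle can be sketched with a trap so cleanup runs on every exit path. The gh/glab commands are those given above, shown as comments since they require authenticated CLIs; `$REVIEW_SUMMARY` and `<N>` are placeholders:

```shell
# Sketch: secure temp file for the review summary, with cleanup guaranteed by a trap.
TMPFILE=$(mktemp "${TMPDIR:-/tmp}/review-summary-XXXXXX.md")
trap 'rm -f "$TMPFILE"' EXIT          # runs whether posting succeeds or fails
printf '%s\n' "${REVIEW_SUMMARY:-}" > "$TMPFILE"
# GitHub: gh pr comment <N> --body-file "$TMPFILE"
# GitLab: glab api -X POST "projects/:id/merge_requests/<N>/notes" -F body=@"$TMPFILE"
```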
Normal mode: Skip this step.
If zero findings were added to accumulated-findings in this iteration (Step 7 found nothing new), deep mode has converged. Report: "Deep mode complete — no new findings on iteration [N]." Skip to the Consolidated report below.
Apply all validated findings from this iteration using Edit or MultiEdit, skipping any annotated "(persistent — fix failed)" (these have already failed in a prior iteration). For each fix, record which file was modified and what the pre-edit content was (you will need this for revert if tests fail).
Run the project's test command (from .claude/CLAUDE.md). Follow the verification protocol from $CLAUDE_PLUGIN_ROOT/skills/init/references/verification-protocol.md — run tests fresh, read complete output, report actual results with evidence.
If tests pass, mark each applied fix "(fixed)" in accumulated-findings and add the count of applied fixes to total-fixed. If tests fail, revert the failing fixes, mark them "(reverted — test failure)", and add the count of failed fixes to total-reverted. After applying fixes and running tests, check termination conditions in order:
- iteration-count equals 8 → cap reached. Report: "Deep mode reached the iteration cap (8). Remaining findings may exist — continue in a fresh conversation: re-run /optimus:code-review deep, or narrow scope with /optimus:code-review deep \"focus on <area>\"."

For all four conditions above, present the iteration report immediately after the termination/continuation message. This report is informational and non-blocking — no user prompt follows:
#### Iteration [N] — Report
| # | File | What Changed | Reason | Guideline / Category | Status |
|---|------|-------------|--------|---------------------|--------|
[one row per finding attempted in THIS iteration from accumulated-findings where iteration == current]
Column definitions:
- File: file:line
- Status: one of fixed, reverted — test failure, reverted — attempt 2, or persistent — fix failed

For condition 4 (continue), after presenting the iteration report also show the progress summary: "Iteration [N] of up to 8 — [total-fixed] findings fixed so far, [total-reverted] reverted. Starting next pass..." If the next iteration will be 3 or higher, append to the progress summary: "Note: context is accumulating — if output quality degrades, consider finishing remaining findings in a fresh conversation." Then increment iteration-count and return to Step 5 for the next analysis pass. When returning to Step 5, re-gather the current diff (the codebase has changed due to applied fixes) and focus agents on files that had findings in any previous iteration plus any newly modified files.
After the loop ends (by convergence, termination, or cap), present the consolidated report in two parts.
Part 1 — Cumulative summary table:
## Code Review — Deep Mode Cumulative Report
**Summary:**
- Total iterations: [N]
- Total findings fixed: [N]
- Total findings reverted (test failures): [N]
- Total findings persistent (fix failed): [N]
- Final test status: pass / fail / not available
**All Changes:**
| # | Iter | File | What Changed | Reason | Guideline / Category | Status |
|---|------|------|-------------|--------|---------------------|--------|
[one row per finding from accumulated-findings, across all iterations, ordered by iteration then sequence]
Column definitions match the per-iteration report table, plus an Iter column recording the iteration that discovered the finding.
The summary statistics provide a quick overview; the detailed table provides full auditability of every change attempted across all iterations.
Part 2 — Detailed findings:
After the cumulative table, present ALL accumulated-findings using the same detailed output format from Step 7 (with Summary block, Change Summary, and individual Findings with code snippets). Add these fields to the Summary block:
- Total iterations: [N]
- Total findings fixed: [N]
- Total findings reverted (test failures): [N]
- Total findings persistent (fix failed): [N]
Mark each finding's status: "(fixed)", "(reverted — test failure)", "(reverted — attempt 2)", or "(persistent — fix failed)".
Important: all fixes remain local changes (inspect them with git diff before committing). After the review is complete, recommend the next step based on the outcome:
- If fixes were applied: /optimus:commit to commit the fixes
- If deep mode ran: /optimus:commit to commit the accumulated fixes, then consider /optimus:unit-test to strengthen test coverage
- If the review came back clean: /optimus:pr to create a pull request (skip this if already reviewing a PR/MR)

Tell the user:
/optimus:code-review deep to iterate automatically — it fixes, tests, and repeats until clean (max 8 passes). Requires a test command in .claude/CLAUDE.md.