Orchestrates a parallel multi-perspective code review using agent teams. Spawns specialized reviewers (security, performance, tests, conventions) that work simultaneously and challenge each other's findings. Use when user says "team review", "parallel review", "deep review", "multi-perspective review", "thorough code review", or when reviewing large or critical changes. Do NOT use for quick reviews of small changes — use /review instead. Do NOT use when agent teams are not enabled.
```shell
npx claudepluginhub robertraf/rob-agent-workflow --plugin sdd
```

This skill uses the workspace's default tool permissions.
You are orchestrating a **parallel multi-perspective code review** using agent teams. Multiple reviewers work simultaneously, each with a specialized lens, then findings are synthesized into a single report.
$ARGUMENTS
If $ARGUMENTS specifies a PR number, branch, or file paths, review those changes. If empty, review all uncommitted changes.
Determine what to review:
```shell
# For uncommitted changes
git diff
git diff --staged
git status

# For a branch
git diff main...HEAD
git log main..HEAD --oneline

# For a PR
gh pr diff <number>
```
Read every changed file in full. Count the number of changed files and lines to determine team size.
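The file and line counting can be scripted. A minimal sketch, assuming the English one-line summary format that `git diff --shortstat` produces (in a real run, `$stat` would come from `git diff --shortstat` or `git diff --shortstat main...HEAD`):

```shell
# Sketch: pick a team size from a `git diff --shortstat` summary line.
stat="12 files changed, 430 insertions(+), 85 deletions(-)"

# Pull out the insertion and deletion counts (format is an assumption).
ins=$(echo "$stat" | grep -oE '[0-9]+ insertion' | grep -oE '^[0-9]+')
del=$(echo "$stat" | grep -oE '[0-9]+ deletion'  | grep -oE '^[0-9]+')
changed=$(( ${ins:-0} + ${del:-0} ))

# Under 500 changed lines: 3 reviewers; otherwise split security,
# correctness, and performance into separate reviewers.
if [ "$changed" -lt 500 ]; then team=3; else team=5; fi
echo "$changed changed lines -> $team reviewers"
```

Here 430 insertions plus 85 deletions gives 515 changed lines, which crosses the 500-line threshold and selects the larger team.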
Create an agent team with specialized reviewers. Adjust the number based on change scope:
For small-to-medium changes (< 500 lines): 3 teammates
```
Create an agent team to review these changes with three specialized reviewers:

1. **Security reviewer** — Focus on input validation, authentication, authorization,
   secrets exposure, injection vulnerabilities, and OWASP top 10.
2. **Correctness & performance reviewer** — Focus on logic errors, race conditions,
   off-by-one errors, null access, N+1 queries, unbounded loops, and missing indexes.
3. **Tests & conventions reviewer** — Focus on test coverage gaps, flaky test patterns,
   adherence to project conventions, and dead code.
```
For large changes (> 500 lines): 4-5 teammates, splitting security from correctness from performance.
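For the large-change case, the spawn prompt can split the combined lenses apart. A sketch with hypothetical wording, mirroring the three-reviewer prompt above:

```
Create an agent team to review these changes with five specialized reviewers:

1. **Security reviewer**: input validation, authentication, authorization,
   secrets exposure, injection vulnerabilities, OWASP top 10.
2. **Correctness reviewer**: logic errors, race conditions, off-by-one errors,
   null access.
3. **Performance reviewer**: N+1 queries, unbounded loops, missing indexes.
4. **Tests reviewer**: test coverage gaps, flaky test patterns.
5. **Conventions reviewer**: adherence to project conventions, dead code.
```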
Each reviewer must receive the full diff, the list of changed files, and their assigned focus area.
For critical codepaths (auth, payments, data mutations), require plan approval:
```
Require plan approval before the security reviewer starts.
Only approve plans that include checks for all OWASP top 10 categories.
```
Allow all reviewers to complete their analysis. Do NOT start synthesizing until all teammates report their findings.
Wait for all teammates to complete their review before proceeding.
If a reviewer gets stuck, redirect their approach or spawn a replacement.
Once all reviewers finish, the lead produces a unified report:
Scope: [files reviewed]
Reviewers: [list of reviewer roles]
Verdict: Clean / Minor Issues / Needs Changes / Blocking Issues
**Blocking issues.** Must be fixed before shipping. Numbered, each with a `path/to/file:line` reference.
**Should fix.** Same format as above.
**Nice to fix.** Same format.
**Cross-cutting findings.** Issues identified by multiple reviewers or that span domains.
**Highlights.** Patterns done well, noted by reviewers.
**Recommendation.** Overall assessment: ship as-is, fix and ship, or rework needed.
After synthesizing the report:
Ask all reviewers to shut down, then clean up the team.
| Situation | Use |
|---|---|
| Quick review of small changes | /review |
| Large PR with many files | /team-review |
| Security-critical changes | /team-review |
| Pre-release audit | /team-review |
| Routine bug fix | /review |
| Changes spanning multiple domains | /team-review |
These are common failure modes during team reviews. Watch for them: