Run comprehensive code review with multiple perspectives
Performs comprehensive code review from multiple expert perspectives with iterative refinement.
/plugin marketplace add v1truv1us/ai-eng-system
/plugin install ai-eng-system@ai-eng-marketplace
Review code changes: $ARGUMENTS
Phase 5 of Spec-Driven Workflow: Research → Specify → Plan → Work → Review
Take a deep breath and approach this review systematically. Examine code from multiple expert perspectives, identify issues across all quality dimensions, and provide actionable recommendations.
Code runs in production, and bugs cause outages. Security issues compromise data. Performance problems frustrate users. Poor code quality makes maintenance difficult. This review task is critical for catching issues before they reach production and maintaining codebase health.
I bet you can't provide a comprehensive review without being overly critical or missing important issues. The challenge is balancing thoroughness with constructiveness: identifying real problems while avoiding noise. Success means the review catches real issues, provides actionable guidance, and helps maintainers improve code quality.
/ai-eng/review src/
/ai-eng/review src/ --type=security --severity=high
/ai-eng/review . --focus=performance --verbose
# Ralph Wiggum iteration for thorough reviews
/ai-eng/review src/ --ralph --ralph-show-progress
# Ralph Wiggum with security focus and custom iterations
/ai-eng/review . --focus=security --ralph --ralph-max-iterations 15 --ralph-verbose
| Option | Description |
|---|---|
| --swarm | Use Swarms multi-agent orchestration |
| -t, --type <type> | Review type (full\|incremental\|security\|performance\|frontend) [default: full] |
| -s, --severity <severity> | Minimum severity level (low\|medium\|high\|critical) [default: medium] |
| -f, --focus <focus> | Focused review (security\|performance\|frontend\|general) |
| -o, --output <file> | Output report file [default: code-review-report.json] |
| -v, --verbose | Enable verbose output |
| --ralph | Enable Ralph Wiggum iteration mode for persistent review refinement |
| --ralph-max-iterations <n> | Maximum iterations for Ralph Wiggum mode [default: 10] |
| --ralph-completion-promise <text> | Custom completion promise text [default: "Review is comprehensive and all findings addressed"] |
| --ralph-quality-gate <command> | Command to run after each iteration for quality validation |
| --ralph-stop-on-gate-fail | Stop iterations when quality gate fails [default: continue] |
| --ralph-show-progress | Show detailed iteration progress |
| --ralph-log-history <file> | Log iteration history to JSON file |
| --ralph-verbose | Enable verbose Ralph Wiggum iteration output |
Load skills/prompt-refinement/SKILL.md and use phase: review to transform your prompt into structured TCRO format (Task, Context, Requirements, Output). If using --ralph, also load skills/workflow/ralph-wiggum/SKILL.md for iterative review refinement.
If you spawn reviewer subagents in parallel, include:
<CONTEXT_HANDOFF_V1>
Goal: Review changes for (focus area)
Files under review: (paths)
Constraints: (e.g., no code changes; read-only)
Deliverable: findings with file:line evidence
Output format: RESULT_V1
</CONTEXT_HANDOFF_V1>
Require:
<RESULT_V1>
RESULT:
FINDINGS: (bullets with severity)
EVIDENCE: (file:line)
RECOMMENDATIONS:
CONFIDENCE: 0.0-1.0
</RESULT_V1>
For each finding provide:
| Field | Description |
|---|---|
| Severity | critical, major, minor |
| Location | file:line |
| Issue | Description of the problem |
| Recommendation | Suggested fix |
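For illustration, a single finding in the output report might look like the following. The exact report schema, file path, and wording are hypothetical; only the four fields above are specified:

```json
{
  "severity": "critical",
  "location": "src/auth/session.ts:42",
  "issue": "Session token is logged in plaintext",
  "recommendation": "Redact the token before logging"
}
```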
End with overall assessment: APPROVE, CHANGES_REQUESTED, or NEEDS_DISCUSSION.
After completing review, rate your confidence in findings comprehensiveness (0.0-1.0). Identify any uncertainties about severity classifications, areas where review coverage may have been insufficient, or assumptions about code context. Note any perspectives that should have been applied or findings that may be false positives.
Run a review using:
bun run scripts/run-command.ts review "$ARGUMENTS" [options]
For example:
bun run scripts/run-command.ts review "src/" --type=security --severity=high --output=security-review.json
bun run scripts/run-command.ts review "." --focus=performance --verbose
When the --ralph flag is enabled, the review process follows a persistent refinement cycle:
Iteration Process:
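As a rough sketch, the iteration process can be modeled as a loop that reruns the review and checks the quality gate on each pass. Here run_review and quality_gate are hypothetical stand-ins for the real review and gate commands, not part of the tool:

```shell
# Sketch of the Ralph Wiggum iteration cycle; run_review and
# quality_gate are placeholders, not real commands.
max_iterations=10          # mirrors --ralph-max-iterations
stop_on_gate_fail=false    # mirrors --ralph-stop-on-gate-fail

ralph_loop() {
  i=1
  while [ "$i" -le "$max_iterations" ]; do
    run_review "$i"
    if quality_gate; then
      echo "iteration $i: gate passed"
    elif [ "$stop_on_gate_fail" = true ]; then
      echo "iteration $i: gate failed, stopping"
      return 1
    else
      echo "iteration $i: gate failed, continuing"
    fi
    i=$((i + 1))
  done
}
```

By default the loop continues past a failed gate; setting stop_on_gate_fail mirrors the --ralph-stop-on-gate-fail behavior.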
Review Quality Gate Examples:
# Check for critical findings
rg '"severity": "critical"' code-review-report.json
# Validate recommendations completeness
rg '"recommendation"' code-review-report.json | wc -l
# Check file coverage
rg '"location"' code-review-report.json | sort | uniq | wc -l
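The checks above can be wrapped into a single command for --ralph-quality-gate. A minimal sketch of such a gate, using grep for portability where the examples above use rg:

```shell
# Hypothetical quality gate: fail while critical findings remain
# in the report produced by the review command.
gate_no_criticals() {
  report="$1"
  if grep -q '"severity": "critical"' "$report"; then
    echo "gate: FAILED (critical findings remain)"
    return 1
  fi
  echo "gate: PASSED"
}
```

Pointing --ralph-quality-gate at a script like this lets the iteration loop keep refining until no critical findings survive in the report.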
Iteration Metrics:
Example Progress Output:
🔄 Ralph Wiggum Review Iteration 4/10
🔍 Findings: 23 total (+4 this iteration)
🚨 Critical issues: 2 (new this iteration)
⚠️ Major issues: 8 (+1 this iteration)
📁 Files reviewed: 15 (+2 deep-dived this iteration)
✅ Quality gate: PASSED
🎯 Review coverage: 95% (improving)
Review-Specific Considerations:
Default Settings:
Best Practices:
Review the code from multiple expert perspectives.
$ARGUMENTS