Run comprehensive parallel code review with all 5 specialized reviewers
Dispatches five specialized AI reviewers in parallel to comprehensively analyze code quality, business logic, security, tests, and nil-safety, then consolidates all findings into an actionable report.
To install:

/plugin marketplace add lerianstudio/ring
/plugin install ring-default@ring

Arguments: `[files-or-paths]`

Dispatch all 5 specialized code reviewers in parallel, collect their reports, and provide a consolidated analysis.
CRITICAL: Use a single message with 5 Task tool calls to launch all reviewers simultaneously.
Gather the required context first: a summary of what was implemented, the original plan or requirements, base and head commit SHAs (if applicable), and any additional description. These become the prompt parameters shown below.
Then dispatch all 5 reviewers:
Task tool #1 (code-reviewer):
subagent_type: "ring-default:code-reviewer"
model: "opus"
description: "Review code quality and architecture"
prompt: |
WHAT_WAS_IMPLEMENTED: [summary of changes]
PLAN_OR_REQUIREMENTS: [original plan/requirements]
BASE_SHA: [base commit if applicable]
HEAD_SHA: [head commit if applicable]
DESCRIPTION: [additional context]
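A filled-in prompt might look like this (all values below are hypothetical, for illustration only):
WHAT_WAS_IMPLEMENTED: Retry logic for the payment webhook handler
PLAN_OR_REQUIREMENTS: PLAN.md, section "Webhook reliability"
BASE_SHA: abc1234
HEAD_SHA: def5678
DESCRIPTION: Changes concentrated in internal/webhook/; tests updated alongside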
Task tool #2 (business-logic-reviewer):
subagent_type: "ring-default:business-logic-reviewer"
model: "opus"
description: "Review business logic correctness"
prompt: |
[Same parameters as above]
Task tool #3 (security-reviewer):
subagent_type: "ring-default:security-reviewer"
model: "opus"
description: "Review security vulnerabilities"
prompt: |
[Same parameters as above]
Task tool #4 (test-reviewer):
subagent_type: "ring-default:test-reviewer"
model: "opus"
description: "Review test quality and coverage"
prompt: |
[Same parameters as above]
Focus: Edge cases, error paths, test independence, assertion quality.
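As a rough illustration of what this reviewer flags, the sketch below (hypothetical Go code, not from any real codebase; `ParseAmount` is a stand-in defined only to keep the example self-contained) contrasts an assertion-light test with one that exercises an error path:

```go
package pay

import (
	"errors"
	"strconv"
	"testing"
)

// ParseAmount is a stand-in function, present only so the example compiles.
func ParseAmount(s string) (float64, error) {
	v, err := strconv.ParseFloat(s, 64)
	if err != nil || v < 0 {
		return 0, errors.New("invalid amount")
	}
	return v, nil
}

// Weak: only checks that no error occurred; asserts nothing about the result.
func TestParseAmount_Weak(t *testing.T) {
	_, err := ParseAmount("10.50")
	if err != nil {
		t.Fatal(err)
	}
}

// Stronger: covers an edge case (negative input) and asserts on the error path.
func TestParseAmount_RejectsNegative(t *testing.T) {
	got, err := ParseAmount("-1.00")
	if err == nil {
		t.Fatalf("expected error for negative amount, got %v", got)
	}
}
```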
Task tool #5 (nil-safety-reviewer):
subagent_type: "ring-default:nil-safety-reviewer"
model: "opus"
description: "Review nil/null pointer safety"
prompt: |
[Same parameters as above]
LANGUAGES: [Go|TypeScript|both]
Focus: Nil sources, propagation paths, missing guards.
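To make those focus areas concrete, here is a minimal Go sketch (all types and names are hypothetical) showing a nil source, its propagation, and the missing guard this reviewer would flag:

```go
package account

type Profile struct{ Email string }

type User struct{ Profile *Profile }

// findUser returns nil on a lookup miss — a nil source.
func findUser(id string) *User {
	return nil // stand-in for a failed lookup
}

// ContactEmail propagates the possibly-nil result and dereferences it
// without a guard: findUser(id).Profile panics when findUser returns nil,
// and .Email panics again if Profile itself is nil.
func ContactEmail(id string) string {
	return findUser(id).Profile.Email // unguarded nil dereference
}

// ContactEmailSafe adds the guards the reviewer looks for.
func ContactEmailSafe(id string) (string, bool) {
	u := findUser(id)
	if u == nil || u.Profile == nil {
		return "", false
	}
	return u.Profile.Email, true
}
```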
Wait for all five reviewers to complete their work.
Each reviewer returns a verdict (PASS / FAIL / NEEDS_DISCUSSION), issues grouped by severity, and a summary.
Consolidate all issues by severity across all five reviewers.
When aggregating findings, detect and flag conflicting recommendations between reviewers:
| Conflict Type | Resolution | Priority |
|---|---|---|
| Security vs Performance | Security recommendation wins | CRITICAL |
| More tests vs Over-testing | Defer to test-reviewer for test scope | MEDIUM |
| More mocks vs Less mocks | Evaluate based on test-reviewer guidance | MEDIUM |
| Refactor vs Keep simple | Defer to code-reviewer for architecture decisions | MEDIUM |
Flagging Conflicts: When reviewers provide contradictory guidance, present both positions side by side, state the resolution (or escalate to the user), and mark the conflict explicitly in the report. Example:
⚠️ Conflict Detected:
- test-reviewer: "Add more mock isolation for external services"
- code-reviewer: "Current mocking approach is sufficient"
- Resolution: User decision required - see both perspectives above
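Before the report template below, a minimal sketch of the severity tally that feeds the "Total Issues" counts (the Issue shape and all names here are hypothetical, not part of the reviewer contract):

```go
package review

// Issue is a hypothetical shape for a single reviewer finding.
type Issue struct {
	Reviewer string // e.g. "security-reviewer"
	Severity string // "Critical" | "High" | "Medium" | "Low"
	Summary  string
	Location string // file:line
}

// Consolidate groups issues by severity across all five reviewers.
func Consolidate(all []Issue) map[string][]Issue {
	bySeverity := make(map[string][]Issue)
	for _, issue := range all {
		bySeverity[issue.Severity] = append(bySeverity[issue.Severity], issue)
	}
	return bySeverity
}
```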
Return a consolidated report in this format:
# Full Review Report
## VERDICT: [PASS | FAIL | NEEDS_DISCUSSION]
## Executive Summary
[2-3 sentences about overall review across all gates]
**Total Issues:**
- Critical: [N across all gates]
- High: [N across all gates]
- Medium: [N across all gates]
- Low: [N across all gates]
---
## Code Quality Review (Foundation)
**Verdict:** [PASS | FAIL]
**Issues:** Critical [N], High [N], Medium [N], Low [N]
### Critical Issues
[List all critical code quality issues]
### High Issues
[List all high code quality issues]
[Medium/Low issues summary]
---
## Business Logic Review (Correctness)
**Verdict:** [PASS | FAIL]
**Issues:** Critical [N], High [N], Medium [N], Low [N]
### Critical Issues
[List all critical business logic issues]
### High Issues
[List all high business logic issues]
[Medium/Low issues summary]
---
## Security Review (Safety)
**Verdict:** [PASS | FAIL]
**Issues:** Critical [N], High [N], Medium [N], Low [N]
### Critical Vulnerabilities
[List all critical security vulnerabilities]
### High Vulnerabilities
[List all high security vulnerabilities]
[Medium/Low vulnerabilities summary]
---
## Test Quality Review (Coverage)
**Verdict:** [PASS | FAIL]
**Issues:** Critical [N], High [N], Medium [N], Low [N]
### Critical Issues
[Untested core logic, tests that verify mock behavior instead of real code]
### High Issues
[Missing edge cases, test anti-patterns]
[Medium/Low issues summary]
---
## Nil-Safety Review (Pointer Safety)
**Verdict:** [PASS | FAIL]
**Issues:** Critical [N], High [N], Medium [N], Low [N]
### Critical Issues
[Direct panic paths, unguarded nil dereference]
### High Issues
[Conditional nil risks, missing ok checks]
[Medium/Low issues summary]
---
## Consolidated Action Items
**MUST FIX (Critical):**
1. [Issue from any gate] - `file:line`
2. [Issue from any gate] - `file:line`
**SHOULD FIX (High):**
1. [Issue from any gate] - `file:line`
2. [Issue from any gate] - `file:line`
**CONSIDER (Medium/Low):**
[Brief list]
---
## Next Steps
**If PASS:**
- ✅ All 5 reviewers passed
- ✅ Ready for next step (merge/production)
**If FAIL:**
- ❌ Fix all Critical/High/Medium issues immediately
- ❌ Add TODO(review) comments for Low issues in code
- ❌ Add FIXME(nitpick) comments for Cosmetic/Nitpick issues in code
- ❌ Re-run all 5 reviewers in parallel after fixes
**If NEEDS_DISCUSSION:**
- 💬 [Specific discussion points across gates]
After producing the consolidated report, provide clear guidance:
Critical/High/Medium Issues:
These issues MUST be fixed immediately:
1. [Issue description] - file.ext:line - [Reviewer]
2. [Issue description] - file.ext:line - [Reviewer]
Recommended approach:
- Dispatch fix subagent to address all Critical/High/Medium issues
- After fixes complete, re-run all 5 reviewers in parallel to verify
Low Issues:
Add TODO comments in the code for these issues:
// TODO(review): [Issue description]
// Reported by: [reviewer-name] on [date]
// Severity: Low
// Location: file.ext:line
Cosmetic/Nitpick Issues:
Add FIXME comments in the code for these issues:
// FIXME(nitpick): [Issue description]
// Reported by: [reviewer-name] on [date]
// Severity: Cosmetic
// Location: file.ext:line
If any reviewer fails during execution (timeout, error, incomplete output):
Task tool (retry failed reviewer):
subagent_type: "ring-default:[reviewer-name]"
model: "opus"
description: "Retry [reviewer-name] review"
prompt: [same parameters as original]
Signs that a reviewer produced incomplete output:
| Pattern | Detection Method | Action |
|---|---|---|
| Missing VERDICT | Output lacks "## VERDICT:" or "Verdict:" | Re-dispatch reviewer |
| Empty Issues section | "## Issues Found" followed by no content or "None" only | Verify this is intentional (PASS case) |
| Missing required sections | Check against output_schema in agent definition | Re-dispatch with explicit section reminder |
| Truncated output | Ends mid-sentence or lacks closing sections | Re-dispatch with smaller scope |
| Generic responses | Only contains boilerplate without file-specific analysis | Re-dispatch with explicit file list |
Validation Regex Patterns:
- Verdict present: /^##?\s*VERDICT:?\s*(PASS|FAIL|NEEDS_DISCUSSION)/im
- Issues section present: /^##?\s*Issues Found/im
- Summary present: /^##?\s*(Summary|Executive Summary)/im

Action: Re-dispatch the reviewer with explicit instruction to include all required sections.
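A sketch of how this validation could be applied programmatically, with the JavaScript-style /im flags translated to Go's inline (?im) modifier (function and variable names are illustrative):

```go
package review

import "regexp"

// requiredSections mirrors the validation patterns above.
var requiredSections = []*regexp.Regexp{
	regexp.MustCompile(`(?im)^##?\s*VERDICT:?\s*(PASS|FAIL|NEEDS_DISCUSSION)`),
	regexp.MustCompile(`(?im)^##?\s*Issues Found`),
	regexp.MustCompile(`(?im)^##?\s*(Summary|Executive Summary)`),
}

// IsComplete reports whether a reviewer's output contains every required section.
func IsComplete(output string) bool {
	for _, re := range requiredSections {
		if !re.MatchString(output) {
			return false
		}
	}
	return true
}
```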
This command MUST load the skill for complete workflow execution.
Use Skill tool: requesting-code-review
The skill contains the complete workflow with: