Run comprehensive multi-agent quality review
Runs parallel multi-agent review combining code quality, security, and testing analysis to identify critical issues before deployment.
```bash
/plugin marketplace add webdevtodayjason/titanium-plugins
/plugin install titanium-toolkit@titanium-plugins
```

You are coordinating a comprehensive quality review of the codebase. This command launches multiple specialized review agents in parallel, aggregates their findings, and creates a detailed review report.
Orchestration Model: You launch 3 review agents simultaneously in separate context windows. Each agent has specialized skills and reviews from their domain expertise. They run in parallel for efficiency.
Review Agents & Their Skills:
- @code-reviewer — code quality, readability, and best practices
- @security-scanner — security vulnerabilities and secure-coding practices
- @tdd-specialist — test coverage and test quality
Why Parallel: Review agents are independent; they don't need each other's results. Running them in parallel saves 60-70% of the time compared to sequential reviews.
## Step 1: Determine Review Scope

Decide which files this review covers:

**Option A: Recent Changes (default)**

```bash
git diff --name-only HEAD~1
```

Reviews files changed in the last commit.

**Option B: Current Branch Changes**

```bash
git diff --name-only main...HEAD
```

Reviews all changes in the current branch vs main.

**Option C: Specific Files (if user specified)**

```bash
# User might say: /titanium:review src/api/*.ts
```

Use the files/pattern the user specified.

**Option D: All Code (if user requested)**

```bash
# Find all source files
find . -type f \( -name "*.ts" -o -name "*.js" -o -name "*.py" -o -name "*.rb" \) -not -path "*/node_modules/*" -not -path "*/venv/*"
```

Create the list of files to review and store it in memory for the agent prompts.
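A minimal shell sketch of how Options A-D could resolve into a single file list (the `SCOPE` argument and its values are illustrative, not part of the command's actual interface):

```bash
#!/usr/bin/env bash
# Resolve the review scope into a newline-separated file list
SCOPE="${1:-recent}"
case "$SCOPE" in
  recent) FILES=$(git diff --name-only HEAD~1) ;;                    # Option A
  branch) FILES=$(git diff --name-only main...HEAD) ;;               # Option B
  all)    FILES=$(find . -type f \( -name "*.ts" -o -name "*.js" \
            -o -name "*.py" -o -name "*.rb" \) \
            -not -path "*/node_modules/*" -not -path "*/venv/*") ;;  # Option D
  *)      FILES=$(ls -1 $SCOPE 2>/dev/null) ;;                       # Option C: user pattern
esac
printf '%s\n' "$FILES"
```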
Example:
Files to review:
- src/api/auth.ts
- src/middleware/jwt.ts
- src/routes/users.ts
- tests/api/auth.test.ts
## Step 2: Launch Review Agents in Parallel

CRITICAL: Launch all three agents in a SINGLE message with multiple Task calls. This enables parallel execution for faster reviews.
[Task 1]: @code-reviewer
Prompt: "Review all code changes for quality, readability, and best practices.
Focus on:
- Code quality and maintainability
- DRY principles
- SOLID principles
- Error handling
- Code organization
- Comments and documentation
Files to review: [list all modified files]
Provide findings categorized by severity:
- Critical: Must fix before deployment
- Important: Should fix soon
- Nice-to-have: Optional improvements
For each finding, specify:
- File and line number
- Issue description
- Recommendation"
[Task 2]: @security-scanner
Prompt: "Scan for security vulnerabilities and security best practices.
Focus on:
- Input validation
- SQL injection risks
- XSS vulnerabilities
- Authentication/authorization issues
- Secrets in code
- Dependency vulnerabilities
- HTTPS enforcement
- Rate limiting
Files to review: [list all modified files]
Provide findings with:
- Severity (Critical/High/Medium/Low)
- Vulnerability type
- File and line number
- Risk description
- Remediation steps
Severity mapping for aggregation:
- Critical → Critical (must fix)
- High → Important (should fix)
- Medium → Nice-to-have (optional)
- Low → Nice-to-have (optional)"
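The severity mapping above is a fixed table; if you later script the aggregation, this is a minimal shell sketch of the same mapping (the helper name is illustrative):

```bash
# Map scanner severities onto the report's three buckets
map_severity() {
  case "$1" in
    Critical)   echo "Critical" ;;      # must fix
    High)       echo "Important" ;;     # should fix
    Medium|Low) echo "Nice-to-have" ;;  # optional
  esac
}
map_severity High   # -> Important
```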
[Task 3]: @tdd-specialist
Prompt: "Check test coverage and test quality.
Focus on:
- Test coverage percentage
- Edge cases covered
- Integration tests
- Unit tests
- E2E tests (if applicable)
- Test quality and assertions
- Mock usage
- Test organization
Files to review: [list all test files and source files]
Provide findings on:
- Coverage gaps
- Missing test cases
- Test quality issues
- Recommendations for improvement"
All three agents will run in parallel. Wait for all to complete before proceeding.
Voice hooks will announce: "Review agents completed"
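Aggregation in Step 3 is easiest if every agent reports findings in one shared record shape. The JSONL format below is an assumption of this guide (file name and fields are illustrative), not something the agents emit automatically — you would request it in each prompt:

```bash
# Hypothetical shared finding record: one JSON object per line in
# .titanium/findings.jsonl so the tallying in Step 3 can be mechanical
cat >> .titanium/findings.jsonl <<'EOF'
{"source":"security","severity":"Critical","file":"src/api/users.ts","line":45,"issue":"SQL injection via string concatenation","recommendation":"Use parameterized queries"}
EOF
```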
## Step 3: Aggregate Findings

Gather results from all three agents and sort every finding into one of three buckets:

- 🔴 Critical Issues (must fix before deployment)
- 🟡 Important Issues (should fix soon)
- 🟢 Nice-to-have (optional improvements)
Total findings:
- Critical: [X]
- Important: [Y]
- Nice-to-have: [Z]
By source:
- Code quality: [N] findings
- Security: [M] findings
- Test coverage: [P] findings
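If the agents emitted the JSONL records sketched above, these totals can be computed mechanically rather than by hand (assumes `jq` is installed):

```bash
# Tally findings by severity bucket and by source agent
jq -r '.severity' .titanium/findings.jsonl | sort | uniq -c
jq -r '.source'   .titanium/findings.jsonl | sort | uniq -c
```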
## Step 4: Run vibe-check Meta-Review

Use vibe-check to provide AI oversight of the review:

```
mcp__vibe-check__vibe_check(
  goal: "Quality review of codebase changes",
  plan: "Ran parallel review: @code-reviewer, @security-scanner, @tdd-specialist",
  progress: "Review complete. Findings: [X] critical, [Y] important, [Z] minor.

    Critical issues found:
    [List each critical issue briefly]

    Important issues found:
    [List each important issue briefly]

    Test coverage: approximately [X]%",
  uncertainties: [
    "Are there systemic quality issues we're missing?",
    "Is the security approach sound?",
    "Are we testing the right things?",
    "Any architectural concerns?"
  ]
)
```
## Step 5: Process vibe-check Response

Incorporate vibe-check's feedback before reporting: if it surfaces systemic issues, blind spots, or recommendations the agents missed, add them to the aggregated findings so they appear in the report.
## Step 6: Write the Review Report

Write a comprehensive report to `.titanium/review-report.md`:
# Quality Review Report
**Date**: [current date and time]
**Project**: [project name or goal if known]
**Reviewers**: @code-reviewer, @security-scanner, @tdd-specialist
## Executive Summary
- 🔴 Critical issues: [X]
- 🟡 Important issues: [Y]
- 🟢 Nice-to-have: [Z]
- 📊 Test coverage: ~[X]%
**Overall Assessment**: [Brief 1-2 sentence assessment]
---
## Critical Issues 🔴
### 1. [Issue Title]
**Category**: [Code Quality | Security | Testing]
**File**: `path/to/file.ext:line`
**Severity**: Critical
**Issue**:
[Clear description of what's wrong]
**Risk/Impact**:
[Why this is critical]
**Recommendation**:
```[language]
// Show example fix if applicable
[code example]
```

**Steps to Fix**:
1. [First step]
2. [Second step]

[... repeat structure for each critical issue ...]
## Important Issues 🟡

### 1. [Issue Title]
**Category**: [Code Quality | Security | Testing]
**File**: `path/to/file.ext:line`
**Severity**: Important
**Issue**: [Description]
**Impact**: [Why this matters]
**Recommendation**: [How to address it]

[... repeat structure ...]
## Test Coverage 📊

**Overall Coverage**: ~[X]%

**Files with Insufficient Coverage (<80%)**:
- `file1.ts` - ~[X]% coverage
- `file2.ts` - ~[Y]% coverage

**Untested Critical Functions**:
- `functionName()` in `file.ts:line`
- `anotherFunction()` in `file.ts:line`

**Missing Test Categories**:
- [e.g., integration tests, edge cases]

**Recommendations**:
- [Improvement recommendations]
## Security Summary

**Vulnerabilities Found**: [X]
**Security Best Practices Violations**: [Y]

**Key Security Concerns**:
- [List key concerns]

**Security Recommendations**:
- [List recommendations]
## vibe-check Meta-Review

[Paste vibe-check assessment here]

**Systemic Issues Identified**: [Any patterns or systemic problems vibe-check identified]

**Additional Recommendations**: [Any suggestions from vibe-check that weren't captured by agents]
## Appendix: Files Reviewed

- `path/to/file.ext:line`
- `path/to/file.ext:line`
- `path/to/file.ext:line`
- `path/to/file.ext:line`
- `path/to/file.ext:line`

Total files: [X]

**Source Files** ([N] files):
[List source files]

**Test Files** ([M] files):
[List test files]
*Generated by `/titanium:review`*
---
## Step 7: Store Review in Pieces
```
mcp__Pieces__create_pieces_memory(
  summary_description: "Quality review findings for [project/files]",
  summary: "Comprehensive quality review completed by @code-reviewer, @security-scanner, @tdd-specialist.

    Findings:
    - Critical: [X]
    - Important: [Y]
    - Nice-to-have: [Z]

    Test coverage: approximately [X]%
    Security assessment: [summary - no vulnerabilities / minor issues / concerns found]
    Code quality assessment: [summary - excellent / good / needs improvement]
    vibe-check meta-review: [brief summary of vibe-check insights]

    Key recommendations:
    [List top recommendations]

    All findings documented in .titanium/review-report.md with file:line references and fix recommendations.",
  files: [
    ".titanium/review-report.md",
    "list all reviewed source files",
    "list all test files"
  ],
  project: "$(pwd)"
)
```
---
## Step 8: Present Summary to User
🔍 Quality Review Complete

📊 Summary:
- 🔴 Critical: [X]
- 🟡 Important: [Y]
- 🟢 Nice-to-have: [Z]

📄 Full Report: .titanium/review-report.md

⚠️ Critical Issues (must fix):

1. [Issue 1 title]
   File: path/to/file.ext:line
   [Brief description]

2. [Issue 2 title]
   File: path/to/file.ext:line
   [Brief description]

[... list all critical issues ...]

💡 Top Recommendations:
1. [Top recommendation]
2. [Second recommendation]

🤖 vibe-check Assessment: [Brief quote or summary from vibe-check]

Would you like me to:
1. Fix the critical issues now
2. Create GitHub issues for the findings
3. Explain any finding in more detail
4. Continue without fixes
### Handle User Response
**If user wants fixes**:
- Address critical issues one by one
- After each fix, run relevant tests (see the sketch after this list)
- Re-run review to verify fixes
- Update review report
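A sketch of the per-fix verification step (the test runner is project-specific; `npm test` and Jest's `--findRelatedTests` flag are assumptions):

```bash
# After fixing src/api/users.ts, run only the tests related to that file
npm test -- --findRelatedTests src/api/users.ts

# Then re-run the review on the same scope to confirm the finding is resolved
# (re-invoke /titanium:review with the original file list)
```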
**If user wants GitHub issues**:
- Create issues for each critical and important finding
- Include all details from review report
- Provide issue URLs (a GitHub CLI sketch follows this list)
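A hedged sketch using the GitHub CLI (the title and labels are illustrative; adjust to the repo's conventions):

```bash
# One issue per critical/important finding, pointing back to the report
gh issue create \
  --title "[Critical] SQL injection in src/api/users.ts:45" \
  --label "bug,security" \
  --body "Found by /titanium:review. See .titanium/review-report.md for risk and remediation steps."
```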
**If user wants more details**:
- Read specific sections of review report
- Explain the issue and fix in more detail
**If user says continue**:
- Acknowledge and complete
- Remind that issues are documented in review report
---
## Error Handling
### If No Files to Review
⚠️ No files found to review.

This could mean:
- No commits exist yet
- No changes on the current branch vs main
- The specified file pattern matched nothing

Would you like to:
1. Review all source files instead
2. Specify different files
3. Cancel the review
### If Review Agents Fail
❌ Review failed
Agent @[agent-name] encountered an error: [error]
Continuing with other review agents...
[Proceed with available results]
### If vibe-check Not Available

Note: vibe-check MCP is not available. Proceeding without meta-review.

To enable AI-powered meta-review, install and configure the vibe-check MCP server, then re-run the command.
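One possible setup, assuming a vibe-check MCP server distributed as an npm package — the package name below is illustrative, not confirmed:

```bash
# Register the server with Claude Code (server name and package are assumptions)
claude mcp add vibe-check -- npx -y vibe-check-mcp-server
```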
---
## Integration with Workflow
**After /titanium:work**:
User: /titanium:work
[... implementation completes ...]
User: /titanium:review
[... review runs ...]
**Standalone Usage**:
User: /titanium:review
**With File Specification**:
User: /titanium:review src/api/*.ts
**Before Committing**:
User: I'm about to commit. Can you review my changes?
Claude: /titanium:review
[... review runs on uncommitted changes ...]
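For uncommitted changes, the scope can be derived directly from git (a sketch mirroring Options A-B above):

```bash
# Staged + unstaged changes vs the last commit
git diff --name-only HEAD

# Include untracked files as well
git status --porcelain | awk '{print $2}'
```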
---
## Voice Feedback
Voice hooks automatically announce:
- "Starting quality review" (at start)
- "Review agents completed" (after parallel execution)
- "Review complete: [X] issues found" (at end)
No additional voice calls needed.
---
## Example Outputs
### Example 1: No Issues Found
🔍 Quality Review Complete

📊 Summary:
✅ No critical or important issues found!
- Code quality: Excellent
- Security: No vulnerabilities found
- Testing: Comprehensive coverage

💡 Optional Improvements:
- [Minor suggestions, if any]

📄 Full details: .titanium/review-report.md
### Example 2: Critical Issues Found
🔍 Quality Review Complete
📊 Summary:
🔴 2 critical issues found - must fix before deployment

⚠️ CRITICAL ISSUES (must fix):

1. SQL Injection Vulnerability
   File: src/api/users.ts:45
   User input concatenated directly into SQL query
   Risk: Attacker could read/modify database

2. Missing Authentication Check
   File: src/api/admin.ts:23
   Admin endpoint has no auth middleware
   Risk: Unauthorized access to admin functions

💡 MUST DO:
1. Replace string concatenation with parameterized queries in src/api/users.ts
2. Add authentication middleware to the admin routes in src/api/admin.ts
Would you like me to fix these critical issues now?
---
**This command provides comprehensive multi-agent quality review with actionable findings and clear priorities.**