Run all code review agents in parallel (bugs, coverage, maintainability, type-safety if typed, CLAUDE.md adherence, docs).
To install:
/plugin marketplace add doodledood/claude-code-plugins
/plugin install vibe-workflow@claude-code-plugins-marketplace
Arguments (optional): specific files or directories to review. Leave empty to review all changes on the current branch.
Run a comprehensive code review. First detect the codebase type, then launch the appropriate agents.
Before launching agents, check if this is a typed language codebase:
TypeScript/JavaScript with types:
- tsconfig.json exists, OR
- .ts/.tsx files in scope
Python with type hints:
- py.typed marker exists, OR
- mypy or pyright in pyproject.toml/setup.cfg, OR
- .py files with type annotations (: str, -> None, Optional[, List[, etc.)
Statically typed languages (always typed):
- Java (.java), Kotlin (.kt), Go (.go), Rust (.rs), C# (.cs), Swift (.swift), Scala (.scala)
Quick detection commands:
# TypeScript
ls tsconfig.json 2>/dev/null || git ls-files '*.ts' '*.tsx' | head -1
# Python types
ls py.typed 2>/dev/null || grep -l "mypy\|pyright" pyproject.toml setup.cfg 2>/dev/null | head -1
# Other typed languages
git ls-files '*.java' '*.kt' '*.go' '*.rs' '*.cs' '*.swift' '*.scala' | head -1
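If useful, the three checks can be folded into a single typed/untyped decision. This is a minimal sketch reusing the heuristics above, not part of the plugin itself:
TYPED=false
# TypeScript / typed JavaScript
if ls tsconfig.json >/dev/null 2>&1 || [ -n "$(git ls-files '*.ts' '*.tsx' | head -1)" ]; then TYPED=true; fi
# Python with a py.typed marker or mypy/pyright configuration
if ls py.typed >/dev/null 2>&1 || grep -q "mypy\|pyright" pyproject.toml setup.cfg 2>/dev/null; then TYPED=true; fi
# Always-typed languages
if [ -n "$(git ls-files '*.java' '*.kt' '*.go' '*.rs' '*.cs' '*.swift' '*.scala' | head -1)" ]; then TYPED=true; fi
echo "typed=$TYPED"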
Always launch these 5 core agents IN PARALLEL (see the launch sketch below):
- Bug detection
- Test coverage
- Maintainability
- CLAUDE.md adherence
- Documentation
Conditionally launch (only if a typed language was detected):
- Type safety
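A sketch of the parallel launch; the agent identifiers are placeholders, not the plugin's actual agent names:
# Send all Task calls in a single message so the agents run concurrently
Task(subagent_type: "<bugs-reviewer>", prompt: "Review the changes in scope for bugs")
Task(subagent_type: "<coverage-reviewer>", prompt: "Assess test coverage for the changes in scope")
Task(subagent_type: "<maintainability-reviewer>", prompt: "Review maintainability of the changes in scope")
Task(subagent_type: "<claude-md-reviewer>", prompt: "Check the changes in scope against CLAUDE.md")
Task(subagent_type: "<docs-reviewer>", prompt: "Review documentation for the changes in scope")
# Add only when a typed language was detected:
Task(subagent_type: "<type-safety-reviewer>", prompt: "Review type safety of the changes in scope")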
Scope: $ARGUMENTS
If no arguments are provided, all agents should analyze the git diff between the current branch and the main/master branch.
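For example, the default scope can be derived like this (a sketch that assumes a local main or master branch):
# Pick the base branch, then list the files changed on the current branch
if git rev-parse --verify --quiet main >/dev/null; then BASE=main; else BASE=master; fi
git diff --name-only "$BASE"...HEAD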
After all review agents complete, launch an opus verification agent to reconcile and validate findings:
Purpose: The review agents run in parallel and are unaware of each other's findings. This can lead to:
- Contradictory recommendations across agents
- Duplicate reports of the same underlying issue
- Low-confidence or speculative findings slipping through unvalidated
Verification Agent Task:
Use the Task tool with model: opus to launch a verification agent with this prompt:
You are a Review Reconciliation Expert. Analyze the combined findings from all review agents and produce a final, consolidated report.
## Input
[Include all agent reports here]
## Your Tasks
1. **Identify Conflicts**: Find recommendations that contradict each other across agents. Resolve by:
- Analyzing which recommendation is more appropriate given the context
- Noting when both perspectives have merit (flag for user decision)
- Removing the weaker recommendation if clearly inferior
2. **Remove Duplicates**: Multiple agents may flag the same underlying issue. Consolidate into single entries, keeping the most detailed/actionable version.
3. **Filter Low-Confidence Issues**: Remove or downgrade issues that:
- Are vague or non-actionable ("could be improved" without specifics)
- Rely on speculation rather than evidence
- Would require significant effort for minimal benefit
- Are stylistic preferences not backed by project standards
4. **Validate Severity**: Ensure severity ratings are consistent and justified:
- Critical: Will cause production failures or data loss
- High: Significant bugs or violations that should block release
- Medium: Real issues worth fixing but not blocking
- Low: Nice-to-have improvements
5. **Flag Uncertain Items**: For issues where you're uncertain, mark them as "Needs Human Review" rather than removing them.
## Output
Produce a **Final Consolidated Review Report** with:
- Executive summary (overall code health assessment)
- Issues by severity (Critical → Low), deduplicated and validated
- Conflicts resolved (note any that need user decision)
- Items removed with brief reasoning (transparency)
- Recommended fix order (dependencies, quick wins first)
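The launch itself might look like the following sketch (illustrative only; the essential point is running the reconciliation on opus with every agent report pasted into the Input section):
Task(model: opus, description: "Reconcile review findings", prompt: "<the reconciliation prompt above, with all agent reports inlined>")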
After presenting the final consolidated report, ask the user what they'd like to address:
header: "Next Steps"
question: "Would you like to address any of these findings?"
options:
- "Critical/High only (Recommended)" - Focus on issues that should block release
- "All issues" - Address everything including medium and low severity
- "Skip" - No fixes needed right now
Based on selection:
- Critical/High only: Skill("vibe-workflow:fix-review-issues", "--severity critical,high")
- All issues: Skill("vibe-workflow:fix-review-issues")
- Skip: do not launch the fix workflow