From superai-mcp
Use when performing code review with multiple AI models, comparing diff reviews, reviewing uncommitted changes, reviewing pull requests, or when the user says "review my code", "code review", "review this PR", "review uncommitted", "compare reviews". Produces aggregated multi-model feedback in a single call.
```shell
npx claudepluginhub babywbx/superai-mcp --plugin superai-mcp
```

This skill is limited to using the following tools:
Review code changes by broadcasting to multiple AI models in parallel. One call builds context once and fans out — faster and more consistent than sequential reviews.
Verifies tests pass on a completed feature branch, presents options to merge locally, create a GitHub PR, keep as-is, or discard; executes the choice and cleans up the worktree.
Guides root-cause investigation for bugs, test failures, unexpected behavior, performance issues, and build failures before proposing fixes.
Writes implementation plans from specs for multi-step tasks, mapping affected files and breaking work into bite-sized TDD steps before coding.
```python
# Review uncommitted changes with all CLIs
broadcast(
    prompt="Review this diff for bugs, security issues, and improvements",
    cd="/path/to/project",
    review_uncommitted=True
)

# Review against a branch (e.g. PR review)
broadcast(
    prompt="Review these changes for correctness and style",
    cd="/path/to/project",
    review_base="main"
)

# Review a specific commit
broadcast(
    prompt="Review this commit",
    cd="/path/to/project",
    review_commit="abc1234"
)
```
```python
# All three CLIs (default)
broadcast(prompt="...", cd="...", review_uncommitted=True)

# Specific targets
broadcast(prompt="...", cd="...", targets=["codex", "gemini"], review_uncommitted=True)
```
Recommendation: Use `targets=["codex", "gemini"]` for code review. Two perspectives are usually sufficient and faster than three.
| Mode | Parameter | Use Case |
|---|---|---|
| Uncommitted | `review_uncommitted=True` | Review working tree changes before commit |
| Branch diff | `review_base="main"` | Review all changes on a feature branch |
| Commit | `review_commit="abc1234"` | Review a specific commit (7-40 hex SHA) |
| File list | `files=["src/foo.py"]` | Review specific files (relative paths only) |
Review modes are mutually exclusive; pass only one of them per call. `files` can be combined with any review mode to add extra context.
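The mutual-exclusivity rule above can be sketched as a small validator. This is a hypothetical helper for illustration, not part of the tool's API:

```python
def pick_review_mode(review_uncommitted=False, review_base=None, review_commit=None):
    """Return the single active review mode name, or None if no mode is set.

    Raises ValueError when more than one mode is passed, mirroring the
    documented rule that review modes are mutually exclusive.
    """
    active = [name for name, value in [
        ("review_uncommitted", review_uncommitted),
        ("review_base", review_base),
        ("review_commit", review_commit),
    ] if value]
    if len(active) > 1:
        raise ValueError(f"review modes are mutually exclusive, got: {active}")
    return active[0] if active else None
```

So `pick_review_mode(review_uncommitted=True)` is fine, while combining `review_uncommitted=True` with `review_base="main"` raises.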
```python
broadcast(
    prompt="Review this diff",
    cd="/path/to/project",
    review_uncommitted=True,
    models={"gemini": "gemini-3.1-pro-preview"},
    overrides={
        "codex": {"reasoning_effort": "high"},
        "claude": {"effort": "high"}
    }
)
```
The response contains one result per target. Compare them:
- Use `broadcast` instead of calling codex + gemini + claude separately; it builds context once
- Pass `system_prompt="Focus on critical bugs and security issues only"` to keep output focused
- Reuse the `session_id` from each target's result if you want follow-up questions per model
- Check the `success` field for each target; one target failing doesn't block others
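Comparing the per-target results can be sketched as below, assuming each result carries the `success` and `session_id` fields mentioned above plus an `output` field (the actual response shape may differ):

```python
def summarize_results(results):
    """Split per-target results into successful outputs and failed target names.

    A failed target is simply reported; it does not block the others,
    matching the documented behavior of the success field.
    """
    ok = {t: r["output"] for t, r in results.items() if r.get("success")}
    failed = [t for t, r in results.items() if not r.get("success")]
    return ok, failed

# Hypothetical response data for illustration
results = {
    "codex": {"success": True, "output": "LGTM, one nit on error handling", "session_id": "s1"},
    "gemini": {"success": False, "output": "", "session_id": None},
}
ok, failed = summarize_results(results)
```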