Use when needing multi-model AI assistance, cross-model code review, model comparison, or any task involving codex/gemini/claude CLI tools via MCP. Triggers include "ask codex", "ask gemini", "compare models", "broadcast to all", "chain pipeline", "vote on best answer", "debate between models", or any multi-model collaboration task.
Install as a plugin:

npx claudepluginhub babywbx/superai-mcp --plugin superai-mcp

Recommended one-line install with `skills.sh`:
npx skills add https://github.com/babywbx/SuperAI-MCP --skill superai-mcp
Every call requires `prompt` and `cd` (the working directory). Save the `session_id` from each response for follow-up turns, and always check the `success` field before trusting the output.
codex(prompt="review this file", cd="/path/to/project")
gemini(prompt="analyze this code", cd="/path/to/project", model="gemini-3.1-pro-preview")
claude(prompt="refactor this", cd="/path/to/project")
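A minimal multi-turn sketch, assuming the response exposes the `success` and `session_id` fields described above and that `session_id` is passed back on follow-up calls:

result = codex(prompt="review this file", cd="/path/to/project")
# Check success before trusting the output
if result["success"]:
    # Reuse session_id so the follow-up keeps context (hypothetical follow-up prompt)
    codex(prompt="fix the issues you found", cd="/path/to/project",
          session_id=result["session_id"])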
This skill is limited to using the following tools:

| Tool | Purpose | Key Use Case |
|---|---|---|
| codex | OpenAI Codex CLI | Code generation, review |
| gemini | Google Gemini CLI | Code review, analysis |
| claude | Claude CLI (nested) | Code review, complex tasks |
| broadcast | Parallel multi-model | Code review — send to all, compare |
| batch | Parallel same-target | Send N prompts to 1 CLI concurrently |
| chain | Sequential pipeline | Multi-step workflows (each step sees previous output) |
| vote | Consensus with judge | Pick best answer from candidates |
| debate | Alternating rounds | Explore trade-offs between two models |
| list-models | Discover models | Find available model IDs from OpenRouter |
| status | CLI health check | Verify availability + auth (+ optional quota) |
| usage | Token accounting | Track cumulative usage, cache stats, optional reset/clear |
| quota | Account-level quotas | Check rate limits before heavy work |
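A pre-flight sketch using the health tools above (argument-free calls are an assumption; see references/parameters.md for actual signatures):

status()   # verify each CLI is installed and authenticated
quota()    # check account-level rate limits before heavy work
usage()    # inspect cumulative token usage and cache stats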
# Single tool — pass model directly
gemini(prompt="...", cd="...", model="gemini-3.1-pro-preview")
# Broadcast — per-target models via models dict (NOT top-level model)
broadcast(prompt="...", cd="...", targets=["codex","gemini"],
          models={"gemini": "gemini-3.1-pro-preview"})
# Per-target overrides for any parameter
broadcast(prompt="...", cd="...",
          overrides={"codex": {"timeout": 600}, "claude": {"effort": "high"}})
Short aliases bypass validation: `flash`, `pro`, `sonnet`, `haiku`, `opus`.
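For example, a sketch using an alias in place of a full model ID (resolution to a concrete model is up to the underlying CLI):

claude(prompt="quick sanity check on this diff", cd="/path/to/project", model="haiku")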
Send multiple prompts to the same CLI in one call (bypasses stdio serialization):
batch(target="codex", cd="...", prompts=[
    "Review server.py error handling",
    "Review runner.py timeout logic",
    "Review parsers.py output parsing"
], model="o3-mini", max_concurrency=3)
Use `options` for target-specific params:
batch(target="codex", cd="...", prompts=["q1", "q2"],
      options={"reasoning_effort": "high", "sandbox": "read-only"})
`broadcast` = 1 prompt → N targets. `batch` = N prompts → 1 target. Batch safety limits: `max_concurrency` is capped at 20 and total prompt payload is capped at 20 MiB.
Chain — sequential pipeline, each step sees previous output:
chain(cd="...", steps=[
    {"target": "gemini", "prompt": "Analyze this code"},
    {"target": "claude", "prompt": "Refactor based on analysis"}
])
Vote — parallel candidates, judge picks best:
vote(prompt="...", cd="...", candidates=["codex","gemini"], judge="claude")
Only successful candidate outputs are judged; if all candidates fail, the tool returns `success: false`.
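A hedged sketch of guarding against total candidate failure, reading the `success` field described earlier:

result = vote(prompt="pick the cleanest fix", cd="/path/to/project",
              candidates=["codex", "gemini"], judge="claude")
if not result["success"]:
    # All candidates failed; fall back to a single direct call (hypothetical fallback)
    result = claude(prompt="pick the cleanest fix", cd="/path/to/project")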
Debate — alternating rounds between two models:
debate(prompt="...", cd="...", side_a="codex", side_b="claude", rounds=3)
Codex — `reasoning_effort` (none/minimal/low/medium/high/xhigh), `service_tier` (e.g. "fast"), `sandbox` (read-only/workspace-write/danger-full-access), `multi_agent` (enable subagent spawning), `agents_max_threads` (0 = default, max 20), `agents_max_depth` (0 = default, max 5):
codex(prompt="...", cd="...", reasoning_effort="high", service_tier="fast")
codex(prompt="...", cd="...", multi_agent=True, agents_max_threads=4)
Gemini — `approval_mode` (default/auto_edit/yolo/plan), `include_directories` (extra workspace dirs):
gemini(prompt="...", cd="...", model="gemini-3.1-pro-preview",
       approval_mode="yolo", include_directories=["/data/shared"])
Claude — `effort` (low/medium/high/max), `permission_mode` (overrides sandbox: acceptEdits/dontAsk/plan/auto), `add_dirs`, `fallback_model`, `append_system_prompt`, `json_schema`:
claude(prompt="...", cd="...", effort="max", permission_mode="plan")
claude(prompt="...", cd="...", json_schema='{"type":"object","properties":{"answer":{"type":"string"}}}')
The `files` parameter accepts both exact paths and glob patterns:
codex(prompt="review", cd="...", files=["src/**/*.py", "*.md", "config.toml"])
| Mistake | Fix |
|---|---|
| Using `model` on broadcast | Use the `models` dict for per-target choices; `model` overrides ALL targets |
| Absolute paths in `files` param | Use relative paths or glob patterns within the `cd` directory |
| Not saving `session_id` | Store it from the response for multi-turn conversations |
| Calling gemini without `model` | Pass model="gemini-3.1-pro-preview" for the latest |
| Running multiple reviews separately | Use broadcast — builds context once, fans out in parallel |
| Repeating identical prompts | Pass use_cache=True to reuse cached responses |
| No visibility during long tasks | Pass stream=True to get real-time response chunks (see the sketch after this table) |
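A minimal sketch combining the last two fixes above, assuming `use_cache` and `stream` are accepted by the single-model tools as the table implies:

# Cached, streaming review: repeated identical prompts hit the cache,
# and output arrives as real-time chunks instead of one final blob
gemini(prompt="review src/server.py for race conditions", cd="/path/to/project",
       model="gemini-3.1-pro-preview", use_cache=True, stream=True)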
| Reference | When to Read |
|---|---|
| references/parameters.md | Full parameter reference for all 12 tools |