AI cross-verification skill that compares answers from Codex, Gemini, and Claude. Activated by requests like "cross verify", "cross check", "ask other AIs too", "multi AI comparison".
```bash
npx claudepluginhub devbrother2024/claude-plugins --plugin cross-verify
```

This skill uses the workspace's default tool permissions.
Collects and cross-verifies answers from Codex CLI, Gemini CLI, and Claude (Lead) on a given question, then synthesizes a comprehensive analysis.
At least one of the following CLI tools must be installed:
- Codex CLI: `npm install -g @openai/codex`
- Gemini CLI: `npm install -g @google/gemini-cli`, or via Google's official installation

Useful context to gather up front: `git diff` output (if there are changes).

Before spawning agents, check which CLIs are available:

```bash
which codex 2>/dev/null && echo "CODEX_OK" || echo "CODEX_NOT_FOUND"
which gemini 2>/dev/null && echo "GEMINI_OK" || echo "GEMINI_NOT_FOUND"
```
Try creating a team first:
```
TeamCreate(team_name: "cross-verify", description: "AI cross-verification")
```
If TeamCreate fails (Agent Teams not enabled):
Inform the user: "Agent Teams is not enabled. Set `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1` in your settings.json env. Would you like me to enable it?"

On success, spawn agents in a single message (parallel):
Codex reviewer (only if codex is available):
```
Agent(
  description: "Run Codex CLI analysis",
  prompt: "<composed prompt with context>",
  name: "codex-reviewer",
  subagent_type: "codex-reviewer",
  team_name: "cross-verify",
  model: "haiku",
  mode: "dontAsk"
)
```
Gemini reviewer (only if gemini is available):
```
Agent(
  description: "Run Gemini CLI analysis",
  prompt: "<composed prompt with context>",
  name: "gemini-reviewer",
  subagent_type: "gemini-reviewer",
  team_name: "cross-verify",
  model: "haiku",
  mode: "dontAsk"
)
```
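Each reviewer subagent runs its CLI to produce the analysis. A minimal sketch of the kind of non-interactive calls involved, assuming current Codex CLI (`codex exec`) and Gemini CLI (`-p`) flags; the prompt text is a placeholder, not part of the skill:

```bash
# Sketch only: non-interactive invocations a reviewer subagent might issue.
# Flags can differ across CLI versions; the prompt text is illustrative.
codex exec "Review these changes for bugs and risks: <composed prompt with context>"
gemini -p "Review these changes for bugs and risks: <composed prompt with context>"
```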
If Teams is not available, spawn agents sequentially with `run_in_background: true` (the Gemini reviewer follows the same pattern; see the sketch after this block):

```
Agent(
  description: "Run Codex CLI analysis",
  prompt: "<composed prompt>",
  name: "codex-reviewer",
  subagent_type: "codex-reviewer",
  model: "haiku",
  mode: "dontAsk",
  run_in_background: true
)
```
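The Gemini reviewer is spawned the same way in this fallback; a sketch mirroring the parallel example above, with only `run_in_background` added:

```
Agent(
  description: "Run Gemini CLI analysis",
  prompt: "<composed prompt>",
  name: "gemini-reviewer",
  subagent_type: "gemini-reviewer",
  model: "haiku",
  mode: "dontAsk",
  run_in_background: true
)
```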
While agents are working, the Lead performs its own analysis on the same question.
After receiving all results, synthesize in this format:
```markdown
## Cross-Verification Results

### Consensus (all models agree)
- High-confidence conclusions

### Unique Insights
- **Codex (GPT)**: Findings unique to this model
- **Gemini**: Findings unique to this model
- **Claude**: Findings unique to this model

### Conflicts (disagreements)
- Per item: each model's position + Lead's judgment

### Final Conclusion
- Synthesized recommendation
```
After output, send shutdown to teammates:
```
SendMessage(to: "codex-reviewer", message: {type: "shutdown_request"})
SendMessage(to: "gemini-reviewer", message: {type: "shutdown_request"})
```
Error handling:

| Scenario | Action |
|---|---|
| CLI not installed | Skip that model, proceed with remaining |
| API error (ModelNotFoundError, etc.) | Skip that model, note in results |
| Timeout (agent doesn't respond) | Synthesize with available results |
| Both CLIs fail | Compare Lead analysis against error context |
| TeamCreate fails | Show activation guide, fallback to sequential Agent |
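When a reviewer CLI is called directly from a shell, the timeout case can also be bounded up front. A sketch assuming GNU coreutils `timeout`; the 120-second limit and prompts are illustrative:

```bash
# Sketch: hard-bound each CLI call so a hung reviewer cannot stall synthesis.
# Requires GNU coreutils `timeout`; the limit and prompt text are illustrative.
timeout 120 codex exec "<composed prompt>" || echo "codex failed or timed out"
timeout 120 gemini -p "<composed prompt>" || echo "gemini failed or timed out"
```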
Context to include by request type:

| Request Type | Context to Include |
|---|---|
| Code review | git diff + relevant file contents |
| Architecture question | Project structure + key config files |
| Bug analysis | Error logs + related code |
| General technical question | Question only |
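As an illustration, context for a code review request could be gathered with standard git commands before composing the prompt; the exact commands and any truncation are left to the Lead:

```bash
# Sketch: collecting code-review context to embed in the composed prompt.
git diff                  # unstaged changes
git diff --staged         # staged changes
git log --oneline -5      # recent commits, for surrounding context
```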