Consult external AIs (Gemini 2.5 Pro, OpenAI Codex, Claude) for second opinions. Use it for debugging failures, architectural decisions, security validation, or whenever a fresh perspective is needed, then synthesize the answers.
Consults external AIs for second opinions on bugs, architecture, and security validation.
Install with:

```shell
npx claudepluginhub secondsky/claude-skills
```

Bundled files:

- examples/sample-consultation.md
- references/ai-strengths.md
- references/commands-reference.md
- references/setup-guide.md
- references/troubleshooting.md
- references/usage-examples.md
- scripts/setup-apis.sh
- templates/GEMINI.md
- templates/codex.md
- templates/consultation-log-parser.sh

Consult external AIs for second opinions when Claude Code is stuck or making critical decisions.
This skill enables future Claude Code sessions to consult other AIs when stuck or facing critical decisions.
Key innovation: Uses existing CLI tools (gemini, codex) instead of building MCP servers - much simpler and more maintainable.
Claude Code should automatically suggest using this skill:

- After one failed debugging attempt
- Before architectural decisions
- Before security-sensitive changes
- When uncertain about an approach
User can explicitly request consultation with:
- `/consult-gemini [question]` - Gemini 2.5 Pro with thinking, search, and grounding
- `/consult-codex [question]` - OpenAI GPT-4 via Codex CLI (repo-aware)
- `/consult-claude [question]` - fresh Claude subagent (free, fast)
- `/consult-ai [question]` - router that asks which AI to use

| AI | Tool | When to Use | Special Features | Cost |
|---|---|---|---|---|
| Gemini 2.5 Pro | gemini CLI | Web research, latest docs, thinking | Google Search, extended reasoning, grounding | ~$0.10-0.50 |
| OpenAI GPT-4 | codex CLI | Repo-aware analysis, code review | Auto-scans directory, OpenAI reasoning | ~$0.05-0.30 |
| Fresh Claude | Task tool | Quick second opinion, budget-friendly | Same capabilities, fresh perspective | Free |
For detailed AI comparison: Load references/ai-strengths.md when choosing which AI to consult for specific use cases.
```
Claude Code encounters bug/decision
    ↓
Suggests consultation (or user requests)
    ↓
User approves
    ↓
Execute appropriate slash command
    ↓
CLI command calls external AI
    ↓
Parse response
    ↓
Synthesize: Claude's analysis + External AI's analysis
    ↓
Present 5-part comparison
    ↓
Ask permission to implement
```
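The "execute appropriate slash command" step can be pictured as a tiny dispatcher. The keyword rules below are purely illustrative, not the skill's actual routing logic (the real `/consult-ai` asks the user which AI to use rather than guessing):

```shell
# Illustrative router sketch: pick an AI from keywords in the question.
# Hypothetical function; the shipped /consult-ai command prompts the user.
route_consultation() {
  case "$1" in
    *latest*|*research*|*docs*) echo "gemini" ;;  # web research, fresh docs
    *repo*|*codebase*|*review*) echo "codex"  ;;  # repo-aware analysis
    *)                          echo "claude" ;;  # free fresh perspective
  esac
}

route_consultation "what are the latest JWT best practices?"  # prints: gemini
route_consultation "review this repo for bottlenecks"         # prints: codex
route_consultation "am I missing something obvious?"          # prints: claude
```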
Every consultation must follow the 5-part comparison format (this prevents parroting) and must end with: "Should I proceed with this approach?"
For complete installation guide: Load references/setup-guide.md when installing CLIs, configuring API keys, or setting up templates.
Quick setup:
1. Install CLIs: `bun add -g @google/generative-ai-cli` (Gemini), `bun add -g codex` (Codex, optional)
2. Export API keys: `export GEMINI_API_KEY="..."`, `export OPENAI_API_KEY="..."`
3. Install the skill to `~/.claude/skills/multi-ai-consultant`
4. Copy `GEMINI.md`, `codex.md`, and `.geminiignore` to the project root
5. Test: `gemini -p "test"` and `codex exec - --yolo`
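Step 2 can be guarded with a small bash helper that fails fast when a key is missing. `check_env` below is a hypothetical helper, not part of the skill:

```shell
#!/usr/bin/env bash
# Hypothetical helper (not part of the skill): verify required API keys
# are exported before running a consultation command.
check_env() {
  local var missing=0
  for var in "$@"; do
    if [ -z "${!var}" ]; then   # bash indirect expansion
      echo "missing: $var"
      missing=1
    fi
  done
  return "$missing"
}

# Example: with the key set, the check passes.
export GEMINI_API_KEY="test-key"   # placeholder value for the demo
check_env GEMINI_API_KEY && echo "gemini key present"
```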
For detailed examples: Load references/usage-examples.md when learning consultation workflows or seeing real-world scenarios.
Quick examples:
- `/consult-gemini` for a web-researched solution
- `/consult-gemini` for the latest best practices
- `/consult-codex` for a repo-aware consistency check
- `/consult-claude` for a free fresh perspective

Five detailed examples are available in references/usage-examples.md.
For complete command reference: Load references/commands-reference.md when needing detailed syntax, options, or cost tracking information.
Quick command overview:
- `/consult-gemini Is this JWT secure by 2025 standards?`
- `/consult-codex Review for performance bottlenecks`
- `/consult-claude Am I missing something obvious?`
- `/consult-ai How should we structure this architecture?`

Templates customize AI behavior for consultations (auto-loaded from the project root):
- `.gitignore`

Installation: copy templates from `~/.claude/skills/multi-ai-consultant/templates/` to the project root.
Automatic protection:
- `.gitignore` is respected automatically
- `.geminiignore` adds extra exclusions (`.env*`, `*secret*`, `*credentials*`)

Privacy best practices:

- Configure `.gitignore` properly
- Add a `.geminiignore` for extra safety
- Verify exclusions with `git status --ignored`

For detailed privacy configuration: Load references/setup-guide.md when setting up .geminiignore or privacy exclusions.
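A `.geminiignore` built from the exclusion patterns mentioned above might look like this (illustrative only; the `node_modules/` and `dist/` entries are extra suggestions, adjust to your project):

```
# Secrets and credentials
.env*
*secret*
*credentials*

# Large or irrelevant paths (illustrative additions)
node_modules/
dist/
```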
Every consultation is logged to `~/.claude/ai-consultations/consultations.log`.

Log format: `timestamp,ai,model,input_tokens,output_tokens,cost,project_path`

View logs: `consultation-log-parser.sh --summary`
Example output:

```
Total consultations: 47
Gemini: 23 ($4.25), Codex: 12 ($1.85), Fresh Claude: 12 ($0.00)
Total cost: $6.10
```
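A per-AI summary like the one above can be derived from the CSV log with a few lines of awk. This is only a sketch of what `consultation-log-parser.sh --summary` computes, assuming the field order documented above, not its actual implementation:

```shell
# Build a tiny sample log in the documented format:
# timestamp,ai,model,input_tokens,output_tokens,cost,project_path
log=$(mktemp)
cat > "$log" <<'EOF'
2025-11-07T10:00:00,gemini,gemini-2.5-pro,1200,800,0.18,/home/u/proj
2025-11-07T11:30:00,codex,gpt-4,900,400,0.09,/home/u/proj
2025-11-07T12:15:00,claude,subagent,700,500,0.00,/home/u/proj
EOF

# Count consultations and sum cost per AI (field 2 = ai, field 6 = cost).
summary=$(awk -F, '{ n[$2]++; cost[$2] += $6 }
  END { for (ai in n) printf "%s: %d ($%.2f)\n", ai, n[ai], cost[ai] }' "$log")
echo "$summary"
rm -f "$log"
```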
For detailed cost tracking: Load references/commands-reference.md when viewing logs, calculating costs, or managing budgets.
For complete troubleshooting: Load references/troubleshooting.md when encountering errors or setup issues.
Top issues:

- `gemini: command not found` → Fix: `bun add -g @google/generative-ai-cli`
- API key not set → Fix: `export GEMINI_API_KEY="..."`
- `.env` file sent → Fix: add it to `.gitignore` and `.geminiignore`
- Skill not found → Fix: verify it is installed at `~/.claude/skills/multi-ai-consultant`

Typical scenario (stuck on bug):
Total: ~20k tokens, 30-45 minutes
Same scenario with consultation:

- `/consult-gemini` (~1k tokens)
- Total: ~8k tokens, 5-10 minutes
Savings: ~60% tokens, ~75% time
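The token figure follows directly from the totals above:

```shell
# (20k - 8k) / 20k = 60% of tokens saved
echo "tokens saved: $(( (20000 - 8000) * 100 / 20000 ))%"  # prints: tokens saved: 60%
```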
| Aspect | MCP Server | CLI Approach |
|---|---|---|
| Setup time | 4-6 hours | 60-75 minutes |
| Complexity | High (MCP protocol) | Low (bash + CLIs) |
| Maintenance | Update MCP SDK | Update CLI (rare) |
| Flexibility | Locked to AIs | Any AI with CLI |
| Debugging | MCP protocol | Standard bash |
| Dependencies | MCP SDK, npm | Just CLIs |
Winner: CLI approach - 80% less effort, same functionality
Load reference files when working on specific aspects of AI consultation:

- references/ai-strengths.md: choosing which AI to consult for specific use cases
- references/setup-guide.md: installing CLIs, configuring API keys, setting up templates and privacy exclusions
- references/usage-examples.md: learning consultation workflows and real-world scenarios
- references/commands-reference.md: detailed command syntax, options, and cost tracking
- references/troubleshooting.md: diagnosing errors and setup issues
Found an issue?
Adding a new AI?

- Create `commands/consult-newai.md`
- Update `commands/consult-ai.md`
- Add `templates/newai.md` (if the CLI supports system instructions)

Improving synthesis?

- Edit `templates/GEMINI.md` and `templates/codex.md`

Skill files: `planning/multi-ai-consultant-*.md`, `commands/*.md`, `templates/*`, `scripts/*`, `references/*.md` (5 reference files)

MIT License - see LICENSE file
Last Updated: 2025-11-07 | Status: Production Ready | Maintainer: Claude Skills Maintainers (maintainers@example.com)