From majestic-llm
Consults external LLMs (OpenAI Codex, Google Gemini) via CLIs for second opinions on architecture, design decisions, model selection, and approach comparisons.
```bash
npx claudepluginhub majesticlabs-dev/majestic-marketplace --plugin majestic-llm
```

This skill is limited to using the following tools: Bash.
**Audience:** Developers seeking alternative AI perspectives on architecture and design decisions.
This skill:
- Invokes the OpenAI Codex and Google Gemini CLIs via Bash for second opinions, code reviews, and alternative analysis; useful when users request external AI verification or explicitly say "ask codex" or "ask gemini".
- Consults Gemini 2.5 Pro, OpenAI Codex, and Claude for second opinions on debugging failures, architectural decisions, security validation, and fresh perspectives.
- Supports adversarial code reviews, tie-breaking, and multi-model consensus on critical decisions such as security and architecture.
**Goal:** Invoke external LLM CLIs in read-only/sandbox mode to get structured second opinions, then present the results for comparison with Claude's perspective.
**Prerequisites:** Codex CLI installed (verify with `codex --version`) and authenticated (run `codex login`).

| Model | Use Case | Cost |
|---|---|---|
| `gpt-5.1-codex-mini` | Fast, cost-effective (~4x more usage) | Low |
| `gpt-5.1-codex` | Balanced, optimized for agentic tasks (default) | Medium |
| `gpt-5.1-codex-max` | Maximum intelligence for critical decisions | High |
Parse model from prompt if specified (e.g., "using codex-mini, analyze..." or "model: gpt-5.1-codex-max").
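The shorthand-to-model mapping can be sketched as a small shell helper. This is illustrative only (`pick_codex_model` is a hypothetical name, not part of the skill):

```bash
# Hypothetical helper: map shorthand model names found in the user's
# prompt to full Codex model IDs; defaults to gpt-5.1-codex.
pick_codex_model() {
  case "$1" in
    *codex-mini*) echo "gpt-5.1-codex-mini" ;;  # e.g. "using codex-mini, analyze..."
    *codex-max*)  echo "gpt-5.1-codex-max" ;;   # e.g. "model: gpt-5.1-codex-max"
    *)            echo "gpt-5.1-codex" ;;       # no override found
  esac
}
```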
Run Codex in read-only sandbox mode:
```bash
codex exec --sandbox read-only --model <model> "<prompt>"
```
Important flags:
- `--sandbox read-only` - Prevents any code modifications
- `--model <model>` - Model to use (default: `gpt-5.1-codex`)

For complex prompts, use a heredoc:
```bash
codex exec --sandbox read-only --model gpt-5.1-codex "$(cat <<'EOF'
Context: [codebase context]
Question: [specific architectural question]
Please provide:
1. 2-3 alternative approaches
2. Trade-offs for each approach
3. Your recommendation with reasoning
EOF
)"
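The heredoc template can also be wrapped in a small function so the same structure is reused for every consult. A sketch, with `build_consult_prompt` as a hypothetical name:

```bash
# Hypothetical helper: assemble the structured second-opinion prompt
# from the template above, given codebase context and a question.
build_consult_prompt() {
  cat <<EOF
Context: $1
Question: $2
Please provide:
1. 2-3 alternative approaches
2. Trade-offs for each approach
3. Your recommendation with reasoning
EOF
}

# Example read-only consult, using the invocation shown above:
# codex exec --sandbox read-only --model gpt-5.1-codex \
#   "$(build_consult_prompt "Node API, Postgres" "How should we paginate?")"
```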
| Error | Resolution |
|---|---|
| CLI not found | Install with `npm install -g @openai/codex`, then run `codex login` |
| Auth failed | Run `codex login` |
| Timeout (>2 min) | Report partial results or simplify the query |
**Prerequisites:** Gemini CLI installed (verify with `gemini --version`).

| Model | Use Case | Cost |
|---|---|---|
| `gemini-2.5-flash` | Fast, cost-effective consulting | Low |
| `gemini-2.5-pro` | Balanced reasoning | Medium |
| `gemini-3.0-pro-preview` | Latest Gemini 3 Pro (default) | Medium |
Parse model from prompt if specified (e.g., "using flash, analyze..." or "model: gemini-3.0-pro-preview").
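As with Codex, the shorthand mapping can be sketched as a tiny helper (`pick_gemini_model` is a hypothetical name, not part of the skill):

```bash
# Hypothetical helper: map shorthand model names in the user's prompt
# to full Gemini model IDs; defaults to gemini-3.0-pro-preview.
pick_gemini_model() {
  case "$1" in
    *flash*)          echo "gemini-2.5-flash" ;;        # e.g. "using flash, analyze..."
    *gemini-2.5-pro*) echo "gemini-2.5-pro" ;;          # e.g. "model: gemini-2.5-pro"
    *)                echo "gemini-3.0-pro-preview" ;;  # no override found
  esac
}
```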
Run Gemini in sandbox mode:
```bash
gemini --sandbox --output-format text --model <model> "<prompt>"
```
Important flags:
- `--sandbox` - Prevents any code modifications
- `--output-format text` - Returns plain text (vs `json`/`stream-json`)
- `--model <model>` - Model to use (default: `gemini-3.0-pro-preview`)

For complex prompts, use a heredoc:
```bash
gemini --sandbox --output-format text --model gemini-3.0-pro-preview "$(cat <<'EOF'
Context: [codebase context]
Question: [specific architectural question]
Please provide:
1. 2-3 alternative approaches
2. Trade-offs for each approach
3. Your recommendation with reasoning
EOF
)"
```
| Error | Resolution |
|---|---|
| CLI not found | Install with `npm install -g @google/gemini-cli`; see https://github.com/google-gemini/gemini-cli |
| Auth failed | Run `gemini` and follow the authentication prompts |
| Timeout (>2 min) | Report partial results or simplify the query |
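For multi-model consensus, both CLIs can be driven from one place. The sketch below is a dry run: it only prints the commands so they can be reviewed before execution, and `timeout 120` reflects the 2-minute guidance from the error tables. The `consensus_cmds` name and structure are illustrative, not part of the skill:

```bash
# Hypothetical dry-run sketch: print the read-only commands for a
# two-model consensus pass; review them, then run manually or pipe to sh.
consensus_cmds() {
  prompt="$1"
  # timeout 120 enforces the >2 min guidance from the error tables above
  printf "timeout 120 codex exec --sandbox read-only --model gpt-5.1-codex '%s'\n" "$prompt"
  printf "timeout 120 gemini --sandbox --output-format text --model gemini-3.0-pro-preview '%s'\n" "$prompt"
}
```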
Create a focused prompt that includes relevant sections from CLAUDE.md/AGENTS.md and any project constraints mentioned in the user's request.
## [Codex/Gemini] Consulting Results
**Query:** [Original question/topic]
**Model:** [model used] (via [Codex/Gemini] CLI)
### Alternative Perspectives
#### Option 1: [Name]
- **Approach:** [Description]
- **Pros:** [Benefits]
- **Cons:** [Drawbacks]
#### Option 2: [Name]
- **Approach:** [Description]
- **Pros:** [Benefits]
- **Cons:** [Drawbacks]
#### Option 3: [Name]
- **Approach:** [Description]
- **Pros:** [Benefits]
- **Cons:** [Drawbacks]
### Recommendation
[LLM's preferred approach and reasoning]
### Key Insights
- [Insight 1 - something Claude might not have considered]
- [Insight 2]
- [Insight 3]
### Raw Output
<details>
<summary>Full Response</summary>
[Complete unedited response]
</details>