# Query OpenAI Codex for Root Cause Analysis (read-only, no edits)
Queries OpenAI Codex for root cause analysis in read-only mode with smart model selection.
Install:

/plugin marketplace add GGPrompts/my-plugins
/plugin install ggprompts-full-toolkit@GGPrompts/my-plugins

Build a focused debug prompt for Codex, then run it in read-only mode.
Default: gpt-5.2-codex (latest frontier agentic coding model)
Fallback: gpt-5.1-codex-mini (cheaper, stretch limits when approaching rate limits)
Available models:
- gpt-5.2-codex - Latest frontier, best for complex debugging (default)
- gpt-5.1-codex-max - Deep + fast reasoning, flagship
- gpt-5.1-codex-mini - Faster/cheaper, use to stretch limits
- gpt-5.2 - General reasoning, broad knowledge (non-codex optimized)

Heuristics:
- gpt-5.2-codex for most debugging (default)
- gpt-5.1-codex-mini when approaching rate limits
- gpt-5.2 for general, non-code reasoning

Override: User can append --model=latest, --model=max, --model=mini, or --model=general
```bash
# Standard (human-readable output)
codex exec -m gpt-5.2-codex --config model_reasoning_effort="high" --sandbox read-only "PROMPT"

# Automation (structured JSON output for parsing)
codex exec --json -m gpt-5.2-codex --config model_reasoning_effort="high" --sandbox read-only "PROMPT"
```
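The two invocations differ only in the `--json` flag. A small helper can assemble either form without running it (a sketch; the function name is illustrative, the flag layout follows the commands above):

```shell
#!/bin/sh
# Build the `codex exec` command line for either output mode.
# mode: "human" or "json"; model and prompt are passed through as-is.
build_codex_cmd() {
  mode=$1 model=$2 prompt=$3
  if [ "$mode" = "json" ]; then
    printf 'codex exec --json -m %s --config model_reasoning_effort="high" --sandbox read-only "%s"\n' "$model" "$prompt"
  else
    printf 'codex exec -m %s --config model_reasoning_effort="high" --sandbox read-only "%s"\n' "$model" "$prompt"
  fi
}

# Example: print the automation-mode command without executing it.
build_codex_cmd json gpt-5.2-codex "Why does the test fail?"
```

Printing the command before executing it makes it easy to log exactly what was sent to Codex.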
Model mapping for overrides:
- --model=latest → gpt-5.2-codex (default)
- --model=max → gpt-5.1-codex-max
- --model=mini → gpt-5.1-codex-mini
- --model=general → gpt-5.2

Reasoning effort levels: minimal | low | medium | high | xhigh
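The override mapping above can be expressed as a small shell helper (a sketch; the function name is illustrative, the flags and model IDs follow the mapping above):

```shell
#!/bin/sh
# Map a --model= override flag to the concrete Codex model ID.
# An absent or unrecognized override falls back to the default model.
resolve_model() {
  case "$1" in
    --model=latest|"") echo "gpt-5.2-codex" ;;      # default
    --model=max)       echo "gpt-5.1-codex-max" ;;
    --model=mini)      echo "gpt-5.1-codex-mini" ;;
    --model=general)   echo "gpt-5.2" ;;
    *)                 echo "gpt-5.2-codex" ;;      # unknown flag → default
  esac
}

resolve_model --model=mini   # gpt-5.1-codex-mini
```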
- high for complex debugging
- xhigh for extremely difficult problems (slower, more tokens)

Notes:
- --sandbox read-only - Codex can only analyze, not edit files (default for exec, but explicit is safer)
- timeout: 600000 (10 min) - run synchronously, not in background

# Bug Investigation Request
## Issue
[Brief bug description]
## Context
- **Files:** path/to/file.ext:line
- **What's happening:** [observed behavior]
- **Expected:** [what should happen]
- **Tried:** [previous attempts]
## Code Snippet
[Small, relevant excerpt if helpful]
## Question
What is the root cause? Include:
1. Root cause analysis
2. Why prior attempts failed (if applicable)
3. Suggested fix approach
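One way to assemble the template above into a single prompt string is a heredoc helper (a sketch; the function name and argument order are illustrative, the field names mirror the template):

```shell
#!/bin/sh
# Fill the Bug Investigation Request template from positional arguments:
# $1 issue, $2 file:line, $3 observed behavior, $4 expected, $5 prior attempts.
build_debug_prompt() {
  cat <<EOF
# Bug Investigation Request

## Issue
$1

## Context
- **Files:** $2
- **What's happening:** $3
- **Expected:** $4
- **Tried:** $5

## Question
What is the root cause? Include:
1. Root cause analysis
2. Why prior attempts failed (if applicable)
3. Suggested fix approach
EOF
}

build_debug_prompt "Login 500s" "auth/session.py:42" "500 on POST /login" "302 redirect" "cleared cache"
```

The result can be passed directly as the quoted PROMPT argument to `codex exec`.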
If the user supplied a --model= override, substitute the mapped model:

```bash
codex exec -m {{model}} --config model_reasoning_effort="high" --sandbox read-only "prompt"
```

The fallback model is gpt-5.1-codex-mini. If you hit rate limits with gpt-5.2-codex:
```bash
# Fallback to mini model (explicitly recommended by OpenAI to stretch limits)
codex exec -m gpt-5.1-codex-mini --config model_reasoning_effort="high" --sandbox read-only "PROMPT"
```
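The fallback can be scripted: try the default model and, on a non-zero exit, retry with the mini model. A sketch, assuming the CLI exits non-zero when rate limited; `CODEX_BIN` is an illustrative hook so the wrapper can be tested without the real binary:

```shell
#!/bin/sh
# Run the analysis with the default model; on failure retry with the
# cheaper mini model. CODEX_BIN defaults to the real CLI.
: "${CODEX_BIN:=codex}"

run_with_fallback() {
  prompt=$1
  "$CODEX_BIN" exec -m gpt-5.2-codex --config model_reasoning_effort="high" --sandbox read-only "$prompt" \
    || "$CODEX_BIN" exec -m gpt-5.1-codex-mini --config model_reasoning_effort="high" --sandbox read-only "$prompt"
}

# Usage: run_with_fallback "PROMPT"
```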
**Model:** {{model}}
## Codex Analysis
[Codex's response - skip session info, only show the analysis]
---
**Summary:** [2-3 sentence summary of findings]
**Next:** Would you like me to implement this fix?
Troubleshooting:

- Codex CLI missing: check `which codex` / `codex --version`
- Not authenticated: run `codex login`
- Analysis runs long: raise the timeout to 900000 (15 min)
- Rate limited: retry with `--model=mini`

Remember: Codex analyzes, you implement. Always use `--sandbox read-only`.
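The installation and login checks can be collected into a preflight helper (a sketch; the function name is illustrative, and the binary name is parameterized so the check can be pointed at any CLI):

```shell
#!/bin/sh
# Verify the Codex CLI is installed before building a prompt.
# Prints an install hint and returns non-zero when the binary is missing.
preflight() {
  bin=${1:-codex}
  if ! command -v "$bin" >/dev/null 2>&1; then
    echo "$bin not found: install the Codex CLI, then run '$bin login'"
    return 1
  fi
  "$bin" --version
}

# Usage: preflight || exit 1
```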