# Codex
Invokes Codex CLI for AI-driven code analysis, refactoring, and automated editing using GPT-5.2. Manages sandbox modes, full-auto runs, and session resuming.
Install with `npx claudepluginhub softaworks/agent-toolkit --plugin codex`. This skill uses the workspace's default tool permissions.
1. Default to the `gpt-5.2` model. Ask the user (via `AskUserQuestion`) which reasoning effort to use (`xhigh`, `high`, `medium`, or `low`). The user can override the model if needed (see Model Options below).
2. Default to `--sandbox read-only` unless edits or network access are necessary.

Key flags:
- `-m, --model <MODEL>`
- `--config model_reasoning_effort="<high|medium|low>"`
- `--sandbox <read-only|workspace-write|danger-full-access>`
- `--full-auto`
- `-C, --cd <DIR>`
- `--skip-git-repo-check`

Resuming: resume the most recent session with `codex exec --skip-git-repo-check resume --last`, passing the prompt via stdin. When resuming, don't use any configuration flags unless the user explicitly requests them, e.g. if they specify the model or the reasoning effort when asking to resume. Resume syntax: `echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null`. All flags must be inserted between `exec` and `resume`.

Append `2>/dev/null` to all `codex exec` commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.

| Use case | Sandbox mode | Key flags |
|---|---|---|
| Read-only review or analysis | `read-only` | `--sandbox read-only 2>/dev/null` |
| Apply local edits | `workspace-write` | `--sandbox workspace-write --full-auto 2>/dev/null` |
| Permit network or broad access | `danger-full-access` | `--sandbox danger-full-access --full-auto 2>/dev/null` |
| Resume recent session | Inherited from original | `echo "prompt" \| codex exec --skip-git-repo-check resume --last 2>/dev/null` (no other flags allowed) |
| Run from another directory | Match task needs | `-C <DIR>` plus other flags, `2>/dev/null` |
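As a sketch, the read-only pattern from the table above can be composed in a small script. The prompt text is a placeholder; the script only prints the full invocation so you can pipe the prompt to `codex` when ready:

```shell
#!/bin/sh
# Compose the read-only review invocation from the table above.
# PROMPT is illustrative; replace it with the real review request.
PROMPT="Review this repository for obvious bugs"
CMD="codex exec --sandbox read-only --skip-git-repo-check"

# 2>/dev/null suppresses thinking tokens (stderr), per the table.
echo "echo \"$PROMPT\" | $CMD 2>/dev/null"
```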
Model Options:

| Model | Best for | Context window | Key features |
|---|---|---|---|
| `gpt-5.2-max` | Max model: ultra-complex reasoning, deep problem analysis | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| `gpt-5.2` ⭐ | Flagship model: software engineering, agentic coding workflows | 400K input / 128K output | 76.3% SWE-bench, adaptive reasoning, $1.25/$10.00 |
| `gpt-5.2-mini` | Cost-efficient coding (4x more usage allowance) | 400K input / 128K output | Near-SOTA performance, $0.25/$2.00 |
| `gpt-5.1-thinking` | Ultra-complex reasoning, deep problem analysis | 400K input / 128K output | Adaptive thinking depth, runs 2x slower on hardest tasks |
GPT-5.2 Advantages: 76.3% SWE-bench (vs 72.8% GPT-5), 30% faster on average tasks, better tool handling, reduced hallucinations, improved code quality. Knowledge cutoff: September 30, 2024.
Reasoning Effort Levels:
- `xhigh` - Ultra-complex tasks (deep problem analysis, complex reasoning, deep understanding of the problem)
- `high` - Complex tasks (refactoring, architecture, security analysis, performance optimization)
- `medium` - Standard tasks (code organization, feature additions, bug fixes)
- `low` - Simple tasks (quick fixes, simple changes, code formatting, documentation)

Cached Input Discount: 90% off ($0.125/M tokens) for repeated context; the cache lasts up to 24 hours.
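The chosen effort level maps directly onto the `--config model_reasoning_effort` flag. A minimal sketch, assuming a hypothetical task label (`TASK` is not a Codex concept, just a local variable for picking a level):

```shell
#!/bin/sh
# Pick a reasoning effort from an illustrative task label,
# mirroring the effort levels listed above.
TASK="bug-fix"

case "$TASK" in
  analysis)           EFFORT="xhigh"  ;;
  refactor|security)  EFFORT="high"   ;;
  feature|bug-fix)    EFFORT="medium" ;;
  *)                  EFFORT="low"    ;;
esac

# Print the resulting flag; append 2>/dev/null to hide thinking tokens.
echo "codex exec --config model_reasoning_effort=\"$EFFORT\" 2>/dev/null"
```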
Follow-up and safety:
- After each `codex` command, immediately use `AskUserQuestion` to confirm next steps, collect clarifications, or decide whether to resume with `codex exec resume --last`.
- To continue a session: `echo "new prompt" | codex exec resume --last 2>/dev/null`. The resumed session automatically uses the same model, reasoning effort, and sandbox mode as the original session.
- Stop if `codex --version` or a `codex exec` command exits non-zero; request direction before retrying.
- Before using privileged flags (`--full-auto`, `--sandbox danger-full-access`, `--skip-git-repo-check`), ask the user for permission via `AskUserQuestion` unless it was already given.

Requires Codex CLI v0.57.0 or later for GPT-5.2 model support. The CLI defaults to `gpt-5.2` on macOS, Linux, and Windows. Check the version with `codex --version`.
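The resume flow described above can be sketched as follows. The follow-up prompt is a placeholder, and the script only prints the command; note that no configuration flags are included, since the resumed session inherits them:

```shell
#!/bin/sh
# Resume the latest Codex session with a follow-up prompt.
# No config flags: the resumed session inherits the model,
# reasoning effort, and sandbox mode of the original session.
FOLLOW_UP="Now apply the fix you suggested"
RESUME="codex exec --skip-git-repo-check resume --last"

echo "echo \"$FOLLOW_UP\" | $RESUME 2>/dev/null"
```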
Use the `/model` slash command within a Codex session to switch models, or configure the default in `~/.codex/config.toml`.
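For example, a `~/.codex/config.toml` setting these defaults might look like the following sketch. The key names follow the `--config` flags shown above and should be checked against your installed CLI version:

```toml
# ~/.codex/config.toml — sketch of defaults, assuming key names
# matching the --config flags documented above.
model = "gpt-5.2"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
```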