From codex-skill
Provides second opinions, verifies implementations, and researches libraries, APIs, and code patterns using Codex CLI for technical validation.
```
npx claudepluginhub cathrynlavery/codex-skill
```

This skill uses the workspace's default tool permissions.
Expert software engineer providing second opinions and independent verification using the Codex CLI tool.
Delegates complex code generation, refactoring, architectural analysis, and review tasks to OpenAI's Codex CLI (GPT-5.3-codex models) via safe workflows with sandboxing and approvals. Activates on explicit triggers like 'use codex' or 'codex exec'.
Consults OpenAI Codex (GPT-5) via CLI for code investigation, debugging, or review. Runs read-only with full project access; activates on 'ask codex' phrases or /ask-codex.
Performs code reviews using OpenAI Codex CLI with GPT-5.2-Codex, detecting bugs, security flaws, and style issues. Supports Git diffs, PR comments, uncommitted changes, and CI/CD integration.
Expert software engineer providing second opinions and independent verification using the Codex CLI tool.
Serve as Claude Code's technical consultant for:
```
codex exec --dangerously-bypass-approvals-and-sandbox "Your query here"
```
Flags:

- `exec` is REQUIRED for non-interactive/automated use
- `--dangerously-bypass-approvals-and-sandbox` enables full access
- `--model <model>` or `-m <model>`: specify the model (e.g., gpt-5.5, gpt-5.4, gpt-5.3-codex, gpt-5.3-codex-spark, gpt-5.1-codex-mini)
- `-c model_reasoning_effort=<level>`: set reasoning effort (low, medium, high, xhigh). Use the config override, NOT `--reasoning-effort` (that flag doesn't exist)
- `--full-auto`: enable full auto mode

Models:

- `gpt-5.5`: newest frontier agentic coding model; 400k context window, supports reasoning levels low/medium/high/xhigh. Use for the deepest analysis, novel architecture, or the hardest problems. Slower than 5.4, so reserve it for when reasoning depth matters more than latency.
- `gpt-5.4` (default): previous frontier model; 1M context window (272k standard-price tier), text+image input. Capable enough for most plan reviews and verification tasks, and noticeably faster than 5.5. Use as the standard workhorse.
- `gpt-5.3-codex-spark`: ultra-fast, ~1200 tok/s on Cerebras hardware (~15x faster than 5.3-codex); text-only, 128k context. Drop to this for trivial fact checks where speed dominates.
- `gpt-5.3-codex`: full 5.3 model, ~65 tok/s; 272k context. Alternative general-purpose option.
- `gpt-5.2-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`

When to escalate to 5.5: complex multi-file architecture analysis, novel algorithmic problems, security-critical review, or any case where 5.4 gives a shallow answer. Use `-m gpt-5.5 -c model_reasoning_effort=high` (or `xhigh` for maximum depth).
When to drop to Spark: trivial fact checks, quick lookups, or when you need sub-second answers and 5.4's depth is overkill.
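As a sketch, the escalation rules above can be wrapped in a small shell helper. The function name and tier labels are illustrative assumptions, not part of the Codex CLI:

```shell
# Hypothetical helper (not part of Codex CLI): map a task tier to model flags.
pick_codex_flags() {
  case "$1" in
    deep) echo "-m gpt-5.5 -c model_reasoning_effort=high" ;;  # hardest problems
    fast) echo "-m gpt-5.3-codex-spark" ;;                     # trivial fact checks
    *)    echo "-m gpt-5.4" ;;                                 # default workhorse
  esac
}

# Usage sketch (query text is made up):
# codex exec --dangerously-bypass-approvals-and-sandbox \
#   $(pick_codex_flags deep) "Audit the auth middleware for session fixation"
```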
IMPORTANT: Codex is designed for thoroughness over speed:
```
codex exec --dangerously-bypass-approvals-and-sandbox "Context: [Project name] ([tech stack]). Relevant docs: @/CLAUDE.md plus package-level CLAUDE.md files. Task: <short task>. Repository evidence: <paths/lines from rg/git>. Constraints: [constraints]. Please return: (1) decisive answer; (2) supporting citations (paths:line); (3) risks/edge cases; (4) recommended next steps/tests; (5) open questions. List any uncertainties explicitly."
```
Always provide project context:
```
codex exec --dangerously-bypass-approvals-and-sandbox "Context: This is the [Project] monorepo, a [description] using [tech stack].
Key documentation is at @/CLAUDE.md
Note: Similar to how Codex looks for AGENTS.md files, this project uses CLAUDE.md files in various directories:
- Root CLAUDE.md: Overall project guidance
- [Additional CLAUDE.md locations as relevant]
[Your specific question here]"
```
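To keep that context boilerplate consistent across queries, one option is a small prompt-builder function. This is a hedged sketch; the function name and argument names are assumptions for illustration:

```shell
# Hypothetical prompt builder: fills the context template before the query.
build_codex_prompt() {
  local project="$1" stack="$2" question="$3"
  printf 'Context: This is the %s monorepo, using %s.\n' "$project" "$stack"
  printf 'Key documentation is at @/CLAUDE.md\n'
  printf '%s\n' "$question"
}

# Usage sketch (project name and question are made up):
# codex exec --dangerously-bypass-approvals-and-sandbox \
#   "$(build_codex_prompt acme 'TypeScript + pnpm' 'Is the retry logic safe?')"
```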
Model quick reference:

- Default: `gpt-5.4`
- Deep analysis: `-m gpt-5.5 -c model_reasoning_effort=high`
- Speed: `-m gpt-5.3-codex-spark`

Before querying Codex:
- `rg <token>` in repo for existing patterns
- `CLAUDE.md` (root, package, `.claude/*`) for norms
- `git log -p -- <file/dir>` if history matters

Ask Codex for a structured reply:
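The `rg` evidence can be condensed into the `path:line` list the prompt template expects. The formatter below is a sketch, not an established tool:

```shell
# Hypothetical formatter: turn `rg -n` output ("path:line:match") into a
# comma-separated, de-duplicated "path:line" evidence list for the prompt.
format_evidence() {
  cut -d: -f1,2 | sort -u | paste -sd, -
}

# Usage sketch (token and path are illustrative):
# rg -n "retryCount" src/ | format_evidence
```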
Prefer summaries and file/line references over pasting large snippets. Avoid secrets/env values in prompts.
After receiving Codex's response, verify:
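One lightweight way to spot-check the `path:line` citations in a reply is to print the cited line and compare it against Codex's claim. A hedged sketch; the helper name is made up:

```shell
# Hypothetical spot-check: given a single "path:line" citation, print that
# line from the file so it can be compared against Codex's claim.
check_citation() {
  local path="${1%:*}" line="${1##*:}"
  [ -f "$path" ] && sed -n "${line}p" "$path"
}

# Usage sketch: check_citation "src/api/client.ts:42"
```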