Second opinion from a different model (OpenAI Codex). Use when reviewing plans, code reviewing, debugging hard problems, or when the user wants a second perspective.
This skill uses the workspace's default tool permissions.
Run OpenAI's Codex CLI non-interactively from Claude Code. Different model family = different blind spots, different strengths.
```shell
_id=$(date +%s%N); codex exec -s read-only \
  -c 'sandbox_permissions=["disk-full-read-access"]' \
  -c 'model_reasoning_effort="xhigh"' \
  -o "/tmp/codex-${_id}.md" "<prompt>" > "/tmp/codex-${_id}-log.md" 2>&1
```
- `-s read-only` + `disk-full-read-access`: can read any file on disk, not write (no codebase conflicts)
- `-o`: writes the final answer to a file; `_id` is an epoch-nanosecond timestamp
- `-c 'model_reasoning_effort="xhigh"'`: always xhigh
- `-C <dir>`: working directory (defaults to cwd; set it when reviewing a different project)

Run via Bash with `run_in_background: true`. You already have `_id` from the command, so read `/tmp/codex-${_id}.md` directly. The log at `/tmp/codex-${_id}-log.md` is there if you need to debug.
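The read-back pattern can be sketched as follows. This is a simulation, not a real run: a stub file stands in for the backgrounded codex job, since the CLI may not be installed where you read this.

```shell
# Sketch of the read-back pattern. A stub file stands in for the answer
# that `codex exec ... -o "$out"` would write in a real background run.
_id=$(date +%s%N)
out="/tmp/codex-${_id}.md"

printf 'stub answer\n' > "$out"   # stand-in for the codex job's output

# Poll until the answer file exists and is non-empty, then read it.
until [ -s "$out" ]; do sleep 1; done
answer=$(cat "$out")
rm -f "$out"
echo "$answer"
```

In a real session the polling is unnecessary: the Bash tool reports when the background job finishes, and you read the file once.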
Codex has zero context from your session. Everything it needs must be in the prompt or readable from the filesystem.
Give it orientation first:
- `./CLAUDE.md` (project root) for project context, knowledge map, and conventions
- `agent/knowledge/` and `agent/tasks/` when relevant; written records are more efficient than re-explaining what's already documented

Add session context it can't get from files:
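A prompt following this shape might look like the sketch below. The file names and session details are illustrative assumptions, not from any real repo:

```shell
# Hypothetical prompt assembly: on-disk orientation first, then session
# context Codex cannot read, then the task. All paths are illustrative.
prompt=$(cat <<'EOF'
Read ./CLAUDE.md for project context, knowledge map, and conventions.
Background: agent/knowledge/auth.md documents the current auth design.

Session context: we just refactored token handling; tokens now rotate hourly.

Task: review src/auth/rotate.ts for race conditions in token rotation.
EOF
)
printf '%s\n' "$prompt"
```

The assembled `$prompt` is what goes in the `"<prompt>"` slot of the `codex exec` command.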
Request structured output:
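One way to pin down the output shape is to name the sections you want in the prompt itself. The section names below are an illustration, not a required format:

```shell
# Hypothetical structured-output request appended to the prompt.
request='Structure your answer as:
## Verdict (one line)
## Issues (ordered by severity, with file:line references)
## Open questions (anything you could not resolve from the repo)'
printf '%s\n' "$request"
```

A fixed skeleton like this makes the answer file easy to scan and easy to diff against your own review.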
GPT-5 is sensitive to contradictory instructions — more so than other models. Keep prompts clean and unambiguous.