OpenAI Codex CLI configuration guide for Claude. Use when users ask about: codex config, ~/.codex/config.toml, codex sandbox policies, codex approval modes, creating codex skills, codex mcp servers, codex model settings, local LLM providers, or integrating tools with Codex.
/plugin marketplace add GGPrompts/my-plugins
/plugin install codexforclaude@my-plugins

This skill inherits all available tools. When active, it can use any tool Claude has access to.
- examples/config-full-auto.toml
- examples/config-local-llm.toml
- examples/config-minimal.toml
- references/config-toml.md
- references/creating-skills.md
- references/mcp-servers.md
- references/model-providers.md
- references/sandbox-approval.md

Configure and extend OpenAI Codex CLI for optimal development workflows.
| Task | Command/Location |
|---|---|
| Config file | ~/.codex/config.toml |
| Add MCP server | codex mcp add <name> -- <command> |
| View MCP servers | /mcp in TUI |
| Skills location | ~/.codex/skills/ or .codex/skills/ |
| Create skill | Use $skill-creator in Codex |
Edit ~/.codex/config.toml for all settings:
# Model settings
model = "gpt-5-codex"
model_reasoning_effort = "medium" # minimal|low|medium|high|xhigh
# Security
sandbox_mode = "workspace-write" # read-only|workspace-write|danger-full-access
approval_policy = "on-failure" # untrusted|on-failure|on-request|never
# Trust specific projects
[projects."/path/to/project"]
trust_level = "trusted"
For detailed config options: See references/config-toml.md
Sandbox modes control file system access:
- read-only - Can only read files, no writes
- workspace-write - Write within project directory only
- danger-full-access - Full system access (use carefully)

Approval policies control command execution:
- untrusted - Only trusted commands run without approval
- on-failure - Auto-run, ask approval only on failures
- on-request - Model decides when to ask
- never - Never ask (dangerous)

For security deep dive: See references/sandbox-approval.md
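The two settings combine in ~/.codex/config.toml. A sketch of two common pairings, using only the values listed above (the pairings themselves are illustrative, not prescriptive):

```toml
# Cautious default: read everything, write nothing, ask before unfamiliar commands
sandbox_mode = "read-only"
approval_policy = "untrusted"

# Day-to-day development: write inside the workspace, only interrupt on failures
# sandbox_mode = "workspace-write"
# approval_policy = "on-failure"
```

A tighter sandbox with stricter approvals is the safer starting point; loosen one axis at a time as trust in a project grows.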
codex mcp add context7 -- npx -y @upstash/context7-mcp
codex mcp add myserver --env API_KEY=xxx -- node server.js
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]
[mcp_servers.figma]
url = "https://mcp.figma.com/mcp"
bearer_token_env_var = "FIGMA_OAUTH_TOKEN"
For full MCP options: See references/mcp-servers.md
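For stdio servers that need secrets, the --env flag shown above appears to map to an env table in config.toml. A sketch, reusing the hypothetical server.js example (the server name and key are placeholders):

```toml
[mcp_servers.myserver]
command = "node"
args = ["server.js"]
# Assumed TOML equivalent of: codex mcp add myserver --env API_KEY=xxx -- node server.js
env = { API_KEY = "xxx" }
```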
Skills extend Codex with specialized knowledge. Structure:
~/.codex/skills/my-skill/
├── SKILL.md # Required: instructions + metadata
├── scripts/ # Optional: executable code
├── references/ # Optional: detailed docs
└── assets/ # Optional: templates
Minimal SKILL.md:
---
name: my-skill
description: When to trigger this skill
---
Instructions for Codex when using this skill.
Quick start: Ask Codex to use $skill-creator to bootstrap a new skill.
For skill development guide: See references/creating-skills.md
# Use specific model
model = "gpt-5-codex"
# Reasoning effort (Responses API)
model_reasoning_effort = "high" # minimal|low|medium|high|xhigh
# Verbosity (GPT-5)
model_verbosity = "medium" # low|medium|high
# Use local provider (LM Studio/Ollama)
model_provider = "oss"
For local LLM setup: See references/model-providers.md
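For a local endpoint, a sketch of a custom provider block, assuming an OpenAI-compatible server on localhost (the provider id, port, and model name are illustrative placeholders):

```toml
model_provider = "local"
model = "qwen2.5-coder"   # whatever model the local server exposes

[model_providers.local]
name = "Local OpenAI-compatible server"
base_url = "http://localhost:11434/v1"   # Ollama default; LM Studio typically serves on :1234/v1
wire_api = "chat"
```

See also the bundled examples/config-local-llm.toml and references/model-providers.md.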
Enable experimental features in [features]:
[features]
# Stable
shell_tool = true
parallel = true
view_image_tool = true
# Beta
unified_exec = true
shell_snapshot = true
# Experimental
skills = true
tui2 = true
codex --full-auto "implement feature X"
# Equivalent to: -a on-request --sandbox workspace-write
codex --dangerously-bypass-approvals-and-sandbox "run tests"
[projects."/home/user/trusted-project"]
trust_level = "trusted"
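To make full-auto behavior the default rather than a per-invocation flag, the same settings can go in config.toml (a sketch, assuming the flag mapping noted in the comment above):

```toml
# Persistent equivalent of --full-auto, per the mapping above
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```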
| Aspect | Codex | Claude Code |
|---|---|---|
| Provider | OpenAI | Anthropic |
| Config | ~/.codex/config.toml | ~/.claude/settings.json |
| Skills | ~/.codex/skills/ | ~/.claude/skills/ + plugins |
| MCP | config.toml sections | .mcp.json or plugin |
| Agents | N/A | Plugin agents |
| Hooks | N/A | Plugin hooks |
- references/config-toml.md - Complete config.toml reference
- references/sandbox-approval.md - Security policies deep dive
- references/mcp-servers.md - MCP server configuration
- references/creating-skills.md - Skill development guide
- references/model-providers.md - Model and provider settings