From ai-docs-guide
Query OpenAI Codex CLI configuration, features, and documentation. Use this skill proactively when the conversation involves:
- Codex CLI installation, setup, or authentication
- Codex CLI configuration (config.toml, AGENTS.md, profiles)
- Codex models (GPT-5.4, GPT-5.3-Codex, Codex-Spark)
- Codex CLI commands, flags, slash commands
- Codex features (fast mode, web search, multi-agent, MCP, skills)
- Codex speed/reasoning settings
- Codex SDK, non-interactive mode, automation
- Codex security, sandboxing, approval modes
- Codex IDE extension or app configuration
- Codex integrations (GitHub, Slack, Linear)
npx claudepluginhub psychquant/psychquant-claude-plugins --plugin ai-docs-guide
Query OpenAI Codex source code and official documentation for accurate, up-to-date information.
Primary source: openai/codex GitHub repo — the definitive source for config schemas, valid values, CLI flags, protocol definitions, and SDK types.
Secondary source: WebFetch from https://developers.openai.com/codex/ — for conceptual guides, tutorials, and best practices.
You MUST query the source — never answer from memory!
| Query Type | Method | Example |
|---|---|---|
| Config valid values, schema | Repo: config.schema.json | "What values does reasoning_effort accept?" |
| CLI flags, options | Repo: main.rs or docs/ | "What flags does codex accept?" |
| Model definitions, defaults | Repo: models.json | "What's the default reasoning effort for gpt-5.4?" |
| Protocol types, enums | Repo: protocol src | "What ReasoningEffort variants exist?" |
| SDK types, interfaces | Repo: sdk/ | "What options does exec() accept?" |
| Conceptual guides, tutorials | WebFetch docs site | "How does sandboxing work?" |
| Pricing, enterprise, auth | WebFetch docs site | "How much does Codex cost?" |
Search for keywords across the repo:
gh search code "keyword" --repo openai/codex --limit 20
Read specific key files via raw URL (preferred — avoids base64/size limits):
# Config schema (all valid config keys and values) — 78KB
curl -sL https://raw.githubusercontent.com/openai/codex/main/codex-rs/core/config.schema.json
# Model definitions (models, defaults, reasoning efforts) — 251KB, too large for gh api contents
curl -sL https://raw.githubusercontent.com/openai/codex/main/codex-rs/core/models.json
# CLI entry point (flags, arguments)
curl -sL https://raw.githubusercontent.com/openai/codex/main/codex-rs/exec/src/main.rs
# In-repo documentation
gh api repos/openai/codex/contents/docs/ -q '.[].name'
curl -sL https://raw.githubusercontent.com/openai/codex/main/docs/config.md
IMPORTANT: Do NOT use `gh api repos/.../contents/FILE -q '.content' | base64 -d` for files > 100KB. The GitHub API returns empty content for large files. Always use `curl -sL` with raw.githubusercontent.com.
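Putting the two steps together, a query like "What values does reasoning_effort accept?" becomes curl plus a jq filter over the schema. The property name and jq path below are assumptions about the schema's layout, demonstrated against an inline stand-in fragment rather than the live file, so verify the path against the real download.

```shell
# Stand-in fragment mimicking one property of config.schema.json;
# in practice, pipe the curl output from above into jq instead.
schema='{"properties":{"model_reasoning_effort":{"enum":["none","minimal","low","medium","high","xhigh"]}}}'

# -r prints raw strings, one enum value per line
echo "$schema" | jq -r '.properties.model_reasoning_effort.enum[]'
```

With the live file, the same filter applies: `curl -sL .../config.schema.json | jq -r '…'`.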
Key files in openai/codex repo:
| File | Contains |
|---|---|
| codex-rs/core/config.schema.json | Full config schema with all valid values |
| codex-rs/core/models.json | Model definitions, defaults, supported efforts |
| codex-rs/exec/src/main.rs | CLI entry point, flags, arguments |
| codex-rs/protocol/src/config_types.rs | Rust config types and enums |
| codex-rs/protocol/src/openai_models.rs | ReasoningEffort enum, model types |
| sdk/typescript/src/threadOptions.ts | TypeScript SDK types |
| sdk/typescript/src/exec.ts | SDK exec options, CLI arg mapping |
| sdk/python/src/codex_app_server/ | Python SDK types |
| codex-rs/docs/ | Internal protocol and interface docs |
| docs/ | User-facing documentation (markdown) |
Prepend https://developers.openai.com to paths:
| Topic | URL Path |
|---|---|
| Codex overview | /codex/ |
| Quickstart | /codex/quickstart/ |
| CLI overview | /codex/cli/ |
| CLI features | /codex/cli/features/ |
| CLI reference | /codex/cli/reference |
| Slash commands | /codex/cli/slash-commands/ |
| Config basics | /codex/config/basics/ |
| Speed & fast mode | /codex/speed/ |
| Models | /codex/models/ |
| AGENTS.md | /codex/agents-md/ |
| MCP setup | /codex/mcp/ |
| Skills | /codex/skills/ |
| SDK | /codex/sdk/ |
| Non-interactive | /codex/non-interactive/ |
| Security | /codex/security/ |
| Best practices | /codex/learn/best-practices/ |
WebFetch("https://developers.openai.com/codex/cli/reference", "Extract documentation about...")
Extract relevant information and answer the user directly.
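The lookup above is just the base host prepended to a path from the table; a trivial sketch (paths taken from the table, loop is illustrative):

```shell
# Build full documentation URLs from table paths.
base="https://developers.openai.com"
for path in /codex/cli/reference /codex/mcp/ /codex/security/; do
  echo "${base}${path}"
done
```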
npm install -g @openai/codex # npm
brew install codex # Homebrew
~/.codex/config.toml — primary config (TOML format, supports profiles)
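A minimal sketch of what that file can look like. The key names used here (model, approval_policy, sandbox_mode, model_reasoning_effort, profiles tables) are assumptions from memory; confirm them against config.schema.json before copying.

```toml
# Hypothetical example -- verify key names against config.schema.json
model = "gpt-5.4"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_reasoning_effort = "medium"

# Named profile, loaded with: codex -p fast
[profiles.fast]
model = "gpt-5.3-codex"
model_reasoning_effort = "low"
```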
| Flag | Purpose |
|---|---|
| -m, --model | Override model (e.g. gpt-5.4) |
| -a, --ask-for-approval | untrusted / on-request / never |
| -s, --sandbox | read-only / workspace-write / danger-full-access |
| -i, --image | Attach image files |
| -p, --profile | Load named config profile |
| --full-auto | Low-friction auto mode |
| --search | Enable web search |
| --oss | Use local OSS model (Ollama) |
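These flags compose freely. As an illustration built from the table above (echoed rather than executed, since it assumes codex is installed locally):

```shell
# Illustrative invocation: model override, on-request approvals,
# writable-workspace sandbox, and web search enabled.
invocation='codex -m gpt-5.4 -a on-request -s workspace-write --search'
echo "$invocation"
```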
Reasoning effort levels: `none`, `minimal`, `low`, `medium`, `high`, `xhigh`
| Path | Purpose |
|---|---|
| ~/.codex/config.toml | User config |
| ~/.codex/AGENTS.md | Global agent instructions |
| AGENTS.md | Repo-level instructions |
| .agents/skills/ | Repo skills |
| $HOME/.agents/skills | Global skills |
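The two skills paths are plain directories of per-skill folders. A sketch of scaffolding a repo-level skill; the SKILL.md filename follows the common agent-skills convention and is an assumption here:

```shell
# Create a repo-level skill scaffold (run from the repo root).
mkdir -p .agents/skills/my-skill
printf '# my-skill\n\nDescribe when to use this skill.\n' > .agents/skills/my-skill/SKILL.md
ls .agents/skills
```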
If topic is not covered above:
- gh search code "topic" --repo openai/codex
- Web search: site:developers.openai.com/codex <topic>
- WebFetch https://developers.openai.com/codex/ for the main index
- Remember: Codex config is TOML (config.toml), not JSON — different from Claude Code