By boyand
Run multi-round Claude/Codex-powered code reviews for plans and implementations via bash commands. Auto-generate findings responses (ACCEPT/REJECT/FIX/WONTFIX/DEFER), check workflow status, view decision summaries, diagnose issues, and resolve plans with guided transitions.
```
npx claudepluginhub boyand/codex-review --plugin codex-review
```

- Diagnose workflow resolution, ownership, and Codex connectivity
- Run a Codex implementation review round and auto-write findings responses
- Run a Codex plan review round and auto-write findings responses
- Show workflow state for the current Claude thread
- Show finding decisions for the current phase and round
A Claude Code plugin for multi-round Codex reviews of plans and implementations.
I get the best results when Claude and Codex work together. Claude plans, Codex reviews the plan, Claude implements, Codex reviews the implementation. Two models checking each other's work.
The problem is that this loop falls apart fast without structure.
I wanted something I could reliably run — not a framework, just a thin repeatable workflow. Five commands:
- `/codex-review:plan` — review the plan
- `/codex-review:impl` — review the implementation
- `/codex-review:summary` — see what was decided
- `/codex-review:status` — check the workflow state
- `/codex-review:doctor` — diagnose when things go wrong

`plan` snapshots the current Claude plan into a canonical workflow artifact. `impl` reviews the implementation against the approved `plan.md`.

Install the Codex CLI:
```
npm install -g @openai/codex
```
Install the plugin from a local checkout:
```
claude plugin add /path/to/codex-review
```
Or from GitHub:
```
claude plugin add github:boyand/codex-review
```
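Before using the plugin, it is worth confirming the Codex CLI is actually on your PATH. A minimal sketch of such a check (the `check_cli` helper is mine, not part of the plugin):

```shell
# Report whether a CLI is installed; the command name is a parameter.
check_cli() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 found"
  else
    echo "$1 not found; run: npm install -g @openai/codex"
  fi
}

check_cli codex
```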
```
/codex-review:plan deeply review this plan
```
What happens:
- The current plan is snapshotted to `artifacts/plan.md`

```
/codex-review:impl review the implementation deeply
```
What happens:
- The implementation is reviewed against the approved `plan.md`
- From plan/approval, `impl` auto-advances the workflow to implementation

```
/codex-review:summary
```
Use it to see:
- FIX vs NO-CHANGE vs OPEN

```
/codex-review:doctor
```
Use it when things go wrong and you need to diagnose why.
Claude plans. Codex reviews. Claude implements. Codex reviews again.
This plugin makes that loop repeatable. It does not try to be a framework, a dashboard, or an agent runtime. It is a thin workflow around two operations — review the plan, review the implementation — and that constraint is the point.
Workflows live under:
```
.claude/codex-review/workflows/<workflow-id>/
```
Important files:
- `workflow.json`
- `decisions.tsv`
- `artifacts/plan.md`
- `artifacts/plan-review-rN.md`
- `artifacts/plan-findings-rN.md`
- `artifacts/implement-review-rN.md`
- `artifacts/implement-findings-rN.md`

Why this matters:
- `/codex-review:plan`
- `/codex-review:impl`
- `/codex-review:summary`
- `/codex-review:status`
- `/codex-review:doctor`

Internal transition commands still exist, but Claude runs them for you after your inline choice.
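As a sketch of where a round's outputs land, the documented paths can be composed in shell (the workflow id below is made up; real ids are assigned by the plugin):

```shell
# Hypothetical workflow id for illustration only
WORKFLOW_ID="2024-06-01-example"
WF_DIR=".claude/codex-review/workflows/$WORKFLOW_ID"

# Round-2 plan review artifacts and the decision log, per the layout above
echo "$WF_DIR/artifacts/plan-review-r2.md"
echo "$WF_DIR/decisions.tsv"
```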
| Environment Variable | Default | Description |
|---|---|---|
| `CODEX_REVIEW_MODEL` | `gpt-5.3-codex` | Codex model for reviewer runs |
| `CODEX_REVIEW_FLAGS` | `--sandbox=read-only` | Flags for Codex reviewer runs |
| `CODEX_WORKER_FLAGS` | `--sandbox=workspace-write` | Flags for Codex worker runs |
| `CODEX_CALL_TIMEOUT_SEC` | `720` | Per-call Codex execution timeout in seconds |
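For example, to override the reviewer model and lengthen the per-call timeout, the variables above can be exported before starting Claude Code (the values here are illustrative, not recommendations):

```shell
# Illustrative overrides; variable names are from the configuration table above
export CODEX_REVIEW_MODEL="gpt-5.3-codex"
export CODEX_REVIEW_FLAGS="--sandbox=read-only"
export CODEX_CALL_TIMEOUT_SEC=900   # raise the default 720s per-call timeout
```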
Legacy aliases still accepted:
- `CODEX_REVIEW_LOOP_MODEL`
- `CODEX_REVIEW_LOOP_FLAGS`

This repo is optimized for:
The launch surface should stay lightweight. A strong README, a sharp demo, and predictable commands matter more than a large docs framework.
MIT