From rkstack
Sequential Claude-Codex review loop for specs and plans. Embedded in brainstorming and writing-plans for automatic Codex review after self-review. Also user-invocable for manual ad-hoc review. Max 3 rounds default, early exit if clean.
```bash
npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack
```

This skill is limited to using the following tools:
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```bash
# === RKstack Preamble (dual-review) ===
# Read detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
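With the cache in hand, individual fields can be pulled out with `jq`. A minimal sketch, assuming `jq` is installed and key names like `detection.flowType` and `detection.repoMode`; the fallback values are illustrative, not part of the skill:

```bash
# Sketch: read fields from the detection cache, falling back to defaults
# when the cache or jq is missing (key names assumed from this skill's docs)
FLOW_TYPE=$(jq -r '.detection.flowType // "default"' .rkstack/settings.json 2>/dev/null || echo "default")
REPO_MODE=$(jq -r '.detection.repoMode // "solo"' .rkstack/settings.json 2>/dev/null || echo "solo")
echo "FLOW_TYPE: $FLOW_TYPE"
echo "REPO_MODE: $REPO_MODE"
```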
Use the detection cache and preamble output to adapt your behavior:
- `detection.flowType` (web or default). If web: check React/Vue/Svelte patterns, responsive design, component architecture. If default: CLI tools, MCP servers, backend scripts; `just` commands instead of raw shell.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:
- Context: the current branch (use the `_BRANCH` value from the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
- RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work.
- Options: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
When completing a skill workflow, report status using one of:
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:
```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
You are reviewing a spec or plan with Codex as a second opinion. This skill runs in two modes:
- Embedded: run automatically by the brainstorming and writing-plans skills after self-review.
- Standalone: `/dual-review path/to/artifact.md [--rounds N]`

Both modes use the same mechanism: sequential loops where Codex reviews, Claude evaluates the findings, and valid ones are fixed until Codex returns clean or max rounds are reached.
Announce at start: "I'm using the dual-review skill to run a Codex review loop on <path>."
Core principle: Source code informs context, not intent. Proposed changes in a spec or plan are not defects just because current code does not implement them. Codex should flag conflicts with existing contracts, circular dependencies, or missed documented constraints — not "this doesn't exist yet."
```bash
command -v codex >/dev/null 2>&1 && echo "codex: available" || echo "codex: NOT FOUND"
```
If Codex is not installed, stop with:
"Codex CLI not found. Install: `npm install -g @openai/codex`. Then run `codex login` to authenticate."
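The availability check and the stop message can be folded into one preflight helper. A sketch, not part of the shipped skill:

```bash
# Preflight: report whether the Codex CLI is usable before starting the loop
check_codex() {
  if command -v codex >/dev/null 2>&1; then
    echo "codex: available"
  else
    echo "codex: NOT FOUND — install: npm install -g @openai/codex, then: codex login"
    return 1
  fi
}

check_codex || echo "stopping: dual-review requires the Codex CLI"
```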
When user invokes /dual-review path/to/file.md [--rounds N]:
Detect the artifact type from its structure:

- `## Problem` and `## Solution` headings → spec
- `**Spec:**` and `**Goal:**` fields plus task checkboxes (`- [ ]`) → plan

A plan must link its spec with `**Spec:** filename.md` in the header. If missing, stop with:

"Plan must link its spec in the header. Add: `**Spec:** path/to/spec.md`"

Resolve the spec path relative to the plan's directory. Verify the spec file exists.
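The detection rules can be sketched as a small classifier; the header patterns come from the rules above, while the helper name is hypothetical:

```bash
# Sketch: classify an artifact as spec or plan from its headers
classify_artifact() {
  file="$1"
  if grep -q '^## Problem' "$file" && grep -q '^## Solution' "$file"; then
    echo "spec"
  elif grep -q '^\*\*Spec:\*\*' "$file" && grep -q '^- \[ \]' "$file"; then
    echo "plan"
  else
    echo "unknown"
  fi
}
```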
Run this loop for both embedded and standalone modes.
Inputs:
- `ARTIFACT_PATH` — the spec or plan file
- `ARTIFACT_TYPE` — "spec" or "plan"
- `MAX_ROUNDS` — default 3
- `SPEC_PATH` — the linked spec file

1. Read the document
Read ARTIFACT_PATH fully.
2. Build the Codex prompt
Codex runs with -C <repo-root> -s read-only and can read all project files. Tell it to read the files itself instead of pasting their contents.
Read the appropriate prompt template for review criteria:
- Spec: `skills/dual-review/spec-review-prompt.md`
- Plan: `skills/dual-review/plan-review-prompt.md`

For specs, construct this prompt:
"Read and review the spec at <ARTIFACT_PATH>. Also read CLAUDE.md for project conventions and the first 50 lines of README.md for context. Read any source files the spec references.
<review criteria from spec-review-prompt.md>"
For plans, construct this prompt:
"Read and review the plan at <ARTIFACT_PATH>. Also read the linked spec at <SPEC_PATH>. Read CLAUDE.md for project conventions. Read any source files the plan references.
<review criteria from plan-review-prompt.md>"
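Assembling either prompt is plain string concatenation. A sketch for the spec case, with a hypothetical artifact path and a fallback when the criteria template is absent:

```bash
# Sketch: build the spec-review prompt from the template above
ARTIFACT_PATH="specs/example-spec.md"   # hypothetical path for illustration
CRITERIA=$(cat skills/dual-review/spec-review-prompt.md 2>/dev/null || echo "<criteria unavailable>")
PROMPT="Read and review the spec at ${ARTIFACT_PATH}. Also read CLAUDE.md for project conventions and the first 50 lines of README.md for context. Read any source files the spec references.

${CRITERIA}"
echo "$PROMPT"
```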
3. Call Codex
The prompt is short — no temp file needed:
```bash
STDERR_FILE=$(mktemp /tmp/dual-review-err-XXXXXX.txt)
codex exec "<assembled prompt>" \
  -C "$(git rev-parse --show-toplevel)" \
  -s read-only \
  -c 'model_reasoning_effort="medium"' \
  2>"$STDERR_FILE"
```
Use timeout: 300000 on the Bash tool call. Codex reads the actual files from disk.
After the call, read stderr:
```bash
cat "$STDERR_FILE" 2>/dev/null; rm -f "$STDERR_FILE"
```
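A sketch of triaging the stderr contents into coarse categories; the matched substrings are assumptions, not guaranteed Codex output:

```bash
# Sketch: map stderr contents to a coarse error category
classify_codex_error() {
  case "$1" in
    *auth*|*login*) echo "auth" ;;        # suggest: codex login
    *timeout*)      echo "timeout" ;;     # suggest: smaller scope, fewer rounds
    "")             echo "clean" ;;       # nothing on stderr
    *)              echo "other" ;;       # report stderr, suggest re-run
  esac
}
```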
Error handling:
- Auth failure: stop with "Run `codex login` to authenticate."
- Timeout: report and suggest "`/dual-review <path> --rounds 1` with a smaller scope."

5. Parse findings
Extract each finding from Codex stdout. Each finding should have:
If Codex output doesn't follow the structured format exactly, treat the entire response as findings text and extract individual issues by paragraph or numbered item.
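The fallback extraction (split on numbered items) can be sketched with awk; the `N.` item format is an assumption about Codex's free-form output:

```bash
# Sketch: split free-form review text into findings at numbered items,
# separating consecutive findings with a --- marker
split_findings() {
  awk '/^[0-9]+\./ { if (buf != "") print buf "\n---"; buf = $0; next }
       { buf = buf "\n" $0 }
       END { if (buf != "") print buf }'
}
```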
6. Evaluate each finding
For each finding, read the document section Codex references. If Codex cites source code, read that too. Then classify:
7. Fix valid findings
Edit the document file directly to address each valid finding.
8. Loop decision
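The round loop itself can be sketched as follows, with the Codex call and parsing elided; `FINDINGS` is a hypothetical stand-in for the parsed finding count:

```bash
# Sketch: round loop — exit early when Codex returns clean
MAX_ROUNDS="${MAX_ROUNDS:-3}"
ROUND=1
EXIT_REASON="max rounds reached"
while [ "$ROUND" -le "$MAX_ROUNDS" ]; do
  FINDINGS=0   # hypothetical: would come from parsing Codex output
  if [ "$FINDINGS" -eq 0 ]; then
    EXIT_REASON="early exit — clean"
    break
  fi
  ROUND=$((ROUND + 1))
done
echo "Rounds: $ROUND of $MAX_ROUNDS [$EXIT_REASON]"
```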
After all rounds complete, output:
```
DUAL-REVIEW: <artifact_path>
Rounds: N of M [early exit — clean | max rounds reached]

ROUND 1:
  Codex review: K findings
  → X valid (fixed)
  → Y rejected: [one-line reason per rejection]
  → Z unclear: [surfaced to user]

ROUND 2:
  Codex review: J findings
  → ...

RESULT: total findings, fixed, rejected, unclear
```
If there are unclear findings, present them to the user with an AskUserQuestion before reporting the final result. Include your analysis of each unclear finding and a recommendation.
| Error | Action |
|---|---|
| Codex not installed | Stop with install instructions |
| Codex auth failed | "Run codex login to authenticate" |
| Codex timeout (5 min) | Report and suggest smaller scope |
| Codex malformed output | Report stderr, suggest re-run |
| Missing CLAUDE.md | Continue without it |
| Missing README.md | Continue without it |
| Source file not found | Skip from context |
| Spec file not found (plan mode) | Stop — plan must link a valid spec |
| Zero findings from Codex | Exit clean immediately |
`--rounds N` — override the maximum number of review rounds (default 3).