Run expert review on a plan with parallel reviewer agents. Presents findings as Socratic questions. Use when asked to "review the plan", "get feedback on the design", "check this approach", or before implementation to validate architectural decisions. Optional argument: reviewer name (e.g., `/arc:review daniel-product-engineer` to use a specific reviewer)
Runs expert plan reviews using parallel AI agents who challenge each other and present findings as Socratic questions.
/plugin marketplace add howells/arc
/plugin install arc@howells

This skill inherits all available tools. When active, it can use any tool Claude has access to.
<tool_restrictions>
- EnterPlanMode — BANNED. Do NOT call this tool. This skill has its own structured process. Execute the steps below directly.
- ExitPlanMode — BANNED. You are never in plan mode.
</tool_restrictions>

<required_reading> Read these reference files NOW:
<rules_context> Check for project coding rules:
Use Glob tool: .ruler/*.md
Determine rules source:
- `.ruler/` exists: Read rules from `.ruler/`
- `.ruler/` doesn't exist: Read rules from `${CLAUDE_PLUGIN_ROOT}/rules/`

Pass relevant core rules to each reviewer:
| Reviewer | Rules to Pass |
|---|---|
| daniel-product-engineer | react.md, typescript.md, code-style.md |
| lee-nextjs-engineer | nextjs.md, api.md |
| senior-engineer | code-style.md, typescript.md, react.md |
| architecture-engineer | stack.md, turborepo.md |
| simplicity-engineer | code-style.md |
| security-engineer | security.md, api.md, env.md |
| data-engineer | database.md, testing.md, api.md |
| senior-engineer | cloudflare-workers.md (if wrangler.toml exists) |
| accessibility-engineer | (interface rules only — already in agent prompt) |
| designer | design.md, colors.md, spacing.md, typography.md |
</rules_context>
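The rules-source check above can be sketched in shell. This is a sketch, not the skill's actual implementation; `CLAUDE_PLUGIN_ROOT` is assumed to be exported by the plugin runtime.

```shell
# Prefer project-local rules in .ruler/; fall back to the
# plugin's bundled rules directory otherwise.
if [ -d .ruler ]; then
  RULES_DIR=".ruler"
else
  RULES_DIR="${CLAUDE_PLUGIN_ROOT}/rules"
fi
echo "rules source: $RULES_DIR"
```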
<progress_context>
Use Read tool: docs/progress.md (first 50 lines)
Check for context on what led to the plan being reviewed. </progress_context>
<process>

## Phase 0: Check for Specific Reviewer

If argument provided (e.g., `daniel-product-engineer`):
- If it exists, read `${CLAUDE_PLUGIN_ROOT}/agents/review/{argument}.md`
- Otherwise, list `${CLAUDE_PLUGIN_ROOT}/agents/review/` and ask user to pick

Available reviewers:
- daniel-product-engineer — Type safety, UI completeness, React patterns
- lee-nextjs-engineer — Next.js App Router, server-first architecture
- senior-engineer — Asymmetric strictness, review discipline
- architecture-engineer — System design, component boundaries
- simplicity-engineer — YAGNI, minimalism
- performance-engineer — Bottlenecks, scalability
- security-engineer — Vulnerabilities, OWASP
- data-engineer — Migrations, transactions
- designer — Visual design quality, UX fundamentals, AI slop detection
- codex-reviewer — Independent second opinion via OpenAI Codex CLI (different AI model)
- gemini-reviewer — Independent second opinion via Google Gemini CLI (different AI model)

Check if plan file path provided as argument:
Search strategy:
Check conversation context first — Look for Claude Code plan mode output
Search docs/plans/ folder — Look for plan files
Use Glob tool: docs/plans/*.md
Present options if multiple found:
If no plans found:

- Tell the user no plan files were found in `docs/plans/`.

Once plan located:
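The plan-discovery step above can be sketched in shell — a sketch under the assumption that plan files live as markdown in `docs/plans/`, not the skill's actual tool calls.

```shell
# List candidate plan files; report when none exist so the
# caller can ask the user for a path instead.
plans=$(ls docs/plans/*.md 2>/dev/null)
if [ -z "$plans" ]; then
  msg="no plans found in docs/plans/"
else
  msg="$plans"
fi
echo "$msg"
```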
Skip if specific reviewer provided in Phase 0.
Detect project type for reviewer selection:
Use Grep tool on package.json:

- "next" → nextjs
- "react" → react

Use Glob tool:

- requirements.txt, pyproject.toml → python

Select reviewers based on project type:
TypeScript/React:
Next.js:
Python:
General/Unknown:
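The detection rules above can be sketched in shell. This is an illustrative sketch only; the markers checked are the ones the skill names, and anything else falls through to "general".

```shell
# Classify the project by its dependency markers.
if grep -q '"next"' package.json 2>/dev/null; then
  project_type="nextjs"
elif grep -q '"react"' package.json 2>/dev/null; then
  project_type="react"
elif [ -f requirements.txt ] || [ -f pyproject.toml ]; then
  project_type="python"
else
  project_type="general"
fi
echo "project type: $project_type"
```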
Conditional addition (all UI project types):
- `${CLAUDE_PLUGIN_ROOT}/agents/review/accessibility-engineer.md`
- `${CLAUDE_PLUGIN_ROOT}/agents/review/designer.md`

<team_mode_check> Skip if specific reviewer was provided in Phase 0 (single reviewer, no team needed).
Check if agent teams are available by attempting to detect team support in the current environment.
If teams are available, offer the user a choice:
Execution mode:
1. Team mode — Reviewers challenge each other's findings before you see them (higher quality, 3-5x token cost)
2. Standard mode — Independent reviewers, findings consolidated by skill (faster, lower cost)
Use AskUserQuestion with:
If teams are NOT available, proceed silently with standard mode. Do not mention teams to the user.
If team mode selected, read the team reference:
${CLAUDE_PLUGIN_ROOT}/references/agent-teams.md
</team_mode_check>
If specific reviewer from Phase 0: Spawn single reviewer agent.
If team mode selected: Run team review (see Team Execution below).
Otherwise: Spawn 3 reviewer agents in parallel:
Task [reviewer-1] model: sonnet: "Review this plan for [specialty concerns].
Plan:
[plan content]
Focus on: [specific area based on reviewer type]"
Task [reviewer-2] model: sonnet: "Review this plan for [specialty concerns]..."
Task [reviewer-3] model: sonnet: "Review this plan for [specialty concerns]..."
Only if user opted into team mode in Phase 2.5.
Create team arc-review-[plan-slug] with the 3 selected reviewers as teammates.
Round 1 — Initial Review:
Each reviewer performs their standard analysis independently (same prompts as standard mode).
Create team: arc-review-[plan-slug]
Teammates: [reviewer-1], [reviewer-2], [reviewer-3]
Each teammate reviews the plan through their domain lens.
Same prompts and focus areas as standard mode.
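The team name `arc-review-[plan-slug]` implies a slug derived from the plan file. A hypothetical derivation (the filename below is made up for illustration):

```shell
# Strip the directory and .md extension, lowercase, and
# replace spaces with dashes to form the team slug.
plan="docs/plans/user-auth-flow.md"
slug=$(basename "$plan" .md | tr '[:upper:]' '[:lower:]' | tr ' ' '-')
team="arc-review-$slug"
echo "$team"
```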
Round 2 — Cross-Review:
Each reviewer reads the others' findings and responds:
Resolution rules (from agent-teams reference):
Round 2 output: Pre-debated findings where each concern has either been confirmed by peers, refined through challenge, or dropped with stated rationale.
Wait for team to complete.
If team creation fails, fall back silently to standard parallel dispatch above.
<team_consolidation> If team mode was used, consolidation is simpler:
Transform findings into Socratic questions:
See ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md for approach.
Instead of presenting critiques:
Example transformations:
Present questions one at a time:
For each decision:
If plan came from a file:
git commit -m "docs: update <plan> based on review"

## Review Summary
**Reviewed:** [plan name/source]
**Reviewers:** [list]
### Changes Made
- [Change 1]
- [Change 2]
### Kept As-Is
- [Decision 1]: [reason]
### Open Questions
- [Any unresolved items]
Show remaining arc:
/arc:ideate → Design doc (on main) ✓
↓
/arc:review → Review ✓ YOU ARE HERE
↓
/arc:implement → Plan + Execute task-by-task
Offer next steps based on what was reviewed:
If reviewed a design doc:
- /arc:implement (which will create the plan internally)

If reviewed an implementation plan:

- /arc:implement

Kill orphaned subagent processes:
After spawning reviewer agents, some may not exit cleanly. Run cleanup:
${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphaned-agents.sh
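A hypothetical sketch of what such a cleanup might check — this is NOT the contents of `cleanup-orphaned-agents.sh`, and the process-name pattern `arc-review` is an assumption.

```shell
# Look for lingering reviewer processes by command-line pattern;
# a real cleanup script would terminate any matches it finds.
orphans=$(pgrep -f "arc-review" 2>/dev/null || true)
if [ -n "$orphans" ]; then
  result="found: $orphans"
else
  result="clean"
fi
echo "$result"
```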
</process>
<arc_log>
After completing this skill, append to the activity log.
See: ${CLAUDE_PLUGIN_ROOT}/references/arc-log.md
Entry: /arc:review — [Plan name] reviewed
</arc_log>
<success_criteria> Review is complete when: