# plan-review

From the code-quality plugin.
Multi-agent plan review with independent fresh-context reviewers. Use when asked to "review plan", "review this plan", "plan review", or given a plan file path to review. Spawns 6 parallel specialized reviewers (feasibility, scope, dependencies, unknown unknowns, architect, security), verifies findings by re-reading the plan, and prints a structured terminal report. Designed for cross-session use: write a plan in session A, review in session B.
Install: `npx claudepluginhub wgordon17/personal-claude-marketplace --plugin code-quality`
Multi-agent plan review. Spawns 6 parallel reviewers (4 plan-specific + 2 domain) — feasibility, scope & completeness, dependency ordering, unknown unknowns/spike detection, architect, and security — each required to read and analyze the plan independently. A verification agent then cross-checks findings against plan content. Results are categorized and printed as a structured terminal report.
Never modifies plan content. Exception: increments the review-cycle lifecycle counter after review completes.
/plan-review [<plan-file-path>]
<plan-file-path> — optional. Absolute or relative path to a plan .md file. If omitted, the
skill discovers the most recent plan in {memory_dir}/plans/.

## Phase 0: Plan Discovery

Extract from $ARGUMENTS:
If a plan file path was provided in $ARGUMENTS, use it directly. Skip discovery.
If no path was given:
Detect the memory directory using the convention in
code-quality/references/project-memory-reference.md (Directory Detection and Worktree
Resolution sections). If no validated memory directory is found, stop with:
"No memory directory found. Pass a plan file path explicitly: /plan-review <path>"
Scan {memory_dir}/plans/ for .md files. If the directory does not exist or contains no
.md files, stop with:
"No plan files found in {memory_dir}/plans/. Pass a plan file path explicitly: /plan-review <path>"
Primary: Branch-header matching — Parse each plan file's **Branch:** header field.
Match against the current git branch (git branch --show-current). If exactly one plan
matches, use it. If multiple match, use the most recent by unix timestamp in the filename.
This matches the discovery convention used by pr-review, swarm, and quality-gate.
Fallback: mtime sorting with user selection — If no Branch-header match is found (e.g.,
reviewing from main or a different branch), sort all plan files by modification time
(most recent first):
then present them via AskUserQuestion, showing each file's filename,
the value of its first **Goal:** or ## Goal line (if found), and its relative age.

Read the selected plan file. Store as {plan_content} and {plan_file_path}.
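The discovery order above (branch-header match first, then mtime sort with user selection) can be sketched as follows. This is an illustrative helper, not part of the skill: the function name `discover_plan` and the 10-digit unix-timestamp filename pattern are assumptions.

```python
import re
from pathlib import Path

def discover_plan(plans_dir: Path, current_branch: str):
    """Return (selected_plan, candidates_for_user).

    Primary: match each plan's **Branch:** header against the current
    git branch. Fallback: return all plans sorted by mtime (newest
    first) so the caller can present them via AskUserQuestion.
    """
    plans = sorted(plans_dir.glob("*.md"))
    if not plans:
        raise FileNotFoundError(f"No plan files found in {plans_dir}")

    matches = []
    for p in plans:
        m = re.search(r"^\*\*Branch:\*\*\s*(\S+)", p.read_text(), re.MULTILINE)
        if m and m.group(1) == current_branch:
            matches.append(p)

    if len(matches) == 1:
        return matches[0], []  # exactly one branch match: use it directly
    if len(matches) > 1:
        # most recent by unix timestamp embedded in the filename (assumed format)
        def stamp(p):
            ts = re.search(r"(\d{10})", p.name)
            return int(ts.group(1)) if ts else 0
        return max(matches, key=stamp), []

    # no branch match: mtime sort (newest first), user selects
    by_mtime = sorted(plans, key=lambda p: p.stat().st_mtime, reverse=True)
    return None, by_mtime
```

Returning `(None, candidates)` signals the fallback path where the user must choose.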
Extract from the plan content:
- {plan_goal} — value of **Goal:** header or first H2 that describes the objective
- {plan_domain} — inferred from file paths, tech stack mentions, or explicit **Cynefin Domain:** statement. If the domain cannot be determined, use "Unknown".
- {plan_decisions} — content of any ## Decisions or ## Key Decisions section (not ## Trade-offs — that is captured separately by {plan_trade_offs})
- {plan_tasks} — count of ## Task N: headings (or - [ ] top-level task items)
- {plan_files} — file paths from ## File Structure sections and task Files: blocks
- {plan_open_questions} — content of any ## Open Questions section
- {plan_trade_offs} — content of any ## Trade-offs section

Note: These extractions assume a structured plan format. If the plan lacks these headings, reviewers will work with the full {plan_content} and report on what they can infer.
If the plan file contains a ## Test Plan section, extract the **Test Plan:** path annotation from it.
Normalize the path (resolve .. components) and verify it falls within {memory_dir}/test-plans/.
If the normalized path escapes that directory, set {plan_test_plan} to empty string and log a
warning: "Warning: test plan path escapes {memory_dir}/test-plans/ boundary — setting {plan_test_plan} to empty string."
If the path is valid but the file does not exist, set {plan_test_plan} to empty string (graceful fallback — no warning).
If valid and readable, read the file and store its content as {plan_test_plan}.
If no ## Test Plan section exists in the plan file, set {plan_test_plan} to empty string.
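The path-boundary check above can be sketched as a small helper. This is a hedged illustration, not the skill's actual implementation: the function name `resolve_test_plan_path` is hypothetical, and `os.path.realpath` stands in for whatever normalization resolves the `..` components.

```python
import os

def resolve_test_plan_path(raw_path: str, memory_dir: str) -> str:
    """Normalize a **Test Plan:** path and reject paths that escape
    {memory_dir}/test-plans/. Returns "" (the graceful fallback) when
    the path escapes the boundary or the file does not exist."""
    boundary = os.path.realpath(os.path.join(memory_dir, "test-plans"))
    resolved = os.path.realpath(os.path.join(boundary, raw_path))
    # realpath collapses ".." components, so a traversal such as
    # "../../etc/passwd" lands outside the boundary and is caught here
    if os.path.commonpath([boundary, resolved]) != boundary:
        print(f"Warning: test plan path escapes {boundary} boundary")
        return ""
    if not os.path.isfile(resolved):
        return ""  # valid path but missing file: empty string, no warning
    with open(resolved) as f:
        return f.read()
```

Note the asymmetry the skill specifies: a boundary escape logs a warning, while a merely missing file falls back silently.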
From the repo root, read:

- CLAUDE.md. If missing, use: "No CLAUDE.md found."
- CONTRIBUTING.md. If missing, use: "No CONTRIBUTING.md found."
- {memory_dir}/PROJECT.md. If missing, use: "No PROJECT.md found."

Store as {claude_md_rules}, {contributing_md_rules}, and {project_context}.
Assemble these values — passed to reviewers in Phase 2:

- {plan_content} = full plan file content
- {plan_file_path} = absolute path to the plan file
- {plan_goal} = extracted goal or empty string
- {plan_domain} = extracted domain or "Unknown"
- {plan_tasks} = task count integer
- {plan_files} = newline-separated list of file paths from the plan
- {plan_files_count} = count of unique file paths in {plan_files} (derived, not extracted)
- {claude_md_rules} = CLAUDE.md content or placeholder
- {contributing_md_rules} = CONTRIBUTING.md content or placeholder
- {project_context} = PROJECT.md content or placeholder

Additional context for the Unknown Unknowns reviewer:

- {plan_open_questions} = extracted open questions or empty string
- {plan_trade_offs} = extracted trade-offs or empty string
- {plan_decisions} = extracted decisions or empty string

Additional context for the Scope & Completeness and Feasibility reviewers:

- {plan_test_plan} = full test plan document content, or empty string if no test plan is linked

## Phase 1: Reviewer Selection

Determine which of the 6 reviewers apply based on plan content.
Default: all 6 reviewers run. Skip a reviewer only if its domain has zero applicability:
| Reviewer | Skip condition |
|---|---|
| Feasibility | never skip |
| Scope & Completeness | never skip |
| Dependency & Ordering | skip if {plan_tasks} < 2 (no ordering to verify with a single task) |
| Unknown Unknowns | never skip (most critical reviewer) |
| Architect | never skip |
| Security | skip if {plan_files} contains no paths referencing auth, security, crypto, secrets, permissions, or API endpoints (case-insensitive check on file path strings and plan content keywords) |
Record which reviewers will run.
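The skip-condition table can be sketched as a small selection helper. The helper name and the exact keyword list are illustrative; the table's "auth, security, crypto, secrets, permissions, or API endpoints" check is approximated here by a substring scan over file paths plus plan content, which is an assumption about how loosely the match is intended.

```python
SECURITY_KEYWORDS = ("auth", "security", "crypto", "secret", "permission", "api")

def select_reviewers(plan_tasks: int, plan_files: str, plan_content: str) -> list:
    """Apply the skip-condition table: all 6 reviewers by default, and a
    reviewer is dropped only when its domain has zero applicability."""
    reviewers = ["Feasibility", "Scope & Completeness", "Dependency & Ordering",
                 "Unknown Unknowns", "Architect", "Security"]
    if plan_tasks < 2:
        # nothing to order with a single task
        reviewers.remove("Dependency & Ordering")
    haystack = (plan_files + "\n" + plan_content).lower()
    if not any(k in haystack for k in SECURITY_KEYWORDS):
        reviewers.remove("Security")
    return reviewers
```

A plan touching only UI files with no auth keywords would run four or five reviewers instead of six.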
## Phase 2: Spawn Reviewers

Read references/reviewer-prompts.md. For each applicable reviewer, locate the corresponding
prompt template, substitute all placeholders with actual values, and spawn an agent. Most
reviewers use model="sonnet"; the Unknown Unknowns Reviewer uses model="opus".
Spawn all applicable reviewers simultaneously (parallel Agent calls).
Agent(
description="Feasibility review of plan: {plan_file_path}",
model="sonnet",
prompt=<Feasibility Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Agent(
description="Scope & completeness review of plan: {plan_file_path}",
model="sonnet",
prompt=<Scope & Completeness Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Agent(
description="Dependency & ordering review of plan: {plan_file_path}",
model="sonnet",
prompt=<Dependency & Ordering Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Agent(
description="Unknown unknowns review of plan: {plan_file_path}",
model="opus",
prompt=<Unknown Unknowns Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Each plan-specific reviewer receives: {plan_file_path}, {plan_content}, {plan_goal},
{plan_files}, {claude_md_rules}, {contributing_md_rules}, {project_context}.
The Unknown Unknowns Reviewer additionally receives: {plan_open_questions}, {plan_trade_offs},
{plan_decisions}.
The Scope & Completeness Reviewer and Feasibility Reviewer additionally receive: {plan_test_plan}.
When {plan_test_plan} is empty string, omit the ## UAT Context section entirely from the
Feasibility and Scope & Completeness reviewer prompts — do not render the heading, TEST PLAN:
label, or empty placeholder.
Agent(
description="Architect review of plan: {plan_file_path}",
model="sonnet",
prompt=<Architect Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Agent(
description="Security review of plan: {plan_file_path}",
model="sonnet",
prompt=<Security Reviewer template from references/reviewer-prompts.md, placeholders substituted>
)
Domain reviewers receive the same inputs as plan-specific reviewers (minus the Unknown Unknowns extras).
## Phase 3: Verification

After all agents complete, collect all findings into a consolidated list. Assign each finding a unique ID using the prefix for its reviewer:
| Reviewer | Finding prefix |
|---|---|
| Feasibility | feas- |
| Scope & Completeness | scope- |
| Dependency & Ordering | dep- |
| Unknown Unknowns | unk- |
| Architect | arch- |
| Security | sec- |
Preserve: description, location reference (plan section or task number), classification, evidence, source reviewer.
If no findings were reported by any reviewer, skip verification and proceed directly to Phase 4 with the 'no findings' output path.
Spawn a single verification agent with ALL findings in one call. Do NOT spawn one agent per finding — that pattern is catastrophically slow. A single batched call takes ~15 seconds; per-finding agents with 15–20 findings would take 2–5 minutes.
Build the findings JSON array:
[
{
"id": "feas-1",
"reviewer": "Feasibility",
"description": "...",
"location": "Task 3, step 2",
"classification": "needs-fix | needs-input",
"evidence": "..."
},
...
]
Agent(
description="Finding verification for plan: {plan_file_path}",
model="opus",
prompt=<Finding Verifier template from references/reviewer-prompts.md, placeholders substituted>
)
The verifier receives: {findings_json} (the array above), {plan_content},
{plan_file_path}.
The verifier re-reads the plan to check each finding against plan content. It returns a
JSON array with a verdict for each finding:
[{finding_id, verdict, investigation_summary, category, classification, options}, ...]
Verdicts: verified (finding accurately references plan content), false_positive (finding
misread or misrepresents the plan), needs_context (cannot confirm or deny — requires human
judgment).
Parse the verifier's response as JSON. If parsing fails, extract JSON from between the first
[ and last ] markers. If that also fails, include all findings with verdict unverified
and set {verification_note} to "⚠ Verification failed — all findings shown unverified".
If verification succeeded, set {verification_note} to empty string.
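The parse-with-fallback logic above can be sketched as follows. The helper name is hypothetical; the two-attempt order (raw response first, then the slice between the first `[` and last `]`) follows the text.

```python
import json

def parse_verifier_response(raw: str, findings: list):
    """Parse the verifier's JSON; on failure, salvage the array between
    the first '[' and last ']'; on total failure, mark every submitted
    finding unverified and set the warning note."""
    for candidate in (raw, raw[raw.find("["):raw.rfind("]") + 1]):
        try:
            verdicts = json.loads(candidate)
            if isinstance(verdicts, list):
                return verdicts, ""  # success: empty {verification_note}
        except (json.JSONDecodeError, ValueError):
            continue
    # both attempts failed: everything shown unverified, warning note set
    fallback = [dict(f, verdict="unverified") for f in findings]
    return fallback, "⚠ Verification failed — all findings shown unverified"
```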
Finding fidelity check: Before categorizing, verify the verifier returned a verdict for
every submitted finding. For each finding ID in the original {findings_json}, check if a
matching finding_id exists in the verifier's response. Any finding without a returned verdict
is assigned verdict unverified with investigation_summary: "Verifier did not return a
verdict for this finding." This prevents silent finding loss during verification - the same
principle as the Fixer verification protocol in code-quality/references/finding-classification.md.
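The fidelity check reduces to a set-difference over finding IDs. A minimal sketch (helper name hypothetical, field names taken from the findings JSON shape above):

```python
def enforce_finding_fidelity(findings: list, verdicts: list) -> list:
    """Ensure every submitted finding comes back with a verdict; any
    finding the verifier dropped is resurfaced as unverified rather
    than silently lost."""
    returned = {v["finding_id"]: v for v in verdicts}
    complete = []
    for f in findings:
        v = returned.get(f["id"])
        if v is None:
            v = {"finding_id": f["id"], "verdict": "unverified",
                 "investigation_summary":
                     "Verifier did not return a verdict for this finding."}
        complete.append(v)
    return complete
```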
The verifier assigns each finding to a category based on its nature:
| Category | Examples |
|---|---|
| Research Gaps | Unvalidated assumptions, missing spikes, external dependencies not investigated |
| Feasibility | Steps that cannot be implemented as described, missing prerequisites |
| Scope | Missing requirements, scope creep, goal not fully addressed |
| Dependencies | Wrong task ordering, implicit dependencies, circular dependencies |
| Architecture | Design issues, pattern violations, unnecessary abstractions |
| Security | Missing security considerations, auth gaps, sensitive data handling |
| Specification | Ambiguous steps, missing detail, unclear success criteria |
Remove findings with verdict false_positive. Keep verified findings for the category
sections. Keep needs_context and unverified findings for the Needs Context section only
(they do NOT appear in category sections). Treat unverified as needs_context for display.
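The routing rules above can be sketched as a single pass over verdicts (helper name illustrative):

```python
def route_findings(verdicts: list) -> dict:
    """Drop false positives; verified findings feed the category
    sections; needs_context and unverified go only to the Needs
    Context section (unverified displays as needs_context)."""
    routed = {"category_sections": [], "needs_context": []}
    for v in verdicts:
        if v["verdict"] == "false_positive":
            continue  # removed entirely
        if v["verdict"] == "verified":
            routed["category_sections"].append(v)
        else:  # needs_context or unverified
            routed["needs_context"].append(v)
    return routed
```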
## Phase 3.5: Needs-Input Resolution

If any surviving findings (after filtering false positives) have classification needs-input,
present them to the user before producing the report. Do NOT skip this step - the skill must
not exit with unresolved needs-input items.
If zero needs-input findings remain after Phase 3, skip to Phase 4.
Present each needs-input finding individually via AskUserQuestion. Each finding gets its own
question with full context so the user can make an informed decision. Batch up to 4 findings
per AskUserQuestion call (the tool's question limit):
AskUserQuestion(questions=[
{
"question": "[{id}] [{Reviewer}] {description}\n\nLocation: {location}\nDecision needed: {input_needed}\n▸dp:file={plan_file},line=0,cat={Reviewer},skill=plan-review",
"header": "{id}",
"options": [
... (map each element from the finding's `options` array to {label, description}),
{"label": "Defer", "description": "Skip for now — user-deferred"}
],
"multiSelect": false
},
... (one question per finding, up to 4 per call)
])
If more than 4 needs-input findings exist, make multiple AskUserQuestion calls.
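The batching rule (one question per finding, at most 4 per call) can be sketched as a chunking helper. The helper name and the simplified question string are illustrative; the real skill renders the fuller question template shown above.

```python
def batch_questions(findings: list, batch_size: int = 4) -> list:
    """Chunk needs-input findings into AskUserQuestion calls of at most
    4 questions (the tool's per-call limit), one question per finding,
    each ending with a Defer option."""
    calls = []
    for i in range(0, len(findings), batch_size):
        calls.append([
            {"question": f"[{f['id']}] {f['description']}",
             "header": f["id"],
             "options": f.get("options", [])
                        + [{"label": "Defer",
                            "description": "Skip for now — user-deferred"}],
             "multiSelect": False}
            for f in findings[i:i + batch_size]
        ])
    return calls
```

Six needs-input findings would therefore produce two AskUserQuestion calls of four and two questions.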
For each needs-input finding:

- If the user selects one of the finding's options: promote the finding to needs-fix with verdict verified. Record the selected option label in the finding's suggested_fix field (this tells the Fixer which approach to implement). Place the finding in its normal category section.
- If the user selects Defer: mark the finding user-deferred in the output.

Every needs-input item gets a recorded user decision, not silent deferral.
## Phase 4: Terminal Report

Print the structured report. Use ━ (U+2501) for the divider lines.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PLAN REVIEW — {plan_file_path}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Goal: {plan_goal}
Domain: {plan_domain}
Tasks: {plan_tasks} | Files: {plan_files_count}
Findings: {verified_count} verified, {needs_context_count} needs context
(from {total_raw} raw findings — {false_positive_count} false positives removed)
RESEARCH GAPS
1. [{Reviewer}] {description} [{classification}]
{location}
Evidence: {evidence}
FEASIBILITY
...
SCOPE
...
DEPENDENCIES
...
ARCHITECTURE
...
SECURITY
...
SPECIFICATION
...
─── Needs Context ({needs_context_count}) ───
1. [{Reviewer}] {description} [{classification}]
{location}
Investigation: {investigation_summary}
─── Deferred ({deferred_count}) ───
1. [{Reviewer}] {description} [user-deferred]
{location}
{verification_note}
{skipped_note}
Reviewed by: {reviewer_list}
Total raw: {total_raw} | Verified: {verified_count} | False positives removed: {false_positive_count} | Needs context: {needs_context_count} | Deferred: {deferred_count}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{verification_note} — if verification JSON parsing failed, print the warning line. Otherwise omit.
{skipped_note} — if any reviewers were skipped, print a line such as "Skipped: Dependency & Ordering (single-task plan)" or "Skipped: Security (no auth/security paths detected)". Otherwise omit.
Category sections contain only verified findings. Group by category in order: RESEARCH GAPS
(first — highest leverage to fix before implementation) → FEASIBILITY → SCOPE → DEPENDENCIES →
ARCHITECTURE → SECURITY → SPECIFICATION. Within each category, sort by location. All findings
in category sections are needs-fix at this point — either originally or promoted from
needs-input via Phase 3.5 user confirmation. For the [Reviewer] tag, use the short
reviewer name: Feasibility,
Scope, Dependencies, Unknown Unknowns, Architect, Security.
Omit category sections with zero verified findings.
needs_context and unverified findings appear ONLY in the dedicated "Needs Context" section
at the bottom — they do NOT appear in category sections above. These are items the verifier
could not confirm or deny and require human judgment.
Findings confirmed by the user in Phase 3.5 are promoted to needs-fix and placed in their
normal category sections — they appear alongside other verified findings with no special
treatment. User-deferred findings appear in the "Deferred" section at the bottom.
### "No Findings" Output Path

Use this path only when verified_count == 0 AND needs_context_count == 0 AND
deferred_count == 0. If any count is > 0, use the "Findings Exist" path.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
PLAN REVIEW — {plan_file_path}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Goal: {plan_goal}
Domain: {plan_domain}
Tasks: {plan_tasks} | Files: {plan_files_count}
No verified issues found.
Checked for: {checked_areas}
{skipped_note}
Reviewed by: {reviewer_list}
Total raw: {total_raw} | Verified: 0 | False positives removed: {false_positive_count} | Needs context: 0 | Deferred: 0
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Error Handling

| Condition | Action |
|---|---|
| No plan file path given and no memory dir | Error: "No memory directory found. Pass a plan file path explicitly: /plan-review <path>" |
| No .md files in {memory_dir}/plans/ | Error: "No plan files found in {memory_dir}/plans/. Pass a plan file path explicitly: /plan-review <path>" |
| Plan file path given but file does not exist | Error: "Plan file not found: {path}" |
| Plan file exists but is empty | Error: "Plan file is empty: {path}" |
| Single plan file found | Auto-select with confirmation: "Using plan: {plan_file_path}" |
| Multiple plan files found | Present via AskUserQuestion with filename, goal line, relative age |
| {plan_tasks} < 2 | Skip Dependency & Ordering reviewer; include in {skipped_note} |
| Security reviewer skipped | Include in {skipped_note}: "Security (no auth/security paths detected)" |
| All findings false positive | Output "no findings" report format (not an error) |
| Verification JSON parse fails | All findings get unverified verdict, routed to Needs Context section; {verification_note} warns in output |
| Zero needs-input findings | Skip Phase 3.5, proceed directly to Phase 4 |
| AskUserQuestion unavailable | Treat all needs-input findings as needs_context in Phase 4 output (surface them, don't hide them) |
## Lifecycle Counter Increment

After producing the Phase 4 terminal output, increment the review-cycle lifecycle counter
on the plan file. This is the only plan file modification plan-review makes.
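A minimal sketch of this increment, assuming the `- review-cycle: N` line convention inside the plan's **Iterations:** block (the helper name and exact regex are illustrative):

```python
import re

def increment_review_cycle(plan_text: str) -> str:
    """Bump `- review-cycle: N` by one. On any anomaly (no Iterations
    block, no matching line, non-integer N) return the text unchanged,
    matching the skip-silently behavior."""
    if "**Iterations:**" not in plan_text:
        return plan_text
    m = re.search(r"^([ \t]*- review-cycle: )(\d+)[ \t]*$",
                  plan_text, re.MULTILINE)
    if not m:
        return plan_text
    return (plan_text[:m.start()] + m.group(1) + str(int(m.group(2)) + 1)
            + plan_text[m.end():])
```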
1. Use {plan_file_path} stored in Phase 0 (do not re-discover).
2. Read the plan file and locate the **Iterations:** block.
3. Read the review-cycle value N from the line matching `- review-cycle: {N}` (e.g., 0, 1, 12). If N is not a valid integer, skip the increment silently.
4. Increment review-cycle by 1: use Edit to replace `- review-cycle: {N}` with `- review-cycle: {N+1}`, where {N} is the actual integer read in step 3.
5. If no **Iterations:** block is found, skip silently.

## Placeholder Reference

Prompt templates are in references/reviewer-prompts.md. Read that file and substitute
placeholders before passing to each Agent call. The templates are not executable — they are
documentation that Claude reads and fills in.
| Placeholder | Value | Used by |
|---|---|---|
| {plan_content} | Full plan file content | All reviewers + Verifier |
| {plan_file_path} | Absolute path to plan file | All reviewers + Verifier |
| {plan_goal} | Extracted goal string | All reviewers |
| {plan_tasks} | Integer task count | Phase 1 skip logic + Phase 4 terminal output |
| {plan_files} | Newline-separated file paths from the plan | All reviewers |
| {plan_files_count} | Count of unique paths in {plan_files} | Phase 4 terminal output |
| {plan_domain} | Extracted domain or "Unknown" | Phase 4 terminal output |
| {claude_md_rules} | CLAUDE.md content or "No CLAUDE.md found." | All reviewers |
| {contributing_md_rules} | CONTRIBUTING.md content or "No CONTRIBUTING.md found." | All reviewers |
| {project_context} | PROJECT.md content or "No PROJECT.md found." | All reviewers |
| {plan_open_questions} | Extracted open questions or empty string | Unknown Unknowns only |
| {plan_trade_offs} | Extracted trade-offs or empty string | Unknown Unknowns only |
| {plan_decisions} | Extracted decisions (not trade-offs) or empty string | Unknown Unknowns only |
| {plan_test_plan} | Full test plan document content or empty string | Scope & Completeness + Feasibility only |
| {findings_json} | JSON array of all findings | Finding Verifier only |
| {verification_note} | Warning when verification JSON parse fails, or empty string | Phase 4 terminal output |
| {skipped_note} | List of skipped reviewers with reasons, or empty string | Phase 4 terminal output |
| {reviewer_list} | Comma-separated names of reviewers that ran | Phase 4 terminal output |
| {total_raw} | Total findings reported by all reviewers before verification | Phase 4 terminal output |
| {verified_count} | Count of findings with verdict verified | Phase 4 terminal output |
| {false_positive_count} | Count of findings with verdict false_positive (removed) | Phase 4 terminal output |
| {needs_context_count} | Count of findings with verdict needs_context or unverified | Phase 4 terminal output |
| {checked_areas} | Comma-separated list of reviewer areas that ran (excludes skipped) | Phase 4 "No Findings" output |
## Related Skills

| Skill | Relationship |
|---|---|
incremental-planning | Creates plans that plan-review reviews. Run plan-review after incremental-planning produces a plan file. |
quality-gate | Complementary: plan-review is pre-implementation (plan quality), quality-gate is post-implementation (code quality). |
plan-adherence (agent) | Complementary: plan-review checks plan quality before coding; plan-adherence agent checks implementation fidelity after coding. |
swarm | plan-review should run before /swarm — catch gaps in the plan before spawning parallel implementers. |
roadmap | plan-review can review individual milestone plans that roadmap generates. |
code-quality:fix | Acts on plan findings. /fix edits the plan file only — never implements code from plan-review findings. For Research Gaps / Unknown Unknowns, /fix executes actual spikes — including invoking /deep-research for findings that name third-party technology research gaps. |