Verify the build matched the plan. Automated checks + walkthrough with you.
Verifies built deliverables against planned must-haves through automated checks and interactive user walkthroughs.
Install: `npx claudepluginhub sienklogic/towline`

This skill is limited to using the following tools:

Templates:
- templates/debugger-prompt.md.tmpl
- templates/gap-planner-prompt.md.tmpl
- templates/verifier-prompt.md.tmpl

You are the orchestrator for /dev:review. This skill verifies that what was built matches what was planned. It runs automated three-layer checks against must-haves, then walks the user through a conversational UAT (user acceptance testing) for each deliverable. Your job is to present findings clearly and help the user decide what's good enough versus what needs fixes.
Reference: skills/shared/context-budget.md for the universal orchestrator rules.
Additionally for this skill:
Preconditions:
- `.planning/config.json` exists
- `.planning/phases/{NN}-{slug}/` exists

When features.goal_verification is enabled and depth is "standard" or "comprehensive", the event-handler.js hook automatically queues verification after executor completion. The hook writes `.planning/.auto-verify` as a signal file. The build skill's orchestrator detects this signal and spawns the verifier agent.
This is additive: /dev:review can always be invoked manually regardless of auto-verification settings. If auto-verification already ran, /dev:review re-runs verification (useful for re-checking after fixes).
Parse $ARGUMENTS according to skills/shared/phase-argument-parsing.md.
| Argument | Meaning |
|---|---|
| `3` | Review phase 3 |
| `3 --auto-fix` | Review phase 3; automatically diagnose and create gap-closure plans for failures |
| `3 --teams` | Review phase 3 with parallel specialist verifiers (functional + security + performance) |
| (no number) | Use current phase from STATE.md |
Execute these steps in order.
1. Parse `$ARGUMENTS` for the phase number and the `--auto-fix` flag.
2. Load `.planning/config.json`.
3. Run `node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js config resolve-depth` to get the effective feature/gate settings for the current depth. Store the result for use in later gating decisions.
4. Locate `.planning/phases/{NN}-{slug}/` and read `.planning/STATE.md`.
5. If the `.planning/.auto-verify` signal file exists, read it and note that auto-verification was already queued. Delete the signal file after reading (one-shot, same pattern as auto-continue.js).

Validation errors — use branded error boxes:
If no SUMMARY.md files:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Phase {N} hasn't been built yet.
**To fix:** Run `/dev:build {N}` first.
If no PLAN.md files:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Phase {N} has no plans.
**To fix:** Run `/dev:plan {N}` first.
Reference: skills/shared/config-loading.md for the tooling shortcut (phase-info) and config field reference.
Check if a VERIFICATION.md already exists from /dev:build's auto-verification step:
Look for .planning/phases/{NN}-{slug}/VERIFICATION.md
If it exists:
- `status: passed` and no `--auto-fix` flag: skip to Step 4 (conversational UAT)
- `status: gaps_found`: present gaps and proceed to Step 4
- `status: human_needed`: proceed to Step 4

If it does NOT exist: proceed to Step 3 (automated verification)
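As a rough sketch, the frontmatter fields this routing relies on might look like the following (field names inferred from this document; the real schema may include more):

```yaml
---
status: gaps_found   # passed | gaps_found | human_needed
attempt: 2           # verification attempts so far (escalation at 3)
overrides: []        # user-accepted false positives, if any
---
```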
Depth profile gate: Before spawning the verifier, resolve the depth profile. If features.goal_verification is false in the profile, skip automated verification and proceed directly to Step 5 (Conversational UAT). Note to user: "Automated verification skipped (depth: {depth}). Proceeding to manual review."
If --teams flag is present OR config.parallelization.use_teams is true:
Create team output directory: .planning/phases/{NN}-{slug}/team/ (if not exists)
Display to the user: ◐ Spawning 3 verifiers in parallel (functional, security, performance)...
Spawn THREE verifier agents in parallel using Task():
Agent 1 -- Functional Reviewer: writes `.planning/phases/{NN}-{slug}/team/functional-VERIFY.md`.
Agent 2 -- Security Auditor: writes `.planning/phases/{NN}-{slug}/team/security-VERIFY.md`.
Agent 3 -- Performance Analyst: writes `.planning/phases/{NN}-{slug}/team/performance-VERIFY.md`.

Wait for all three to complete.

Display to the user: ◐ Spawning synthesizer...

Spawn synthesizer with a prompt along these lines: "Read the three reports in `.planning/phases/{NN}-{slug}/team/`. Synthesize into a unified VERIFICATION.md. Merge pass/fail verdicts -- a must-have fails if ANY reviewer flags it. Combine gap lists. Security and performance findings go into dedicated sections."

Proceed to UAT walkthrough with the unified VERIFICATION.md.
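The merge rule can be sketched as follows (a hypothetical illustration of the "fails if ANY reviewer flags it" rule, not the actual synthesizer code):

```javascript
// Merge per-reviewer verdicts into one verdict per must-have.
// A "fail" from any reviewer is sticky: no later "pass" can undo it.
function mergeVerdicts(perReviewer) {
  // perReviewer maps reviewer name → { mustHaveId: "pass" | "fail" }
  const merged = {};
  for (const verdicts of Object.values(perReviewer)) {
    for (const [id, verdict] of Object.entries(verdicts)) {
      if (merged[id] !== "fail") merged[id] = verdict; // fail wins
    }
  }
  return merged;
}
```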
If teams not enabled, proceed with existing single-verifier flow.
Reference: references/agent-teams.md
Display to the user: ◐ Spawning verifier...
Spawn a verifier Task() to run three-layer checks:
Task({
subagent_type: "dev:towline-verifier",
prompt: <verifier prompt>
})
Read skills/review/templates/verifier-prompt.md.tmpl and use its content as the verifier prompt.
Placeholders to fill before sending:
- `{For each PLAN.md file in the phase directory:}` — inline each plan's `must_haves` frontmatter block
- `{For each SUMMARY.md file in the phase directory:}` — provide a manifest table with file paths and status from frontmatter. The verifier reads full content from disk via the Read tool.
- `{NN}-{slug}` — the phase directory name
- `{N}` — the phase number
- `{date}`, `{count}`, `{phase name}` — fill from context

Wait for the verifier to complete.
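The placeholder filling can be sketched like this (a hypothetical helper; the placeholder names come from this document, but the real tooling may substitute differently):

```javascript
// Replace {placeholder} tokens with supplied values; unknown
// placeholders are left intact so missing context is visible.
function fillTemplate(template, values) {
  return template.replace(/\{([^{}]+)\}/g, (match, key) =>
    key in values ? values[key] : match
  );
}

const prompt = fillTemplate("Verify phase {N} in directory {NN}-{slug}.", {
  N: "3",
  NN: "03",
  slug: "auth",
});
// → "Verify phase 3 in directory 03-auth."
```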
Read the VERIFICATION.md frontmatter. Check the attempt counter.
If attempt >= 3 AND status: gaps_found: This phase has failed verification multiple times. Present escalation options instead of the normal flow:
Present the escalation context:
Phase {N}: {name} — Verification Failed ({attempt} attempts)
The same gaps have persisted across {attempt} verification attempts.
Remaining gaps: {count}
Use AskUserQuestion (pattern: multi-option-escalation from skills/shared/gate-prompts.md):
question: "Phase {N} has failed verification {attempt} times with {count} persistent gaps. How should we proceed?"
header: "Escalate"
options:
- label: "Accept gaps" description: "Mark as complete-with-gaps and move on"
- label: "Re-plan" description: "Go back to /dev:plan {N} with gap context"
- label: "Debug" description: "Spawn /dev:debug to investigate root causes"
- label: "Retry" description: "Try one more verification cycle"
- "Accept gaps": mark the phase complete-with-gaps, update ROADMAP.md to `verified*`, add a note in VERIFICATION.md about accepted gaps. Proceed to next phase.
- "Re-plan": run `/dev:plan {N} --gaps` to create targeted fix plans.
- "Debug": spawn `/dev:debug` with the gap details as starting context.
- "Retry": run one more verification cycle (repeat Step 3).

Otherwise, present results normally:
Phase {N}: {name} — Verification Results
Status: {PASSED | GAPS FOUND | HUMAN NEEDED}
Attempt: {attempt}
Must-have truths: {passed}/{total}
Must-have artifacts: {passed}/{total}
Must-have key links: {passed}/{total}
{If all passed:}
All automated checks passed.
{If gaps found:}
Gaps found:
1. {gap description} — {failed layer}
2. {gap description} — {failed layer}
{If human needed:}
Items requiring your verification:
1. {item} — {why automated check couldn't verify}
Walk the user through each deliverable one by one. This is an interactive conversation, not an automated check.
For each plan in the phase:
Plan {plan_id}: {plan name}
What was built:
{Brief description from SUMMARY.md}
Key deliverables:
1. {artifact/truth 1}
2. {artifact/truth 2}
3. {artifact/truth 3}
Checking: "{truth statement}"
How to verify:
{Specific steps the user can take to check this}
{e.g., "Open http://localhost:3000 and click Login"}
{e.g., "Run `npm test` and check that auth tests pass"}
Does this work as expected? [pass / fail / skip]
Keep the conversation flowing.
Compile the UAT results and determine next steps.
If all automated checks and UAT items passed:
Update .planning/ROADMAP.md Progress table (REQUIRED — do this BEFORE updating STATE.md):
Tooling shortcut: Use the CLI for atomic ROADMAP.md and STATE.md updates:
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js roadmap update-status {phase} verified
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js state update status verified
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js state update last_activity now
In `.planning/ROADMAP.md`, under the `## Progress` table:
- set the Status column to `verified`
- set the Completed column to the current date (YYYY-MM-DD)

Update `.planning/STATE.md`:
- follow `skills/shared/state-update.md` (150 lines max)

Update VERIFICATION.md with UAT results (append UAT section).
Present completion:
Use the branded output from references/ui-formatting.md:
If gates.confirm_transition is true in config AND features.auto_advance is NOT true:
Use AskUserQuestion (pattern from `skills/shared/gate-prompts.md`):
question: "Phase {N} verified. Ready to move to Phase {N+1}?"
header: "Continue?"
options:
On confirm: proceed to `/dev:plan {N+1}`.

If features.auto_advance is true AND mode is autonomous AND more phases remain, chain directly:

Skill({ skill: "dev:plan", args: "{N+1}" })

#### Gaps Found WITH `--auto-fix`

If gaps were found and `--auto-fix` was specified:
Step 6a: Diagnose
Display to the user: ◐ Spawning debugger...
Spawn a debugger Task() to analyze each failure:
Task({
subagent_type: "dev:towline-debugger",
prompt: <debugger prompt>
})
Read skills/review/templates/debugger-prompt.md.tmpl and use its content as the debugger prompt.
Placeholders to fill before sending:
- [Inline the VERIFICATION.md content] — provide file path; debugger reads via Read tool
- [Inline all SUMMARY.md files for the phase] — provide manifest table of file paths
- [Inline all PLAN.md files for the phase] — provide manifest table of file paths

Step 6b: Create Gap-Closure Plans
After receiving the root cause analysis, display to the user: ◐ Spawning planner (gap closure)...
Spawn the planner in gap-closure mode:
Task({
subagent_type: "dev:towline-planner",
prompt: <gap planner prompt>
})
Read skills/review/templates/gap-planner-prompt.md.tmpl and use its content as the gap planner prompt.
Placeholders to fill before sending:
- [Inline VERIFICATION.md] — provide file path; planner reads via Read tool
- [Inline the debugger's root cause analysis] — keep inline (already in conversation context)
- [Inline all existing PLAN.md files for this phase] — provide manifest table of file paths
- [Inline CONTEXT.md if it exists] — provide file path; planner reads via Read tool
- `{NN}-{slug}` — the phase directory name

Step 6c: Validate gap-closure plans (conditional)
If features.plan_checking is true in config:
Display to the user: ◐ Spawning plan checker...

Follow the same plan-checking flow as /dev:plan Step 6.

Step 6d: Present gap-closure plans to user
Auto-fix analysis complete.
Gaps found: {count}
Root causes identified: {count}
Gap-closure plans created: {count}
Plans:
{plan_id}: {name} — fixes: {gap description} ({difficulty})
{plan_id}: {name} — fixes: {gap description} ({difficulty})
Use AskUserQuestion (pattern: approve-revise-abort from `skills/shared/gate-prompts.md`):
question: "Approve these {count} gap-closure plans?"
header: "Approve?"
options:
- label: "Approve" description: "Proceed — I'll suggest the build command"
- label: "Review first" description: "Let me review the plans before approving"
- label: "Fix manually" description: "I'll fix these gaps myself"
- If "Approve": suggest `/dev:build {N} --gaps-only`
- If "Review first" or "Other": present the full plan files for inspection
- If "Fix manually": suggest relevant files to inspect based on gap details
#### Gaps Found WITHOUT `--auto-fix`
If gaps were found and `--auto-fix` was NOT specified:
1. List all gaps clearly
2. **Default to auto-fix** — offer it as the recommended action, not a hidden flag
Phase {N}: {name} — Gaps Found
{count} verification gaps need attention:
{gap description} Layer failed: {existence | substantiveness | wiring} Details: {what's wrong}
{gap description} ...
Use AskUserQuestion (pattern: multi-option-gaps from skills/shared/gate-prompts.md):
question: "{count} verification gaps need attention. How should we proceed?"
header: "Gaps"
options:
- label: "Auto-fix" description: "Diagnose root causes and create fix plans (recommended)"
- label: "Override" description: "Accept specific gaps as false positives"
- label: "Manual" description: "I'll fix these myself"
- label: "Skip" description: "Save results for later"
If user selects "Auto-fix": proceed with the same Steps 6a-6d as the --auto-fix flow above (diagnose, create gap-closure plans, validate, present). This is the default path.
If user selects "Override": present each gap and ask which ones to accept. For each accepted gap, collect a reason. Add to VERIFICATION.md frontmatter overrides list:
overrides:
- must_have: "{text}"
reason: "{user's reason}"
accepted_by: "user"
accepted_at: "{ISO date}"
After adding overrides, re-evaluate: if all remaining gaps are now overridden, mark status as passed. Otherwise, offer auto-fix for the remaining non-overridden gaps.
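The re-evaluation rule can be sketched as follows (hypothetical data shapes inferred from the frontmatter above; not the actual implementation):

```javascript
// A gap counts as resolved when the user accepted it as a false positive.
// The phase flips to "passed" only when every gap is covered by an override.
function reevaluate(gaps, overrides) {
  const accepted = new Set(overrides.map((o) => o.must_have));
  const remaining = gaps.filter((g) => !accepted.has(g.must_have));
  return {
    status: remaining.length === 0 ? "passed" : "gaps_found",
    remaining, // these are the candidates for the auto-fix offer
  };
}
```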
If user selects "Manual": suggest relevant files to inspect based on the gap details.
If user selects "Skip": save results and exit.
After conversational UAT, append UAT results to VERIFICATION.md:
## User Acceptance Testing
| # | Item | Automated | UAT | Final Status |
|---|------|-----------|-----|-------------|
| 1 | {must-have} | PASS | PASS | VERIFIED |
| 2 | {must-have} | PASS | FAIL | GAP |
| 3 | {must-have} | GAP | — | GAP |
| 4 | {must-have} | PASS | SKIP | UNVERIFIED |
UAT conducted: {date}
Items verified: {count}
Items passed: {count}
Items failed: {count}
Items skipped: {count}
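The Automated × UAT → Final Status mapping in the table above can be sketched as follows (inferred from the table rows; not an official towline rule set):

```javascript
// Final status per must-have: an automated gap stands regardless of UAT,
// a UAT failure overrides an automated pass, and a skip leaves the item
// unverified rather than verified.
function finalStatus(automated, uat) {
  if (automated === "GAP") return "GAP";   // row 3: never reached UAT
  if (uat === "FAIL") return "GAP";        // row 2: user found a problem
  if (uat === "SKIP") return "UNVERIFIED"; // row 4: not checked by the user
  return "VERIFIED";                       // row 1: PASS + PASS
}
```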
If features.integration_verification: true AND this phase depends on prior phases:
After Step 3, also check cross-phase integration:
- `provides` and `requires` from this phase and dependent phases

If the verifier Task() fails, display:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Automated verification failed.
**To fix:** We'll do a manual walkthrough instead.
Fall back to manual UAT only (skip automated checks).
If plans have empty must_haves:
⚠ Plans don't have defined must-haves. UAT will be based on plan descriptions only.

If user can't verify an item (e.g., needs server running, needs credentials):
If the debugger Task() fails, display:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Auto-diagnosis failed.
**To fix:** Create gap-closure plans based on the verification report alone.
Ask user: "Would you like to proceed with gap-closure plans without root cause analysis?"
| File | Purpose | When |
|---|---|---|
| `.planning/phases/{NN}-{slug}/VERIFICATION.md` | Verification report | Step 3 (created or updated with UAT) |
| `.planning/phases/{NN}-{slug}/*-PLAN.md` | Gap-closure plans | Step 6b (`--auto-fix` only) |
| `.planning/ROADMAP.md` | Status → verified + Completed date | Step 6 |
| `.planning/STATE.md` | Updated with review status | Step 6 |
After review completes, always present a clear next action:
If verified (not final phase):
Display the "Phase Complete" banner inline:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► PHASE {N} COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Phase {N}: {Name}**
{X} plans executed
Goal verified ✓
Then the branded "Next Up" block:
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Phase {N+1}: {Name}** — {Goal from ROADMAP.md}
`/dev:plan {N+1}`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `/dev:discuss {N+1}` — talk through details before planning
- `/dev:status` — see full project status
───────────────────────────────────────────────────────────────
If gaps remain:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► PHASE {N} GAPS FOUND ⚠
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Phase {N}: {name}** — {count} gaps remaining
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Fix gaps** — diagnose and create fix plans
`/dev:review {N} --auto-fix`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `/dev:plan {N} --gaps` — create fix plans manually
- Fix manually, then `/dev:review {N}`
───────────────────────────────────────────────────────────────
If final phase:
Display the "Milestone Complete" banner inline:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► MILESTONE COMPLETE 🎉
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{N} phases completed
All phase goals verified ✓
Then:
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Audit milestone** — verify cross-phase integration
`/dev:milestone audit`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `/dev:milestone complete` — archive this milestone and tag it
- `/dev:milestone new` — start planning next features
- `/dev:status` — see final project status
───────────────────────────────────────────────────────────────
For user-friendly interpretation of verification results, see references/reading-verification.md.
Running /dev:review after gap closure triggers fresh verification.