Execute all plans in a phase. Spawns agents to build in parallel, commits atomically.
Executes planned development phases by spawning parallel agents, verifying outputs, and managing atomic commits.
npx claudepluginhub sienklogic/towline

This skill is limited to using the following tools:
You are the orchestrator for /dev:build. This skill executes all plans in a phase by spawning executor agents. Plans are grouped by wave and executed in order — independent plans run in parallel, dependent plans wait. Your job is to stay lean, delegate ALL building work to Task() subagents, and keep the user's main context window clean.
Reference: skills/shared/context-budget.md for the universal orchestrator rules.
Additionally for this skill:
- .planning/config.json exists
- .planning/phases/{NN}-{slug}/ contains PLAN.md files

Parse $ARGUMENTS according to skills/shared/phase-argument-parsing.md.
| Argument | Meaning |
|---|---|
| `3` | Build phase 3 |
| `3 --gaps-only` | Build only gap-closure plans in phase 3 |
| `3 --team` | Use Agent Teams for complex inter-agent coordination |
| (no number) | Use current phase from STATE.md |
Execute these steps in order.
Reference: skills/shared/config-loading.md for the tooling shortcut and config field reference.
- Parse $ARGUMENTS for phase number and flags
- Read .planning/config.json for parallelization, model, and gate settings (see config-loading.md for field reference)
- Run node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js config resolve-depth to get the effective feature/gate settings for the current depth. Store the result for use in later gating decisions.
- Write .planning/.active-skill with the content `build` (registers with the workflow enforcement hook)
- Read .planning/phases/{NN}-{slug}/
- Read .planning/STATE.md
- config.models.complexity_map — adaptive model mapping (default: { simple: "haiku", medium: "sonnet", complex: "inherit" })

If gates.confirm_execute is true: use AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Ready to build Phase {N}? This will execute {count} plans."
header: "Build?"
options:
- If "No": suggest /dev:plan {N} to review plans

If git.branching_strategy is phase: create and switch to branch towline/phase-{NN}-{name} before any build work begins.

Run git rev-parse HEAD — store as pre_build_commit for use in Step 8-pre-c (codebase map update).

Staleness check (dependency fingerprints): After validating prerequisites, check plan staleness:
- Read each plan's dependency_fingerprints field (if present)
- If a dependency phase was re-built after the plan was created, use AskUserQuestion (pattern from skills/shared/gate-prompts.md):
question: "Plan {plan_id} may be stale — dependency phase {M} was re-built after this plan was created."
header: "Stale"
options:
- "Re-plan": /dev:plan {N} (recommended)
If "Re-plan" or "Other": stop and suggest /dev:plan {N}
If "Continue anyway": proceed with existing plans.

If a plan has no dependency_fingerprints field: skip this check (backward compatible).

Validation errors — use branded error boxes:
If no plans found:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Phase {N} has no plans.
**To fix:** Run `/dev:plan {N}` first.
If dependencies incomplete:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Phase {N} depends on Phase {M}, which is not complete.
**To fix:** Build Phase {M} first with `/dev:build {M}`.
Read configuration values needed for execution. See skills/shared/config-loading.md for the full field reference; build uses: parallelization.*, features.goal_verification, features.inline_verify, features.atomic_commits, features.auto_continue, features.auto_advance, planning.commit_docs, git.commit_format, git.branching_strategy.
Tooling shortcut: Instead of manually parsing each PLAN.md frontmatter, run:
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js plan-index <phase>
This returns a JSON object with plans (array with plan_id, wave, depends_on, autonomous, must_haves_count per plan) and waves (grouped by wave). Falls back to manual parsing if unavailable.
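Assuming the JSON shape described above (a `plans` array carrying `plan_id`, `wave`, and `depends_on` per plan), the manual grouping fallback can be sketched in plain Node.js. The function name and the defaulting of a missing `wave` to 1 are illustrative assumptions, not part of the CLI contract:

```javascript
// Manual fallback sketch: group parsed plan frontmatter into ordered waves.
function groupByWave(plans) {
  const waves = {};
  for (const plan of plans) {
    const wave = plan.wave ?? 1; // assume plans without a wave run first
    (waves[wave] ??= []).push(plan);
  }
  // Return waves in ascending order so Wave 1 runs before Wave 2, etc.
  return Object.keys(waves)
    .map(Number)
    .sort((a, b) => a - b)
    .map((w) => ({ wave: w, plans: waves[w] }));
}

const plans = [
  { plan_id: "02-02", wave: 2, depends_on: ["02-01"] },
  { plan_id: "02-01", wave: 1, depends_on: [] },
  { plan_id: "02-03", wave: 1, depends_on: [] },
];
console.log(groupByWave(plans).map((w) => w.plans.map((p) => p.plan_id)));
// → [ [ '02-01', '02-03' ], [ '02-02' ] ]
```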
- Read .planning/phases/{NN}-{slug}/*-PLAN.md
- If the --gaps-only flag is set: filter to only plans with gap_closure: true in frontmatter

If no plans match filters:
- With --gaps-only: "No gap-closure plans found. Run /dev:plan {N} --gaps first."

Check for existing SUMMARY.md files from previous runs (crash recovery):
- Scan for SUMMARY-*.md files in the phase directory
- completed: skip this plan (already done)
- partial: present to user — retry or skip?
- failed: present to user — retry or skip?
- checkpoint: resume from checkpoint (see Step 6e)

If all plans already have completed SUMMARYs:
Use AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Phase {N} has already been built. All plans have completed SUMMARYs. Re-build from scratch?"
header: "Re-build?"
options:
- label: "Yes" description: "Delete existing SUMMARYs and re-execute all plans"
- label: "No" description: "Keep existing build — review instead"
- If "No": suggest /dev:review {N}

Group plans by wave number from their frontmatter. See references/wave-execution.md for the full wave execution model (parallelization, git lock handling, checkpoint manifests).
Validate wave consistency:
- Wave 1 plans must have depends_on: []

Before entering the wave loop, write .planning/phases/{NN}-{slug}/.checkpoint-manifest.json:
{
"plans": ["02-01", "02-02", "02-03"],
"checkpoints_resolved": [],
"checkpoints_pending": [],
"wave": 1,
"deferred": [],
"commit_log": [],
"last_good_commit": null
}
This file tracks execution progress for crash recovery and rollback. On resume after compaction, read this manifest to determine where execution left off and which plans still need work.
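Resuming from the manifest can be sketched as a small helper. The field names match the manifest example above; the function name is illustrative, not part of any towline API:

```javascript
// Sketch: reconstruct the remaining-work list from a previously written
// .checkpoint-manifest.json after a crash or context compaction.
function plansRemaining(manifest) {
  const done = new Set(manifest.checkpoints_resolved);
  return manifest.plans.filter((id) => !done.has(id));
}

const manifest = {
  plans: ["02-01", "02-02", "02-03"],
  checkpoints_resolved: ["02-01"],
  checkpoints_pending: [],
  wave: 2,
  deferred: [],
  commit_log: [{ plan: "02-01", sha: "abc1234", timestamp: "2025-01-01T00:00:00Z" }],
  last_good_commit: "abc1234",
};
console.log(plansRemaining(manifest)); // → [ '02-02', '02-03' ]
```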
Update the manifest after each wave completes:
- Append completed plan IDs to checkpoints_resolved
- Advance the wave counter
- Append to commit_log (array of { plan, sha, timestamp } objects)
- Set last_good_commit to the SHA of the last successfully verified commit

Crash recovery check: Before entering the wave loop, check if .checkpoint-manifest.json already exists with completed plans from a prior run. If it does, reconstruct the skip list from its checkpoints_resolved array. This handles the case where the orchestrator's context was compacted or the session was interrupted mid-build.
Orphaned progress file check: Also scan the phase directory for .PROGRESS-* files. These indicate an executor that crashed mid-task. For each orphaned progress file:
- Read its plan_id, last_completed_task, and total_tasks
- If the plan is not in checkpoints_resolved (not yet complete), inform the user:
Detected interrupted execution for plan {plan_id}: {last_completed_task}/{total_tasks} tasks completed.
- If the plan is already in checkpoints_resolved, the progress file is stale — delete it.

For each wave, in order (Wave 1, then Wave 2, etc.):
For each plan in the current wave (excluding skipped plans):
Present plan narrative before spawning:
Display to the user before spawning:
◐ Spawning {N} executor(s) for Wave {W}...
Then present a brief narrative for each plan to give the user context on what's about to happen:
Wave {W} — {N} plan(s):
Plan {id}: {plan name}
{2-3 sentence description: what this plan builds, the technical approach, and why it matters.
Derive this from the plan's must_haves and first task's <action> summary.}
Plan {id}: {plan name}
{2-3 sentence description}
This is a read-only presentation step — extract descriptions from plan frontmatter must_haves.truths and the plan's task names. Do not read full task bodies for this; keep it lightweight.
State fragment rule: Executors MUST NOT modify STATE.md directly. The build skill orchestrator is the sole STATE.md writer during execution. Executors report results via SUMMARY.md only; the orchestrator reads those summaries and updates STATE.md itself.
Model Selection (Adaptive): Before spawning the executor for each plan, determine the model:
- Read the plan's complexity and model attributes
- If tasks specify a model attribute, use the most capable model among them (inherit > sonnet > haiku)
- Otherwise map via config.models.complexity_map.{complexity} (defaults: simple->haiku, medium->sonnet, complex->inherit)
- If config.models.executor is set (non-null), it overrides adaptive selection entirely — use that model for all executors

Reference: references/model-selection.md for full details.
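The selection rules can be sketched as follows. The rank order (inherit > sonnet > haiku) and the complexity_map defaults come from the text above; the function shape and the fallback to "sonnet" for an unknown complexity are illustrative assumptions:

```javascript
// Sketch of adaptive model selection for one plan.
const RANK = { haiku: 0, sonnet: 1, inherit: 2 };
const DEFAULT_MAP = { simple: "haiku", medium: "sonnet", complex: "inherit" };

function selectModel(plan, config = {}) {
  // A non-null config.models.executor overrides adaptive selection entirely.
  if (config.models?.executor) return config.models.executor;
  // If any task pins a model, use the most capable one among them.
  const pinned = (plan.tasks ?? []).map((t) => t.model).filter(Boolean);
  if (pinned.length) return pinned.sort((a, b) => RANK[b] - RANK[a])[0];
  // Otherwise map the plan's complexity through the (configurable) map.
  const map = { ...DEFAULT_MAP, ...(config.models?.complexity_map ?? {}) };
  return map[plan.complexity] ?? "sonnet"; // assumed fallback
}

console.log(selectModel({ complexity: "simple", tasks: [] })); // → haiku
console.log(selectModel({ complexity: "simple", tasks: [{ model: "inherit" }] })); // → inherit
```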
Gather:

- The ## Summary section from the PLAN.md (everything after the ## Summary heading to end of file). If no ## Summary section exists (legacy plans), fall back to reading the full PLAN.md content. Note: the orchestrator reads the full PLAN.md once for narrative extraction AND summary extraction; only the ## Summary portion is inlined into the executor prompt. The full PLAN.md stays on disk for the executor to Read.
- .planning/CONTEXT.md (if exists)
- .planning/STATE.md
- .planning/config.json

Construct the executor prompt:
You are the towline-executor agent. Execute the following plan.
<plan_summary>
[Inline only the ## Summary section from PLAN.md]
</plan_summary>
<plan_file>
.planning/phases/{NN}-{slug}/{plan_id}-PLAN.md
</plan_file>
<project_context>
Project root: {absolute path to project root}
Platform: {win32|linux|darwin}
Config:
commit_format: {commit_format from config}
tdd_mode: {tdd_mode from config}
atomic_commits: {atomic_commits from config}
Available context files (read via Read tool as needed):
- Config: {absolute path to config.json}
- State: {absolute path to STATE.md}
{If CONTEXT.md exists:}
- Project context (locked decisions): {absolute path to CONTEXT.md}
</project_context>
<prior_work>
Completed plans in this phase:
| Plan | Status | Commits | Summary File |
|------|--------|---------|-------------|
| {plan_id} | complete | {hash1}, {hash2} | {absolute path to SUMMARY.md} |
Read any SUMMARY file via Read tool if you need details on what prior plans produced.
</prior_work>
Execute all tasks in the plan sequentially. For each task:
0. Read the full plan file from the path in <plan_file> to get task details
1. Execute the <action> steps
2. Run the <verify> commands
3. Create an atomic commit with format: {commit_format}
4. Record the commit hash
After all tasks complete:
1. Write SUMMARY.md to .planning/phases/{NN}-{slug}/SUMMARY-{plan_id}.md
2. Run self-check (verify files exist, commits exist, verify commands still pass)
3. Return your SUMMARY.md content as your final response
If you hit a checkpoint task, STOP and return the checkpoint response format immediately.
Spawn strategy based on config:
If parallelization.enabled: true AND multiple plans in this wave:
- Spawn up to max_concurrent_agents Task() calls in parallel
- Use run_in_background: true for each executor
- Poll TaskOutput with block: false and report status

If parallelization.enabled: false OR single plan in wave:
Task({
subagent_type: "dev:towline-executor",
prompt: <executor prompt constructed above>
})
NOTE: The dev:towline-executor subagent type auto-loads the agent definition. Do NOT inline it.
Block until all executor Task() calls for this wave complete.
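The concurrency cap can be sketched as simple batching. Task() itself is a platform primitive, so this sketch only shows how a wave's plans would be grouped into concurrent batches; the function name is illustrative:

```javascript
// Sketch: cap in-flight executors at max_concurrent_agents by slicing
// the wave's plans into sequential batches.
function batches(plans, maxConcurrent) {
  const out = [];
  for (let i = 0; i < plans.length; i += maxConcurrent) {
    out.push(plans.slice(i, i + maxConcurrent));
  }
  return out;
}

// Each batch would be awaited (e.g. Promise.all over spawned executors)
// before the next batch starts.
console.log(batches(["02-01", "02-02", "02-03"], 2));
// → [ [ '02-01', '02-02' ], [ '02-03' ] ]
```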
For each completed executor:
- Read its status: completed | partial | checkpoint | failed
- Update commit_log: for each completed plan, append { plan: "{plan_id}", sha: "{commit_hash}", timestamp: "{ISO date}" } to the commit_log array. Update last_good_commit to the last commit SHA from this wave.

Spot-check executor claims:
After reading each SUMMARY, perform a lightweight verification:
- Read the key_files list and verify the files exist (ls)
- Run git log --oneline -n {commit_count} and confirm the count matches the claimed commits
- Check file sizes (wc -l): warn if trivially small
- Check self_check_failures: if present, warn the user:
"Plan {id} reported self-check failures: {list failures}. Inspect before continuing?"

Read executor deviations:
After all executors in the wave complete, read all SUMMARY frontmatter and:
- Collect deferred items into a running list (append to the .checkpoint-manifest.json deferred array)

Build a wave results table:
Wave {W} Results:
| Plan | Status | Tasks | Commits | Deviations |
|------|--------|-------|---------|------------|
| {id} | complete | 3/3 | abc, def, ghi | 0 |
| {id} | complete | 2/2 | jkl, mno | 1 |
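The spot-check described above can be sketched as a pure comparison between the executor's claims (from SUMMARY frontmatter) and what the orchestrator actually observed on disk and in git. All field names here are illustrative assumptions, not a fixed schema:

```javascript
// Sketch: compare claimed artifacts against observed state and collect warnings.
function spotCheck(claims, observed) {
  const warnings = [];
  for (const f of claims.key_files ?? []) {
    if (!observed.existing_files.includes(f)) warnings.push(`missing file: ${f}`);
  }
  if (claims.commit_count !== observed.git_log_count) {
    warnings.push(`claimed ${claims.commit_count} commits, git log shows ${observed.git_log_count}`);
  }
  for (const [f, lines] of Object.entries(observed.line_counts ?? {})) {
    if (lines < 10) warnings.push(`trivially small file: ${f} (${lines} lines)`);
  }
  return warnings;
}

const warnings = spotCheck(
  { key_files: ["src/api.ts"], commit_count: 3 },
  { existing_files: ["src/api.ts"], git_log_count: 2, line_counts: { "src/api.ts": 120 } }
);
console.log(warnings); // → [ 'claimed 3 commits, git log shows 2' ]
```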
Skip if the depth profile has features.inline_verify: false.
To check: use the resolved depth profile. Only comprehensive mode enables inline verification by default.
When inline verification is enabled, each completed plan gets a targeted verification pass before the orchestrator proceeds to the next wave. This catches issues early — before dependent plans build on a broken foundation.
For each plan that completed successfully in this wave:
Read the plan's SUMMARY.md to get key_files (the files this plan created/modified)
Display to the user: ◐ Spawning inline verifier for plan {plan_id}...
Spawn a lightweight verifier:
Task({
subagent_type: "dev:towline-verifier",
model: "haiku",
prompt: "Targeted inline verification for plan {plan_id}.
Verify ONLY these files: {comma-separated key_files list}
For each file, check three layers:
1. Existence — does the file exist?
2. Substantiveness — is it more than a stub? (>10 lines, no TODO/FIXME placeholders)
3. Wiring — is it imported/used by at least one other file?
Report PASS or FAIL with a one-line reason per file.
Write nothing to disk — just return your results as text."
})
Note: This adds latency (~10-20s per plan for the haiku verifier). It's opt-in via features.inline_verify: true for projects where early detection outweighs speed.
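The three verification layers can be sketched as pure predicates over file contents, so they can run against any snapshot of the tree. The thresholds mirror the text (>10 lines, no TODO/FIXME placeholders, imported somewhere); the function shape and the name-based wiring heuristic are assumptions:

```javascript
// Sketch: run the existence / substantiveness / wiring checks for one file.
// `otherFiles` maps sibling paths to their contents.
function checkLayers(path, content, otherFiles) {
  const exists = content != null;                                        // Layer 1
  const lines = exists ? content.split("\n").length : 0;
  const substantive = lines > 10 && !/\b(TODO|FIXME)\b/.test(content ?? ""); // Layer 2
  const name = path.split("/").pop().replace(/\.[^.]+$/, "");
  const wired = Object.entries(otherFiles).some(
    ([p, c]) => p !== path && c.includes(name)                           // Layer 3
  );
  return { exists, substantive, wired };
}

const tree = {
  "src/auth.ts": "export function login() {}\n".repeat(12),
  "src/app.ts": 'import { login } from "./auth";\nlogin();\n',
};
console.log(checkLayers("src/auth.ts", tree["src/auth.ts"], tree));
// → { exists: true, substantive: true, wired: true }
```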
If any executor returned failed or partial:
Handoff bug check (false-failure detection):
Before presenting failure options, check whether the executor actually completed its work despite reporting failure (known Claude Code platform bug where handoff reports failure but work is done):
a. Read the SUMMARY.md status field
b. If status: complete AND frontmatter has commits entries:
c. If status: partial or spot-checks fail: proceed with normal failure handling below

Present failure details to the user:
Plan {id} {status}:
Task {N}: {name} - FAILED
Error: {verify output or error message}
Deviations attempted: {count}
Last verify output: {output}
Use AskUserQuestion (pattern: multi-option-failure from skills/shared/gate-prompts.md):
question: "Plan {id} failed at task {N} ({name}). How should we proceed?"
header: "Failed"
options:
- label: "Retry" description: "Re-spawn the executor for this plan"
- label: "Skip" description: "Mark as skipped, continue to next wave"
- label: "Rollback" description: "Undo commits from this plan, revert to last good state"
- label: "Abort" description: "Stop the entire build"
If user selects 'Retry':
If user selects 'Skip':
If user selects 'Rollback':
- Read last_good_commit from .checkpoint-manifest.json
- If last_good_commit exists:
  - Run git reset --soft {last_good_commit}
  - Update checkpoints_resolved accordingly
- If no last_good_commit: warn "No rollback point available (this was the first plan). Use abort instead."

If user selects 'Abort':
- Inform: "Run /dev:build {N} to resume (completed plans will be skipped)"

If any executor returned checkpoint:
Checkpoint in Plan {id}, Task {N}: {checkpoint type}
{checkpoint details — what was built, what is needed}
{For decision type: present options}
{For human-action type: present steps}
{For human-verify type: present what to verify}
Reference: references/continuation-format.md for the continuation protocol.
You are the towline-executor agent. Continue executing a plan from a checkpoint.
<plan_summary>
[Inline only the ## Summary section from PLAN.md]
</plan_summary>
<plan_file>
.planning/phases/{NN}-{slug}/{plan_id}-PLAN.md
</plan_file>
<completed_tasks>
| Task | Commit | Status |
|------|--------|--------|
| {task_name} | {hash} | complete |
| {task_name} | {hash} | complete |
| {checkpoint_task} | — | checkpoint |
</completed_tasks>
<checkpoint_resolution>
User response to checkpoint: {user's response}
Resume at: Task {N+1} (or re-execute checkpoint task with user's answer)
</checkpoint_resolution>
<project_context>
{Same lean context as original spawn — config key-values + file paths, not inlined bodies}
</project_context>
Continue execution from the checkpoint. Skip completed tasks. Process the checkpoint resolution, then continue with remaining tasks. Write SUMMARY.md when done.
After each wave completes (all plans in the wave are done, skipped, or aborted):
SUMMARY gate — verify before updating STATE.md:
Before writing any STATE.md update, verify these three gates for every plan in the wave:
- Each SUMMARY has valid frontmatter (--- delimiters and a status: field)

Block the STATE.md update until ALL gates pass. If any gate fails:
Once gates pass, update .planning/STATE.md:
Tooling shortcut: Use the CLI for atomic STATE.md updates instead of manual read-modify-write:
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js state update plans_complete {N}
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js state update status building
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js state update last_activity now
STATE.md size limit: Follow the size limit enforcement rules in skills/shared/state-update.md (150 lines max — collapse completed phases, remove duplicated decisions, trim old sessions).
Event-driven auto-verify signal: Check if .planning/.auto-verify exists (written by event-handler.js SubagentStop hook). If the signal file exists, read it and delete it (one-shot). The signal confirms that auto-verification was triggered — proceed with verification even if the build just finished.
Skip if:
- features.goal_verification: false
- Depth is quick AND the total task count across all plans in this phase is fewer than 3

To check: run node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js config resolve-depth and read profile["features.goal_verification"]. For the task-count check in quick mode, sum the task counts from all PLAN.md frontmatter must_haves (already available from Step 3 plan discovery).
This implements budget mode's "skip verifier for < 3 tasks" rule: small phases in quick mode don't need a full verification pass.
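The skip rule can be sketched as a predicate. The profile key matches the text above; the function name and argument shape are assumptions:

```javascript
// Sketch: decide whether to spawn the phase verifier.
function shouldVerify(profile, depth, totalTasks) {
  if (profile["features.goal_verification"] === false) return false;
  if (depth === "quick" && totalTasks < 3) return false; // budget-mode rule
  return true;
}

console.log(shouldVerify({ "features.goal_verification": true }, "quick", 2)); // → false
console.log(shouldVerify({ "features.goal_verification": true }, "quick", 5)); // → true
```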
If skipping because features.goal_verification is false:
Note for Step 8f completion summary: append "Note: Automatic verification was skipped (goal_verification: false). Run /dev:review {N} to verify what was built."
If verification is enabled:
Display to the user: ◐ Spawning verifier...
Spawn a verifier Task():
Task({
subagent_type: "dev:towline-verifier",
prompt: <verifier prompt>
})
NOTE: The dev:towline-verifier subagent type auto-loads the agent definition. Do NOT inline it.
You are the towline-verifier agent. Verify that phase {N} meets its goals.
<verification_approach>
For each must-have from the phase's plans, perform a three-layer check:
Layer 1 — Existence: Does the artifact exist? (ls, grep for exports)
Layer 2 — Substantiveness: Is it more than a stub? (wc -l, grep for implementation)
Layer 3 — Wiring: Is it connected to the rest of the system? (grep for imports/usage)
See references/verification-patterns.md for detailed patterns.
</verification_approach>
<phase_plans>
[For each PLAN.md in the phase: inline the must_haves section from frontmatter]
</phase_plans>
<build_results>
Build summaries for verification (read full content via Read tool):
| Plan | Summary File | Status |
|------|-------------|--------|
{For each SUMMARY.md in the phase:}
| {plan_id} | {absolute path to SUMMARY.md} | {status from frontmatter} |
Read each SUMMARY file to check what was actually built against the must-haves.
</build_results>
<instructions>
1. For each must-have truth: run existence, substantiveness, and wiring checks
2. For each must-have artifact: verify the file exists and has real content
3. For each must-have key_link: verify the connection is made
Write your verification report to .planning/phases/{NN}-{slug}/VERIFICATION.md
Format:
---
status: "passed" | "gaps_found" | "human_needed"
phase: "{NN}-{slug}"
checked_at: "{date}"
must_haves_checked: {count}
must_haves_passed: {count}
must_haves_failed: {count}
---
# Phase Verification: {phase name}
## Results
| Must-Have | Layer 1 | Layer 2 | Layer 3 | Status |
|-----------|---------|---------|---------|--------|
| {truth} | PASS | PASS | PASS | PASSED |
| {truth} | PASS | FAIL | — | GAP |
## Gaps Found
{For each gap: what's missing, which layer failed, suggested fix}
## Passed
{For each pass: what was verified, how}
</instructions>
Use the Write tool to create VERIFICATION.md. Use Bash to run verification commands.
After all waves complete and optional verification runs:
8-pre. Re-verify after gap closure (conditional):
If --gaps-only flag was used AND features.goal_verification is true:
- Delete the stale VERIFICATION.md (it reflects pre-gap-closure state)
- Re-run the verifier to produce a fresh VERIFICATION.md that accounts for the gap-closure work
- Use the new result when determining final_status below

This ensures that /dev:review after a --gaps-only build sees the updated verification state, not stale gaps from before the fix.
8-pre-b. Determine final status based on verification:
- passed: final_status = "built"
- gaps_found: final_status = "built*" (built with unverified gaps)

8-pre-c. Codebase map incremental update (conditional):
Only run if ALL of these are true:
- .planning/codebase/ directory exists (project was previously scanned with /dev:scan)
- git diff --name-only {pre_build_commit}..HEAD shows >5 files changed OR package.json/requirements.txt/go.mod/Cargo.toml was modified

If triggered:
Record the pre-build commit SHA at the start of Step 1 (before any executors run) for comparison
Run git diff --name-only {pre_build_commit}..HEAD to get the list of changed files
Display to the user: ◐ Spawning codebase mapper (incremental update)...
Spawn a lightweight mapper Task():
Task({
subagent_type: "dev:towline-codebase-mapper",
model: "haiku",
prompt: "Incremental codebase map update. These files changed during the Phase {N} build:\n{diff file list}\n\nRead the existing .planning/codebase/ documents. Update ONLY the sections affected by these changes. Do NOT rewrite entire documents — make targeted updates. If a new dependency was added, update STACK.md. If new directories/modules were created, update STRUCTURE.md. If new patterns were introduced, update CONVENTIONS.md. Write updated files to .planning/codebase/."
})
Do NOT block on this — use run_in_background: true and continue to Step 8a. Report completion in Step 8f if it finishes in time.
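The trigger condition above can be sketched as a predicate over the git diff output; the function shape and manifest list variable are illustrative:

```javascript
// Sketch: decide whether an incremental codebase-map update is warranted.
// `changedFiles` is the output of `git diff --name-only {pre_build_commit}..HEAD`,
// split into one path per entry.
const MANIFESTS = ["package.json", "requirements.txt", "go.mod", "Cargo.toml"];

function shouldUpdateMap(codebaseDirExists, changedFiles) {
  if (!codebaseDirExists) return false;          // project was never scanned
  if (changedFiles.length > 5) return true;      // broad change surface
  // A touched dependency manifest triggers an update regardless of count.
  return changedFiles.some((f) => MANIFESTS.includes(f.split("/").pop()));
}

console.log(shouldUpdateMap(true, ["src/a.ts", "package.json"])); // → true
console.log(shouldUpdateMap(true, ["src/a.ts"])); // → false
```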
8a. Update ROADMAP.md Progress table (REQUIRED — do this BEFORE updating STATE.md):
Tooling shortcut: Use the CLI for atomic ROADMAP.md table updates instead of manual editing:
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js roadmap update-plans {phase} {completed} {total}
node ${CLAUDE_PLUGIN_ROOT}/scripts/towline-tools.js roadmap update-status {phase} {final_status}
These return { success, old_status, new_status } or { success, old_plans, new_plans }. Falls back to manual editing if unavailable.
- Open .planning/ROADMAP.md and find the ## Progress table
- Set the Plans Complete column to {completed}/{total} (e.g., 2/2 if all plans built successfully)
- Set the Status column to the final_status determined in Step 8-pre

8b. Update STATE.md:
8c. Commit planning docs (if configured):
Reference: skills/shared/commit-planning-docs.md for the standard commit pattern.
If planning.commit_docs is true:
- Commit with message: docs({phase}): add build summaries and verification

8d. Handle git branching:
If git.branching_strategy is phase:
- Squash merge: git checkout main && git merge --squash towline/phase-{NN}-{name}
- Use AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Phase {N} complete on branch towline/phase-{NN}-{name}. Squash merge to main?"
header: "Merge?"
options:
8e. Auto-advance / auto-continue (conditional):
If features.auto_advance is true AND mode is autonomous:
Chain to the next skill directly within this session. This eliminates manual phase cycling.
| Build Result | Next Action | How |
|---|---|---|
| Verification passed, more phases | Plan next phase | Skill({ skill: "dev:plan", args: "{N+1}" }) |
| Verification skipped | Run review | Skill({ skill: "dev:review", args: "{N}" }) |
| Verification gaps found | HARD STOP — present gaps to user | Do NOT auto-advance past failures |
| Last phase complete | HARD STOP — milestone boundary | Suggest /dev:milestone audit |
| Build errors occurred | HARD STOP — errors need human review | Do NOT auto-advance past errors |
After invoking the chained skill, it runs within the same session. When it completes, the chained skill may itself chain further (review→plan, plan→build) if auto_advance remains true. This creates the full cycle: build→review→plan→build→...
Else if features.auto_continue is true:
Write .planning/.auto-next containing the next logical command (e.g., /dev:plan {N+1} or /dev:review {N})
8f. Present completion summary:
Use the branded output templates from references/ui-formatting.md. Route based on status:
| Status | Template |
|---|---|
| passed + more phases | "Phase Complete" template |
| passed + last phase | "Milestone Complete" template |
| gaps_found | "Gaps Found" template |
Before the branded banner, include the results table:
Results:
| Plan | Status | Tasks | Commits |
|------|--------|-------|---------|
| {id} | complete | 3/3 | 3 |
| {id} | complete | 2/2 | 2 |
{If verification ran:}
Verification: {PASSED | GAPS_FOUND}
{count} must-haves checked, {count} passed, {count} gaps
Total commits: {count}
Total files created: {count}
Total files modified: {count}
Deviations: {count}
Then present the appropriate branded banner:
If passed + more phases:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► PHASE {N} COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Phase {N}: {Name}**
{X} plans executed
Goal verified ✓
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Phase {N+1}: {Name}** — {Goal from ROADMAP.md}
`/dev:plan {N+1}`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `/dev:review {N}` — manual acceptance testing before continuing
- `/dev:status` — see full project status
───────────────────────────────────────────────────────────────
If passed + last phase:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► MILESTONE COMPLETE 🎉
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
{N} phases completed
All phase goals verified ✓
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Audit milestone** — verify requirements, cross-phase integration, E2E flows
`/dev:milestone audit`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `/dev:review` — manual acceptance testing
- `/dev:milestone complete` — skip audit, archive directly
───────────────────────────────────────────────────────────────
If gaps_found:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TOWLINE ► PHASE {N} GAPS FOUND ⚠
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**Phase {N}: {Name}**
Score: {X}/{Y} must-haves verified
Report: .planning/phases/{phase_dir}/VERIFICATION.md
### What's Missing
{Extract gap summaries from VERIFICATION.md}
───────────────────────────────────────────────────────────────
## ▶ Next Up
**Plan gap closure** — create additional plans to complete the phase
`/dev:plan {N} --gaps`
<sub>`/clear` first → fresh context window</sub>
───────────────────────────────────────────────────────────────
**Also available:**
- `cat .planning/phases/{phase_dir}/VERIFICATION.md` — see full report
- `/dev:review {N}` — manual testing before planning
───────────────────────────────────────────────────────────────
8g. Display USER-SETUP.md (conditional):
Check if .planning/phases/{NN}-{slug}/USER-SETUP.md exists. If it does:
Setup Required:
This phase introduced external setup requirements. See the details below
or read .planning/phases/{NN}-{slug}/USER-SETUP.md directly.
{Read and display the USER-SETUP.md content — it's typically short}
This ensures the user sees setup requirements prominently instead of buried in SUMMARY files.
If a Task() doesn't return within a reasonable time, display:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Executor agent timed out for Plan {id}.
**To fix:** Check `.planning/phases/{NN}-{slug}/` for partial SUMMARY.md, then retry or skip.
Treat as partial status. Present to user: retry or skip.
For commit conventions and git workflow details, see references/git-integration.md.
If multiple parallel executors create git lock conflicts:
⚠ Git lock conflicts detected with parallel execution. Consider reducing max_concurrent_agents to 1.

If SUMMARY.md shows files not listed in the plan's files_modified:
If git.branching_strategy is phase but we're not on the phase branch:
- Run git checkout -b towline/phase-{NN}-{name}

| File | Purpose | When |
|---|---|---|
| .planning/phases/{NN}-{slug}/.checkpoint-manifest.json | Execution progress for crash recovery | Step 5b, updated each wave |
| .planning/phases/{NN}-{slug}/SUMMARY-{plan_id}.md | Per-plan build summary | Step 6 (each executor) |
| .planning/phases/{NN}-{slug}/USER-SETUP.md | External setup requirements | Step 6 (executor, if needed) |
| .planning/phases/{NN}-{slug}/VERIFICATION.md | Phase verification report | Step 7 |
| .planning/codebase/*.md | Incremental codebase map updates | Step 8-pre-c (if codebase/ exists) |
| .planning/ROADMAP.md | Plans Complete + Status → built or partial | Step 8a |
| .planning/STATE.md | Updated progress | Steps 6f, 8b |
| .planning/.auto-next | Next command signal (if auto_continue enabled) | Step 8e |
| Project source files | Actual code | Step 6 (executors) |