Orchestrates parallel execution of phase build plans via subagents, manages task dependencies by waves, commits atomically. For PBR workflow after planning.
From pbr: `npx claudepluginhub sienklogic/plan-build-run --plugin pbr`

This skill is limited to using the following tools:

Templates:
- templates/continuation-prompt.md.tmpl
- templates/executor-prompt.md.tmpl
- templates/inline-verifier-prompt.md.tmpl
STOP — DO NOT READ THIS FILE. You are already reading it. This prompt was injected into your context by Claude Code's plugin system. Using the Read tool on this SKILL.md file wastes ~7,600 tokens. Begin executing Step 1 immediately.
You are the orchestrator for /pbr:build. This skill executes all plans in a phase by spawning executor agents. Plans are grouped by wave and executed in order — independent plans run in parallel, dependent plans wait. Your job is to stay lean, delegate ALL building work to Task() subagents, and keep the user's main context window clean.
Reference: skills/shared/context-budget.md for the universal orchestrator rules.
Additionally for this skill:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js skill-section build "step-6" → returns that step's content as JSON. Use this when context budget is DEGRADING.

Before ANY tool calls, display this banner:
╔══════════════════════════════════════════════════════════════╗
║ PLAN-BUILD-RUN ► BUILDING PHASE {N} ║
╚══════════════════════════════════════════════════════════════╝
Where {N} is the phase number from $ARGUMENTS. Then proceed to Step 1.
Prerequisites:
- `.planning/config.json` exists
- `.planning/phases/{NN}-{slug}/` contains PLAN.md files

Parse $ARGUMENTS according to skills/shared/phase-argument-parsing.md.
| Argument | Meaning |
|---|---|
| `3` | Build phase 3 |
| `3 --gaps-only` | Build only gap-closure plans in phase 3 |
| `3 --team` | Use Agent Teams for complex inter-agent coordination |
| `3 --model opus` | Use opus for all executor spawns in phase 3 (overrides config and adaptive selection) |
| `3 --preview` | Preview what build would do for phase 3 without executing |
| (no number) | Use current phase from STATE.md |
If --preview is present in $ARGUMENTS:
Extract the phase slug from $ARGUMENTS (use the phase number to look up the slug, or pass the number directly — the CLI accepts partial slug matches).
Run:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js build-preview {phase-slug}
Capture the JSON output.
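If helpful, the captured preview JSON can be sanity-checked before rendering. A minimal sketch, assuming `jq` is installed; the field names mirror the preview template, and the sample JSON is fabricated for illustration:

```shell
# Fabricated example of a build-preview result; real output comes from pbr-tools.js.
preview='{"phase":"03-api","plans":[{"id":"03-01","wave":1,"task_count":3}],"files_affected":["src/api.ts"],"agent_count":1,"critical_path":["03-01"]}'
# Pull the headline number used in the "Estimated Agent Spawns" section.
agent_count=$(echo "$preview" | jq '.agent_count')
echo "Estimated agent spawns: $agent_count"
```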
Render the following preview document (do NOT proceed to Step 2):
╔══════════════════════════════════════════════════════════════╗
║ DRY RUN — /pbr:build {N} --preview ║
║ No executor agents will be spawned ║
╚══════════════════════════════════════════════════════════════╝
PHASE: {phase}
## Plans
{for each plan: - {id} (wave {wave}, {task_count} tasks)}
## Wave Structure
{for each wave: Wave {wave}: {plan IDs} [parallel | sequential]}
## Files That Would Be Modified
{for each file in files_affected: - {file}}
(Total: {count} files)
## Estimated Agent Spawns
{agent_count} executor task(s)
## Critical Path
{critical_path joined with " → "}
## Dependency Chain
{for each entry in dependency_chain: - {id} (wave {wave}) depends on: {depends_on or "none"}}
STOP — do not proceed to Step 2.
Execute these steps in order.
Reference: skills/shared/config-loading.md for the tooling shortcut and config field reference.
Read:
- `$ARGUMENTS` for phase number and flags
- `.planning/config.json` for parallelization, model, and gate settings (see config-loading.md for field reference)
- `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js config resolve-depth` to get the effective feature/gate settings for the current depth. Store the result for use in later gating decisions.

Write `.planning/.active-skill` with the content `build` (registers with the workflow enforcement hook).

Key paths: `.planning/phases/{NN}-{slug}/`, `.planning/STATE.md`
- `config.models.complexity_map` — adaptive model mapping (default: `{ simple: "haiku", medium: "sonnet", complex: "inherit" }`)

If `gates.confirm_execute` is true: use AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Ready to build Phase {N}? This will execute {count} plans."
header: "Build?"
options:
- Yes — proceed with the build
- No — stop; suggest `/pbr:plan {N}` to review plans

If `git.branching` is `phase`: create and switch to branch `plan-build-run/phase-{NN}-{name}` before any build work begins.

Run `git rev-parse HEAD` — store as `pre_build_commit` for use in Step 8-pre-c (codebase map update).

Staleness check (dependency fingerprints): After validating prerequisites, check plan staleness:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js staleness-check {phase-slug}
Returns { stale: bool, plans: [{id, stale, reason}] }. If stale: true for any plan:
Use AskUserQuestion (pattern from skills/shared/gate-prompts.md):
question: "Plan {plan_id} may be stale — {reason}"
options: ["Continue anyway", "Re-plan with /pbr:plan {N}"]

If `stale: false`: proceed silently.

Validation errors — use branded error boxes:
If no plans found:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Phase {N} has no plans.
**To fix:** Run `/pbr:plan {N}` first.
If dependencies incomplete, use conversational recovery:
Run `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js suggest-alternatives missing-prereq {dependency-phase-slug}`. The result includes `existing_summaries`, `missing_summaries`, and `suggested_action`. Use AskUserQuestion (pattern from skills/shared/gate-prompts.md) to offer:
- `/pbr:build {dependency-phase}`

If config validation fails for a specific field, use conversational recovery: run `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js suggest-alternatives config-invalid {field} {value}`. The result includes `valid_values` and `suggested_fix`. Follow the `suggested_fix` instruction.

Init-first pattern: When spawning agents, pass the output of `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js init execute-phase {N}` as context rather than having the agent read multiple files separately. This reduces file reads and prevents context-loading failures.
Read configuration values needed for execution. See skills/shared/config-loading.md for the full field reference; build uses: parallelization.*, features.goal_verification, features.inline_verify, features.atomic_commits, features.auto_continue, features.auto_advance, planning.commit_docs, git.commit_format, git.branching.
Tooling shortcut: Instead of manually parsing each PLAN.md frontmatter, run:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js plan-index <phase>
This returns a JSON object with plans (array with plan_id, wave, depends_on, autonomous, must_haves_count per plan) and waves (grouped by wave). Falls back to manual parsing if unavailable.
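As a sketch of how the wave grouping falls out of that JSON (the exact shape is assumed from the field names above; requires `jq`):

```shell
# Fabricated plan-index result; real output comes from pbr-tools.js plan-index.
index='{"plans":[{"plan_id":"02-03","wave":2,"depends_on":["02-01"]},{"plan_id":"02-01","wave":1,"depends_on":[]},{"plan_id":"02-02","wave":1,"depends_on":[]}]}'
# group_by sorts by wave, so Wave 1 is always emitted first.
waves=$(echo "$index" | jq -r '.plans | group_by(.wave)[] | "Wave \(.[0].wave): \([.[].plan_id] | join(", "))"')
echo "$waves"
```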
Read `.planning/phases/{NN}-{slug}/*-PLAN.md`.

If the `--gaps-only` flag is present: filter to only plans with `gap_closure: true` in frontmatter.

If no plans match filters:
- With `--gaps-only`: "No gap-closure plans found. Run `/pbr:plan {N} --gaps` first."

Check for existing SUMMARY.md files from previous runs (crash recovery). Scan for `SUMMARY-*.md` files in the phase directory and route by status:
- `completed`: Skip this plan (already done)
- `partial`: Present to user — retry or skip?
- `failed`: Present to user — retry or skip?
- `checkpoint`: Resume from checkpoint (see Step 6e)

If all plans already have completed SUMMARYs:
Use AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Phase {N} has already been built. All plans have completed SUMMARYs. Re-build from scratch?"
header: "Re-build?"
options:
- label: "Yes" description: "Delete existing SUMMARYs and re-execute all plans"
- label: "No" description: "Keep existing build — review instead"
If the user selects No: suggest `/pbr:review {N}`.

Group plans by wave number from their frontmatter. See references/wave-execution.md for the full wave execution model (parallelization, git lock handling, checkpoint manifests).
Validate wave consistency:
- `depends_on: []`

CRITICAL (hook-enforced): Initialize checkpoint manifest NOW before entering the wave loop.
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js checkpoint init {phase-slug} --plans "{comma-separated plan IDs}"
After each wave completes, update the manifest:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js checkpoint update {phase-slug} --wave {N} --resolved {plan-id} --sha {commit-sha}
This tracks execution for crash recovery and rollback. Read .checkpoint-manifest.json on resume to reconstruct which plans are complete.
Crash recovery check: Before entering the wave loop, check if .checkpoint-manifest.json already exists with completed plans from a prior run. If it does, reconstruct the skip list from its checkpoints_resolved array. This handles the case where the orchestrator's context was compacted or the session was interrupted mid-build.
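A minimal sketch of reconstructing the skip list on resume; the manifest field names come from this skill, but the entry shape and sample values are assumptions:

```shell
# Fabricated .checkpoint-manifest.json content for illustration.
manifest='{"phase":"02-authentication","checkpoints_resolved":[{"plan":"02-01","sha":"abc123"}],"deferred":[]}'
# Plans already resolved in a prior run should be skipped, not re-executed.
skip_list=$(echo "$manifest" | jq -r '.checkpoints_resolved[].plan')
echo "already complete: $skip_list"
```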
Orphaned progress file check: Also scan the phase directory for .PROGRESS-* files. These indicate an executor that crashed mid-task. For each orphaned progress file:
- Read its `plan_id`, `last_completed_task`, and `total_tasks`
- If the plan is not in `checkpoints_resolved` (not yet complete), inform the user:
Detected interrupted execution for plan {plan_id}: {last_completed_task}/{total_tasks} tasks completed.
- If the plan IS in `checkpoints_resolved`, the progress file is stale — delete it.

For each wave, in order (Wave 1, then Wave 2, etc.):
For each plan in the current wave (excluding skipped plans):
Local LLM plan quality check (optional, advisory):
Before spawning executors for this wave, if config.local_llm.enabled is true, run a quick classification on each plan to catch stubs before wasting an executor spawn:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js llm classify PLAN ".planning/phases/{NN}-{slug}/{plan_id}-PLAN.md"
If the classification is "stub" or "partial" with confidence >= 0.7: warn the user before spawning: "⚠ Plan {plan_id} classified as {classification} (confidence {conf}) — consider refining before building."

Present plan narrative before spawning:
Display to the user before spawning:
◐ Spawning {N} executor(s) for Wave {W}...
Then present a brief narrative for each plan to give the user context on what's about to happen:
Wave {W} — {N} plan(s):
Plan {id}: {plan name}
{2-3 sentence description: what this plan builds, the technical approach, and why it matters.
Derive this from the plan's must_haves and first task's <action> summary.}
Plan {id}: {plan name}
{2-3 sentence description}
This is a read-only presentation step — extract descriptions from plan frontmatter must_haves.truths and the plan's task names. Do not read full task bodies for this; keep it lightweight.
State fragment rule: Executors MUST NOT modify STATE.md directly. The build skill orchestrator is the sole STATE.md writer during execution. Executors report results via SUMMARY.md only; the orchestrator reads those summaries and updates STATE.md itself.
Model Selection (Adaptive):
Before spawning the executor for each plan, determine the model:
0. If --model <value> was parsed from $ARGUMENTS (valid values: sonnet, opus, haiku, inherit), use that model for ALL executor Task() spawns in this run. Skip steps 1-4. The --model flag is the highest precedence override.
1. Read the plan's `complexity` and `model` attributes from frontmatter.
2. If tasks declare a `model` attribute, use the most capable model among them (inherit > sonnet > haiku).
3. Otherwise map `config.models.complexity_map.{complexity}` (defaults: simple->haiku, medium->sonnet, complex->inherit).
4. If `config.models.executor` is set (non-null), it overrides adaptive selection entirely — use that model for all executors.

If `--model <value>` is present in $ARGUMENTS, extract the value. Valid values: sonnet, opus, haiku, inherit. If an invalid value is provided, display an error and list valid values. Store as `override_model`.
Reference: references/model-selection.md for full details.
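The precedence chain (`--model` override, then `config.models.executor`, then the complexity map) can be sketched as a shell function. This is illustrative only; the orchestrator applies these rules in-context, and the task-level `model` attribute step is omitted here:

```shell
# Resolve the executor model per the precedence rules above (sketch).
pick_model() {
  local override="$1" executor_cfg="$2" complexity="$3"
  if [ -n "$override" ]; then echo "$override"; return; fi         # --model flag wins
  if [ -n "$executor_cfg" ]; then echo "$executor_cfg"; return; fi # config.models.executor override
  case "$complexity" in                                            # complexity_map defaults
    simple) echo "haiku" ;;
    medium) echo "sonnet" ;;
    *) echo "inherit" ;;
  esac
}
pick_model "" "" "simple"       # complexity map selects haiku
pick_model "opus" "" "complex"  # --model override selects opus
```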
Inline the `## Summary` section from the PLAN.md (everything after the `## Summary` heading to end of file). If no `## Summary` section exists (legacy plans), fall back to reading the full PLAN.md content. Note: The orchestrator reads the full PLAN.md once for narrative extraction AND summary extraction; only the `## Summary` portion is inlined into the executor prompt. The full PLAN.md stays on disk for the executor to Read.

Also provide as context:
- `.planning/CONTEXT.md` (if exists)
- `.planning/STATE.md`
- `.planning/config.json`

Construct the executor prompt by reading `${CLAUDE_SKILL_DIR}/templates/executor-prompt.md.tmpl` and filling in all `{placeholder}` values:
- `{NN}-{slug}` — phase directory (e.g., 02-authentication)
- `{plan_id}` — plan being executed (e.g., 02-01)
- `{commit_format}`, `{tdd_mode}`, `{atomic_commits}` — from loaded config
- `{prior_work table rows}` — one row per completed plan in this phase

Use the filled template as the Task() prompt.
Spawn strategy based on config:
If parallelization.enabled: true AND multiple plans in this wave:
- Launch up to `max_concurrent_agents` Task() calls in parallel
- Set `run_in_background: true` for each executor
- Poll TaskOutput with `block: false` and report status

If `parallelization.enabled: false` OR single plan in wave:
Task({
subagent_type: "pbr:executor",
prompt: <executor prompt constructed above>
})
NOTE: The pbr:executor subagent type auto-loads the agent definition. Do NOT inline it.
After executor completes, check for completion markers: `## PLAN COMPLETE`, `## PLAN FAILED`, or `## CHECKPOINT: {TYPE}`. Route accordingly — PLAN COMPLETE proceeds to the next plan, PLAN FAILED triggers failure handling, CHECKPOINT triggers the checkpoint flow.
Path resolution: Before constructing the agent prompt, resolve ${CLAUDE_PLUGIN_ROOT} to its absolute path. Do not pass the variable literally in prompts — Task() contexts may not expand it. Use the resolved absolute path for any pbr-tools.js or template references included in the prompt.
Block until all executor Task() calls for this wave complete.
For each completed executor:
- SUMMARY.md status: completed | partial | checkpoint | failed
- Display: ✓ Plan {id} complete — {brief summary from SUMMARY.md frontmatter description or first key_file}

Extract the brief summary from the SUMMARY.md frontmatter (use the `description` field if present, otherwise use the first entry from `key_files`).

commit_log: For each completed plan, append `{ plan: "{plan_id}", sha: "{commit_hash}", timestamp: "{ISO date}" }` to the commit_log array. Update `last_good_commit` to the last commit SHA from this wave.

Spot-check executor claims:
CRITICAL (no hook): Before reading results or advancing to the next wave, run the spot-check CLI for each completed plan.
For each completed plan in this wave:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js spot-check {phaseSlug} {planId}
Where {phaseSlug} is the phase directory name (e.g., 49-build-workflow-hardening) and {planId} is the plan identifier (e.g., 49-01).
The command returns JSON: { ok, summary_exists, key_files_checked, commits_present, detail }
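A minimal sketch of consuming that result, assuming `jq` is available (the sample JSON is fabricated):

```shell
# Fabricated spot-check output; real output comes from pbr-tools.js spot-check.
result='{"ok":false,"summary_exists":true,"key_files_checked":3,"commits_present":false,"detail":"no commits found for plan 49-01"}'
ok=$(echo "$result" | jq -r '.ok')
if [ "$ok" != "true" ]; then
  # Any failed spot-check blocks wave advancement.
  echo "Spot-check FAILED: $(echo "$result" | jq -r '.detail')"
fi
```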
If ok is false for ANY plan: STOP. Do NOT advance to the next wave. Present the user with:
Spot-check FAILED for plan {planId}: {detail}
Choose an action:
Retry — Re-spawn executor for this plan
Continue — Skip this plan and proceed to next wave (may leave phase incomplete)
Abort — Stop the build entirely
Use AskUserQuestion with the three options. Route:
If ok is true for all plans:
- `self_check_failures`: if present, warn the user: "Plan {id} reported self-check failures: {list}. Inspect before continuing?"
- `## Self-Check: FAILED` marker — if present, warn before next wave
- Uncommitted work (check `git status` for uncommitted changes)

Read executor deviations: After all executors in the wave complete, read all SUMMARY frontmatter and collect `deferred` items into a running list (append to the `.checkpoint-manifest.json` `deferred` array).

Build a wave results table:
Wave {W} Results:
| Plan | Status | Tasks | Commits | Deviations |
|------|--------|-------|---------|------------|
| {id} | complete | 3/3 | abc, def, ghi | 0 |
| {id} | complete | 2/2 | jkl, mno | 1 |
Skip if the depth profile has features.inline_verify: false.
To check: use the resolved depth profile. Only comprehensive mode enables inline verification by default. When inline verification is enabled, each completed plan gets a targeted verification pass before the orchestrator proceeds to the next wave — catching issues early before dependent plans build on a broken foundation.
For each plan that completed successfully in this wave:
Read the plan's SUMMARY.md to get key_files (the files this plan created/modified)
Display to the user: ◐ Spawning inline verifier for plan {plan_id}...
Spawn Task({ subagent_type: "pbr:verifier", model: "haiku", prompt: ... }). Read ${CLAUDE_SKILL_DIR}/templates/inline-verifier-prompt.md.tmpl and fill in {NN}-{slug}, {plan_id}, and {comma-separated key_files list} (key_files from PLAN.md frontmatter). Use the filled template as the prompt value.
If verifier reports FAIL for any file:
If verifier reports all PASS: continue to next wave
Note: This adds latency (~10-20s per plan for the haiku verifier). It's opt-in via features.inline_verify: true for projects where early detection outweighs speed.
If any executor returned failed or partial:
Handoff bug check (false-failure detection):
Before presenting failure options, check whether the executor actually completed its work despite reporting failure (known Claude Code platform bug where handoff reports failure but work is done):
a. Read the `status` field from the plan's SUMMARY.md frontmatter
b. If `status: complete` AND frontmatter has `commits` entries: the work likely landed despite the failure report — spot-check it, and if it passes, treat the plan as complete
c. If `status: partial` or spot-checks fail: proceed with normal failure handling below

Present failure details to the user:
Plan {id} {status}:
Task {N}: {name} - FAILED
Error: {verify output or error message}
Deviations attempted: {count}
Last verify output: {output}
Use AskUserQuestion (pattern: multi-option-failure from skills/shared/gate-prompts.md):
question: "Plan {id} failed at task {N} ({name}). How should we proceed?"
header: "Failed"
options:
- label: "Retry" description: "Re-spawn the executor for this plan"
- label: "Skip" description: "Mark as skipped, continue to next wave"
- label: "Rollback" description: "Undo commits from this plan, revert to last good state"
- label: "Abort" description: "Stop the entire build"
If user selects 'Retry':
If user selects 'Skip':
If user selects 'Rollback': Run the rollback CLI:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js rollback .planning/phases/{NN}-{slug}/.checkpoint-manifest.json
Returns { ok, rolled_back_to, plans_invalidated, files_deleted, warnings }.
If `ok` is true: display "Rolled back to commit {rolled_back_to}. {plans_invalidated.length} downstream plans invalidated." Show any warnings. Continue to next wave or stop based on user preference.

If `ok` is false: display the error message. Suggest "Use abort instead."

If user selects 'Abort': stop and display "Run `/pbr:build {N}` to resume (completed plans will be skipped)."

If any executor returned checkpoint:
Checkpoint in Plan {id}, Task {N}: {checkpoint type}
{checkpoint details — what was built, what is needed}
{For decision type: present options}
{For human-action type: present steps}
{For human-verify type: present what to verify}
Reference: references/continuation-format.md for the continuation protocol.
Read ${CLAUDE_SKILL_DIR}/templates/continuation-prompt.md.tmpl and fill in:
- `{NN}-{slug}`, `{plan_id}` — current phase and plan
- `{plan_summary}` — the `## Summary` section from PLAN.md
- `{task table rows}` — one row per task with completion status
- `{user's response}` — the checkpoint resolution from Step 3
- `{project context key-values}` — config values + file paths

Use the filled template as the Task() prompt.
If config.ci.gate_enabled is true AND config.git.branching is not none:
1. Run `git push`
2. Get the run ID: `gh run list --branch $(git branch --show-current) --limit 1 --json databaseId -q '.[0].databaseId'`
3. Poll it:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js ci-poll <run-id> [--timeout <seconds>]
Returns `{ status, conclusion, url, next_action, elapsed_seconds }`.
- If `next_action` is "continue": proceed to next wave
- If `next_action` is "wait": re-run ci-poll after 15 seconds (repeat up to `config.ci.wait_timeout_seconds`)
- If `next_action` is "abort" or `status` is "failed": show a warning box and use AskUserQuestion: Wait / Continue anyway / Abort. If the user continues anyway, record `DEVIATION: CI gate bypassed for wave {N}`.

After each wave completes (all plans in the wave are done, skipped, or aborted):
SUMMARY gate — verify before updating STATE.md: For every plan in the wave, run:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js summary-gate {phase-slug} {plan-id}
Returns { ok: bool, gate: string, detail: string }. Block STATE.md update until ALL plans return ok: true. If any fail, warn: "SUMMARY gate failed for plan {id}: {gate} — {detail}. Cannot update STATE.md."
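The all-plans gate can be sketched like this; the per-plan results are gathered into one JSON array here purely for illustration, and the sample values are fabricated:

```shell
# Two fabricated summary-gate results, one passing and one failing.
results='[{"ok":true,"gate":"summary_exists","detail":""},{"ok":false,"gate":"key_files","detail":"src/auth.ts missing"}]'
# Count plans whose gate did not pass; any failure blocks the STATE.md update.
failed=$(echo "$results" | jq '[.[] | select(.ok | not)] | length')
if [ "$failed" -gt 0 ]; then
  echo "SUMMARY gate failed for $failed plan(s): blocking STATE.md update"
fi
```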
Once gates pass, update .planning/STATE.md:
Tooling shortcut: Use the CLI for atomic STATE.md updates instead of manual read-modify-write:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js state update plans_complete {N}
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js state update status building
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js state update last_activity now
STATE.md size limit: Follow the size limit enforcement rules in skills/shared/state-update.md (150 lines max — collapse completed phases, remove duplicated decisions, trim old sessions).
Completion check: Before proceeding to next wave, confirm ALL of:
- `plans_complete` updated
- `last_activity` timestamp refreshed

To verify programmatically: `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js step-verify build step-6f '["STATE.md updated","SUMMARY.md exists","commit made"]'`
If any item fails, investigate before proceeding to Step 7.
Event-driven auto-verify signal: Check if .planning/.auto-verify exists (written by event-handler.js SubagentStop hook). If the signal file exists, read it and delete it (one-shot). The signal confirms that auto-verification was triggered — proceed with verification even if the build just finished.
Skip if:
- `features.goal_verification: false`, or
- depth is `quick` AND the total task count across all plans in this phase is fewer than 3

To check: run `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js config resolve-depth` and read `profile["features.goal_verification"]`. For the task-count check in quick mode, sum the task counts from all PLAN.md frontmatter must_haves (already available from Step 3 plan discovery).
This implements budget mode's "skip verifier for < 3 tasks" rule: small phases in quick mode don't need a full verification pass.
If skipping because features.goal_verification is false:
Note for Step 8f completion summary: append "Note: Automatic verification was skipped (goal_verification: false). Run /pbr:review {N} to verify what was built."
If verification is enabled:
Display to the user: ◐ Spawning verifier...
Spawn a verifier Task():
Task({
subagent_type: "pbr:verifier",
prompt: <verifier prompt>
})
NOTE: The pbr:verifier subagent type auto-loads the agent definition. Do NOT inline it.
After verifier completes, check for completion marker: `## VERIFICATION COMPLETE`. Read VERIFICATION.md frontmatter for status.
Path resolution: Before constructing the agent prompt, resolve ${CLAUDE_PLUGIN_ROOT} to its absolute path. Do not pass the variable literally in prompts — Task() contexts may not expand it. Use the resolved absolute path for any pbr-tools.js or template references included in the prompt.
Use the same verifier prompt template as defined in /pbr:review: read ${CLAUDE_PLUGIN_ROOT}/skills/review/templates/verifier-prompt.md.tmpl and fill in its placeholders with the phase's PLAN.md must_haves and SUMMARY.md file paths. This avoids maintaining duplicate verifier prompts across skills.
Prepend this block to the verifier prompt before sending:
<files_to_read>
CRITICAL (no hook): Read these files BEFORE any other action:
1. .planning/phases/{NN}-{slug}/PLAN-*.md — must-haves to verify against
2. .planning/phases/{NN}-{slug}/SUMMARY-*.md — executor build summaries
3. .planning/phases/{NN}-{slug}/VERIFICATION.md — prior verification results (if exists)
</files_to_read>
After the verifier returns, read the VERIFICATION.md frontmatter and display the results:
- `passed`: display ✓ Verifier: {X}/{Y} must-haves verified (where X = must_haves_passed and Y = must_haves_checked)
- `gaps_found`: display ⚠ Verifier found {N} gap(s) — see VERIFICATION.md (where N = must_haves_failed)

After all waves complete and optional verification runs:
8-pre. Re-verify after gap closure (conditional):
If --gaps-only flag was used AND features.goal_verification is true:
- Delete the old VERIFICATION.md (it reflects pre-gap-closure state)
- Re-spawn the verifier to produce a fresh VERIFICATION.md that accounts for the gap-closure work
- Use the new result for `final_status` below

This ensures that /pbr:review after a --gaps-only build sees the updated verification state, not stale gaps from before the fix.
8-pre-b. Determine final status based on verification:
- `passed`: final_status = "built"
- `gaps_found`: final_status = "built*" (built with unverified gaps)

8-pre-c. Codebase map incremental update (conditional):
CRITICAL (no hook): Run codebase map update if conditions are met. Do NOT skip this step.
Only run if ALL of these are true:
- `.planning/codebase/` directory exists (project was previously scanned with /pbr:scan)
- `git diff --name-only {pre_build_commit}..HEAD` shows >5 files changed OR package.json/requirements.txt/go.mod/Cargo.toml was modified

If triggered:
Record the pre-build commit SHA at the start of Step 1 (before any executors run) for comparison
Run git diff --name-only {pre_build_commit}..HEAD to get the list of changed files
Display to the user: ◐ Spawning codebase mapper (incremental update)...
Spawn a lightweight mapper Task():
Task({
subagent_type: "pbr:codebase-mapper",
model: "haiku",
prompt: "Incremental codebase map update. These files changed during the Phase {N} build:\n{diff file list}\n\nRead the existing .planning/codebase/ documents. Update ONLY the sections affected by these changes. Do NOT rewrite entire documents — make targeted updates. If a new dependency was added, update STACK.md. If new directories/modules were created, update STRUCTURE.md. If new patterns were introduced, update CONVENTIONS.md. Write updated files to .planning/codebase/."
})
Do NOT block on this — use run_in_background: true and continue to Step 8a. Report completion in Step 8f if it finishes in time.
CRITICAL (no hook): Update ROADMAP.md progress table NOW. Do NOT skip this step. (roadmap-sync warns)
8a. Update ROADMAP.md Progress table (REQUIRED — do this BEFORE updating STATE.md):
Tooling shortcut: Use the CLI for atomic ROADMAP.md table updates instead of manual editing:
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js roadmap update-plans {phase} {completed} {total}
node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js roadmap update-status {phase} {final_status}
These return { success, old_status, new_status } or { success, old_plans, new_plans }. Falls back to manual editing if unavailable.
- Open `.planning/ROADMAP.md` and find the `## Progress` table
- Set the Plans Complete column to {completed}/{total} (e.g., 2/2 if all plans built successfully)
- Set the Status column to the final_status determined in Step 8-pre

CRITICAL (no hook): Update STATE.md NOW with phase completion status. Do NOT skip this step. (state-sync warns)
8b. Update STATE.md (CRITICAL (no hook) — update BOTH frontmatter AND body):
- Frontmatter: `status`, `plans_complete`, `last_activity`, `progress_percent`, `last_command`
- Body `## Current Position`: Phase: line, Plan: line, Status: line, Last activity: line, Progress: bar

Completion check: Before proceeding to 8c, confirm ALL of:
To verify programmatically: node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js step-verify build step-8b '["STATE.md updated","ROADMAP.md updated","commit made"]'
If any item fails, investigate before marking phase complete.
8c. Commit planning docs (if configured):
Reference: skills/shared/commit-planning-docs.md for the standard commit pattern.
If planning.commit_docs is true:
- Commit with message `docs({phase}): add build summaries and verification`

8d. Handle git branching:
If git.branching is phase:
Run `git checkout main && git merge --squash plan-build-run/phase-{NN}-{name}` — gated first by AskUserQuestion (pattern: yes-no from skills/shared/gate-prompts.md):
question: "Phase {N} complete on branch plan-build-run/phase-{NN}-{name}. Squash merge to main?"
header: "Merge?"
options:
8d-ii. PR Creation (when branching enabled):
If config.git.branching is phase or milestone AND phase verification passed:
1. Run `git push -u origin {branch-name}`
2. If `config.git.auto_pr` is true: create the PR with a body of the form:
Goal: {phase goal from ROADMAP.md}
{key_files from SUMMARY.md, bulleted}
{pass/fail status from VERIFICATION.md}
Generated by Plan-Build-Run
3. If `config.git.auto_pr` is `false`:
8e. Auto-advance / auto-continue (conditional):
If features.auto_advance is true AND mode is autonomous:
Chain to the next skill directly within this session. This eliminates manual phase cycling.
| Build Result | Next Action | How |
|---|---|---|
| Verification passed, more phases | Plan next phase | Skill({ skill: "pbr:plan", args: "{N+1}" }) |
| Verification skipped | Run review | Skill({ skill: "pbr:review", args: "{N}" }) |
| Verification gaps found | HARD STOP — present gaps to user | If auto_continue also true: write .planning/.auto-next with /pbr:review {N} before stopping. Do NOT auto-advance past failures. |
| Last phase in current milestone | HARD STOP — milestone boundary | If auto_continue also true: write .planning/.auto-next with /pbr:milestone complete before stopping. Suggest /pbr:milestone audit. Explain: "auto_advance pauses at milestone boundaries — your sign-off is required." |
| Build errors occurred | HARD STOP — errors need human review | If auto_continue also true: write .planning/.auto-next with /pbr:build {N} before stopping. Do NOT auto-advance past errors. |
After invoking the chained skill, it runs within the same session. When it completes, the chained skill may itself chain further (review→plan, plan→build) if auto_advance remains true. This creates the full cycle: build→review→plan→build→...
Else if features.auto_continue is true:
Write .planning/.auto-next containing the next logical command (e.g., /pbr:plan {N+1} or /pbr:review {N})
Completion check: Before proceeding to 8f, confirm ALL of:
- `.auto-next` file written with the correct next command
- `pbr-tools.js todo done` run where applicable

To verify programmatically: `node ${CLAUDE_PLUGIN_ROOT}/scripts/pbr-tools.js step-verify build step-8e '["STATE.md updated","commit made"]'`
If any item fails, investigate before closing the session.
8e-ii. Check Pending Todos:
CRITICAL (no hook): Check pending todos after build. Do NOT skip this step.
After completing the build, check if any pending todos are now satisfied:
a. Check if `.planning/todos/pending/` exists and contains files
b. For each pending todo, compare it against what this phase built
c. If fully satisfied: move the file to `.planning/todos/done/`, update `status: done`, add `completed: {YYYY-MM-DD}`, delete from pending/ via Bash rm. Display: ✓ Auto-closed todo {NNN}: {title} (satisfied by Phase {N} build)
d. If partially related: display ℹ Related pending todo {NNN}: {title} — may be partially addressed
e. If unrelated: skip silently

Only auto-close when the match is unambiguous. When in doubt, leave it open.
8f. Present completion summary:
Use the branded output templates from references/ui-formatting.md. Route based on status:
| Status | Template |
|---|---|
| passed + more phases in current milestone | "Phase Complete" template |
| passed + last phase in current milestone | "Milestone Complete" template |
| gaps_found | "Gaps Found" template |

Milestone boundary detection: To determine "last phase in current milestone", read ROADMAP.md and find the `## Milestone:` section containing the current phase. Check its `**Phases:** start - end` range. If the current phase equals end, this is the last phase in the milestone. For projects with a single milestone or no explicit milestone sections, "last phase in ROADMAP" is equivalent.
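The milestone boundary check can be sketched as follows, assuming the `**Phases:** start - end` line format described above (the sample line and parsing are illustrative):

```shell
# Fabricated milestone line as it might appear in ROADMAP.md.
milestone_line='**Phases:** 4 - 6'
current_phase=6
# Extract the end of the phase range.
end=$(echo "$milestone_line" | sed -E 's/.*-[[:space:]]*([0-9]+).*/\1/')
if [ "$current_phase" -eq "$end" ]; then
  echo "last phase in milestone"
fi
```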
Before the branded banner, include the results table:
Results:
| Plan | Status | Tasks | Commits |
|------|--------|-------|---------|
| {id} | complete | 3/3 | 3 |
| {id} | complete | 2/2 | 2 |
{If verification ran:}
Verification: {PASSED | GAPS_FOUND}
{count} must-haves checked, {count} passed, {count} gaps
Total commits: {count}
Total files created: {count}
Total files modified: {count}
Deviations: {count}
Then present the appropriate branded banner from references/ui-formatting.md § "Completion Summary Templates":
- passed + more phases: Use the "Phase Complete" template. Fill in phase number, name, plan count, and next phase details.
- passed + last phase: Use the "Milestone Complete" template. Fill in phase count.
- gaps_found: Use the "Gaps Found" template. Fill in phase number, name, score, and gap summaries from VERIFICATION.md.

Include `<sub>/clear first → fresh context window</sub>` inside the Next Up routing block of the completion template.
8g. Display USER-SETUP.md (conditional):
Check if .planning/phases/{NN}-{slug}/USER-SETUP.md exists. If it does:
Setup Required:
This phase introduced external setup requirements. See the details below
or read .planning/phases/{NN}-{slug}/USER-SETUP.md directly.
{Read and display the USER-SETUP.md content — it's typically short}
This ensures the user sees setup requirements prominently instead of buried in SUMMARY files.
If a Task() doesn't return within a reasonable time, display:
╔══════════════════════════════════════════════════════════════╗
║ ERROR ║
╚══════════════════════════════════════════════════════════════╝
Executor agent timed out for Plan {id}.
**To fix:** Check `.planning/phases/{NN}-{slug}/` for partial SUMMARY.md, then retry or skip.
Treat as partial status. Present to user: retry or skip.
For commit conventions and git workflow details, see references/git-integration.md.
If multiple parallel executors create git lock conflicts:
⚠ Git lock conflicts detected with parallel execution. Consider reducing max_concurrent_agents to 1.

If SUMMARY.md shows files not listed in the plan's files_modified:
If git.branching is phase but we're not on the phase branch:
Run `git checkout -b plan-build-run/phase-{NN}-{name}`.

| File | Purpose | When |
|---|---|---|
| `.planning/phases/{NN}-{slug}/.checkpoint-manifest.json` | Execution progress for crash recovery | Step 5b, updated each wave |
| `.planning/phases/{NN}-{slug}/SUMMARY-{plan_id}.md` | Per-plan build summary | Step 6 (each executor) |
| `.planning/phases/{NN}-{slug}/USER-SETUP.md` | External setup requirements | Step 6 (executor, if needed) |
| `.planning/phases/{NN}-{slug}/VERIFICATION.md` | Phase verification report | Step 7 |
| `.planning/codebase/*.md` | Incremental codebase map updates | Step 8-pre-c (if codebase/ exists) |
| `.planning/ROADMAP.md` | Plans Complete + Status → built or partial | Step 8a |
| `.planning/STATE.md` | Updated progress | Steps 6f, 8b |
| `.planning/.auto-next` | Next command signal (if auto_continue enabled) | Step 8e |
| Project source files | Actual code | Step 6 (executors) |
Delete .planning/.active-skill if it exists. This must happen on all paths (success, partial, and failure) before reporting results.