From brain-os
Automates per-issue implementation: grabs AFK GitHub issues, runs /tdd, commits, pushes, and closes. Modes include single issues, story DAGs, auto queue drain, and parallel workers for backlog clearance.
`npx claudepluginhub sonthanh/brain-os-plugin`

This skill uses the workspace's default tool permissions.
Wraps "grab next AFK issue → run /tdd → commit + push + close." Designed in the Pocock skill-process grill (`{vault}/daily/grill-sessions/2026-04-25-skill-process-pocock-adoption.md`) as Phase C of the new pipeline. Replaces `/develop`'s role as the "build it" skill.
| Mode | Invocation | Behavior |
|---|---|---|
| once (default) | `/impl` or `/impl --area <label>` | Grab one AFK issue (`status:ready` AND `owner:bot`), run /tdd, commit, push, close. Exit. |
| issue | `/impl <N>` or `/impl <N>,<M>,...` | Run /tdd on a specific issue number; a comma-separated list runs sequentially in list order. The owner-label filter is bypassed (works for `owner:bot` AND `owner:human`). Use when you (or another skill) already picked the issue(s) and want /impl to handle plumbing. See § Workflow — issue mode. |
| parallel | `/impl -p K` (alias `--parallel K`) or `/impl -p <N>,<M>,...` | Dispatches scripts/run-parallel.ts (TS orchestrator, mirrors /impl story after #161 + #173). `-p K` (integer) picks K from the queue head (`status:ready,owner:bot`); `-p <list>` (comma-separated) spawns one worker per listed issue (owner-bypass, same as the issue-mode list). Workers are detached `claude -w` sessions spawned via Bun.spawn with explicit per-issue cwd: inferRepoFromIssueArea(labels). nohup-detached, ~0 main-session token burn. Hard cap 5. |
| story | `/impl story <parent-N> [-p N]` (alias `/impl --story <parent-N>`) | Bash-orchestrated DAG drain of a multi-issue story. Reads the parent issue body checklist + each child's Blocked by to build the DAG, spawns AFK workers (`claude -w` + /impl) up to the parallel cap, surfaces HITL children via osascript notify, ticks the parent body checklist on each close, closes the parent + macOS-notifies on completion. Self-detached via nohup — the main Claude session exits immediately, ~0 token burn during the drain. Both invocation forms dispatch to the same scripts/run-story.ts orchestrator. |
| auto | `/impl auto [-p N]` | Autonomous AFK queue drain. One-shot dispatcher that bash-executes `scripts/run-ralph.sh "/impl[ -p N]" --completion-promise NO_MORE_ISSUES --max-iterations 50`, which sets up an in-session /ralph-loop whose body is /impl (or /impl -p N) per iteration. The Stop hook re-feeds the prompt until /impl emits `<promise>NO_MORE_ISSUES</promise>` (empty backlog) or 50 iterations elapse. NOT detached — the loop runs in the same Claude session and burns inference per iteration. See § Workflow — auto mode. |
| team | DEFERRED — do not implement | Reserved for Phase G when first-party agent teams (CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1) prove necessary for cross-session coordination. Today: error and exit if the user passes --team. |
Defaults and contract:

- Tracker: `working-rules.md` (single tracker for all areas)
- Area label: `area:plugin-brain-os` (override with `--area <label>`)
- Pick filter: `status:ready` AND `owner:bot`
- Completion sentinel: emit `<promise>NO_MORE_ISSUES</promise>` in your assistant text response so an outer `/ralph-loop --completion-promise "NO_MORE_ISSUES"` can latch and stop. Ralph reads transcript text blocks (`type: text`), not unix stdout — a `Bash(echo)` call will not satisfy the contract.
- Tests: `bun test` for new code; per-artifact-type table in /tdd for hooks, plists, scripts, vault docs

Usage:

```
/impl                     # Pick one AFK issue, /tdd it, commit/push/close, exit
/impl 147                 # Run /tdd on issue #147 specifically (owner override)
/impl 147,148,149         # Sequential list — /tdd 147, then 148, then 149 (owner override per issue)
/impl --area plugin-vault # Same as bare /impl, filtered to a different area label
/impl -p 3                # Dispatch run-parallel.ts: three claude -w workers from queue head (alias: --parallel 3)
/impl -p 147,148          # Dispatch run-parallel.ts: parallel explicit list, one claude -w worker per listed issue
/impl story 145           # Drain the story rooted at parent issue #145 (alias: --story 145)
/impl story 145 -p 5      # Same, raise parallel cap to hard max
```

`/impl auto` is the canonical AFK drain entry point. Hardcoded args, no tuning:

```
/impl auto        # Drain queue: in-session /ralph-loop wrapping /impl, max 50 iterations
/impl auto -p 3   # Same, each iteration dispatches run-parallel.ts to spawn 3 detached claude -w workers
```
Mechanism: /impl auto bash-executes scripts/run-ralph.sh "/impl[ -p N]" --completion-promise NO_MORE_ISSUES --max-iterations 50. The wrapper resolves the installed ralph-loop plugin path dynamically and forwards to setup-ralph-loop.sh, which writes .claude/ralph-loop.local.md and emits the initial /impl prompt for Claude to act on. Each loop iteration runs /impl (or /impl -p N), claims one AFK issue, ships it. When the gh issue list query returns zero, /impl emits <promise>NO_MORE_ISSUES</promise> in its assistant text response; the ralph Stop hook extracts the inner tag via Perl regex, exact-matches against NO_MORE_ISSUES, and exits cleanly. Hits --max-iterations 50 if the sentinel never fires.
/impl auto is dispatch-only — it shells the wrapper and exits. The loop body is still /impl invoked BY the ralph-loop Stop hook on each iteration; outcome logs come from the inner /impl (once or parallel rows), not from auto. Do not add an action=auto log row.
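The sentinel latch described above can be sketched in a few lines of shell. This is an illustrative reconstruction, not the hook's actual source: variable names are invented, and the real Stop hook reads transcript text blocks rather than a shell string.

```shell
# Illustrative sketch of the Stop-hook latch (not the real hook source).
# The real hook reads transcript text blocks; a plain string stands in here.
transcript_text='Queue is empty. <promise>NO_MORE_ISSUES</promise>'
promise="NO_MORE_ISSUES"

# Extract the inner tag via a Perl regex, then exact-match it.
inner=$(printf '%s' "$transcript_text" | perl -ne 'print $1 if /<promise>(.*?)<\/promise>/')

if [ "$inner" = "$promise" ]; then
  echo "latched"    # loop stops cleanly
else
  echo "continue"   # hook re-feeds the /impl prompt
fi
```

Note the exact-match step: paraphrased or whitespace-padded sentinel text fails the comparison, which is why the contract forbids reformatting the promise.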
Manual `/ralph-loop` wrap: use only when you need different `--max-iterations`, a different completion promise, or a non-/impl body. The full form:

```
/ralph-loop "/impl" --completion-promise "NO_MORE_ISSUES" --max-iterations 10
/ralph-loop "/impl -p 3" --completion-promise "NO_MORE_ISSUES" --max-iterations 30
```
Tuning `--max-iterations`:

- Rule of thumb: 1–3 × expected backlog depth
- Typical range: 30–50 (the /impl auto default is 50)

Caveats (apply to both /impl auto and the manual wrap):

- The sentinel must land in assistant text, not `Bash(echo)`. Ralph reads transcript text blocks (`type: text`), not unix stdout.
- Do not wrap interactive skills (/grill, /handover, /improve) — they need human input ralph won't supply.
- `--completion-promise` is exact-string match on the inner tag text — no quotes, no extra whitespace, no paraphrasing.
- The loop is not detached and burns inference per iteration; for long drains prefer /impl story, which detaches via nohup and burns ~0 main-session tokens.

Workflow — once mode

```bash
gh issue list \
  -R <tracker-repo> \
  --label "area:plugin-brain-os,status:ready,owner:bot" \
  --json number,title,body,labels \
  --limit 1
```
If empty → output <promise>NO_MORE_ISSUES</promise> in your text response and exit 0.
If one issue returned → continue with that issue's number, title, body.
Atomic flip via the canonical helper — single gh issue edit call removing status:ready and adding status:in-progress from one process invocation:
```bash
bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/transition-status.sh" <N> --to in-progress
```
Do NOT inline gh issue edit --remove-label status:ready --add-label status:in-progress — transition-status.sh is the single source of truth so the wire-level remove+add stays atomic AND target-validated against the canon list (see references/gh-task-labels.md § 3).
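To illustrate what "target-validated against the canon list" buys over an inline `gh issue edit`, here is a minimal sketch of such a validation gate. The canon list and function name are stand-ins: the real list lives in references/gh-task-labels.md and the real logic lives in transition-status.sh.

```shell
# Stand-in canon list; the real one is defined in references/gh-task-labels.md.
canon="ready in-progress"

# Hypothetical validation gate, showing the shape of the helper's check.
validate_target() {
  case " $canon " in
    *" $1 "*) return 0 ;;
    *) echo "FATAL: unknown status target: $1" >&2; return 1 ;;
  esac
}

validate_target in-progress && echo "ok: status:in-progress"
validate_target shipped 2>/dev/null || echo "rejected: shipped"
```

An inline `gh issue edit` would happily add a typo'd label; the gate fails loudly before any wire call happens.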
Read the issue body in full. In particular:
- `Files:` declared list (if present from /slice) — confirm scope
- `Blocked by #M` — if #M is still open, abort and re-label `status:ready`; the issue isn't actually AFK-ready

Invoke the /tdd skill on the issue. /tdd handles:
/impl does NOT duplicate any of /tdd's logic. /impl only owns issue plumbing.
```bash
git add <touched files>
git commit -m "$(cat <<'EOF'
<type>: <subject>

<body — 1–3 sentences on the why, not the what>

Closes <tracker-repo>#<N>

Co-Authored-By: Claude <noreply@anthropic.com>
EOF
)"
git pull --rebase
git push
bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/close-issue.sh" <N>
```
bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/close-issue.sh" is the single close site — it strips every status:* label before flipping state to closed, plugging the ghost-label leak that pre-dated #160. Do NOT inline gh issue close here or anywhere else; the helper is the lifecycle close-step.
git pull --rebase is mandatory before push because the CI auto-release pipeline runs on each push and may have bumped plugin.json.
After close-issue.sh succeeds (issue CLOSED, status:* labels stripped) and BEFORE the outcome-log row in step 7, run three sub-steps that together close the AC-evidence loop:
- §§ 6.1–6.4 (parent path, keyed on the child's `## Covers AC`): posts evidence comments on the parent and ticks the parent body. Skipped silently for orphans and pure-component children.
- § 6.5 (self path, keyed on the issue's own `## Acceptance`): posts self-evidence comments on the just-closed issue itself and ticks its own body. This path catches orphan leaves (issues filed without /slice that lack a native parent link, e.g. ad-hoc bug fixes filed via raw `gh issue create`) — their AC checkboxes used to never be ticked because §§ 6.1–6.4 short-circuited.

Run §§ 6.1–6.4 first (parent path), then § 6.5 (self path). Both paths are idempotent; ordering only matters for log readability.
Wire-level constants used below:

- `<tracker-repo>` is the tracker repo (e.g. sonthanh/ai-brain) — split into `<owner>`/`<name>` for `gh api graphql`'s `-F` args.
- The separator in `Acceptance verified: AC#N — <evidence>` is U+2014 (literal —, not ASCII - or U+2013 –). Source of truth: PARENT_COMMENT_EVIDENCE_RE at scripts/run-story.ts:220 and references/ac-coverage-spec.md § 3.3.

Resolve the parent:

```bash
parent_n=$(gh api graphql \
  -f query='query($owner:String!,$name:String!,$num:Int!){repository(owner:$owner,name:$name){issue(number:$num){parent{number}}}}' \
  -F owner=<owner> -F name=<name> -F num=<N> \
  --jq '.data.repository.issue.parent.number // empty')
```
gh api graphql --jq returns "null" for a null field unless the // empty fallback collapses it to "", so the bash existence check is a clean [ -z "$parent_n" ]. If parent_n is empty, the just-closed child is an orphan or a leaf — skip §§ 6.2 + 6.3 entirely and proceed to step 7. The skip-on-orphan path is what makes /impl idempotent against legacy issues filed before native sub-issue links (Phase 2 #219) AND against one-off issues with no parent story.
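The difference between the two `--jq` forms can be reproduced without calling gh at all. The two variables below stub what gh would print for a parentless issue:

```shell
# Stubbed gh output for a parentless issue (no network call made here):
parent_without_fallback="null"   # what a bare '.…parent.number' filter prints
parent_with_fallback=""          # what the '… // empty' form collapses it to

# Only the fallback form makes [ -z ] a clean orphan check:
[ -z "$parent_with_fallback" ] && echo "orphan: skip 6.2 and 6.3"
[ -n "$parent_without_fallback" ] && echo 'pitfall: the literal string "null" is non-empty'
```

Without the fallback, every orphan would look like it has a parent named "null" and the skip path would never fire.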
Parse the just-closed child's ## Covers AC section (per-line regex ^- AC#(\d+)\s*$ applied inside the section, per references/ac-coverage-spec.md § 3.2). For each AC#M listed:
Compose evidence text from the just-completed /tdd cycle. Concrete sources, in priority order:

- Test-runner stdout (`bun test` / pytest / artifact-runner): test names + pass/fail counts (e.g. `bun test scripts/run-story.test.ts: 47 pass, 0 fail`)
- File existence/size checks (e.g. `skills/impl/SKILL.md: 44321 bytes`)
- Grep counts (e.g. `grep -c "Acceptance verified: AC#" skills/impl/SKILL.md` → 4)
- Commit stats (e.g. `commit abc1234 — 1 file changed, +120/-3`)

Mix-and-match as the AC demands. Evidence MUST be a single non-empty line so the line-anchored regex matches.
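The single-line constraint is mechanically checkable before posting. A minimal guard (the function name is illustrative, not part of the skill):

```shell
# Hypothetical guard: evidence must be one non-empty line for the
# line-anchored evidence regex to match.
is_single_line() {
  [ -n "$1" ] && [ "${1//$'\n'/}" = "$1" ]
}

is_single_line "bun test scripts/run-story.test.ts: 47 pass, 0 fail" && echo "ok"
is_single_line $'first line\nsecond line' || echo "reject: multi-line"
is_single_line "" || echo "reject: empty"
```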
Post one comment per AC#M to the parent. Use --body-file - with an unquoted heredoc so evidence text containing backticks, $(...), or shell metacharacters is treated as literal payload (bash expands ${m} and ${evidence} once when parsing the heredoc body; the expanded values are NOT re-parsed for further substitution, so embedded $(...) survives as text). Inline --body "…" would have the same expansion semantics but is harder to extend to multi-line evidence; heredoc + body-file is the gh idiom for multi-line comments:
```bash
gh issue comment "$parent_n" -R <tracker-repo> --body-file - <<EOF
Acceptance verified: AC#${m} — ${evidence}
EOF
```
Quoted heredoc (<<'EOF') would block ${m} / ${evidence} expansion entirely — DO NOT use it; the AC ID and evidence text MUST be interpolated.
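The expansion semantics are easy to verify locally. In this sketch (values invented), `${m}` and `${evidence}` interpolate once, and the `$(...)` and backticks inside the evidence value arrive as literal text:

```shell
m=3
# Single quotes keep $(whoami) and backticks literal inside the variable value.
evidence='47 pass, 0 fail; literal $(whoami) and `backticks` survive'

# Unquoted heredoc: parameters expand once; expanded values are NOT re-parsed.
body=$(cat <<EOF
Acceptance verified: AC#${m} — ${evidence}
EOF
)
printf '%s\n' "$body"
```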
Tick the parent's checkbox via `tickAcceptance` from scripts/run-story.ts (single SSOT for the U+2014 em-dash + bold AC bullet regex per references/ac-coverage-spec.md § 3.1):

```bash
bun run "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/tick-parent-ac.ts" "$parent_n" "${m}"
```
The CLI is idempotent — already-ticked AC is a silent no-op; missing **AC#${m}** bullet in the parent body emits a stderr WARN and exits 0 (the parent body may have been edited after the child filed; not a fatal error). Per-AC-tick races between sibling /impl <N> workers closing simultaneously are anticipated: the parent close-trigger spawn (§ 6.3) re-runs Gate C's tickAcceptance as the second-line idempotent safety net. Operators recovering a parent whose evidence comments landed but whose ticks were missed (e.g. self-bootstrapping orchestrator runs that shipped tickAcceptance mid-flight) can run bun run "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/tick-parent-ac-from-comments.ts" <parent-N> to backfill from existing comments.
Empty / absent ## Covers AC (pure-component child) → skip evidence emission AND tick silently. The parent's evidence trail is reserved for live-AC criteria; pure-component children leave none.
If a gh issue comment or tick-parent-ac.ts call fails (network blip, rate limit, label/permission fault), log the failure and continue with the remaining AC#M and §§ 6.3. Don't abort post-close — partial-evidence is recoverable on the next /impl story <parent_n> run (Gate C re-ticks); raising an exception here would leave the issue closed with no trace of the post-close attempt.
Query the parent's subIssuesSummary to detect "this child was the last open sub-issue":
```bash
summary_json=$(gh api graphql \
  -f query='query($owner:String!,$name:String!,$num:Int!){repository(owner:$owner,name:$name){issue(number:$num){subIssuesSummary{completed total}}}}' \
  -F owner=<owner> -F name=<name> -F num="$parent_n")
completed=$(jq -r '.data.repository.issue.subIssuesSummary.completed' <<<"$summary_json")
total=$(jq -r '.data.repository.issue.subIssuesSummary.total' <<<"$summary_json")
```
- `completed == total` → spawn /impl story <parent_n> in the background. Use the same `nohup bun run "$PLUGIN_ROOT/scripts/run-story.ts" "$parent_n" -p 3 >> ~/.local/state/impl-story/<parent_n>.log 2>&1 &` launcher as § Workflow — story mode → "Invocation" (and the same sleep-5 + ps verify block); substitute "$parent_n" for the parent arg. The orchestrator runs Gate C (AC evidence check) + parent close.
- `completed < total` → skip spawn. Sibling /impl <N> runs hit the same trigger on their own close; only the last one to close drains the parent.
- If the trigger is missed (e.g. the spawn fails), a later manual /impl story <parent_n> run recovers.

Race note: two siblings closing near-simultaneously can both observe completed == total and both spawn /impl story. The race is benign — run-story.ts exits cleanly on an already-CLOSED parent and the second nohup instance does no work. No spawn-deduplication needed.
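The gate itself reduces to an integer comparison once the two counters are extracted. This sketch stubs the GraphQL payload and uses sed instead of jq purely to keep it dependency-free; the real flow uses the jq extraction shown earlier:

```shell
# Stubbed subIssuesSummary payload (the real one comes from gh api graphql).
summary_json='{"data":{"repository":{"issue":{"subIssuesSummary":{"completed":3,"total":4}}}}}'

# sed stands in for jq here only to keep the sketch self-contained.
completed=$(sed -n 's/.*"completed":\([0-9]*\).*/\1/p' <<<"$summary_json")
total=$(sed -n 's/.*"total":\([0-9]*\).*/\1/p' <<<"$summary_json")

if [ "$completed" -eq "$total" ]; then
  echo "last child closed: spawn the story drain"
else
  echo "siblings still open ($completed/$total): skip spawn"
fi
```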
If § 6.1 returned no parent, §§ 6.2 + 6.3 do not run. No parent-comment posts, no /impl story spawn. § 6.5 still runs — orphan issues with their own ## Acceptance get self-tick coverage even when there's no parent to walk up to. The "byte-identical to pre-Phase-2" contract applied before § 6.5 shipped; post-§ 6.5, the only delta on orphan issues is the self-evidence comments + body ticks on the orphan itself.
Independent of §§ 6.1–6.4: tick the just-closed issue's OWN ## Acceptance bullets with self-evidence. Runs for every closed issue regardless of parent linkage — orphan leaves, parent-linked children, and standalone bug fixes all flow through this path. Without § 6.5, issues filed without /slice (e.g. ad-hoc gh issue create for bugs surfaced mid-session) close with all AC checkboxes still [ ] because no other actor mutates the leaf body.
Why separate from § 6.2: § 6.2 ticks the parent body based on the child's ## Covers AC declaration (form (ii) per references/ac-coverage-spec.md § 4.2). § 6.5 ticks the just-closed issue's own body based on per-AC self-assessment of the /tdd output + diff. The leaf's ## Acceptance is its own contract.
Steps:
- Parse the just-closed issue's own `## Acceptance` per references/ac-coverage-spec.md § 3.1 regex.
- Issues with no `## Acceptance` (e.g. handover stubs, plan docs, pure-component children) leave no self-tick trail.
- Per AC, post self-evidence (same `<tracker-repo>` + heredoc-with-interpolation pattern as § 6.2; em-dash is U+2014):

```bash
# For each met AC#${m}:
gh issue comment <N-just-closed> -R <tracker-repo> --body-file - <<EOF
Self acceptance verified: AC#${m} — ${evidence}
EOF
bun run "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/tick-parent-ac.ts" <N-just-closed> "${m}"

# For each out-of-scope AC#${m}:
gh issue comment <N-just-closed> -R <tracker-repo> --body-file - <<EOF
Self acceptance deferred: AC#${m} — ${reason}
EOF
# (no tick — checkbox stays [ ] because the AC was not actually verified)

# For each not-met AC#${m}: stderr WARN, no comment, no tick.
# The unticked checkbox is the audit trail.
```
tick-parent-ac.ts is named for its original parent-tick role but accepts any issue number — calling it with the leaf's own number ticks the leaf's body. No new CLI is required; the same idempotent regex (tickAcceptance from scripts/run-story.ts) applies.
Evidence form regexes (canonical SSOT in references/ac-coverage-spec.md § 3.5):
- `^Self acceptance verified: AC#(\d+) — (.+)$` — met
- `^Self acceptance deferred: AC#(\d+) — (.+)$` — out-of-scope

Both use a `Self acceptance` prefix distinct from § 3.3's `Acceptance verified` (parent-pointing) so backfill scripts and audits can distinguish self-evidence from parent-evidence on a single issue's comment thread.
Idempotence:
- `tick-parent-ac.ts` is already idempotent: already-ticked AC is a silent no-op.
- `gh issue comment` has no dedup primitive — re-running § 6.5 will duplicate comments. Backfill operators should grep existing comments for `Self acceptance verified: AC#${m}` before re-posting.

Failure handling: mirror § 6.2 — if gh issue comment or tick-parent-ac.ts fails (network blip, rate limit, permission fault), log the failure and continue with remaining AC#N. Do not abort post-close; partial self-tick is recoverable via the backfill recipe below.
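The dedup grep can be made concrete. The comment dump below is a stand-in; in real use it would come from `gh issue view <N> --json comments`. Note the trailing space in the pattern, which keeps AC#2 from also matching AC#25:

```shell
# Stand-in for the issue's existing comment thread.
existing='Self acceptance verified: AC#2 — bun test: 12 pass, 0 fail'
m=2

# Guard before re-posting § 6.5 evidence.
if printf '%s\n' "$existing" | grep -q "^Self acceptance verified: AC#${m} "; then
  echo "skip: AC#${m} already evidenced"
else
  echo "post: AC#${m}"
fi
```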
Backfill recipe for issues closed before § 6.5 shipped:
```bash
# For an issue #N closed without § 6.5 evidence:
# 1. gh issue view <N> --json body,comments → read body ## Acceptance + any prior context
# 2. Inspect commit history (git log --all --grep "#${N}") + /tdd output if available
# 3. Per AC, decide met/deferred/not-met
# 4. Run the gh issue comment + tick-parent-ac.ts pair from step 4 above
# 5. Verify: gh issue view <N> → expected ACs ticked, comments visible
```
Append one line to {vault}/daily/skill-outcomes/impl.log (see § Outcome log below).
Workflow — issue mode

/impl <N> runs /tdd on a specific issue regardless of its `owner:bot` / `owner:human` label. The owner-label filter that gates once mode is intentionally bypassed because the user (or a parent skill like /impl story) has already decided this issue is the next one to ship.
Differences from once mode:

| Step | once | issue <N> |
|---|---|---|
| Pick | gh issue list filtered by status:ready,owner:bot | Skip — issue number is given as arg |
| Owner check | Required (owner:bot only) | Bypassed — accepts any owner |
| Claim | status:ready → status:in-progress | Same |
| Read | Issue body, "Required reading", "Acceptance", Blocked by | Same |
| /tdd | Single pass | Wrapped in child-level ralph + advisor (max 3 iters) — see below |
| Commit | git commit ... | git commit ... |
| Push + close | Same | Same |
| Post-close (§ 6) | Per § 6 | Same |
/impl <N> does NOT call /tdd once and ship. It wraps /tdd in a 3-iteration ralph loop with a per-iteration advisor verdict gate. The wrapper is what enforces parent acceptance gate AC#6 — a child whose tests pass GREEN but whose live-AC criterion was actually mocked is caught by advisor and re-tried (or escalated to HITL after 3 iters). Pure-component children skip the advisor call to preserve cost optimization.
Canonical TS contract: scripts/lib/ac-gate.ts (runChildAdvisorGate). Tests in scripts/lib/ac-gate.test.ts pin behavior. If the prose below drifts from the function, the function wins — open an issue against the prose.
ralph-loop (max 3 iters, completion-promise AC_VERIFIED):
```
iter 1..3:
  /tdd <N> [--prior-attempt-failed-because "<prior failure>"]
    iter 1: no hint
    iter 2/3: hint = last RED reason OR last advisor verdict reason
  if RED: log, save reason as next-iter hint, continue
  if GREEN:
    classify child via CLI (no inline regex):
      bun run "$CLAUDE_PLUGIN_ROOT/scripts/ac-coverage-cli.ts" classify <N>
      → "pure-component": emit AC_VERIFIED, break (NO advisor call — cost-optimized)
      → "live-ac":
        advisor()   # see Advisor prompt template below
        parse verdict (substring search):
          "AC met" present → emit AC_VERIFIED, break
          "AC not met: <reason>" → log, save reason, continue
          ambiguous → treat as not met (escalate sooner)
loop end

if not AC_VERIFIED:
  gh issue edit <N> -R <tracker-repo> --remove-label owner:bot --add-label owner:human
  bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/transition-status.sh" <N> --to ready
  bun run "$CLAUDE_PLUGIN_ROOT/scripts/ac-coverage-cli.ts" handoff-comment "<last failure>" "<last verdict>" \
    | gh issue comment <N> -R <tracker-repo> --body-file -
  record counters (see "Per-child result file" below)
  exit 0   # NOT 1 — orchestrator treats unclosed issue + dead worker as failure
```
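The verdict parse in the loop body is a plain substring search. A minimal sketch follows (the function name is invented; the canonical contract is runChildAdvisorGate in scripts/lib/ac-gate.ts, which wins on any drift):

```shell
# Illustrative verdict parser. "AC not met:" is checked first, and anything
# ambiguous is treated as not met so escalation happens sooner.
parse_verdict() {
  case "$1" in
    *"AC not met:"*) echo "retry: ${1#*AC not met: }" ;;
    *"AC met"*)      echo "verified" ;;
    *)               echo "retry: ambiguous verdict" ;;
  esac
}

parse_verdict "AC met. 47 tests green against the live integration."
parse_verdict "AC not met: fixture mocks the call the AC says must run live"
parse_verdict "hard to say"
```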
Owner-first ordering matters: a mid-sequence failure leaves the issue owner:human + status:in-progress which is invisible to /impl auto's owner:bot + status:ready pick filter, rather than owner:bot + status:ready (which would re-pick on the same advisor failure next iteration).
When invoking advisor() after a GREEN /tdd on a live-AC child, prime the advisor with:
```
The just-completed /tdd cycle for child issue #<N> produced GREEN. Verify whether
the implementation actually satisfies the parent acceptance bullets the child
declared in its `## Covers AC` section.

Child issue: gh issue view <N>
Parent issue: gh issue view <parent-N>
Diff: git diff HEAD~1 HEAD (or staged diff if not yet committed)
Test output: <inline /tdd transcript>

Live-AC criteria the child claims to satisfy:
<list each AC#N from child Covers AC + the corresponding bullet from parent ## Acceptance>

Verdict format: respond with EITHER:
  "AC met. <one-line evidence summary>" OR
  "AC not met: <one-line reason>"

Be skeptical of test fixtures that mock external systems the parent AC text
demands run live (markers per references/ac-coverage-spec.md § 6.1: "runs against",
"live", "pass-rate", "log appended", "osascript", etc.). A test that mocks the
live integration when AC text says "runs against" is "AC not met".
```
Before exit, write counters so /impl story's orchestrator can aggregate them into the impl-story.log row (spec § 5):
```bash
bun run "$CLAUDE_PLUGIN_ROOT/scripts/ac-coverage-cli.ts" record-result \
  <parent-N> <N> <verified:true|false> <tdd-run-count> \
  <advisor-calls> <advisor-rejections> \
  "<last-failure-or-empty>" "<last-verdict-or-empty>"
```
The CLI writes JSON to ~/.local/state/impl-story/<parent>-child-<N>.result. Missing files at orchestrator run-end are tolerated (counted as zero counters + a warning log line) — a worker that crashed before writing the file is already accounted for via the existing WORKER DEAD → relabelHuman path in runStory.
If <parent-N> is unknown to prose, resolve it via:
```bash
PARENT_N=$(bun run "$CLAUDE_PLUGIN_ROOT/scripts/ac-coverage-cli.ts" parent <N>)
```

Edge cases:

- /impl <N> invoked by hand on a leaf issue with no `## Parent` section: the CLI's parent subcommand exits 1; treat the child as pure-component (no advisor call) and run a single /tdd → ship pass.
- No `## Covers AC` section at all: treated identically to `## Covers AC` empty → pure-component.

Who invokes issue mode:

- The user, via /impl <N> directly, because they want THIS issue done now
- The /impl story <P> orchestrator, which dispatches `claude -w ... -p "/impl <M>"` per child

Other issue-mode rules:

- Issues needing human input stay owner:human and route through /pickup, not /impl. See § Mid-flow rule below.
- `Blocked by #M` and #M is still open — abort and re-label status:ready, same as once mode.

Sequential list (/impl <N>,<M>,...)

When the arg contains one or more commas, parse as a comma-separated list of issue numbers and run them through issue mode sequentially in list order.
Parser rule. Split on ,, trim whitespace, parse each token as integer. If any token fails to parse as a positive integer, abort with FATAL: invalid issue number in list: <token> and exit 1 — do not silently drop bad tokens.
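The parser and dedupe rules can be sketched as one function (the name is invented; the skill performs these steps in-prose rather than via a script):

```shell
# Illustrative parser: split on commas, trim, validate digits, dedupe with WARN.
parse_impl_list() {
  local raw="$1" seen="" tok
  IFS=',' read -r -a toks <<< "$raw"
  for tok in "${toks[@]}"; do
    tok="${tok//[[:space:]]/}"                     # trim whitespace
    case "$tok" in
      ''|*[!0-9]*) echo "FATAL: invalid issue number in list: $tok" >&2; return 1 ;;
    esac
    case " $seen " in
      *" $tok "*) echo "WARN: dropped duplicate #$tok from list" >&2; continue ;;
    esac
    seen="$seen $tok"
  done
  echo "${seen# }"
}

parse_impl_list "147, 148,149,147"   # keeps first-occurrence order, warns on the duplicate
```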
Dedupe. Remove duplicate numbers preserving first occurrence; emit WARN: dropped duplicate #<N> from list to text response so the user sees the deviation.
Same-area assertion (deterministic). Before claiming any issue, run gh issue view <N> -R <tracker-repo> --json labels for every listed issue and extract the area:* label. If labels differ across the list, abort with FATAL: list spans multiple areas (#<X> area:<a>, #<Y> area:<b>). Run separate /impl invocations per area. and exit 1. The main session has ONE cwd, so commits would land in the wrong repo for the off-area issue. This pre-existed for single-issue /impl <N> (user picks the right repo) but list mode amplifies it — make the abort explicit.
Loop. For each issue in the deduped list:
- Run the full issue mode workflow: claim → read → /tdd → git commit → git pull --rebase → git push → `bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/close-issue.sh" <N>`.
- On a per-issue failure: re-label status:ready, log the failure, and continue with the next issue in the list. Do not abort the rest of the list.
- Outcome log: include `mode=list-sequential,n=<list-length>` in the optional fields.

Final summary. After the loop completes, emit a one-block summary in the assistant text response:
```
/impl list complete:
  closed: #<N>, #<M>, ...
  failed: #<X> (test red), #<Y> (push rejected)
  skipped: #<Z> (already CLOSED before claim)
```
No <promise>NO_MORE_ISSUES</promise>. List form runs only the issues you named — there is no "queue empty" condition. The sentinel is for once/auto modes; do not emit it here.
Worktree isolation. Sequential list does NOT spawn subagents and does NOT use worktree isolation — it runs each issue's edits in the main worktree, same as a single /impl <N>. If you need parallel + isolation, use /impl -p <N>,<M>,... instead.
Workflow — story mode

/impl story <parent-N> [-p N] drains a multi-issue story in DAG order. The orchestration is a TypeScript script (scripts/run-story.ts, bun runtime, unit-tested via scripts/run-story.test.ts) — the main Claude session exits after kicking it off, ~0 token burn for the duration of the drain.
When the user types /impl story <parent-N> (optionally -p <N>), execute:
```bash
# Resolve plugin root: env var → glob fallback → fail loud
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(ls -d ~/.claude/plugins/cache/brain-os-marketplace/brain-os/*/ 2>/dev/null | sort -V | tail -1)}"
PLUGIN_ROOT="${PLUGIN_ROOT%/}"
SCRIPT="$PLUGIN_ROOT/scripts/run-story.ts"
if [ ! -f "$SCRIPT" ]; then
  osascript -e 'display notification "FATAL: run-story.ts not found" with title "/impl story"'
  echo "FATAL: $SCRIPT not found (PLUGIN_ROOT=$PLUGIN_ROOT)"
  exit 1
fi

mkdir -p ~/.local/state/impl-story
nohup /opt/homebrew/bin/bun run "$SCRIPT" <parent-N> -p <N> \
  >> ~/.local/state/impl-story/<parent-N>.log 2>&1 &
PID=$!

# Verify launch — script writes <parent-N>.status within ~2s of starting
sleep 5
if ! ps -p $PID > /dev/null 2>&1; then
  osascript -e "display notification \"Orchestrator died within 5s — see log\" with title \"/impl story #<parent-N>\""
  echo "FATAL: orchestrator (PID $PID) died. Tail of log:"
  tail -20 ~/.local/state/impl-story/<parent-N>.log
  exit 1
fi
echo "Started orchestrator (PID $PID). Log: ~/.local/state/impl-story/<parent-N>.log | Status: ~/.local/state/impl-story/<parent-N>.status"
```
nohup ... & self-detaches the process. The verify block (sleep 5 + ps) catches launch-time crashes (module-not-found, bun missing, syntax error in script) — without it, a dead-on-arrival dispatch returns "Started" with no notify ever firing. After the verify block passes, your work is done — exit cleanly.
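One detail in the dispatch worth calling out is the `sort -V` in the PLUGIN_ROOT fallback. Plain lexicographic sort would pick the wrong versioned cache directory once a version component reaches two digits; the toy comparison below (version strings invented) shows why:

```shell
# Two plausible plugin version directory names.
lex=$(printf '%s\n' "2.9.0" "2.10.0" | sort | tail -1)
ver=$(printf '%s\n' "2.9.0" "2.10.0" | sort -V | tail -1)

echo "plain sort picks:  $lex"   # lexicographic comparison picks 2.9.0
echo "sort -V picks:     $ver"   # version-aware comparison picks 2.10.0
```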
The orchestrator writes a heartbeat status JSON to ~/.local/state/impl-story/<parent-N>.status on every poll cycle ({"phase":"polling","watching":[...],"hb":N,...}). External observers (main session pings via ScheduleWakeup, separate cr watcher sessions, manual cat) read that file — not the unstructured log — to know whether the drain is alive and progressing. On crash the script writes phase:"died" with the error.
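A status consumer therefore needs only one field from that file. This sketch writes a fake heartbeat file and checks the phase; sed stands in for a JSON parser to keep the example dependency-free, and the field names follow the example above:

```shell
# Write a fake heartbeat file (real ones live under ~/.local/state/impl-story/).
status_file=$(mktemp)
printf '%s' '{"phase":"polling","watching":[146,147],"hb":12}' > "$status_file"

# Pull out "phase"; a real consumer could equally use jq -r .phase.
phase=$(sed -n 's/.*"phase":"\([^"]*\)".*/\1/p' "$status_file")

case "$phase" in
  polling) echo "drain alive and progressing" ;;
  died)    echo "drain crashed: read the error from the log" ;;
  *)       echo "unknown phase: $phase" ;;
esac
rm -f "$status_file"
```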
What the orchestrator does:

- Gate B pre-check (runPreCheck(), runs BEFORE the main drainer loop): parses parent `## Acceptance` bullets via the SSOT regex (references/ac-coverage-spec.md § 3.1) and each child's `## Covers AC` section (§ 1, § 3.2). If any parent AC has zero covering child → posts a gh comment on the parent (`AC mapping missing — add ## Covers AC to children before resuming. Uncovered: AC#2, AC#5`) and exits with code 4 without entering the main loop. Skipped entirely on legacy parents with no `## Acceptance` bullets (grandfathered). Distinct exit code so /impl auto and external callers can latch on AC-mapping failure without conflating it with generic crashes (exit 1).
- Parses the parent's `- [ ] #M` / `- [x] #M` checklist → child issue numbers.
- Parses each child's `## Blocked by\n- ai-brain#X` → adjacency map. Detects owner:bot vs owner:human.
- Applies the `-p` cap (default 3, hard max 5).
- Spawns `claude -w "story-<P>-issue-<M>" --dangerously-skip-permissions -p "/impl <M>"` in background. Each worker runs /impl <M> (issue mode), which internally wraps /tdd in the child-level ralph + advisor wrapper (see § Workflow — issue mode → "Child-level ralph + advisor wrapper").
- HITL children: osascript notify `"HITL: #M needs human. Run: cr /pickup M"` — the user resolves manually.
- Polls `gh issue view <M> --json state` — on CLOSED: tick the parent body checklist (sed `- [ ] #M` → `- [x] #M`, `gh issue edit --body`), promote any newly-unblocked waiters to the ready queue.
- When all children are closed: `bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/close-issue.sh" <P>` (strips status:* labels first).
- Aggregates per-child result files (`~/.local/state/impl-story/<P>-child-<N>.result`, written by each /impl worker) and appends one TSV row to {vault}/daily/skill-outcomes/impl-story.log. Schema and column order: references/ac-coverage-spec.md § 5. Missing per-child files are tolerated (counted as zero, with a warn line in the log) so a crashed worker doesn't suppress the parent-level row.
- Final macOS notify: `"Story #<P> complete. Log: ~/.local/state/impl-story/<P>.log"`.
- All orchestrator output lands in `~/.local/state/impl-story/<parent-N>.log`.

Exit codes:

| Code | Meaning |
|---|---|
| 0 | Drain completed (parent CLOSED or left OPEN with logged failures) |
| 1 | Fatal orchestrator error (spawn failure, area resolution, gh fault) |
| 4 | Gate B pre-check blocked — parent has ## Acceptance bullets but children lack ## Covers AC coverage. Comment posted on parent. |
Default -p 3, hard-capped at 5 inside the script. Each AFK worker is its own claude -w session consuming subscription quota; concurrent sessions add up. HITL children do NOT count toward the cap (they're notify-only). User can raise to 5 with -p 5; values above 5 are silently clamped.
| Phase | Main Claude tokens |
|---|---|
| /impl story <P> invocation | ~1K (skill prose + bash command) |
| Detached drain (could be hours) | 0 |
| Per-worker session | Independent quota, isolated context |
Status check anytime via tail -f ~/.local/state/impl-story/<P>.log from any terminal — no Claude session needed.
Limitations:

- The parent issue must have a `- [ ] #M` checklist (script aborts with notify).
- Not a flat-queue drainer; for a storyless backlog use /impl auto -p N instead.

Workflow — -p N mode (alias --parallel N)

/impl -p N and /impl -p <N>,<M>,... both dispatch to a TypeScript script orchestrator (scripts/run-parallel.ts, bun runtime, unit-tested via scripts/run-parallel.test.ts) — same architecture as /impl story (ai-brain#161 + #173). The main Claude session exits after kicking it off, ~0 token burn for the duration of the drain. Workers are spawned via claude -w with explicit per-issue cwd from area:* labels — no Agent-tool path, no orchestrator-cwd dependency.
When the user types /impl -p <K> (integer K, queue-pick mode), execute:
```bash
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(ls -d ~/.claude/plugins/cache/brain-os-marketplace/brain-os/*/ 2>/dev/null | sort -V | tail -1)}"
PLUGIN_ROOT="${PLUGIN_ROOT%/}"
SCRIPT="$PLUGIN_ROOT/scripts/run-parallel.ts"
if [ ! -f "$SCRIPT" ]; then
  osascript -e 'display notification "FATAL: run-parallel.ts not found" with title "/impl -p"'
  echo "FATAL: $SCRIPT not found (PLUGIN_ROOT=$PLUGIN_ROOT)"
  exit 1
fi

mkdir -p ~/.local/state/impl-parallel
BATCH_ID="count-<K>-plugin-brain-os-$(date +%s)"
nohup /opt/homebrew/bin/bun run "$SCRIPT" --count <K> --area plugin-brain-os -p <K> \
  >> ~/.local/state/impl-parallel/${BATCH_ID}.log 2>&1 &
PID=$!

sleep 5
if ! ps -p $PID > /dev/null 2>&1; then
  osascript -e "display notification \"Orchestrator died within 5s — see log\" with title \"/impl -p\""
  echo "FATAL: orchestrator (PID $PID) died. Tail of log:"
  tail -20 ~/.local/state/impl-parallel/${BATCH_ID}.log
  exit 1
fi
echo "Started orchestrator (PID $PID). Log: ~/.local/state/impl-parallel/${BATCH_ID}.log"
```
When the user types /impl -p <N>,<M>,... (explicit list mode), execute:
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(ls -d ~/.claude/plugins/cache/brain-os-marketplace/brain-os/*/ 2>/dev/null | sort -V | tail -1)}"
PLUGIN_ROOT="${PLUGIN_ROOT%/}"
SCRIPT="$PLUGIN_ROOT/scripts/run-parallel.ts"
[ -f "$SCRIPT" ] || { echo "FATAL: run-parallel.ts not found"; exit 1; }
mkdir -p ~/.local/state/impl-parallel
ISSUES="<N>,<M>,..." # raw user input, parser dedupes + clamps inside script
BATCH_ID="list-${ISSUES//,/-}"
nohup /opt/homebrew/bin/bun run "$SCRIPT" --issues "$ISSUES" -p <PARALLEL_CAP> \
>> ~/.local/state/impl-parallel/${BATCH_ID}.log 2>&1 &
PID=$!
sleep 5
if ! ps -p $PID > /dev/null 2>&1; then
osascript -e "display notification \"Orchestrator died within 5s — see log\" with title \"/impl -p\""
tail -20 ~/.local/state/impl-parallel/${BATCH_ID}.log
exit 1
fi
echo "Started orchestrator (PID $PID). Log: ~/.local/state/impl-parallel/${BATCH_ID}.log"
Both forms self-detach via nohup ... &. The verify block (sleep 5 + ps) catches launch-time crashes (module-not-found, bun missing, syntax error). After verify passes, your work is done — exit cleanly.
The orchestrator writes a heartbeat status JSON to ~/.local/state/impl-parallel/<batch-id>.status on every poll cycle ({"phase":"polling","watching":[...],"hb":N,...}). External observers read that file — not the unstructured log — to track progress. On crash the script writes phase:"died" with the error.
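For illustration, a minimal external observer of that status file might look like this. This is a sketch, not the real observer: only the `phase`, `watching`, and `hb` fields come from the example above; the `error` field and the helper names are assumptions.

```typescript
// watch-status.ts — sketch of an external reader for the heartbeat status file.
import { readFileSync, existsSync } from "fs";

interface BatchStatus {
  phase: string;          // "polling" while draining, "died" on crash
  watching?: number[];    // issue numbers still open
  hb?: number;            // heartbeat counter, bumped every poll cycle
  error?: string;         // assumed field, present when phase === "died"
}

export function readStatus(path: string): BatchStatus | null {
  if (!existsSync(path)) return null;  // orchestrator not started yet
  return JSON.parse(readFileSync(path, "utf8")) as BatchStatus;
}

export function summarize(s: BatchStatus | null): string {
  if (s === null) return "no status file yet";
  if (s.phase === "died") return `crashed: ${s.error ?? "unknown error"}`;
  return `${s.phase} (hb=${s.hb ?? 0}, watching ${s.watching?.length ?? 0} issues)`;
}
```

Reading the structured file instead of the log keeps observers robust to log-format churn.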
Inside run-parallel.ts, the flow is:

- --issues <N>,<M>,... → calls parseIssueList (split on "," → trim → integer assert → dedupe with WARN → clamp at HARD_CAP=5 with WARN).
- --count K --area X → calls gh.listIssues(["area:X","status:ready","owner:bot"], K) (queue-pick mode, owner:bot filter applied).
- Fetches gh.viewFull(N) for each issue. Asserts the same area:* label across the list (validateSameArea); throws FATAL: list spans multiple areas (...) on mismatch BEFORE claiming any issue.
- Claims each issue via gh.claim(N) (status:ready → status:in-progress); already-CLOSED issues are added to closed and skipped without spawn or claim. Per-issue cwd is derived via inferRepoFromIssueArea(labels, areaMap).
- Spawns claude -w "parallel-<batchId>-issue-<N>" --dangerously-skip-permissions -p "/impl <N>" workers with explicit cwd: <area-repo>, up to the parallel cap (clamped to HARD_CAP=5).
- Polls gh.viewState(N) + proc.isAlive(pid) → decideWorkerAction. CLOSED → cleanupWorker(name, cwd) + add to closed. Dead → gh.relabelHuman(N) + add to failed; siblings continue.
- Returns {closed, failed}. The inner /impl <N> workers handled their own commit + push + close-issue.sh; the orchestrator does NOT push or close from the main repo.

The default is -p 3, hard-capped at 5 inside the script (clampParallel(p, HARD_CAP)). Each worker is its own claude -w session consuming subscription quota. The user can raise to 5 with -p 5; values above 5 are silently clamped.
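The per-poll worker decision can be sketched as a pure function. The shape below is an assumption; the real decideWorkerAction in run-parallel.ts may carry more state.

```typescript
// decide-worker.ts — sketch of the per-poll decision table (hypothetical re-implementation).
type WorkerAction = "cleanup" | "relabel-human" | "keep-waiting";

export function decideWorkerAction(
  issueState: "OPEN" | "CLOSED",
  pidAlive: boolean,
): WorkerAction {
  if (issueState === "CLOSED") return "cleanup";  // worker finished: remove worktree + branch
  if (!pidAlive) return "relabel-human";          // worker died mid-flight: hand back to a human
  return "keep-waiting";                          // still running: poll again next cycle
}
```

Keeping the decision pure makes it trivially unit-testable, which matches the run-parallel.test.ts approach described above.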
| Phase | Main Claude tokens |
|---|---|
| /impl -p <K> or /impl -p <list> invocation | ~1K (skill prose + bash command) |
| Detached drain (could be hours) | 0 |
| Per-worker session | Independent quota, isolated context |
Status check anytime via tail -f ~/.local/state/impl-parallel/<batch-id>.log from any terminal — no Claude session needed.
Earlier /impl -p versions used Agent({isolation: "worktree"}) from the main session. Two problems surfaced (smoke-tested 2026-04-27, ai-brain#173):
- Agent({isolation: "worktree"}) clones from the parent session's cwd — when the main session is in the vault but the listed issues are area:plugin-brain-os, the worktree was a vault clone and isolation broke (or had to be skipped entirely).
- The Agent tool takes no cwd arg, so the workaround was "main session must cd into the area repo first" — but the harness resets cwd between Bash calls, so the workaround couldn't be encoded in skill prose reliably.

The script orchestrator pattern (mirroring /impl story after #161's same fix) sidesteps both: Bun.spawn(claude -w ..., { cwd }) takes an explicit per-issue cwd derived from inferRepoFromIssueArea(labels). Each worker lands in the correct repo regardless of where the orchestrator launched.
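To make that concrete, here is a sketch of how a worker command line could be assembled before handing it to Bun.spawn with an explicit cwd. The session-name and flag conventions are taken from the text above; workerArgv itself is a hypothetical helper, not a function from run-parallel.ts.

```typescript
// spawn-worker.ts — sketch of building one detached worker invocation (hypothetical helper).
export function workerArgv(issue: number, batchId: string): string[] {
  return [
    "claude",
    "-w", `parallel-${batchId}-issue-${issue}`,  // named session per issue
    "--dangerously-skip-permissions",
    "-p", `/impl ${issue}`,                      // worker runs issue mode end-to-end
  ];
}

// The real spawn passes the per-issue cwd explicitly, e.g.:
//   Bun.spawn(workerArgv(n, batchId), { cwd: repoForIssue });
// so isolation holds no matter where the orchestrator was launched.
```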
### List mode (/impl -p <N>,<M>,...)

The --issues <N>,<M>,... form covered above is the list-mode entry. The script's parseIssueList enforces the parser rules:
- Split on ",", trim whitespace, parse each token as a positive integer. An invalid token (abc, -1, 0) → throws FATAL: invalid issue number in list: <token>.
- Duplicates are deduped, emitting WARN: dropped duplicate #<N> from list per drop.
- list.length > HARD_CAP (5) → keeps the first 5 and emits WARN: clamped parallel list to first 5 (got <length>): dropped #<a>, #<b>, .... Does NOT silently drop.
- All issues must share the same area:* label via validateSameArea. Mismatch → throws FATAL: list spans multiple areas (#<X> area:<a>, #<Y> area:<b>). Run separate /impl invocations per area. Critical because Bun.spawn(claude -w ..., {cwd}) takes ONE cwd per worker — multiple areas would still need split invocations even though the script accepts per-issue cwd; the same-area assertion is documentation/safety, not a runtime constraint.
- List mode bypasses the owner:bot filter (same as the issue-mode list — the user named the issues explicitly). Only --count queue-pick mode applies the owner filter.
- Worker died (PID gone, issue still OPEN) → gh.relabelHuman(N) (remove status:in-progress + owner:bot, add status:ready + owner:human) + add to failed. The loop continues to siblings. The final notification surfaces the failure list.
- Worker closed cleanly → cleanupWorker(name, cwd) removes the worktree + branch left behind by the closed claude -w session.
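The list-mode parser rules above can be sketched as a small function. This is a hypothetical re-implementation under the stated rules, not the actual parseIssueList from run-parallel.ts; the warn callback is an assumed injection point.

```typescript
// parse-issue-list.ts — sketch of the parser rules: split, trim, assert, dedupe, clamp.
const HARD_CAP = 5;

export function parseIssueList(
  raw: string,
  warn: (msg: string) => void = console.warn,
): number[] {
  const out: number[] = [];
  for (const token of raw.split(",").map((t) => t.trim())) {
    const n = Number(token);
    if (!Number.isInteger(n) || n <= 0) {
      throw new Error(`FATAL: invalid issue number in list: ${token}`);
    }
    if (out.includes(n)) {
      warn(`WARN: dropped duplicate #${n} from list`);  // dedupe, loudly
      continue;
    }
    out.push(n);
  }
  if (out.length > HARD_CAP) {
    const dropped = out.slice(HARD_CAP).map((n) => `#${n}`).join(", ");
    warn(`WARN: clamped parallel list to first ${HARD_CAP} (got ${out.length}): dropped ${dropped}`);
    return out.slice(0, HARD_CAP);  // clamp, never silently
  }
  return out;
}
```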
### auto mode

/impl auto [-p N] is a one-shot dispatcher to the ralph-loop wrapper. The skill prose owns nothing else — no pick, no claim, no commit, no log. All of that happens inside the inner /impl body each iteration.
When the user types /impl auto (no -p):
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(ls -d ~/.claude/plugins/cache/brain-os-marketplace/brain-os/*/ 2>/dev/null | sort -V | tail -1)}"
PLUGIN_ROOT="${PLUGIN_ROOT%/}"
[ -f "$PLUGIN_ROOT/scripts/run-ralph.sh" ] || { echo "FATAL: run-ralph.sh not found (PLUGIN_ROOT=$PLUGIN_ROOT)"; exit 1; }
bash "$PLUGIN_ROOT/scripts/run-ralph.sh" "/impl" \
--completion-promise NO_MORE_ISSUES \
--max-iterations 50
When the user types /impl auto -p 3 (or any -p N, 1 ≤ N ≤ 5):
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$(ls -d ~/.claude/plugins/cache/brain-os-marketplace/brain-os/*/ 2>/dev/null | sort -V | tail -1)}"
PLUGIN_ROOT="${PLUGIN_ROOT%/}"
[ -f "$PLUGIN_ROOT/scripts/run-ralph.sh" ] || { echo "FATAL: run-ralph.sh not found (PLUGIN_ROOT=$PLUGIN_ROOT)"; exit 1; }
bash "$PLUGIN_ROOT/scripts/run-ralph.sh" "/impl -p 3" \
--completion-promise NO_MORE_ISSUES \
--max-iterations 50
The args are hardcoded by design — /impl auto is the no-knobs entry point. Use the manual /ralph-loop "/impl" --completion-promise ... --max-iterations ... form when you need different limits or a non-/impl body (see § Recommended invocation patterns → Custom-tuned drain).
Comma-list args are rejected in auto mode. auto semantically means "drain the queue until empty"; a fixed list has no queue and no <promise>NO_MORE_ISSUES</promise> condition. If the user types /impl auto 164,165 or /impl auto -p 164,165, abort with FATAL: /impl auto does not accept issue lists. Use /impl 164,165 (sequential) or /impl -p 164,165 (parallel) instead. and exit 1.
scripts/run-ralph.sh resolves the installed ralph-loop plugin path dynamically (no version pin) and execs setup-ralph-loop.sh with the forwarded args. setup-ralph-loop.sh writes the in-session state file .claude/ralph-loop.local.md (with frontmatter: active: true, iteration: 1, max_iterations: 50, completion_promise: "NO_MORE_ISSUES") and echoes the initial /impl prompt. Claude reads the prompt and runs /impl once. When Claude tries to stop, the ralph Stop hook intercepts:
- Response contains <promise>NO_MORE_ISSUES</promise> (inner tag exact-matched against --completion-promise) → let the stop succeed; the queue is drained.
- iteration >= max_iterations → let the stop succeed; safety cap reached.
- Otherwise → re-feed the prompt (/impl or /impl -p N) → next iteration.

### auto with -p N

/impl auto -p N forwards -p N into the inner /impl invocation, NOT into the ralph-loop wrapper. Each iteration runs /impl -p N, which dispatches scripts/run-parallel.ts to spawn N detached claude -w workers via Bun.spawn with explicit per-issue cwd from inferRepoFromIssueArea(labels, areaMap) — NOT Agent-tool subagents (subject to the 1 ≤ N ≤ 5 parallel cap, HARD_CAP=5 enforced by clampParallel). The ralph wrapper sees a single composite prompt — "/impl -p 3" is one prompt to ralph, even though it fans out N workers inside.
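A sketch of that Stop-hook decision as a pure function. The real hook is stop-hook.sh (shell + Perl); this TypeScript version only models the tag-match path and omits the bare-string fallback described in § Defaults.

```typescript
// ralph-stop-decision.ts — sketch of the Stop-hook branch logic described above.
export function shouldStop(
  responseText: string,
  iteration: number,
  maxIterations: number,
  completionPromise: string,
): boolean {
  // Extract the inner text of a <promise>...</promise> tag, if any.
  const m = responseText.match(/<promise>([\s\S]*?)<\/promise>/);
  if (m && m[1] === completionPromise) return true;  // queue drained: let the stop succeed
  if (iteration >= maxIterations) return true;       // safety cap reached
  return false;                                      // otherwise re-feed the prompt
}
```

Note how surrounding prose around the tag is harmless here, while a bare sentinel without tags never matches, which is exactly the failure mode the § Defaults contract warns about.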
Per iteration burns inference in the current Claude session for the outer /impl invocation that calls run-parallel.ts (NOT detached at the auto-mode layer — the auto-mode loop runs in the same Claude session and re-feeds via the ralph Stop hook). The inner run-parallel.ts dispatch IS detached via nohup, so each iteration's worker burn happens off-session like /impl story. Budget accordingly: a 50-iteration drain with -p 3 runs up to 50 outer iterations × 3 workers = 150 inner /impl invocations in the worst case before the safety cap triggers. The <promise>NO_MORE_ISSUES</promise> sentinel is what brings it home early.
auto is dispatch-only — it does not write its own outcome log row. Each inner /impl iteration writes a once or parallel row as normal. Do NOT add an action=auto row to {vault}/daily/skill-outcomes/impl.log; that would double-count.
The sentinel is <promise>NO_MORE_ISSUES</promise> in the assistant text response — see § Defaults → Empty backlog sentinel for the contract. Bash(echo) does NOT satisfy it; ralph reads transcript text blocks (type: text), not unix stdout. The auto mode inherits this contract from the inner /impl it loops.
Workers spawned by /impl story (scripts/run-story.ts) and /impl -p (scripts/run-parallel.ts) are full claude -w "<name>" --dangerously-skip-permissions -p "/impl <N>" sessions — not Agent-tool subagents. Each worker reads skills/impl/SKILL.md fresh on startup and runs the issue mode workflow end-to-end (claim → read → /tdd → commit → push → close), which already encodes:
- Issue read via gh issue view <N>.
- /tdd invocation per the artifact-type table at ~/work/brain-os-plugin/skills/tdd/SKILL.md.
- Commit message carrying Closes <tracker-repo>#<N> + a Co-Authored-By line.
- A final #<N> done (or #<N> FAILED: <reason>) on a line by itself so external watchers can latch.

The worker's cwd is set explicitly by the parent orchestrator via Bun.spawn(claude -w ..., { cwd }), derived from the issue's area:* label (inferRepoFromIssueArea(labels, areaMap)).
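For illustration, the area-label lookup could look like the following. The function name and signature appear in the text above; the map shape, error messages, and example paths are assumptions.

```typescript
// infer-repo.ts — sketch of area:* label → worker cwd resolution (details assumed).
export function inferRepoFromIssueArea(
  labels: string[],
  areaMap: Record<string, string>,  // e.g. { "plugin-brain-os": "/path/to/brain-os-plugin" }
): string {
  const area = labels.find((l) => l.startsWith("area:"));
  if (!area) {
    throw new Error("FATAL: issue has no area:* label, cannot infer worker cwd");
  }
  const repo = areaMap[area.slice("area:".length)];
  if (!repo) {
    throw new Error(`FATAL: no repo mapped for ${area}`);
  }
  return repo;
}
```

Failing loudly on a missing label or mapping is the safe choice here: a worker spawned in the wrong cwd would commit into the wrong repo.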
Handovers, incomplete grills, mid-draft writing stay owner:human. /impl (in once and parallel modes) filters on owner:bot — handovers are excluded by construction. Codified to prevent label drift where someone tags a half-finished grill owner:bot and an autonomous worker picks it up mid-thought.
The rule, stated mechanically:
| Artifact state | Owner label | Pickup path |
|---|---|---|
| Skill / script / hook / plist with closed acceptance criteria | owner:bot | /impl (any mode) |
| Handover doc (continuation contract for a future session) | owner:human | /pickup (HITL) |
| Grill session not yet settled | owner:human | /pickup (HITL) |
| Mid-draft Substack / vault essay | owner:human | /pickup (HITL) |
| Anything that needs human judgment to close | owner:human | /pickup (HITL) |
If /impl <N> is invoked on an owner:human issue (legal — owner filter bypassed in issue mode), the operator is making an explicit override decision. Honor it: run /tdd, commit, push, close. The mid-flow rule still applies as guidance, but the explicit override carries the day. Do NOT prompt the user to confirm; do NOT silently fall back to /pickup.
### Defaults

- Blocked by #M and M is open → re-label status:ready and skip; the dependency must land first.
- Issue already claimed (status:in-progress already set, owner is a human) → do not steal it.
- Commit footers use Closes <tracker-repo>#N, not Closes #N. Cross-repo close-on-merge requires the explicit repo prefix because the commit lands in the code repo (e.g. brain-os-plugin) but the issue lives in the tracker repo. bash "$CLAUDE_PLUGIN_ROOT/scripts/gh-tasks/close-issue.sh" <N> is still required as a belt-and-braces step because cross-repo Closes doesn't always auto-fire — and the helper additionally strips ghost status:* labels that bare gh issue close leaves behind.
- git pull --rebase is mandatory before push. CI auto-release pipelines bump plugin.json on every push. Skipping the rebase guarantees a non-fast-forward push failure.
- Empty backlog sentinel needs <promise> tags — emit exactly <promise>NO_MORE_ISSUES</promise> in the assistant text response. The ralph-loop Stop hook (stop-hook.sh:130-141) extracts the inner text via Perl regex and exact-matches against --completion-promise "NO_MORE_ISSUES". Bare NO_MORE_ISSUES without tags only matches when the entire response is literally that string and nothing else — adding any surrounding prose (which Claude almost always does) breaks the match. Stdout / Bash(echo) is also wrong: ralph reads transcript text blocks (type: text), not stdout.
- --team is DEFERRED. If a user passes --team, print "team mode is deferred to Phase G — see grill 2026-04-25" and exit 1. Don't fall back to -p silently — the user explicitly asked for the unimplemented mode.
- If failures trace back to the TDD cycle itself (reactive improve: encoded feedback — patches keep landing), fix /tdd, not /impl.

### Outcome log

Follow skill-spec.md § 11. Append to {vault}/daily/skill-outcomes/impl.log:
{date} | impl | {action} | ~/work/brain-os-plugin | {issue-or-batch-ref} | commit:{hash} | {result}
- action: once (single issue), parallel (one of N issues from a parallel run, including the parallel list -p <list>), or drain (the orchestrator's wrap-up after a parallel batch — output_path = batch label, e.g. parallel-N3). A sequential list (/impl <N>,<M>,...) writes one once row per issue with mode=list-sequential in the optional fields. NOTE: auto is NOT a valid action — /impl auto is dispatch-only; its inner iterations write once or parallel rows as normal.
- issue-or-batch-ref: <tracker-repo>#N for once/parallel; the batch label for drain.
- result: pass if the issue closed AND no reactive improve: encoded feedback — commit lands against the touched artifacts within 48h; partial if the user manually corrected before close; fail if the cycle aborted or a reactive patch was needed.
- Optional fields: mode=parallel (or mode=list-sequential for the sequential list form), n=N (parallel size, or list length post-dedupe-and-clamp), area=<label>, tdd_cycles=K (red-green count from /tdd).

Examples:

2026-04-27 | impl | once | ~/work/brain-os-plugin | ai-brain#147 | commit:abc1234 | pass
2026-04-27 | impl | parallel | ~/work/brain-os-plugin | ai-brain#148 | commit:def5678 | pass | mode=parallel,n=3
2026-04-27 | impl | once | ~/work/brain-os-plugin | ai-brain#164 | commit:9abcdef | pass | mode=list-sequential,n=2
2026-04-27 | impl | once | ~/work/brain-os-plugin | ai-brain#165 | commit:1234abc | pass | mode=list-sequential,n=2
2026-04-27 | impl | parallel | ~/work/brain-os-plugin | ai-brain#164 | commit:9abcdef | pass | mode=parallel,n=2
2026-04-27 | impl | parallel | ~/work/brain-os-plugin | ai-brain#165 | commit:1234abc | pass | mode=parallel,n=2
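For illustration, a formatter producing rows in this shape might look like the sketch below. This is a hypothetical helper; the field names follow the schema above, and the entry type is an assumption.

```typescript
// impl-log-row.ts — sketch of the pipe-separated outcome-log row (per skill-spec § 11 shape).
interface ImplLogEntry {
  date: string;
  action: "once" | "parallel" | "drain";
  repo: string;
  ref: string;                      // <tracker-repo>#N or batch label
  commit: string;
  result: "pass" | "partial" | "fail";
  extras?: Record<string, string>;  // optional fields, e.g. { mode: "parallel", n: "3" }
}

export function formatRow(e: ImplLogEntry): string {
  const cols = [e.date, "impl", e.action, e.repo, e.ref, `commit:${e.commit}`, e.result];
  if (e.extras && Object.keys(e.extras).length > 0) {
    cols.push(Object.entries(e.extras).map(([k, v]) => `${k}=${v}`).join(","));
  }
  return cols.join(" | ");
}
```

Keeping the optional fields in a single trailing column preserves the fixed seven-column prefix that downstream pass-rate tooling would parse.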
If result != pass, auto-invoke /brain-os:improve impl per the skill-spec § 11 auto-improve rule. The eval gate inside /improve reverts any change that drops pass rate, so auto-apply is safe.