npx claudepluginhub oduffy-delphi/coordinator-claude

# Delegate Execution — Dispatch Enriched Stubs to Executor Agents

Usage: `[stub-ids|directory-path|'all']`

Hand enriched, reviewed stub specifications to executor agents for implementation, selecting the appropriate model (Sonnet or Opus) based on stub complexity.

## Instructions

When invoked, dispatch executor agents to implement enriched and reviewed stubs.

If `$ARGUMENTS` is provided:

- Specific stub IDs (e.g., "2A 2B 2C") → execute only those stubs
- A directory path → execute all ready stubs in that directory
- "all" → execute everything with status "Enriched and reviewed"

### Phase 1: Read Tracker and Identify Ready Stubs
Before dispatching any executors, mark every stub that is about to be executed:
The executor agents will also mark their individual stub documents (per the executor's write-ahead protocol), creating two layers of breadcrumbs. If a session crashes mid-execution, both the tracker and the stub itself show "in progress."
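The coordinator-side half of this breadcrumb trail can be sketched as a write-ahead update to the tracker README. This is a minimal sketch, assuming the tracker stores one markdown table row per stub; `mark_execution_started` and the exact row format are illustrative, not part of the executor protocol:

```python
import re
from pathlib import Path

def mark_execution_started(tracker: Path, stub_id: str, attempt: int) -> None:
    """Write-ahead: flip the stub's tracker row to 'Execution in progress'
    BEFORE dispatching, so a crashed session still shows what was underway."""
    text = tracker.read_text()
    # Match the stub's table row, e.g. "| chunk-2A | Enriched and reviewed | ... |"
    pattern = re.compile(rf"^(\|\s*{re.escape(stub_id)}\s*\|)\s*[^|]*\|", re.MULTILINE)
    replacement = rf"\1 Execution in progress (attempt {attempt}/3) |"
    new_text, count = pattern.subn(replacement, text, count=1)
    if count == 0:
        raise ValueError(f"stub {stub_id!r} not found in {tracker}")
    tracker.write_text(new_text)
```

The executor then writes its own mark to the stub document, giving the second breadcrumb layer independently of the coordinator.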
Default: Sonnet. Always. The enrichment pipeline exists precisely so execution can be cheap. By the time a stub reaches this phase, it has been through enrichment (exact code sketches, line numbers, file paths) and domain review (Sid/Camelia/Palí corrections). The Opus judgment has already been spent — the executor is a typist following a blueprint.
| Stub character | Model | Rationale |
|---|---|---|
| Any enriched+reviewed stub | `model: "sonnet"` | The spec IS the Opus judgment. Sonnet follows blueprints reliably. |
| Very large + natural seams — API surfaces with independent endpoints, feature sets with clear boundaries | `model: "sonnet"` with Opus tech lead (see below) | Too large for one executor; Opus coordinates, Sonnets type. |
Dispatched executors are always Sonnet. No exceptions. The model parameter on executor dispatch should never be set to "opus". The hierarchy is: Opus oversees, Sonnet types.
If a stub genuinely needs Opus-level judgment to execute (unresolved ambiguity, NEEDS_COORDINATOR markers, cross-file coherence decisions not captured in code sketches), the EM handles it directly — don't dispatch it. The coordinator IS the Opus. If you find yourself wanting to dispatch an Opus executor, ask: "Is the spec incomplete?" If yes, fix the spec. If no, the EM can do the work inline or supervise Sonnet executors directly.
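The rubric above can be sketched as a dispatch planner. `Stub` and `plan_dispatch` are hypothetical names, and the field names are assumptions about what the coordinator knows at this point:

```python
from dataclasses import dataclass, field

@dataclass
class Stub:
    stub_id: str
    status: str                      # e.g. "Enriched and reviewed"
    needs_coordinator: bool = False  # unresolved NEEDS_COORDINATOR markers
    natural_seams: list[str] = field(default_factory=list)  # independent sub-areas

def plan_dispatch(stub: Stub) -> dict:
    """Apply the rubric: executors are always Sonnet; Opus never executes.
    Opus-level judgment is either handled inline by the coordinator or used
    as a tech lead coordinating Sonnet executors across natural seams."""
    if stub.needs_coordinator:
        return {"dispatch": False, "reason": "spec incomplete: fix the spec or handle inline"}
    if len(stub.natural_seams) > 1:
        return {"dispatch": True, "model": "sonnet", "tech_lead": "opus",
                "executors": len(stub.natural_seams)}
    return {"dispatch": True, "model": "sonnet", "tech_lead": None, "executors": 1}
```

Note that no branch ever returns `model: "opus"` for an executor; the only places Opus appears are the coordinator itself and the tech-lead role.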
For independent stubs (no shared dependencies):

- `run_in_background: true`
- `subagent_type: "executor"` and the model selected by the rubric above

For dependent stubs (shared files or sequential prerequisites):
For very large stubs with natural seams (Opus tech lead pattern):
For each executor that completes:
Re-dispatch budget: Each stub gets a maximum of 3 dispatch attempts (initial + 2 re-dispatches). This budget is shared across all failure modes (BLOCKED spec fixes, THRASHING re-dispatch, validation self-correction) and supersedes the previous THRASHING-specific rule ("if second executor also aborts → escalate to PM") — the universal 3-budget is the single source of truth.
Track attempts in the tracker README status column (coordinator-owned), not the stub's own status line (which the executor overwrites with its write-ahead format):
Tracker README: | chunk-2A | Execution in progress (attempt 2/3) | ... |
After the 3rd attempt, regardless of outcome:
- Record the outcome in the stub's `## Execution History` section

Exception: The Phase 3 step-4 self-correction loop for deterministic validation failures (type errors, lint) counts as part of one dispatch attempt, not as separate attempts. The budget counts coordinator-level re-dispatches, not executor-internal fix iterations.
Worked example — how budgets nest: attempt 1 is dispatched; the executor hits two type errors and self-corrects both (still attempt 1). It then reports BLOCKED; the coordinator fixes the spec and re-dispatches (attempt 2). The second executor is aborted for THRASHING and re-dispatched once more (attempt 3). If attempt 3 also fails, the budget is exhausted and the stub escalates to the PM.
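A sketch of the coordinator-level budget loop under these rules. `dispatch` stands in for whatever actually launches an executor, and the outcome strings are illustrative:

```python
def execute_with_budget(stub_id: str, dispatch, max_attempts: int = 3) -> dict:
    """Coordinator-level dispatch loop: initial attempt + 2 re-dispatches,
    shared across BLOCKED, THRASHING, and validation failure modes.
    Executor-internal self-correction iterations do NOT consume attempts."""
    for attempt in range(1, max_attempts + 1):
        outcome = dispatch(stub_id, attempt)  # one dispatch = one attempt
        if outcome == "DONE":
            return {"stub": stub_id, "attempts": attempt, "result": "DONE"}
        # BLOCKED, THRASHING, and failed validation all burn the same budget
    return {"stub": stub_id, "attempts": max_attempts, "result": "ESCALATE_TO_PM"}
```

The point of the single shared counter is that a stub cannot accumulate three BLOCKED fixes and then three THRASHING re-dispatches; every coordinator-level dispatch draws from the same pool of three.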
### Phase 3.0: Post-Executor Haiku Verification
Before the coordinator reads files manually, dispatch a Haiku agent to do the mechanical data-gathering. The Haiku agent receives the executor's completion report and the stub's acceptance criteria, then:
1. Runs `git diff --name-only` against the pre-execution state. Do the modified files match what the stub specified?
2. Reads the stub's `## Acceptance Criteria` section and, for each `AC-N:` item, reports:
   `AC-N | criterion text | ✓ checked / ✗ unchecked | evidence or gap description`
3. If the stub has no `## Acceptance Criteria` section, the Haiku agent reports this absence in its structured output. The coordinator treats a missing section as a signal to investigate the enrichment — not as a verification failure.

The coordinator then performs the semantic spec compliance check (step 2 below) using the Haiku's structured data — not by reading every file from scratch.
Why Haiku: git diff, tsc --noEmit, and reading file:line are mechanical. Delegating this data-gathering saves coordinator context for the judgment calls (spec intent matching, gap identification).
Dispatch template:
    Agent(
        model: "haiku",
        prompt: """
        You are a mechanical verification agent. Check the following:

        EXECUTOR REPORT:
        {paste executor completion report}

        STUB ACCEPTANCE CRITERIA:
        {paste stub's ## Acceptance Criteria section, or "NONE — report absence"}

        TASKS:
        1. Run: git diff --name-only {pre-execution-commit}..HEAD
           Report: files changed (expected vs actual from executor report)
        2. Run project validation: {validation command from stub or project config}
           If no explicit command, use the Validation Matrix: tsconfig.json → npx tsc --noEmit,
           pyproject.toml → poetry run python -m py_compile, package.json with pnpm → pnpm typecheck.
           If no project signal found, report validation as SKIPPED (do not assume passing).
           Report: pass/fail/skipped with output
        3. For each AC-N item, verify against current file state:
           Report: AC-N | criterion | PASS/FAIL | evidence

        OUTPUT FORMAT (write to stdout, not to a file):

        ## Haiku Verification Report

        ### Files Changed
        Expected: [list from executor report]
        Actual: [list from git diff]
        Match: yes/no

        ### Validation
        Command: {command}
        Result: PASS/FAIL/SKIPPED
        Output: {relevant output, truncated to 50 lines}

        ### Acceptance Criteria
        | AC | Criterion | Result | Evidence |
        |----|-----------|--------|----------|
        | AC-1 | ... | PASS/FAIL | ... |

        ### Missing Criteria
        [If stub has no ## Acceptance Criteria section, state:
        "Stub lacks Acceptance Criteria section — flag for coordinator"]
        """
    )
If Haiku reports "Stub lacks Acceptance Criteria section":
On DONE/DONE_WITH_CONCERNS report:
- Hand the stub to `/review-dispatch` for post-execution review
- Reviews are dispatched by the review command (`/review-dispatch`), not the EM manually

On BLOCKED report:

- Record the attempt in `metadata.tried_and_abandoned` via TaskUpdate. Format: "Tried: [attempted approach] — Blocked: [blocker]"
- When re-dispatching after a spec fix, if `tried_and_abandoned` is non-empty, prepend to the executor prompt:
ANTI-REPETITION: The following approaches were tried on this stub:
{paste tried_and_abandoned entries}
The spec has been updated to address the blocker. Use the updated spec, not the old approach.
On THRASHING REPORT (aborted by watchdog or self-detected):
- Record the approach in `metadata.tried_and_abandoned` via TaskUpdate. Format: "Tried: [approach] — Failed: [last error/state]". This survives compaction and prevents re-dispatched executors from repeating dead approaches.
- Self-detected (executor caught it) yields a more reliable diagnosis than watchdog-detected (executor was unaware) — weight accordingly.
- On re-dispatch, prepend to the executor prompt:

ANTI-REPETITION: The following approaches were tried and failed on this stub:
{paste tried_and_abandoned entries}
Do NOT repeat these approaches. See stub ## Execution Post-Mortem for details.
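The anti-repetition preamble can be assembled mechanically from the recorded entries. A sketch with a hypothetical helper name:

```python
def anti_repetition_preamble(tried_and_abandoned: list[str]) -> str:
    """Build the prompt preamble for a re-dispatched executor so it does
    not repeat approaches recorded under metadata.tried_and_abandoned."""
    if not tried_and_abandoned:
        return ""
    lines = ["ANTI-REPETITION: The following approaches were tried and failed on this stub:"]
    lines += [f"- {entry}" for entry in tried_and_abandoned]
    lines.append("Do NOT repeat these approaches. See stub ## Execution Post-Mortem for details.")
    return "\n".join(lines)
```

Returning an empty string for an empty list keeps the happy path clean: first dispatches carry no preamble at all.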
After all stubs are executed:
Executors own their tracker updates (status, commit hashes). The coordinator's role here is verification, not data entry — but verification must be thorough.
5.1: Dispatch tracker verification
5.2: Canonical tracker sweep verification

For each completed stub, grep its codename across canonical trackers to confirm the executor ran its sweep:
    grep -in "<codename>" docs/project-tracker.md tasks/*/todo.md docs/roadmap.md ROADMAP.md 2>/dev/null
- If `docs/project-tracker.md` references the work and wasn't updated, that's a gap — update it

## Execution Summary
**Stubs executed:** N of M
**Stubs blocked:** K (with reasons)
**Validation:** Pass/Fail
### Completed
| Stub | Status | Notes |
|------|--------|-------|
| ... | Done | ... |
### Blocked (if any)
| Stub | Blocker | Stub Needs |
|------|---------|------------|
| ... | ... | ... |
### Next Steps
- [What remains to be done]
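Assembling the summary in the format above can be sketched as follows; `render_execution_summary` and its field names are assumptions, not an existing helper:

```python
def render_execution_summary(completed: list[dict], blocked: list[dict],
                             total: int, validation_pass: bool) -> str:
    """Render the final Execution Summary markdown from per-stub records."""
    out = ["## Execution Summary", "",
           f"**Stubs executed:** {len(completed)} of {total}",
           f"**Stubs blocked:** {len(blocked)}",
           f"**Validation:** {'Pass' if validation_pass else 'Fail'}",
           "", "### Completed",
           "| Stub | Status | Notes |", "|------|--------|-------|"]
    out += [f"| {c['stub']} | {c['status']} | {c.get('notes', '')} |" for c in completed]
    if blocked:  # section only appears when something is blocked
        out += ["", "### Blocked",
                "| Stub | Blocker | Stub Needs |", "|------|---------|------------|"]
        out += [f"| {b['stub']} | {b['blocker']} | {b.get('needs', '')} |" for b in blocked]
    return "\n".join(out)
```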
- `/enrich-and-review` must be run before this command — stubs must be enriched and reviewed
- `/review-dispatch` handles the review step that precedes execution
- Completed stubs are handed to `/review-dispatch` (see Phase 3, step 5)