Use when executing implementation plans with multiple tasks - dispatches per task with dependency-aware parallel scheduling
From mbscode (`npx claudepluginhub mbstools/mbscode`). This skill uses the workspace's default tool permissions.
Dispatch a fresh subagent per task with isolated context. Two-stage review (spec then quality) after each task.
Why subagents: Fresh context per task prevents confusion and context pollution. You construct exactly what each subagent needs — they never inherit your session history. This preserves your context for coordination.
Dispatch mechanism: Use the Agent tool (Claude Code) or Task tool to create subagents. Each subagent receives a prompt from the templates below. If your platform has no subagent dispatch, fall back to mbscode:executing-plans.
Fallback: mbscode:executing-plans (inline). Prerequisite: mbscode:writing-plans first.

Read plan → Extract ALL tasks with dependencies → Create todos with blockers
↓
DISPATCH LOOP:
Find READY tasks — tasks whose blockers are ALL complete
↓
Ready tasks > 1? → Dispatch in PARALLEL (each gets its own subagent)
Ready tasks = 1? → Dispatch single subagent
Ready tasks = 0? → All tasks done or deadlocked (check for cycles)
↓
Per dispatched task (runs in parallel when multiple):
CHECK Pre-Task Gates (GATES.md)
↓
**Resume integrity check:** If resuming, compare plan Resume with actual checkboxes. Trust checkboxes over Resume. If a [x] step has an unchecked verification, re-run it. Log discrepancies.
↓
Dispatch implementer subagent
- Give: full task text, project context, relevant file contents
- Never: make them read the plan file
↓
Implementer asks questions? → Answer, re-dispatch with more context
↓
Implementer completes → check status:
DONE → proceed to review
DONE_WITH_CONCERNS → read concerns, address if needed, then review
NEEDS_CONTEXT → provide context, re-dispatch same model
BLOCKED → assess: more context? stronger model? smaller task? escalate to human
↓
Stage 1: Dispatch SPEC REVIEWER
- Gets: task spec + implementer's changes (git diff)
- Checks: everything in spec is implemented, nothing extra added
- If issues → implementer fixes → spec reviewer re-reviews → loop
↓
Stage 2: Dispatch CODE QUALITY REVIEWER
- Gets: code changes (git diff)
- Checks: clean code, DRY, no magic numbers, error handling, testability
- If issues → implementer fixes → quality reviewer re-reviews → loop
↓
CHECK Post-Task Gates (GATES.md) — use the mbscode:verification-before-completion process
↓
Mark task complete → check all steps [x] in plan, update Resume
↓
Update SESSION.md pointer if needed
↓
Any tasks completed? → loop back to DISPATCH LOOP (new tasks may be unblocked)
↓
Milestone boundary? → Invoke mbscode:autonomous-review
↓
All tasks done → Invoke mbscode:finishing-a-development-branch
Plans define task dependencies. Use them to maximize parallelism: when reading the plan, identify which tasks depend on which. Parallel dispatch is safe when tasks have no file overlap:
| Scenario | Safe to parallelize? |
|---|---|
| Tasks touch completely different files | Yes |
| Tasks read same files but write different ones | Maybe — safe only if neither task adds to shared files (e.g., imports). When in doubt, run sequentially |
| Tasks write to the same file | No — add dependency |
| Unclear file boundaries | No — run sequentially |
If the plan doesn't specify file boundaries, ask yourself: could these tasks edit the same file? If uncertain, run them sequentially.
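When in doubt, the table above reduces to a conservative set check. A minimal sketch, assuming each task's planned `reads` and `writes` file lists can be extracted from the plan (the dict shape is illustrative, not a plan format):

```python
def safe_to_parallelize(task_a, task_b):
    """Conservative check: parallel dispatch is safe only when neither
    task writes a file the other touches (reads or writes). The table's
    'Maybe' row (shared reads, disjoint writes) resolves to sequential
    here, matching 'when in doubt, run sequentially'."""
    writes_a, writes_b = set(task_a["writes"]), set(task_b["writes"])
    touches_a = writes_a | set(task_a["reads"])
    touches_b = writes_b | set(task_b["reads"])
    # Writing a file the other task even reads is treated as unsafe.
    return not (writes_a & touches_b) and not (writes_b & touches_a)
```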
Plan tasks:
Task 1: Project scaffolding (no deps)
Task 2: Database models (depends on: Task 1)
Task 3: Database tests (depends on: Task 2)
Task 4: API route handlers (depends on: Task 2)
Task 5: API route tests (depends on: Task 4)
Task 6: README + docs (depends on: Task 1)
Dispatch rounds:
Round 1: [Task 1] — only one ready
Round 2: [Task 2, Task 6] — both unblocked, different files (safe)
Round 3: [Task 3, Task 4] — both unblocked, different files (safe)
Round 4: [Task 5] — unblocked after Task 4
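The rounds above can be reproduced mechanically from the dependency graph. A sketch, assuming task IDs and their `deps` lists have been extracted from the plan; in practice you would re-evaluate readiness after each completion rather than batching fixed rounds:

```python
def dispatch_rounds(deps):
    """Group tasks into dispatch rounds: a task is ready once all of
    its blockers are complete. Raises on a deadlock (dependency cycle)."""
    done, rounds = set(), []
    pending = set(deps)
    while pending:
        ready = sorted(t for t in pending if set(deps[t]) <= done)
        if not ready:
            raise RuntimeError(f"deadlock: cycle among tasks {sorted(pending)}")
        rounds.append(ready)
        done |= set(ready)
        pending -= set(ready)
    return rounds

# The example plan above, as a dependency map:
plan = {1: [], 2: [1], 3: [2], 4: [2], 5: [4], 6: [1]}
# dispatch_rounds(plan) → [[1], [2, 6], [3, 4], [5]]
```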
Important: When extracting task text from the plan, strip - [ ] / - [x] checkbox prefixes. Present steps as numbered instructions (1. 2. 3.) not as checkboxes. Subagents implement — the orchestrator tracks progress.
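The checkbox-to-numbered-steps conversion can be done mechanically. A sketch (the regex and helper name are illustrative):

```python
import re

def checkboxes_to_steps(task_text):
    """Convert '- [ ]' / '- [x]' checkbox lines into numbered steps,
    leaving non-checkbox lines untouched."""
    steps, n = [], 0
    for line in task_text.splitlines():
        m = re.match(r"\s*-\s*\[[ xX]\]\s*(.*)", line)
        if m:
            n += 1
            steps.append(f"{n}. {m.group(1)}")
        else:
            steps.append(line)
    return "\n".join(steps)
```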
You are a subagent implementing a specific task. Do NOT read or follow the bootstrap skill (using-mbscode). Execute only the task below.
## Task
[Full task text from plan, with checkboxes converted to numbered steps]
## Project Context
[1-2 sentences: what the project is, relevant architecture]
## Relevant Files
**Context injection rules:**
- Files being **directly modified** by this task: include full contents
- Files **touched as dependencies** (imports, callers): include function signatures, exported interfaces, and type definitions only
- **Test files**: include describe/it block structure only (not full test implementations)
- If the subagent returns `NEEDS_CONTEXT` status, provide the requested additional file contents in re-dispatch
## Rules
- Follow TDD: write failing test → implement → verify
- Commit after each logical unit
- No dead code, unused imports, or commented-out code
- Named constants for non-obvious literals (0, 1, -1 exempt)
- No duplicated logic — extract if it appears twice
- Simplest solution — if code can be removed without losing functionality, remove it
- If you need more context, respond with status NEEDS_CONTEXT and list what you need
- If you're blocked, respond with status BLOCKED and explain why
- When done, respond with status DONE or DONE_WITH_CONCERNS
## Status Response Format
STATUS: DONE | DONE_WITH_CONCERNS | NEEDS_CONTEXT | BLOCKED
SUMMARY: [one line — what you did]
CONCERNS: [only if DONE_WITH_CONCERNS — what worries you]
NEEDS: [only if NEEDS_CONTEXT — what you need]
BLOCKER: [only if BLOCKED — what's stopping you]
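On the orchestrator side, this block is easy to parse defensively. A sketch; a missing or malformed STATUS falls back to DONE_WITH_CONCERNS, matching the failure-handling guidance below:

```python
def parse_status(response):
    """Parse a subagent's status block into a dict of known fields.
    A missing or invalid STATUS is treated as DONE_WITH_CONCERNS,
    signalling the orchestrator to inspect the diff manually."""
    fields = {}
    for line in response.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key = key.strip().upper()
            if key in {"STATUS", "SUMMARY", "CONCERNS", "NEEDS", "BLOCKER"}:
                fields[key] = value.strip()
    valid = {"DONE", "DONE_WITH_CONCERNS", "NEEDS_CONTEXT", "BLOCKED"}
    if fields.get("STATUS") not in valid:
        fields["STATUS"] = "DONE_WITH_CONCERNS"
    return fields
```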
Review these changes against the task specification.
## Task Spec
[Full task text from plan]
## Changes
[git diff output]
## Your Job
1. Is everything in the spec implemented? List any gaps.
2. Is anything added that's NOT in the spec? List any extras.
3. Do acceptance criteria pass?
## Response Format
SPEC_COMPLIANT: YES | NO
GAPS: [missing items, or "none"]
EXTRAS: [unspecified additions, or "none"]
NOTES: [any observations]
Review these code changes for quality.
## Changes
[git diff output]
## Check These Dimensions
1. **Correctness** — logic errors, edge cases, error handling
2. **Simplicity** — dead code, magic numbers, over-abstraction
3. **DRY** — duplicated logic
4. **Naming** — clear, descriptive names
5. **Testability** — can this be tested in isolation?
6. **Security** — hardcoded secrets, unsanitized input, eval with external data
7. **Error handling** — bare catches, swallowed errors, unhandled async exceptions
## Response Format
APPROVED: YES | NO
ISSUES: [list with severity: CRITICAL / IMPORTANT / MINOR]
STRENGTHS: [what's done well]
Use the appropriate model tier for each role:
| Role | Complexity Signal | Model Tier |
|---|---|---|
| Implementer | ≤ 2 files, no cross-file dependencies | fast (cheapest/fastest available) |
| Implementer | 3+ files or cross-file imports | default (standard quality) |
| Spec reviewer | Any size (mechanical comparison) | fast (cheapest/fastest available) |
| Code quality reviewer | Any size (requires judgment) | default (standard quality) |
| Architecture/design review | 5+ files or cross-module changes | premium (highest capability) |
Check the `## Model Preferences` section of PROJECT.md for project-specific overrides. If none is defined, use platform defaults.
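The table plus overrides can be sketched as a lookup. The tier names and the shape of the `overrides` dict are assumptions, not a platform API:

```python
def model_tier(role, files_touched=0, cross_file=False, overrides=None):
    """Resolve a model tier from the table above. Tier names ('fast',
    'default', 'premium') stand in for whatever the platform exposes;
    overrides come from PROJECT.md's Model Preferences, if any."""
    overrides = overrides or {}
    if role in overrides:
        return overrides[role]
    if role == "implementer":
        # 3+ files or cross-file imports warrants the standard tier.
        return "default" if files_touched >= 3 or cross_file else "fast"
    if role == "spec_reviewer":
        return "fast"      # mechanical spec comparison
    if role == "architecture_review":
        return "premium"   # cross-module judgment
    return "default"       # code quality review and anything else
```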
Subagents may not follow instructions perfectly. Handle gracefully:
| Situation | Action |
|---|---|
| Status response missing/malformed | Treat as DONE_WITH_CONCERNS, inspect the diff manually |
| Subagent ignores TDD | Re-dispatch with stronger emphasis, or fix tests yourself |
| 2+ re-dispatches for same task | Break task into smaller pieces, or execute inline |
| Subagent modifies wrong files | Revert changes, re-dispatch with explicit file list |
Max 2 re-dispatches per task. After that, either simplify the task or execute it inline with mbscode:executing-plans.
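The re-dispatch budget can be enforced with a small loop. A sketch, where `dispatch` stands in for your platform's subagent call and the ESCALATE status is illustrative, not part of the status protocol above:

```python
MAX_REDISPATCHES = 2

def run_task(task, dispatch):
    """Dispatch a task, re-dispatching on NEEDS_CONTEXT or BLOCKED up
    to the budget. After that, hand the task back for simplification
    or inline execution with mbscode:executing-plans."""
    extra_context = []
    for attempt in range(MAX_REDISPATCHES + 1):
        result = dispatch(task, extra_context)
        if result["STATUS"] in ("DONE", "DONE_WITH_CONCERNS"):
            return result
        if result["STATUS"] == "NEEDS_CONTEXT":
            extra_context.append(result.get("NEEDS", ""))
        # BLOCKED: fall through and retry with whatever was learned
    return {"STATUS": "ESCALATE", "REASON": "re-dispatch budget exhausted"}
```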
If a subagent reports BLOCKED or a step is impossible, follow the Plan Modifications protocol in mbscode:executing-plans.
Not all platforms support subagent dispatch. If your platform has no Task/subagent tool (e.g., Cursor), fall back to mbscode:executing-plans (inline execution) instead.
Called by: mbscode:writing-plans (execution handoff)
During execution: mbscode:autonomous-review at milestone boundaries
After all tasks: mbscode:finishing-a-development-branch
| Thought | Reality |
|---|---|
| "Skip spec review, the code looks right" | Spec compliance catches scope drift. Never skip. |
| "Skip quality review, tests pass" | Tests don't catch style, naming, or architecture issues |
| "Start quality review before spec review" | Wrong order. Spec first, quality second. |
| "I'll fix issues myself instead of re-dispatching" | Context pollution. Use a subagent. |
| "Run implementers in parallel" | Only when dependency graph confirms no file overlap. Same-file tasks MUST be sequential. |
| "The subagent said DONE so it's done" | Verify with spec review. Trust but verify. |
| "Close enough on spec compliance" | If reviewer found gaps, they're real. Fix them. |