Decomposes features into parallel sub-tasks, plans each via planner agents, builds via parallel builder agents. Use for multi-part features requiring coordinated implementation.
```shell
npx claudepluginhub pipemind-com/pipemind-marketplace --plugin spec-driven-development
```

This skill is limited to using the following tools:
Coordinates planner and builder agents to implement multi-part features. The orchestrator decomposes, delegates, and tracks — it never writes production code or makes design decisions.
Core principle: The orchestrator coordinates — it never writes production code or makes design decisions. All code changes flow through builder subagents.
Before anything else, verify all prerequisites exist. FAIL and stop if any are missing:
- CLAUDE.md exists in project root — if not: "Run /compiling-project-settings first"
- .claude/agents/planner.md exists — if not: "Run /compiling-planner-agent first"
- .claude/agents/builder.md exists — if not: "Run /compiling-builder-agent first"
- A test runner command in package.json scripts, Makefile, Cargo.toml, then CLAUDE.md — if not found: halt with "No test runner found — add a test command to package.json, Makefile, or CLAUDE.md"
- A test file naming convention (e.g. *.test.ts, *_test.go, test_*.py) — if not found: halt with "No test file naming convention found — add example to CLAUDE.md"
- Test scenario specs in specs/ (produced by /defining-test-scenarios) — if none found: halt with "No test scenario specs in specs/ — run /defining-test-scenarios first"

Once verified, read CLAUDE.md and list available docs/*.md files — these will be passed as context to subagents.
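The gate above can be sketched as a pure check. The paths and remedy messages come from the list in this skill; the helper itself is illustrative, not part of the skill:

```typescript
// Illustrative sketch of the prerequisite gate. Paths and remedies mirror
// the checklist above; the function and its shape are assumptions.
type Prereq = { path: string; remedy: string };

const PREREQS: Prereq[] = [
  { path: "CLAUDE.md", remedy: "Run /compiling-project-settings first" },
  { path: ".claude/agents/planner.md", remedy: "Run /compiling-planner-agent first" },
  { path: ".claude/agents/builder.md", remedy: "Run /compiling-builder-agent first" },
];

// Returns one failure message per missing prerequisite; an empty array means proceed.
function missingPrereqs(exists: (path: string) => boolean): string[] {
  return PREREQS
    .filter((p) => !exists(p.path))
    .map((p) => `${p.path} missing: ${p.remedy}`);
}
```

In the real skill, `exists` would be a file-system check; injecting it here keeps the sketch testable.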
Read CLAUDE.md for architecture, stack, and coding patterns.

Break the feature into 2-6 independent sub-tasks. Each sub-task must be:
For each sub-task, define:
Dependency rules:
Present the decomposition to the user via AskUserQuestion for confirmation. Adjust if the user requests changes.
Spawn planner subagents via the Task tool with subagent_type: general-purpose, beginning the prompt with @"planner (agent)" — this routes to the compiled planner agent with its full identity. Only pass task-specific context in the rest of the prompt:
Each planner prompt includes:
- CLAUDE.md as project context
- Relevant docs/ files
- Test scenario specs from specs/ — so the planner can reference specific scenarios in its output for Wave 1 builders

Emit all independent planner Task calls in a single response — multiple tool calls in one message run concurrently. Never launch planners one at a time. Only wait for a predecessor's plan to finish before launching a dependent sub-task, then immediately launch that dependent planner.
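For illustration, a single planner prompt could be assembled like this. The @"planner (agent)" prefix and the context inputs are from this skill; the exact layout and the function name are assumptions:

```typescript
// Hypothetical prompt assembly for one planner subagent.
// Only the @"planner (agent)" prefix and the inputs are specified by the skill.
function buildPlannerPrompt(
  subTask: string,
  claudeMd: string,
  docPaths: string[],
  specPaths: string[],
): string {
  return [
    '@"planner (agent)"',
    `Sub-task: ${subTask}`,
    `Project context (CLAUDE.md):\n${claudeMd}`,
    `Docs: ${docPaths.join(", ")}`,
    `Test scenario specs: ${specPaths.join(", ")}`,
  ].join("\n\n");
}
```

One such prompt would be built per sub-task, and all independent ones emitted as Task calls in the same response.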
After ALL planner outputs return, review them together for conflicts:
- Conflicting sub-tasks: link them with blockedBy so they run sequentially

Then create the task graph:
- TaskCreate for each sub-task with the full planner output as the description
- Set blockedBy relationships using the returned task IDs

Task Graph:
#1 Add auth middleware [no dependencies]
#2 Add login/logout routes [blocked by #1]
#3 Add session persistence [blocked by #1]
#4 Add auth tests [blocked by #2, #3]
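The dependency walk implied by this graph can be sketched as a small helper (an illustration, not part of the skill): tasks whose blockers are all complete, and which are not complete themselves, are the ones to launch together in one response.

```typescript
// Illustrative sketch of walking a blockedBy graph like the example above.
type GraphTask = { id: number; blockedBy: number[] };

// A task is ready when it is not yet done and every blocker is done.
function readyTasks(tasks: GraphTask[], done: Set<number>): number[] {
  return tasks
    .filter((t) => !done.has(t.id) && t.blockedBy.every((b) => done.has(b)))
    .map((t) => t.id);
}
```

Against the example graph, this yields task 1 first, then tasks 2 and 3 together once 1 completes, then task 4.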
Launch ALL test builders for every sub-task simultaneously in a single response — dependency ordering is Wave 2's concern.
Each Wave 1 builder prompt begins with @"builder (agent)" using subagent_type: general-purpose, and includes: planner output, instruction to write TESTS ONLY, discovered test runner command, test file naming convention, and relevant test scenario spec file path(s).
Builders may only produce: test files, fixtures, test helpers, and minimal type/interface stubs (type signatures only — no implementation logic).
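A minimal sketch of allowable Wave 1 output for a hypothetical session sub-task; the names are invented for illustration and are not part of this skill:

```typescript
// Allowed Wave 1 output: a type signature stub with no implementation logic.
// (In a real test helper file these would be exported.)
interface Session {
  userId: string;
  expiresAt: number;
}

// Stub body only throws; Wave 2 supplies the real implementation.
function createSession(userId: string): Session {
  throw new Error(`not implemented until Wave 2 (userId: ${userId})`);
}
```

Tests written against this stub fail until a Wave 2 builder replaces the throwing body, which is exactly the signal the orchestrator relies on.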
Each Wave 1 builder must satisfy these completion criteria before reporting done:
- Runs /reviewing-code-quality on its test files and addresses any Warning/Defect findings

Wait for ALL Wave 1 builders before starting Wave 2.
If a builder fails: ask the operator per sub-task — "Skip in Wave 2 or build without tests?" (30-second timeout defaults to skip).
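The 30-second default can be modeled as a race between the operator's answer and a timer. `askWithDefault` and `askUser` are illustrative names, not part of this skill:

```typescript
// Sketch of the timeout-with-default behavior described above.
// Resolves with the operator's answer, or "skip" after timeoutMs (default 30 s).
function askWithDefault(
  askUser: () => Promise<string>,
  timeoutMs = 30_000,
): Promise<string> {
  const fallback = new Promise<string>((resolve) =>
    setTimeout(() => resolve("skip"), timeoutMs),
  );
  return Promise.race([askUser(), fallback]);
}
```

If the operator answers before the timer fires, their choice wins the race; otherwise the sub-task is skipped in Wave 2.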
Launch unblocked code builders in parallel (single response), respecting the dependency graph.
Each Wave 2 builder prompt begins with @"builder (agent)" using subagent_type: general-purpose, and includes: planner output and the FILE PATHS (not content) of Wave 1 test files. The builder reads those files itself.
Each Wave 2 builder must satisfy these completion criteria before reporting done:
Completion handling:
- On success: TaskUpdate the task to completed, TaskList to find newly unblocked tasks, launch them in a single response
- If a builder reports failed, surface to the user via AskUserQuestion: "Builder for '{task}' failed after retry: {reason}" with options: Retry with guidance / Skip task / Abort all

When all tasks finish, print a summary:
Orchestration Complete
=====================
| # | Task | Status | Files Modified | Tests |
|---|-------------------------|-----------|--------------------------|-------|
| 1 | Add auth middleware | completed | src/middleware/auth.ts | 4 |
| 2 | Add login/logout routes | completed | src/routes/auth.ts | 6 |
| 3 | Add session persistence | skipped | — | — |
| 4 | Add auth tests | completed | tests/auth.test.ts | 3 |
Test Results: 13 passed, 0 failed
Issues: Task #3 skipped (user decision)
Suggest next steps:
- /git-commit-changes to create atomic commits
- /conducting-post-mortem to capture lessons learned

Stop and reassess if any of these occur: