# /run
You are orchestrating the complete Spec Driven Development pipeline:
/spec → /plan → /tasks → /implement → /review → /ship
Each phase reduces ambiguity. By the time you start writing code, you have everything you need: what the feature does, how it integrates, what the edge cases are, what the tests should verify, and what architecture to follow.
For the full SDD methodology rationale, consult `references/sdd-methodology.md`.
## Input
The user will provide a user story, PRD, feature request, or description:
$ARGUMENTS
If $ARGUMENTS is empty, ask the user to describe what they want to build.
## Phase 1: Spec — What, not how

Execute the /spec workflow:
- Ask clarifying questions if the requirement is ambiguous
- Write the functional specification:
  - Problem statement
  - Actors
  - Functional requirements
  - Given/When/Then acceptance criteria
  - Out of scope
  - Open questions
**Gate: User approval**

Use the AskUserQuestion tool to present an interactive approval prompt:
- Question: "Phase 1 complete — Functional spec ready. How do you want to proceed?"
- Header: "Spec review"
- Options:
  - Approve — "Proceed to technical plan (Phase 2)"
  - Approve with comments — "Proceed to Phase 2, but note my feedback for later phases"
  - Request changes — "Revise the spec based on my feedback before proceeding"
Do NOT proceed until the user approves. This is the most critical gate.
- If the user selects Approve: proceed to Phase 2.
- If the user selects Approve with comments: read their feedback, acknowledge it, carry it forward as context for later phases, and proceed to Phase 2.
- If the user selects Request changes (or selects "Other" with feedback): revise the spec incorporating their feedback, then present the gate again.
## Phase 2: Plan — How to build it

Execute the /plan workflow:
- Read the codebase to understand existing patterns and conventions
- Write the technical plan:
  - Architecture decisions (grounded in existing code)
  - Affected components
  - Data models / contracts
  - Testing strategy
  - Technical constraints
  - Risks
**Gate: User approval**

Use the AskUserQuestion tool to present an interactive approval prompt:
- Question: "Phase 2 complete — Technical plan ready. How do you want to proceed?"
- Header: "Plan review"
- Options:
  - Approve — "Proceed to task breakdown (Phase 3)"
  - Approve with comments — "Proceed to Phase 3, but note my feedback for later phases"
  - Request changes — "Revise the plan based on my feedback before proceeding"
Do NOT proceed until the user approves. The plan is where developer expertise matters most.
- If the user selects Approve: proceed to Phase 3.
- If the user selects Approve with comments: read their feedback, acknowledge it, carry it forward as context for later phases, and proceed to Phase 3.
- If the user selects Request changes (or selects "Other" with feedback): revise the plan incorporating their feedback, then present the gate again.
## Phase 3: Tasks — Divide and conquer

Execute the /tasks workflow:
- Break the plan into small, ordered, self-contained tasks
- Present the task list for review
- Once approved, create the GitHub issue via `gh issue create`
- Create the feature branch
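The last two steps can be sketched in shell. The issue number, title, and slug below are invented for illustration; real values come from the issue `gh` actually creates:

```shell
# Sketch only: in practice, capture the number from gh's output
# after the issue is created.
# gh issue create --title "Add rate limiting" --body-file tasks.md
ISSUE=42              # invented for illustration
SLUG="rate-limit"     # invented for illustration
BRANCH="feat/${ISSUE}-${SLUG}"
# git checkout -b "$BRANCH"
echo "$BRANCH"   # prints: feat/42-rate-limit
```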
**Gate: User approval + issue created**

Display the summary, then use the AskUserQuestion tool to present an interactive approval prompt:

Phase 3 complete.
✓ Tasks: N tasks defined
✓ Issue: #<number> — <title>
✓ Branch: feat/<number>-<name>

- Question: "Phase 3 complete — Tasks defined and issue created. How do you want to proceed?"
- Header: "Task review"
- Options:
  - Go — "Proceed to implementation (Phase 4)"
  - Go with comments — "Proceed to Phase 4, but note my feedback"
  - Request changes — "Revise the tasks based on my feedback before proceeding"
- If the user selects Go: proceed to Phase 4.
- If the user selects Go with comments: read their feedback, carry it forward, and proceed to Phase 4.
- If the user selects Request changes (or selects "Other" with feedback): revise the tasks incorporating their feedback, then present the gate again.
## Phase 4: Implement — TDD per task
Execute the /implement workflow:
- Work through tasks in order
- For each task: Red → Green → Refactor
- Verify each task before moving to the next
- Run full test suite after all tasks complete
- Refresh the GitHub issue description to reflect what was actually implemented (completed tasks, deviations, new files)
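The per-task loop can be sketched as follows. The task names and the commented-out `pytest` command are placeholders, not assumptions about any particular project:

```shell
# Hypothetical two-task list; in the real pipeline this comes from Phase 3.
printf 'Add input validator\nWire up endpoint\n' > tasks.md
DONE=0
while read -r task; do
  echo "Red -> Green -> Refactor: $task"
  # Run the project's test command here and stop on failure, e.g.:
  # pytest -q || exit 1
  DONE=$((DONE + 1))
done < tasks.md
rm tasks.md
echo "$DONE tasks verified"   # prints: 2 tasks verified
# Then run the full suite once every task is done.
```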
**Gate: All tests passing + issue refreshed**
Phase 4 complete.
✓ All tasks implemented (N/N)
✓ Tests: N passing, 0 failures
✓ Build: clean
✓ Issue #<number>: description refreshed
Proceeding to review...
If tests fail, diagnose and fix. Do NOT proceed with failures.
## Phase 5: Review — Code review before shipping
Execute the /review workflow:
- Review all changes for bugs, security issues, performance problems
- Check test coverage gaps and convention adherence
- Fix any blocking or major issues found
**Gate: Clean review + build and tests passing**
Phase 5 complete.
✓ Build: clean
✓ Tests: N passing, 0 failures
✓ Review: Clean (or Minor Issues only)
✓ All blocking issues resolved
Proceeding to ship...
If the build fails, tests fail, or blocking issues are found, fix them and re-review. Do NOT proceed with failures.
## Phase 6: Ship — Commit, push, PR

Execute the /ship workflow:
- Review all changes for unintentional additions
- Stage and commit using Conventional Commits
- Push to remote
- Create the pull request via `gh pr create`, linking the issue
- Refresh the issue description with final state (completed tasks, actual files, any post-review changes)
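A minimal sketch of the ship steps, with the actual VCS commands commented out; the commit message, branch name, and issue number are invented for illustration and would come from earlier phases:

```shell
ISSUE=42                            # from Phase 3 (invented here)
BRANCH="feat/${ISSUE}-rate-limit"   # from Phase 3 (invented here)
MSG="feat(api): add rate limiting"  # Conventional Commits: type(scope): description
BODY="Closes #${ISSUE}"             # closing keyword links the PR to the issue
# git add -A && git commit -m "$MSG"
# git push -u origin "$BRANCH"
# gh pr create --title "$MSG" --body "$BODY"
echo "$BODY"   # prints: Closes #42
```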
**Gate: PR created + descriptions current**
Phase 6 complete.
✓ Committed: <message>
✓ Pushed to: feat/<number>-<name>
✓ PR created: <URL>
✓ Issue #<number>: description refreshed
Issue #<number> will close automatically when the PR is merged.
Post-ship updates: If review feedback, CI fixes, or follow-up commits are pushed after the PR is created, re-run /ship Step 8 to refresh the PR description so it reflects the latest changes.
## Pipeline Controls
At any point during the pipeline, the user can say:
- "skip to phase N" — Jump to a specific phase (assumes prior phases are done)
- "redo phase N" — Re-execute a specific phase with new input
- "stop" — Halt the pipeline and save progress
- "status" — Show current phase and progress
## Team mode (optional)
When the task list from Phase 3 contains 3+ independent tasks that touch different files, offer the user the option to use agent teams for parallel implementation and review:
Use the AskUserQuestion tool:
- Question: "Phase 3 complete — I see N independent tasks that could run in parallel. How do you want to implement?"
- Header: "Team mode"
- Options:
  - Sequential (Recommended) — "Implement sequentially with /implement"
  - Parallel (team mode) — "Implement in parallel with /team-implement, review with /team-review"
If the user chooses team mode:
- Phase 4 uses `/team-implement` instead of `/implement`
- Phase 5 uses `/team-review` instead of `/review`
- All other phases remain the same

Team mode requires `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` to be enabled.
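Enabling it for a session might look like this; the value `1` is an assumption about how the flag is read, so check your Claude Code version's documentation for the exact setting:

```shell
# Assumption: the variable is treated as a simple truthy flag.
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```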
## When to use /run vs individual skills

| Situation | Use |
|---|---|
| New feature from scratch | /run |
| Complex multi-file change | /run |
| Legacy codebase, unclear domain | /run |
| Large feature with parallel tasks | /run with team mode |
| Quick bug fix | /implement directly |
| Config change, dependency update | /ship directly |
| Already have a spec from PM | Start at /plan |
| Already have a plan, need tasks | Start at /tasks |
## Gotchas
These are common failure modes when orchestrating the full pipeline. Watch for them:
- Rushing through gates. Claude tends to auto-approve its own output: "The spec looks complete, proceeding to plan." Gates exist because a human must validate. Always stop and wait for explicit user approval.
- Context loss between phases. By Phase 4, the spec details from Phase 1 may have scrolled out of context. If you're unsure about a requirement, re-read the spec — don't guess from memory.
- Scope creep during implementation. A task says "add validation" and Claude also refactors the surrounding code, adds logging, and improves error messages. Each phase's scope was defined earlier — stick to it.
- Skipping review because "I just wrote it." The author is the worst reviewer. Phase 5 exists precisely because the implementing agent has blind spots about its own code.
- Using /run for simple changes. A one-file bug fix does not need 6 phases. The "When to use" table exists for a reason — recommend the right tool.
- Losing the issue number. The branch name and issue number from Phase 3 are needed in Phase 6 for `Closes #N`. Track them explicitly through the pipeline.
- Stale issue and PR descriptions. The issue is created in Phase 3 before any code is written, and the PR is created in Phase 6 at a point-in-time snapshot. If implementation deviates from the plan, or review feedback triggers changes, these descriptions become outdated. Always refresh descriptions at Phase 4 completion (issue) and after any post-PR commits (PR).
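One lightweight way to avoid losing the Phase 3 identifiers is to write them to a scratch file when they are created. The `.sdd-state` filename and the values here are an invented convention for illustration, not something the pipeline requires:

```shell
# Phase 3: write the identifiers down (values invented for illustration)
printf 'ISSUE=42\nBRANCH=feat/42-rate-limit\n' > .sdd-state
# Phase 6: read them back; the file is plain shell variable assignments
. ./.sdd-state
echo "Closes #${ISSUE} on ${BRANCH}"   # prints: Closes #42 on feat/42-rate-limit
rm .sdd-state
```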
## Important constraints
- Never skip Phase 1 approval. The spec is the contract.
- Never skip Phase 2 approval. The plan encodes architectural decisions.
- Never ship with failing tests. Phase 4 gate is non-negotiable.
- Never skip code review. Phase 5 catches bugs and security issues before they ship.
- Each phase must complete before the next begins. No partial transitions.
- Carry context between phases. The spec, plan, tasks, issue number, and branch name persist through the pipeline.