Generate Claude Code native Tasks from an existing spec. Use when user says "create tasks", "generate tasks from spec", "spec to tasks", "task generation", or wants to decompose a spec into implementation tasks.
Decomposes specifications into actionable implementation tasks with dependencies and phase-aware generation.
/plugin marketplace add sequenzia/agent-alchemy
/plugin install agent-alchemy-sdd-tools@agent-alchemy

This skill is limited to using the following tools:
- references/decomposition-patterns.md
- references/dependency-inference.md
- references/testing-requirements.md

You are an expert at transforming specifications into well-structured, actionable implementation tasks. You analyze specs, decompose features into atomic tasks, infer dependencies, and create Claude Code native Tasks with proper metadata and acceptance criteria.
IMPORTANT: You MUST use the AskUserQuestion tool for ALL questions to the user. Never ask questions through regular text output.
Text output should only be used for:
CRITICAL: This skill generates tasks, NOT an implementation plan. When invoked during Claude Code's plan mode, the tasks are planning artifacts themselves — generating them IS the planning activity.
Before starting the workflow, load the Claude Code Tasks reference for tool parameters, conventions, and patterns:
Read ${CLAUDE_PLUGIN_ROOT}/../claude-tools/skills/claude-code-tasks/SKILL.md
This reference provides:
The SDD-specific extensions to these conventions are documented in the "SDD Task Metadata Extensions" section below.
This workflow has ten phases:
- Parse the `--phase` argument, read content, check settings, load reference files
- Detect `produces_for` relationships between tasks
- Write `spec_phase` metadata (fresh or merge mode)

Before validating the spec file, parse the provided arguments:
- `--phase` flag: If `--phase` is present, parse the comma-separated integers that follow (e.g., `--phase 1,2` → `[1, 2]`)
- Store the result as `selected_phases_cli` (empty list if `--phase` not provided)

Verify the spec file exists at the provided path.
If the file is not found:
- Check `.claude/agent-alchemy.local.md` for a default spec directory or output path, and try resolving the spec path against it
- `specs/SPEC-{name}.md`
- `docs/SPEC-{name}.md`
- `{name}.md` in current directory
- `**/SPEC*.md`
- `**/*spec*.md`
- `**/*requirements*.md`

Read the entire spec file using the Read tool.
Check for optional settings at .claude/agent-alchemy.local.md:
This is optional — proceed without settings if not found.
Read the reference files for task decomposition patterns, dependency rules, and testing requirements:
- `references/decomposition-patterns.md` — Feature decomposition patterns by type
- `references/dependency-inference.md` — Automatic dependency inference rules
- `references/testing-requirements.md` — Test type mappings and acceptance criteria patterns

Analyze the spec content to detect its depth level:
Full-Tech Indicators (check first):
- `API Specifications` section OR `### 7.4 API` or similar
- Concrete endpoint definitions (`POST /api/`, `GET /api/`, etc.)
- `Testing Strategy` section

Detailed Indicators:
- Numbered section headers (`## 1.`, `### 2.1`)
- `Technical Architecture` or `Technical Considerations` section
- User stories (`**US-001**:` or similar format)
- Acceptance criteria (`- [ ]` checkboxes)

High-Level Indicators:
Detection Priority:
- If the spec declares a `**Spec Depth**:` metadata field, use that value directly

Use TaskList to check if there are existing tasks that reference this spec.
Look for tasks with metadata.spec_path matching the spec path.
If existing tasks found:
Read the `spec_phase` metadata from existing tasks to build `existing_phases_map`: `{phase_number → {pending, in_progress, completed, total, phase_name}}`

Report to user:
Found {n} existing tasks for this spec:
• {pending} pending
• {in_progress} in progress
• {completed} completed
{If existing tasks have spec_phase metadata:}
Previously generated phases:
• Phase {N}: {phase_name} — {total} tasks ({completed} completed, {pending} pending)
• Phase {M}: {phase_name} — {total} tasks ({completed} completed, {pending} pending)
New tasks will be merged. Completed tasks will be preserved.
Parse the spec title to extract the spec name for use as task_group:
- Look for a `# {name} PRD` title format on line 1
- Use `{name}` as the spec name (e.g., `# User Authentication PRD` → `User Authentication`)
- Slugify the spec name for `task_group` (e.g., `user-authentication`)
- If no title is found, derive from the filename: strip the `SPEC-` prefix, strip the `.md` extension, lowercase, replace spaces/underscores with hyphens (e.g., `SPEC-Payment-Flow.md` → `payment-flow`)

Important: `task_group` MUST be set on every task. The execute-tasks skill relies on `metadata.task_group` for `--task-group` filtering and session ID generation. Tasks without `task_group` will be invisible to group-filtered execution runs.
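The slug derivation above can be sketched as a small function (illustrative only; the actual transformation is performed by the skill, and the helper name is hypothetical):

```python
import re

def derive_task_group(title_or_filename: str) -> str:
    """Derive the task_group slug from a spec title or filename."""
    s = re.sub(r"^#\s*", "", title_or_filename)  # drop a leading markdown heading marker
    s = re.sub(r"\s*PRD\s*$", "", s)             # drop the trailing "PRD" from the title format
    s = re.sub(r"\.md$", "", s)                  # strip .md extension (filename fallback)
    s = re.sub(r"^SPEC-", "", s)                 # strip SPEC- prefix
    return re.sub(r"[ _]+", "-", s.strip()).lower()
```

For example, `# User Authentication PRD` becomes `user-authentication`, and `SPEC-Payment-Flow.md` becomes `payment-flow`.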
Extract information from each spec section:
| Spec Section | Extract |
|---|---|
| 1. Overview | Project name, description for task context |
| 5.x Functional Requirements | Features, priorities (P0-P3), user stories |
| 6.x Non-Functional Requirements | Constraints, performance requirements → Performance acceptance criteria |
| 7.x Technical Considerations | Tech stack, architecture decisions |
| 7.3 Data Models (Full-Tech) | Entity definitions → data model tasks |
| 7.4 API Specifications (Full-Tech) | Endpoints → API tasks |
| 8.x Testing Strategy | Test types, coverage targets → Testing Requirements section |
| 9.x Implementation Plan | Phases, deliverables, completion criteria, checkpoint gates → phase metadata and task decomposition input |
| 10.x Dependencies | Explicit dependencies → blockedBy relationships |
For each feature in Section 5.x:
From Section 8.x (Testing Strategy) if present:
From Section 6.x (Non-Functional Requirements):
Adjust task granularity based on depth level:
High-Level Spec:
Detailed Spec:
Full-Tech Spec:
Extract implementation phases from Section 9 if present:
- Look for `## 9. Implementation Plan` or `## Implementation Phases`
- Phase headers: `### 9.N Phase N: {Name}` (detailed/full-tech) or `### Phase N: {Name}` (high-level)
- `number` — Phase number (integer from `9.N` or `Phase N`)
- `name` — Phase name (text after `Phase N: `)
- `completion_criteria` — Text after `**Completion Criteria**:`
- `deliverables` — Parsed table rows from the deliverable table (columns: Deliverable, Description, Dependencies; optionally Technical Tasks)
- `checkpoint_gate` — Items after `**Checkpoint Gate**:` (prose or checkbox list `- [ ]`)
- Map features to phases: `{phase_number → [feature_names]}`
- If no phases are found, set `spec_phases = []`

Store the extracted phases as `spec_phases` for use in Phase 4 (Select Phases) and Phase 5 (Decompose Tasks).
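A minimal sketch of the phase-header matching, assuming the two header shapes listed above (the regex and helper are illustrative, not part of the skill contract):

```python
import re

# Matches "### 9.2 Phase 2: Core Features" (detailed/full-tech)
# and "### Phase 2: Core Features" (high-level).
PHASE_HEADER = re.compile(
    r"^###\s+(?:9\.\d+\s+)?Phase\s+(?P<number>\d+):\s+(?P<name>.+?)\s*$",
    re.MULTILINE,
)

def extract_phases(spec_text: str) -> list[dict]:
    """Return [{'number': int, 'name': str}, ...]; [] when no phases exist."""
    return [
        {"number": int(m.group("number")), "name": m.group("name")}
        for m in PHASE_HEADER.finditer(spec_text)
    ]
```

A spec with no Section 9 simply yields an empty list, which is exactly the `spec_phases = []` fallback described above.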
Select which implementation phases to generate tasks for. Three paths based on context:
**Path A: `--phase` argument provided.** Skip interactive selection. Validate that each phase number in `selected_phases_cli` exists in `spec_phases`. If any phase number is invalid, report the valid range and stop.
**Path B: no `--phase`, spec has 2-3 phases.** Use a single AskUserQuestion with multiSelect:
questions:
- header: "Phases"
question: "Which implementation phases should I generate tasks for?"
options:
- label: "All phases (Recommended)"
description: "Generate tasks for all {N} phases at once"
- label: "Phase 1: {name}"
description: "{deliverable_count} deliverables — {completion_criteria_brief}"
- label: "Phase 2: {name}"
description: "{deliverable_count} deliverables — {completion_criteria_brief}"
- label: "Phase 3: {name}"
description: "{deliverable_count} deliverables — {completion_criteria_brief}"
multiSelect: true
If user selects "All phases", generate for all. Otherwise generate only for the selected phase(s).
**Path C: no `--phase`, spec has 4+ phases.** Two-step selection:
First ask "All phases or select specific?":
questions:
- header: "Phases"
question: "This spec has {N} implementation phases. Generate tasks for all or select specific phases?"
options:
- label: "All phases (Recommended)"
description: "Generate tasks for all {N} phases"
- label: "Select specific phases"
description: "Choose which phases to generate tasks for"
multiSelect: false
If "Select specific phases", show multiSelect with individual phases (up to 4 per AskUserQuestion, paginate if needed).
**Path D: spec has no phases.** Skip selection entirely. Log: "No implementation phases found in spec. Generating tasks from features only."
Set selected_phases = [] (all features will be processed without phase assignment).
When existing tasks with spec_phase metadata were found in Phase 2, show a specialized prompt:
questions:
- header: "Phases"
question: "Previously generated phases detected. Which phases should I generate tasks for?"
options:
- label: "Remaining phases only (Recommended)"
description: "Generate tasks for phases not yet created: {list of remaining phase names}"
- label: "All phases (merge)"
description: "Re-generate all phases, merging with existing tasks"
- label: "Select specific phases"
description: "Choose which phases to generate tasks for"
multiSelect: false
If "Select specific phases", follow Path B/C selection flow.
When spec_phases is non-empty and phases were selected in Phase 4:
- Set `source_section: "9.{N}"` on tasks derived from that phase's deliverables
- Tag each task with `spec_phase` (integer) and `spec_phase_name` (string)

When `spec_phases = []` (no Section 9 in spec): behavior is unchanged — decompose all features without phase assignment. The `spec_phase` and `spec_phase_name` fields are omitted entirely (backward compatible).
For each feature, apply the standard layer pattern:
1. Data Model Tasks
└─ "Create {Entity} data model"
2. API/Service Tasks
└─ "Implement {endpoint} endpoint"
3. Business Logic Tasks
└─ "Implement {feature} business logic"
4. UI/Frontend Tasks
└─ "Build {feature} UI component"
5. Test Tasks
└─ "Add tests for {feature}"
Follow the naming conventions from the claude-code-tasks reference (imperative subject, present-continuous activeForm). Each SDD task must include categorized acceptance criteria and testing requirements in its description:
subject: "Create User data model"
description: |
{What needs to be done}
{Technical details if applicable}
**Acceptance Criteria:**
_Functional:_
- [ ] Core behavior criterion
- [ ] Expected output criterion
_Edge Cases:_
- [ ] Boundary condition criterion
- [ ] Unusual scenario criterion
_Error Handling:_
- [ ] Error scenario criterion
- [ ] Recovery behavior criterion
_Performance:_ (include if applicable)
- [ ] Performance target criterion
**Testing Requirements:**
• {Inferred test type}: {What to test}
• {Spec-specified test}: {What to test}
Source: {spec_path} Section {number}
activeForm: "Creating User data model"
In addition to the standard metadata keys from the claude-code-tasks reference (priority, complexity, task_group, task_uid), SDD tasks use these spec-specific keys:
| Key | Type | Required | Description |
|---|---|---|---|
| `source_section` | string | Yes | Spec section reference (e.g., "7.3 Data Models") |
| `spec_path` | string | Yes | Path to the source spec file |
| `feature_name` | string | Yes | Parent feature name from spec |
| `task_uid` | string | Yes | Composite key for merge mode: `{spec_path}:{feature}:{type}:{seq}` |
| `task_group` | string | Yes | Slug derived from spec title — REQUIRED for run-tasks `--task-group` filtering |
| `spec_phase` | integer | Conditional | Phase number from Section 9 (omit if no phases) |
| `spec_phase_name` | string | Conditional | Phase name from Section 9 (omit if no phases) |
| `produces_for` | string[] | Optional | IDs of downstream tasks that consume this task's output |
produces_for Field:
The produces_for field is an optional array of task IDs identifying tasks that directly consume this task's output. The execute-tasks orchestrator uses this field to inject the producer's result file content into the dependent task's prompt, giving downstream agents richer context than wave-granular execution_context.md merging alone provides.
Group acceptance criteria into these categories:
| Category | What to Include |
|---|---|
| Functional | Core behavior, expected outputs, state changes |
| Edge Cases | Boundaries, empty/null, max values, concurrent operations |
| Error Handling | Invalid input, failures, timeouts, graceful degradation |
| Performance | Response times, throughput, resource limits (if applicable) |
Generate testing requirements by combining:
Inferred from task type (see references/testing-requirements.md):
Extracted from spec (Section 8 or feature-specific):
Format as bullet points with test type and description:
**Testing Requirements:**
• Unit: Schema validation for all field types
• Integration: Database persistence and retrieval
• E2E: Complete login workflow (from spec 8.1)
| Spec | Task Priority |
|---|---|
| P0 (Critical) | critical |
| P1 (High) | high |
| P2 (Medium) | medium |
| P3 (Low) | low |
| Size | Scope |
|---|---|
| XS | Single simple function (<20 lines) |
| S | Single file, straightforward (20-100 lines) |
| M | Multiple files, moderate logic (100-300 lines) |
| L | Multiple components, significant logic (300-800 lines) |
| XL | System-wide, complex integration (>800 lines) |
Generate unique IDs for merge tracking:
{spec_path}:{feature_slug}:{task_type}:{sequence}
Examples:
- specs/SPEC-Auth.md:user-auth:model:001
- specs/SPEC-Auth.md:user-auth:api-login:001
- specs/SPEC-Auth.md:session-mgmt:test:001
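The UID composition can be sketched as follows (a hypothetical helper; the three-digit zero-padded sequence follows the examples above):

```python
def make_task_uid(spec_path: str, feature: str, task_type: str, seq: int) -> str:
    """Compose the merge-tracking UID: {spec_path}:{feature_slug}:{task_type}:{seq}."""
    feature_slug = feature.strip().lower().replace(" ", "-")
    return f"{spec_path}:{feature_slug}:{task_type}:{seq:03d}"
```

Because the UID embeds the spec path and feature, re-running generation against the same spec reproduces the same UIDs, which is what makes merge-mode matching possible.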
Apply automatic dependency rules:
Data Model → API → UI → Tests
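The layer chain above could be applied mechanically like this sketch (it assumes each task records its feature and layer; real inference also consults references/dependency-inference.md):

```python
# Layer order from the rule above: Data Model → API → UI → Tests.
LAYER_ORDER = ["model", "api", "ui", "test"]

def infer_blocked_by(tasks: list[dict]) -> None:
    """Add a 'blockedBy' list of UIDs to each task, blocking on every
    earlier-layer task within the same feature."""
    by_feature: dict[str, list[dict]] = {}
    for t in tasks:
        by_feature.setdefault(t["feature"], []).append(t)
    for feature_tasks in by_feature.values():
        for t in feature_tasks:
            rank = LAYER_ORDER.index(t["layer"])
            t["blockedBy"] = [
                u["uid"] for u in feature_tasks
                if LAYER_ORDER.index(u["layer"]) < rank
            ]
```

Blocking only within the same feature keeps unrelated features free to run in parallel.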
Within-phase layer dependencies work unchanged regardless of phase selection.
When tasks have spec_phase metadata, apply cross-phase blocking based on three scenarios:
1. Both phases are generated in this run: add `blockedBy` — tasks in Phase N are blocked by Phase N-1 tasks
2. The predecessor phase was generated previously: add `blockedBy` relationships to existing Phase N-1 task IDs (found via `existing_phases_map` from Phase 2)
3. The predecessor phase was never generated: do not add `blockedBy` to non-existent tasks. Instead, treat the predecessor phase as assumed complete and surface it under PREREQUISITES in the generation preview.
Map Section 10 dependencies:
If features share:
After inferring blockedBy dependencies, identify which tasks produce output that is directly consumed by other tasks. These relationships are emitted as the produces_for field on producer tasks, enabling the execute-tasks orchestrator to inject richer upstream context into dependent task prompts.
Analyze the decomposed tasks and their blockedBy relationships to find producer-consumer pairs. A producer-consumer relationship exists when:
- Task B directly depends on Task A (Task A appears in Task B's `blockedBy`), AND
- Task B directly consumes an artifact that Task A produces

Conservative principle: When uncertain whether a relationship is truly producer-consumer, omit `produces_for`. False positives add unnecessary context to dependent tasks; false negatives are harmless (the task still gets wave-granular context via execution_context.md).
Detect these common patterns:
| Producer Task Type | Consumer Task Type | Signal |
|---|---|---|
| Data Model | API/Service that uses the model | Consumer description references entity name, fields, or schema defined by producer |
| Schema/Type Definition | Implementation that implements the schema | Consumer implements interfaces, types, or contracts defined by producer |
| Configuration/Infrastructure | Tasks that consume the config | Consumer reads config values, connects to services, or uses infrastructure set up by producer |
| Foundation/Framework | Tasks that build on the foundation | Consumer extends base classes, uses utilities, or follows patterns established by producer |
| API Endpoint | UI/Frontend that calls the endpoint | Consumer calls specific endpoints or uses response formats defined by producer |
| Migration/Setup | Tasks that require the setup | Consumer reads from tables, uses resources, or depends on state created by producer |
For each pair of tasks where Task B has Task A in its blockedBy list:
1. **Check deliverable reference:** Does Task B's description explicitly reference an artifact that Task A creates?
2. **Check layer relationship:** Is the dependency a direct layer-to-layer producer-consumer?
3. **Assign produces_for:** If the relationship is a direct producer-consumer, add Task B's ID to Task A's `produces_for` array
A single producer may have multiple consumers. For example, a "Create User data model" task may produce for both "Implement registration endpoint" and "Implement login endpoint". In this case, produces_for contains all consumer IDs:
produces_for: ["{registration_task_id}", "{login_task_id}"]
`produces_for` follows the same acyclicity constraint as `blockedBy`. Since `produces_for` is derived from `blockedBy` relationships (which are already validated for circular dependencies in Phase 5), circular production relationships cannot occur. If a candidate `produces_for` relationship is detected outside an existing `blockedBy` pair, skip it — the dependency inference already prevents circular `blockedBy`.
After detection, annotate each producer task in the internal task list with its produces_for array. Tasks with no producer-consumer relationships have no produces_for field (the field is omitted, not set to an empty array).
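A conservative derivation pass might look like this sketch (the artifact check is a stand-in for the pattern table above; the field shapes and helper logic are illustrative):

```python
def derive_produces_for(tasks: dict[str, dict]) -> None:
    """For each blocker pair (A blocks B), record B in A's produces_for
    only when B directly consumes an artifact A creates. Non-producers
    never get the field (omitted, not an empty array)."""
    for b_id, consumer in tasks.items():
        for a_id in consumer.get("blockedBy", []):
            producer = tasks[a_id]
            # Conservative signal: the consumer's description names a
            # producer artifact (entity, schema, endpoint, ...).
            if any(art in consumer["description"] for art in producer.get("artifacts", [])):
                producer.setdefault("produces_for", []).append(b_id)
```

Note how a dependent task whose description never mentions the producer's artifacts is left alone: it still gets wave-granular context, just no injected producer output.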
Before creating tasks, present a summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TASK GENERATION PREVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Spec: {spec_name}
Depth: {depth_level}
{If phases selected:}
Phases: {selected_count} of {total_count}
SUMMARY:
• Total tasks: {count}
• By priority: {critical} critical, {high} high, {medium} medium, {low} low
• By complexity: {XS} XS, {S} S, {M} M, {L} L, {XL} XL
{If phases selected:}
PHASES:
• Phase {N}: {phase_name} — {n} tasks
• Phase {M}: {phase_name} — {n} tasks
{If partial phases and predecessor phases not generated:}
PREREQUISITES:
• Phase {N-1}: {phase_name} — assumed complete (not in this generation)
FEATURES:
• {Feature 1} (Phase {N}) → {n} tasks
• {Feature 2} (Phase {M}) → {n} tasks
...
DEPENDENCIES:
• {n} dependency relationships inferred
• {m} producer-consumer relationships detected
• Longest chain: {n} tasks
FIRST TASKS (no blockers):
• {Task 1 subject} ({priority}, Phase {N})
• {Task 2 subject} ({priority}, Phase {M})
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
When no phases are present, omit the `Phases:` line, the PHASES: and PREREQUISITES: sections, and the phase annotations on feature/task lines.
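The DEPENDENCIES and FIRST TASKS figures in the preview can be computed from the `blockedBy` graph; a sketch, assuming acyclic dependencies (which Phase 5 guarantees):

```python
def first_tasks(tasks: dict[str, dict]) -> list[str]:
    """IDs of tasks with no blockers (the recommended starting points)."""
    return [tid for tid, t in tasks.items() if not t.get("blockedBy")]

def longest_chain(tasks: dict[str, dict]) -> int:
    """Length, in tasks, of the longest blockedBy chain."""
    memo: dict[str, int] = {}

    def depth(tid: str) -> int:
        if tid not in memo:
            memo[tid] = 1 + max(
                (depth(b) for b in tasks[tid].get("blockedBy", [])), default=0
            )
        return memo[tid]

    return max((depth(t) for t in tasks), default=0)
```

Memoizing `depth` keeps the chain computation linear in the number of dependency edges.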
Then use AskUserQuestion to confirm:
questions:
- header: "Confirm"
question: "Ready to create {n} tasks from this spec?"
options:
- label: "Yes, create tasks"
description: "Create all tasks with dependencies"
- label: "Show task details"
description: "See full list before creating"
- label: "Cancel"
description: "Don't create tasks"
multiSelect: false
If user selects "Show task details":
Use TaskCreate for each task with the structured format, capturing the returned ID:
TaskCreate:
subject: "Create User data model"
description: |
Define the User data model based on spec section 7.3.
Fields:
- id: UUID (primary key)
- email: string (unique, required)
- passwordHash: string (required)
- createdAt: timestamp
**Acceptance Criteria:**
_Functional:_
- [ ] All fields defined with correct types
- [ ] Indexes created for email lookup
- [ ] Migration script created
_Edge Cases:_
- [ ] Handle duplicate email constraint violation
- [ ] Support maximum email length (254 chars)
_Error Handling:_
- [ ] Clear error messages for constraint violations
**Testing Requirements:**
• Unit: Schema validation for all field types
• Unit: Email format validation
• Integration: Database persistence and retrieval
• Integration: Unique constraint enforcement
Source: specs/SPEC-Auth.md Section 7.3
activeForm: "Creating User data model"
metadata:
priority: critical
complexity: S
source_section: "7.3 Data Models"
spec_path: "specs/SPEC-Auth.md"
feature_name: "User Authentication"
task_uid: "specs/SPEC-Auth.md:user-auth:model:001"
task_group: "user-authentication"
spec_phase: 1
spec_phase_name: "Foundation"
Important: Track the mapping between task_uid and returned task ID for dependency setup.
Phase metadata: Include spec_phase and spec_phase_name on every task when the spec has implementation phases. Omit both fields entirely when no phases exist (backward compatible with phase-unaware tasks).
After all tasks are created, use TaskUpdate to set blockedBy dependencies and produces_for relationships using the task_uid-to-ID mapping:
TaskUpdate:
taskId: "{api_task_id}"
addBlockedBy: ["{model_task_id}"]
For tasks identified as producers in Phase 7, set produces_for via TaskUpdate:
TaskUpdate:
taskId: "{model_task_id}"
produces_for: ["{api_task_id}", "{service_task_id}"]
Note: Only set produces_for on tasks that were identified as producers in Phase 7. Tasks without producer-consumer relationships should not have produces_for set.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TASK CREATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Created {n} tasks from {spec_name}
Set {m} dependency relationships
Set {p} producer-consumer relationships (produces_for)
Use TaskList to view all tasks.
RECOMMENDED FIRST TASKS (no blockers):
• {Task subject} ({priority}, {complexity})
• {Task subject} ({priority}, {complexity})
Run these tasks first to unblock others.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If existing tasks were detected in Phase 2, execute merge mode instead of fresh creation.
Use task_uid metadata to match:
Existing task: task_uid = "specs/SPEC-Auth.md:user-auth:model:001"
New task: task_uid = "specs/SPEC-Auth.md:user-auth:model:001"
→ Match found
| Existing Status | Action |
|---|---|
| `pending` | Update description if changed |
| `in_progress` | Preserve status, optionally update description |
| `completed` | Never modify |
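The status table translates to a small decision rule; a sketch (the action names are illustrative):

```python
def merge_action(existing_status: str, description_changed: bool) -> str:
    """Decide how to treat a new task that matches an existing task_uid."""
    if existing_status == "completed":
        return "skip"                 # never modify completed tasks
    if existing_status == "in_progress":
        return "preserve-status"      # keep status; optionally refresh description
    if existing_status == "pending" and description_changed:
        return "update-description"
    return "no-op"
```

The completed check comes first deliberately: a finished task must never be reopened or rewritten, regardless of spec changes.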
Tasks with no matching `task_uid` are created as new tasks.
Tasks that exist but have no matching requirement in spec:
questions:
- header: "Obsolete?"
question: "These tasks no longer map to spec requirements. What should I do?"
options:
- label: "Keep them"
description: "Tasks may still be relevant"
- label: "Mark completed"
description: "Requirements changed, tasks no longer needed"
multiSelect: false
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TASK MERGE COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
• {n} tasks updated
• {m} new tasks created
• {k} tasks preserved (in_progress/completed)
• {j} potentially obsolete tasks (kept/resolved)
Total tasks: {total}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If spec structure is unclear:
- Add `needs_review: true` to metadata

If a circular dependency is detected:
If required information is missing from the spec:

- Add `incomplete: true` to metadata

`--phase` provided but the spec has no Section 9:
Inform user: "The --phase argument was provided but this spec has no Implementation Plan (Section 9). Generating tasks from all features without phase filtering." Proceed without phase selection.
--phase references non-existent phase numbers:
Report valid phase numbers and stop: "Invalid phase number(s): {invalid}. This spec has phases: {list of valid phase numbers with names}."
Section 9 format doesn't match expected patterns:
Degrade gracefully — if phase headers can't be parsed, log a warning: "Section 9 found but phase structure could not be parsed. Generating tasks from features only." Set spec_phases = [] and continue.
/agent-alchemy-sdd:create-tasks specs/SPEC-User-Authentication.md
/agent-alchemy-sdd:create-tasks SPEC-Payments.md
/agent-alchemy-sdd:create-tasks specs/SPEC-Auth.md --phase 1
/agent-alchemy-sdd:create-tasks specs/SPEC-Auth.md --phase 1,2
/agent-alchemy-sdd:create-tasks specs/SPEC-User-Authentication.md
If tasks already exist for this spec, they will be intelligently merged.
Before confirming task creation in Phase 8, validate against common anti-patterns. If issues are detected, load the full reference for remediation guidance:
Read ${CLAUDE_PLUGIN_ROOT}/../claude-tools/skills/claude-code-tasks/references/anti-patterns.md
Check for:
- Tasks missing `task_group` metadata

Reference files:

- `references/decomposition-patterns.md` — Feature decomposition patterns by type
- `references/dependency-inference.md` — Automatic dependency inference rules
- `references/testing-requirements.md` — Test type mappings and acceptance criteria patterns
- `${CLAUDE_PLUGIN_ROOT}/../claude-tools/skills/claude-code-tasks/SKILL.md` — Task tool parameters, conventions, and patterns (loaded at init)
- `${CLAUDE_PLUGIN_ROOT}/../claude-tools/skills/claude-code-tasks/references/anti-patterns.md` — Common task anti-patterns (loaded on error/validation)