Transform SDD tasks into test-first TDD task pairs. Reads existing tasks from /create-tasks and generates paired test tasks with RED-GREEN dependencies. Use when user says "create tdd tasks", "add tdd pairs", "convert to tdd", or wants to apply test-first ordering to SDD tasks.
Generates test-first TDD task pairs from SDD implementation tasks with proper dependencies and merge mode detection.
Installation:
/plugin marketplace add sequenzia/agent-alchemy
/plugin install agent-alchemy-tdd-tools@agent-alchemy

This skill is limited to using the following tools:
Reference files:
references/tdd-decomposition-patterns.md
references/tdd-dependency-rules.md

Transform existing SDD implementation tasks into test-first TDD task pairs. For each implementation task, this skill creates a paired test task that must complete first, enforcing test-first development at the pipeline level.
This skill bridges the SDD pipeline (/create-tasks) and TDD execution (/execute-tdd-tasks), converting a standard task list into one where every implementation task is preceded by a failing-test-writing task.
CRITICAL: Complete ALL 8 phases. The workflow is not complete until Phase 8: Report is finished. After completing each phase, immediately proceed to the next phase without waiting for user prompts.
tdd_mode, tdd_phase, and paired_task_id are added. All existing metadata is preserved.

IMPORTANT: You MUST use the AskUserQuestion tool for ALL questions to the user. Never ask questions through regular text output.
Text output should only be used for non-interactive content such as status updates, previews, and summaries.
NEVER do this (asking via text output):
Should I proceed with creating 12 TDD task pairs?
1. Yes
2. No
ALWAYS do this (using AskUserQuestion tool):
AskUserQuestion:
questions:
- header: "Confirm TDD Pair Creation"
question: "Ready to create 12 TDD task pairs?"
options:
- label: "Yes, create pairs"
description: "Create test tasks and set TDD dependencies"
- label: "Show details"
description: "See full list of pairs before creating"
- label: "Cancel"
description: "Don't create TDD pairs"
multiSelect: false
CRITICAL: This skill transforms tasks; it does NOT create an implementation plan. When invoked during Claude Code's plan mode:
The TDD task pairs are planning artifacts themselves -- generating them IS the planning activity.
This skill is part of the tdd-tools plugin and works with agents in the same plugin (tdd-executor, test-writer). It bridges the SDD pipeline (/create-tasks from sdd-tools) and TDD execution (/execute-tdd-tasks). The sdd-tools plugin is expected to be installed since TDD tasks are generated from SDD tasks created by /create-tasks.
Goal: Verify prerequisites and load reference materials.
Check $ARGUMENTS for optional --task-group filter:
If --task-group <group> is present, extract the group name for filtering.

Read the TDD decomposition and dependency reference files:
references/tdd-decomposition-patterns.md -- Task pairing rules, naming conventions, metadata, merge mode detection
references/tdd-dependency-rules.md -- Dependency insertion algorithm, circular dependency detection and breaking

Goal: Load the current task list and identify tasks to transform.
Use TaskList to retrieve all current tasks.
If --task-group was specified:
Include only tasks whose metadata.task_group matches the specified group.

If no --task-group was specified:
Include all tasks.
Handle empty/missing states:
Empty task list (no tasks at all):
No tasks found. Please run /create-tasks first to generate implementation tasks from a spec.
Usage:
/agent-alchemy-sdd:create-tasks <spec-path>
No tasks matching --task-group filter:
No tasks found for group "{group}".
Available task groups:
- {group1} ({n} tasks)
- {group2} ({n} tasks)
Try: /create-tdd-tasks --task-group {group1}
For each task, determine if it should receive a TDD pair:
Eligible for TDD pairing:
Implementation tasks without existing TDD metadata

Skip (no TDD pair created):
Tasks with tdd_mode: true in metadata (already paired)
Test tasks themselves (test in task_uid)
Config and documentation tasks

Record the classification for each task: eligible, skipped (with reason).
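The classification rules above can be sketched as a small predicate. The task shape (a metadata dict with tdd_mode and task_uid keys) follows the rules in this phase but is otherwise an assumption:

```python
def classify_task(task: dict) -> tuple[str, str]:
    """Return ("eligible" | "skipped", reason) for a candidate task."""
    meta = task.get("metadata", {})
    if meta.get("tdd_mode") is True:
        return ("skipped", "already part of a TDD pair")
    if "test" in meta.get("task_uid", ""):
        return ("skipped", "is itself a test task")
    return ("eligible", "implementation task without a TDD pair")

print(classify_task({"metadata": {"task_uid": "auth-model"}}))
# ('eligible', 'implementation task without a TDD pair')
```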
Goal: Identify tasks that already have TDD pairs to avoid duplication.
For each eligible task, check if it already has a TDD pair using these 4 signals (any match means paired):
1. tdd_mode: true in metadata
2. paired_task_id in metadata, and the paired task exists in the task list
3. A task exists with task_uid equal to this task's task_uid + ":red"
4. A task titled "Write tests for {this task's subject}" exists in the same task_group

For tasks with existing TDD pairs:
| Existing Pair Status | Action |
|---|---|
| Both tasks pending | Skip -- pair already exists |
| Test completed, impl pending | Skip -- pair progressing normally |
| Test completed, impl completed | Skip -- pair fully done |
| Test completed, impl in_progress | Skip -- pair in progress |
| Test pending, impl completed | Flag as anomaly -- impl completed without tests |
| Only impl exists, test missing | Treat as unpaired -- create the test task |
| Only test exists, impl missing | Flag as orphan -- ask user |
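The four pairing signals can be sketched as a single predicate. Tasks are modeled here as dicts with id, subject, and metadata fields; this shape is an assumption for illustration:

```python
def has_tdd_pair(task: dict, all_tasks: list[dict]) -> bool:
    """True if any of the four pairing signals matches for this task."""
    meta = task.get("metadata", {})
    uid = meta.get("task_uid", "")
    known_ids = {t["id"] for t in all_tasks}

    if meta.get("tdd_mode") is True:                        # signal 1
        return True
    paired = meta.get("paired_task_id")
    if paired is not None and paired in known_ids:          # signal 2
        return True
    for other in all_tasks:
        om = other.get("metadata", {})
        if uid and om.get("task_uid") == f"{uid}:red":      # signal 3
            return True
        if (other["subject"] == f"Write tests for {task['subject']}"
                and om.get("task_group") == meta.get("task_group")):  # signal 4
            return True
    return False

impl = {"id": 1, "subject": "Create User model",
        "metadata": {"task_uid": "user-model", "task_group": "auth"}}
test = {"id": 2, "subject": "Write tests for Create User model",
        "metadata": {"task_uid": "user-model:red", "task_group": "auth"}}
print(has_tdd_pair(impl, [impl, test]))  # True
```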
If any existing pairs detected:
TDD PAIR STATUS:
- {n} tasks already have TDD pairs (will skip)
- {m} tasks need TDD pairs (will create)
- {k} anomalies detected (need user input)
If anomalies exist, use AskUserQuestion to resolve each one:
AskUserQuestion:
questions:
- header: "TDD Pair Anomaly"
question: "Task #{id} '{subject}' was completed without its test task. What should I do?"
options:
- label: "Create test task anyway"
description: "Add a test task for documentation/coverage purposes"
- label: "Skip this task"
description: "Leave it as-is without a test pair"
multiSelect: false
Goal: Create test task definitions for each eligible unpaired implementation task.
For each eligible task that needs a TDD pair, generate a paired test task.
Follow the naming convention:
"Write tests for {original task subject}"
Examples:
"Create User model" -> "Write tests for Create User model"
"Add login endpoint" -> "Write tests for Add login endpoint"
Determine the test file path based on the implementation task context:
src/foo.ts -> tests/foo.test.ts
src/foo.py -> tests/test_foo.py

Determine the test framework using project detection:
Config files: jest.config.*, vitest.config.*, pytest.ini, pyproject.toml, setup.cfg
package.json for test dependencies

If the implementation task HAS acceptance criteria (**Acceptance Criteria:** section):
Convert each criterion into a test description:
**Test Descriptions:**
_From Functional Criteria:_
- [ ] Test that {criterion rephrased as test assertion}
_From Edge Cases:_
- [ ] Test that {edge case rephrased as test assertion}
_From Error Handling:_
- [ ] Test that {error scenario rephrased as test assertion}
_From Performance:_ (if applicable)
- [ ] Test that {performance target as measurable assertion}
If the implementation task LACKS acceptance criteria:
Generate basic test descriptions from the subject and description:
**Test Descriptions:**
_Inferred from task description:_
- [ ] Test that {subject entity} can be created/initialized
- [ ] Test that {subject entity} has expected structure/interface
- [ ] Test that {described behavior} works as described
Assemble the complete test task:
subject: "Write tests for {original subject}"
description: |
Write failing tests for: {original task subject}
Test file: {inferred test file path}
Test framework: {detected framework}
Original task: #{original_task_id}
{test descriptions from Step 4}
**Acceptance Criteria:**
_Functional:_
- [ ] All test descriptions converted into runnable test functions
- [ ] Tests follow project test conventions (naming, structure, fixtures)
- [ ] Tests are discoverable by the test runner
- [ ] Tests fail when run without implementation (RED state)
_Edge Cases:_
- [ ] Tests handle import errors gracefully when implementation module does not exist
_Error Handling:_
- [ ] Test file is syntactically valid even when implementation is missing
Source: {original source reference}
activeForm: "Writing tests for {original subject}"
metadata:
tdd_mode: true
tdd_phase: "red"
paired_task_id: "{original_task_id}"
priority: {inherited from original}
complexity: {S or M -- test files are typically smaller}
source_section: {inherited from original}
spec_path: {inherited from original}
feature_name: {inherited from original}
task_uid: "{original_task_uid}:red"
task_group: {inherited from original}
For each original implementation task, plan the metadata update:
metadata additions:
tdd_mode: true
tdd_phase: "green"
paired_task_id: "{test_task_id}" # Will be set after test task creation
Goal: Insert TDD pairs into the existing dependency chain.
Apply the insertion algorithm from tdd-dependency-rules.md:
Given implementation task #N with existing dependencies blockedBy: [A, B, ...]:
1. Create test task #T with blockedBy: [A, B, ...] (same as the original task)
2. Add #T to #N's blockedBy list
3. Downstream tasks that depend on #N continue to depend on #N

Before: Model (#1) --> API (#2) --> UI (#3)
After:
Test-Model (#4) blockedBy: []
Model (#1) blockedBy: [#4]
Test-API (#5) blockedBy: [#1]
API (#2) blockedBy: [#1, #5]
Test-UI (#6) blockedBy: [#2]
UI (#3) blockedBy: [#2, #6]
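The insertion rule above can be sketched with the dependency graph modeled as a dict from task id to its blockedBy list (this representation is an assumption; the real tool stores dependencies on tasks):

```python
def insert_tdd_pairs(blocked_by: dict, pair: dict) -> dict:
    """Insert TDD test tasks; pair maps impl task id -> new test task id."""
    out = {task_id: list(deps) for task_id, deps in blocked_by.items()}
    for impl_id, test_id in pair.items():
        out[test_id] = list(blocked_by[impl_id])  # test inherits upstream deps
        out[impl_id] = out[impl_id] + [test_id]   # impl now also waits on its test
    return out

# Model(1) -> API(2) -> UI(3); new test tasks are 4, 5, 6.
before = {1: [], 2: [1], 3: [2]}
after = insert_tdd_pairs(before, {1: 4, 2: 5, 3: 6})
print(after)  # {1: [4], 2: [1, 5], 3: [2, 6], 4: [], 5: [1], 6: [2]}
```

This reproduces the Before/After example above: each test task inherits the implementation task's original upstream dependencies, and each implementation task gains one extra dependency on its test.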
After planning all insertions, validate the full dependency graph:
Breaking cycles (weakest-link strategy):
Score each dependency link in the cycle (scoring rules are defined in references/tdd-dependency-rules.md).
Remove the dependency with the lowest score. Log a warning:
WARNING: Circular dependency detected after TDD pair insertion.
Cycle: {task chain}
Broken at: {removed link}
Reason: TDD pair link is weakest (score: 1)
Impact: {explanation of what may run out of order}
Add needs_review: true and circular_dep_break: true to the affected task's metadata.
Goal: Present the TDD transformation plan and get user approval.
Present a summary of the planned changes:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TDD TASK PAIR GENERATION PREVIEW
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SUMMARY:
Total implementation tasks: {count}
Eligible for TDD pairing: {eligible}
Already have TDD pairs: {skipped} (merge mode)
New TDD pairs to create: {new_pairs}
Tasks skipped (test/config/docs): {skipped_ineligible}
NEW TDD PAIRS:
Test Task | Blocks | Phase
─────────────────────────────────────────────────────────────
Write tests for {subject1} | #{impl_id1} | RED
Write tests for {subject2} | #{impl_id2} | RED
...
DEPENDENCY CHAIN (after insertion):
{visualization of the dependency chain with TDD pairs inserted}
{If circular deps detected and broken:}
WARNINGS:
- Circular dependency broken at: {link}
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Use AskUserQuestion to confirm:
AskUserQuestion:
questions:
- header: "Confirm TDD Pair Creation"
question: "Ready to create {n} TDD task pairs?"
options:
- label: "Yes, create pairs"
description: "Create test tasks and update implementation tasks with TDD metadata"
- label: "Show task details"
description: "See full test task descriptions before creating"
- label: "Cancel"
description: "Don't create TDD pairs"
multiSelect: false
If user selects "Show task details": display the full generated test task descriptions, then ask for confirmation again.
If user selects "Cancel": stop without creating or modifying any tasks.
Goal: Create test tasks and update implementation tasks with TDD metadata.
For each planned test task, use TaskCreate:
TaskCreate:
subject: "Write tests for {original subject}"
description: {generated description from Phase 4}
activeForm: "Writing tests for {original subject}"
metadata:
tdd_mode: true
tdd_phase: "red"
paired_task_id: "{impl_task_id}"
priority: {inherited}
complexity: {estimated}
source_section: {inherited}
spec_path: {inherited}
feature_name: {inherited}
task_uid: "{original_uid}:red"
task_group: {inherited}
Capture the returned task ID for each created test task.
For each paired implementation task, use TaskUpdate:
TaskUpdate:
taskId: "{impl_task_id}"
metadata:
tdd_mode: true
tdd_phase: "green"
paired_task_id: "{test_task_id}"
For each TDD pair, set the dependency relationships:
TaskUpdate:
taskId: "{impl_task_id}"
addBlockedBy: ["{test_task_id}"]
For test tasks that need upstream dependencies (inheriting from the original impl task):
TaskUpdate:
taskId: "{test_task_id}"
addBlockedBy: ["{upstream_dep_1}", "{upstream_dep_2}"]
If any circular dependencies were detected and broken in Phase 5:
TaskUpdate:
taskId: "{affected_task_id}"
metadata:
needs_review: true
circular_dep_break: true
Goal: Present the final summary of created TDD task pairs.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
TDD TASK PAIR CREATION COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Created {n} TDD task pairs
Set {m} dependency relationships
TDD PAIRS CREATED:
Test Task (RED) | Impl Task (GREEN) | Test Blocks
─────────────────────────────────────────────────────────────────────────────────
#{test_id}: Write tests for {subj} | #{impl_id}: {subj} | #{impl_id}
...
DEPENDENCY CHAIN:
{visual representation of the full dependency chain}
{If --task-group was used:}
Group: {group}
Tasks in group: {total}
TDD pairs added: {new}
{If merge mode detected:}
MERGE MODE:
Existing pairs preserved: {n}
New pairs created: {m}
{If circular deps broken:}
WARNINGS:
{n} circular dependencies detected and broken. Review recommended.
NEXT STEPS:
Run /execute-tdd-tasks to execute TDD pairs with RED-GREEN-REFACTOR workflow.
Run /execute-tdd-tasks --task-group {group} for group-specific execution.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If TaskList returns no tasks:
Direct the user to run /create-tasks first.

If the --task-group filter matches zero tasks:
Show the available task groups (the Phase 2 error message) and stop.
If merge mode detects that all eligible tasks already have TDD pairs:
All eligible tasks already have TDD pairs. Nothing to create.
TDD pair status:
- {n} active TDD pairs
- {m} completed TDD pairs
- {k} tasks skipped (test/config/docs)
If a TaskCreate call fails:
/agent-alchemy-tdd:create-tdd-tasks
/agent-alchemy-tdd:create-tdd-tasks --task-group user-authentication
If TDD pairs already exist for some tasks, they will be detected and skipped.
Test tasks inherit task_group, priority, feature_name, spec_path, and source_section from the original.
Test task UIDs append :red to the original task_uid.
Tasks with tdd_mode: true are never paired again (prevents double-pairing).

References:
references/tdd-decomposition-patterns.md -- Task pairing rules, naming conventions, criteria conversion, merge mode
references/tdd-dependency-rules.md -- Dependency insertion algorithm, circular dependency detection and breaking