Converts functional specs into atomic executable tasks with acceptance criteria, dependencies, and implementation commands. Outputs a tasks.md list plus individual TASK-XXX.md files. Supports --lang=java|spring|typescript|nestjs|react|python|php|general.
From developer-kit. Install: `npx claudepluginhub giuseppe-trisciuoglio/developer-kit --plugin developer-kit`

Usage: `/developer-kit:devkit.spec-to-tasks [--lang=java|spring|typescript|nestjs|react|python|php|general] [spec-file|spec-folder]`

Converts a functional specification into a list of executable, trackable tasks. This is the bridge between WHAT (specification) and HOW (implementation).
This command reads a functional specification generated by /developer-kit:devkit.brainstorm and converts it into atomic, executable tasks.
Input: docs/specs/[id]/YYYY-MM-DD--feature-name.md (preferred) or legacy docs/specs/[id]/YYYY-MM-DD--feature-name-specs.md
Output:
- `docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md`
- `docs/specs/[id]/tasks/TASK-XXX.md`

Each task includes acceptance criteria, dependencies, and an implementation command.
Idea → Functional Specification → Tasks → Implementation
(devkit.brainstorm) (this) (devkit.task-implementation)
# Basic usage - specify spec file or folder
/developer-kit:devkit.spec-to-tasks docs/specs/001-hotel-search-aggregation/
/developer-kit:devkit.spec-to-tasks docs/specs/001-hotel-search-aggregation/2026-03-07--hotel-search-aggregation.md
# With language specification
/developer-kit:devkit.spec-to-tasks --lang=spring docs/specs/001-user-auth/
/developer-kit:devkit.spec-to-tasks --lang=typescript docs/specs/001-user-auth/
/developer-kit:devkit.spec-to-tasks --lang=nestjs docs/specs/001-user-auth/
/developer-kit:devkit.spec-to-tasks --lang=react docs/specs/001-user-auth/
/developer-kit:devkit.spec-to-tasks --lang=python docs/specs/001-user-auth/
/developer-kit:devkit.spec-to-tasks --lang=general docs/specs/001-user-auth/
| Argument | Required | Description |
|---|---|---|
| `--lang` | Recommended | Target language/framework: java, spring, typescript, nestjs, react, python, php, general. Required for codebase analysis and technical task generation |
| `spec-file` | No | Path to spec file or spec folder (e.g., `docs/specs/001-feature-name/`, `docs/specs/001-feature-name/2026-03-07--feature-name.md`, or legacy `*-specs.md`) |
The command will automatically gather context information when needed.
You are converting a functional specification into executable tasks. Follow a systematic approach: analyze requirements, identify dependencies, generate atomic tasks, and create a trackable task list.
Goal: Read and understand the functional specification
Input: $ARGUMENTS (spec file or folder path)
Actions:
Create todo list with all phases
Parse $ARGUMENTS to extract:
- `--lang` parameter (language/framework for implementation)
- `spec-path` (path to spec file or folder)

Determine the spec folder:
- If a folder is given, resolve the spec file: `YYYY-MM-DD--feature-name.md` (preferred) or legacy `YYYY-MM-DD--feature-name-specs.md`
- Skip auxiliary files such as `--tasks.md`, `decision-log.md`, `traceability-matrix.md`, `user-request.md`, and `brainstorming-notes.md` when resolving

Read the resolved functional specification file
CRITICAL: Look for user context files in the spec folder:
- `user-request.md` - Original user request (from brainstorming - PRIMARY)
- `brainstorming-notes.md` - Notes from brainstorming session (SECONDARY)

Extract the spec ID from folder name (e.g., 001-hotel-search-aggregation)
Verify the specification exists and is valid
If file not found:
Quality Pre-Check (Soft Gate):
- Check whether the spec file contains a `## Clarifications` section (added by spec-review)
- If it is missing, suggest running `/devkit.spec-review docs/specs/[id]/` before proceeding

Goal: Extract and organize requirements from the specification
Actions:
Analyze the specification for:
CRITICAL: Include technical requirements from context files:
Group related requirements:
CRITICAL: Verify against original user request:
- `user-request.md` content already read in Phase 1

Present the extracted requirements structure (including technical requirements) to the user for confirmation
Assign unique REQ-IDs to each extracted requirement:
REQ-001: [User story / requirement text]
REQ-002: [Business rule / requirement text]
REQ-003: [Acceptance criterion / requirement text]
...
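The REQ-ID assignment above can be sketched as follows — a minimal illustration, where the requirement texts and the zero-padded `REQ-XXX` format mirror the examples in this section:

```python
# Sketch: assign sequential REQ-IDs to extracted requirements.
# The requirement texts below are illustrative placeholders.
def assign_req_ids(requirements, prefix="REQ"):
    """Return a dict mapping zero-padded REQ-XXX IDs to requirement text."""
    return {f"{prefix}-{i:03d}": text for i, text in enumerate(requirements, start=1)}

reqs = assign_req_ids([
    "User can search hotels by destination",
    "Results are paginated",
])
print(reqs["REQ-001"])  # → User can search hotels by destination
```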
Goal: Check if cached codebase analysis exists from previous runs
Prerequisite: Requires --lang parameter and spec folder path
Actions:
Check for existing Knowledge Graph:
- `knowledge-graph.json` in the spec folder

If Knowledge Graph exists:
- Read the `metadata.updated_at` timestamp
- Compute the age as `current_time - updated_at`

Present summary to user:
Found cached codebase analysis from X days ago:
- Y architectural patterns (Repository, Service Layer, etc.)
- Z components (N controllers, M services, K repositories)
- Q API endpoints documented
- Technology stack: [framework] [version]
The analysis is [fresh/getting stale/old].
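The fresh/getting stale/old classification might look like the sketch below; the 7-day and 30-day thresholds are illustrative assumptions, not values fixed by the command:

```python
# Sketch: classify knowledge-graph freshness from metadata.updated_at.
# Day thresholds (7 / 30) are assumed, not specified by the command.
from datetime import datetime, timezone

def kg_freshness(updated_at_iso, now=None):
    now = now or datetime.now(timezone.utc)
    updated = datetime.fromisoformat(updated_at_iso)
    age_days = (now - updated).days
    if age_days <= 7:
        return "fresh"
    if age_days <= 30:
        return "getting stale"
    return "old"
```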
Choose a reuse strategy automatically unless the case is borderline.
Based on the chosen strategy:
If using cached KG:
- `query knowledge-graph [spec-folder] patterns`
- `query knowledge-graph [spec-folder] components`
- `query knowledge-graph [spec-folder] apis`

Check and Load Global Knowledge Graph (if exists):
- `docs/specs/.global-knowledge-graph.json`

Goal: Understand existing codebase to generate technically accurate tasks
Prerequisite: This phase requires --lang parameter to select appropriate agents
Actions:
Based on the `--lang` parameter, select the appropriate codebase exploration agent:
| Language | Agent |
|---|---|
java / spring | developer-kit-java:java-software-architect-review |
typescript / nestjs | developer-kit-typescript:typescript-software-architect-review |
react | developer-kit-typescript:react-software-architect-review |
python | developer-kit-python:python-software-architect-expert |
php | developer-kit-php:php-software-architect-expert |
general | developer-kit:general-code-explorer |
For java / spring:
Explore the Java/Spring Boot codebase to understand:
1. **Project Structure**:
- Package organization (domain-driven, layered, etc.)
- Build configuration (Maven/Gradle, pom.xml/build.gradle)
- Main application class and entry points
2. **Spring Patterns**:
- Spring Data JPA repositories and entity mapping
- Spring Security configuration and auth patterns
- REST controller conventions (@RestController, @RequestMapping)
- Service layer patterns (@Service, transaction management)
- Configuration properties (@ConfigurationProperties)
3. **Data Layer**:
- Entity/DTO patterns
- Database migrations (Flyway, Liquibase)
- ORM patterns (Hibernate)
4. **Testing Patterns**:
- Test directory structure
- Testing conventions (JUnit 5, Mockito)
- Integration test setup
Provide a summary that will inform task generation with Spring-specific context.
For typescript / nestjs:
Explore the TypeScript/NestJS codebase to understand:
1. **Project Structure**:
- Module organization
- TypeScript configuration (tsconfig.json)
- NestJS module structure
2. **NestJS Patterns**:
- Controller conventions (@Controller, @Get, @Post, etc.)
- Service layer patterns (@Injectable, providers)
- Module organization (@Module)
- Dependency injection setup
- Guards and interceptors
3. **Data Access**:
- ORM usage (TypeORM, Drizzle, Prisma)
- Repository patterns
- Database migrations
4. **Testing Patterns**:
- Jest configuration
- Unit vs integration test structure
Provide a summary that will inform task generation with NestJS-specific context.
For react:
Explore the React codebase to understand:
1. **Project Structure**:
- App organization (Next.js, Remix, or CRA/Vite)
- Routing structure
- Component directory layout
2. **React Patterns**:
- Component patterns (functional, hooks)
- State management (Context, Redux, Zustand, etc.)
- API communication (React Query, SWR, fetch)
- Form handling patterns
3. **Styling**:
- CSS approach (CSS modules, Tailwind, styled-components)
- Component library usage
4. **Testing Patterns**:
- Testing library (Jest, Vitest, React Testing Library)
- Component testing conventions
Provide a summary that will inform task generation with React-specific context.
For python:
Explore the Python codebase to understand:
1. **Project Structure**:
- Package organization
- requirements.txt, setup.py, or pyproject.toml
- Entry points (main.py, __main__.py)
2. **Python Patterns**:
- Web framework (Django, FastAPI, Flask)
- Data models (SQLAlchemy, Pydantic, Django ORM)
- API patterns (REST, GraphQL)
- Authentication patterns
3. **Testing Patterns**:
- pytest configuration
- Test directory structure
- Mocking conventions
Provide a summary that will inform task generation with Python-specific context.
For php:
Explore the PHP codebase to understand:
1. **Project Structure**:
- Composer-based project organization
- Laravel directory structure or custom MVC
2. **PHP Patterns**:
- Framework conventions (Laravel, Symfony)
- ORM usage (Eloquent, Doctrine)
- Controller patterns
- Routing and middleware
3. **Testing Patterns**:
- PHPUnit configuration
- Feature vs unit test structure
Provide a summary that will inform task generation with PHP-specific context.
For general:
Explore the codebase to understand:
1. **Project Structure**:
- Main directories and their purpose
- Configuration files (package.json, pom.xml, requirements.txt, etc.)
- Entry points and main modules
2. **Existing Patterns**:
- Data models/schemas used
- API patterns (REST, GraphQL, etc.)
- Authentication/authorization patterns
- Database access patterns (ORM, raw queries, etc.)
- Error handling patterns
- Logging and monitoring approaches
3. **Technology Stack**:
- Frameworks and libraries used
- Database systems
- External service integrations
- Build and deployment tools
4. **Integration Points**:
- Existing APIs the new feature must integrate with
- Shared utilities or helper functions
- Common components or services
- Configuration management
5. **Code Organization**:
- Layered architecture (if any)
- Module boundaries
- Dependency injection patterns
- Testing patterns and conventions
Provide a comprehensive summary that will inform task generation.
Goal: Persist agent discoveries into the Knowledge Graph for future reuse
Prerequisite: Phase 3 (Codebase Analysis) must have completed
Actions:
Extract structured findings from agent analysis:
- `patterns.architectural`: Design patterns discovered (Repository, Service Layer, etc.)
- `patterns.conventions`: Coding conventions (naming, testing, etc.)
- `components`: Code components identified (controllers, services, repositories, entities)
- `apis.internal`: REST endpoints and API structure
- `apis.external`: External service integrations
- `integration_points`: Database, cache, message queues, etc.

Construct KG update object:
{
"metadata": {
"spec_id": "[extracted from folder]",
"feature_name": "[extracted from folder]",
"updated_at": "[current ISO timestamp]",
"analysis_sources": [
{
"agent": "[agent-type-used]",
"timestamp": "[current ISO timestamp]",
"focus": "codebase analysis for task generation"
}
]
},
"codebase_context": {
"project_structure": { /* from agent analysis */ },
"technology_stack": { /* from agent analysis */ }
},
"patterns": {
"architectural": [ /* patterns discovered */ ],
"conventions": [ /* conventions identified */ ]
},
"components": {
"controllers": [ /* controllers found */ ],
"services": [ /* services found */ ],
"repositories": [ /* repositories found */ ],
"entities": [ /* entities found */ ],
"dtos": [ /* DTOs found */ ]
},
"apis": {
"internal": [ /* endpoints discovered */ ],
"external": [ /* external integrations */ ]
},
"integration_points": [ /* databases, caches, etc. */ ]
}
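What the stamp-and-merge of this update object amounts to can be sketched as below. This is only an illustration of the data flow — the actual update is delegated to the spec-quality command — and the shallow "update keys win" merge policy is an assumption:

```python
# Sketch: stamp and merge a KG update object into knowledge-graph.json.
# Key names follow the update object above; merge policy is assumed.
import json
from datetime import datetime, timezone
from pathlib import Path

def merge_kg_update(spec_folder, update, agent):
    kg_path = Path(spec_folder) / "knowledge-graph.json"
    kg = json.loads(kg_path.read_text()) if kg_path.exists() else {}
    kg.update(update)  # shallow merge: keys from the new analysis win
    meta = kg.setdefault("metadata", {})
    meta["updated_at"] = datetime.now(timezone.utc).isoformat()
    meta.setdefault("analysis_sources", []).append(
        {"agent": agent, "timestamp": meta["updated_at"],
         "focus": "codebase analysis for task generation"}
    )
    kg_path.write_text(json.dumps(kg, indent=2))
    return kg
```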
Update Knowledge Graph using spec-quality command:
- Run `/developer-kit:devkit.spec-quality [spec-folder] --update-kg-only`
- This updates `knowledge-graph.json` with discovered patterns and refreshes `metadata.updated_at` and `metadata.analysis_sources`

Log and report:
Knowledge Graph updated via spec-quality:
- X architectural patterns documented
- Y coding conventions identified
- Z components catalogued (N controllers, M services, K repositories)
- Q API endpoints documented
- R integration points mapped
Saved to: docs/specs/[ID]/knowledge-graph.json
Verify update:
Note: If user chose to use cached KG in Phase 2.5, skip this phase and proceed directly to Phase 4.
Goal: Break down requirements into atomic, executable tasks
Actions:
If Knowledge Graph context is available (from Phase 2.5 cached or Phase 3.5 updated):
For each requirement group, create one or more tasks:
For each task, define:
Map dependencies explicitly:
Validate dependencies before generating files:
| Task ID | Title | Dependencies |
|---|---|---|
| TASK-001 | [Title] | None |
| TASK-002 | [Title] | TASK-001 |
| TASK-003 | [Title] | TASK-001, TASK-002 |
| ... | ... | ... |
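Validating the dependency table above reduces to checking for unknown task IDs and cycles; a minimal sketch using Kahn's topological sort (the task IDs are illustrative):

```python
# Sketch: validate task dependencies before generating files.
# Detects unknown dependencies and cycles via Kahn's topological sort.
from collections import deque

def validate_dependencies(deps):
    """deps maps task ID -> list of task IDs it depends on.
    Returns a valid implementation order, or raises ValueError."""
    for task, ds in deps.items():
        for d in ds:
            if d not in deps:
                raise ValueError(f"{task} depends on unknown task {d}")
    indegree = {t: len(ds) for t, ds in deps.items()}
    dependents = {t: [] for t in deps}
    for t, ds in deps.items():
        for d in ds:
            dependents[d].append(t)
    queue = deque(sorted(t for t, n in indegree.items() if n == 0))
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for nxt in dependents[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(deps):
        raise ValueError("dependency cycle detected")
    return order

print(validate_dependencies({
    "TASK-001": [], "TASK-002": ["TASK-001"], "TASK-003": ["TASK-001", "TASK-002"],
}))  # → ['TASK-001', 'TASK-002', 'TASK-003']
```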
Identify Test Requirements for Each Task: For each identified task, you must now precisely and mandatorily define what needs to be tested. This analysis will guide the generation of the "Test Instructions" section in the task file.
Analyze involved classes/components: For each file that the task will create or modify, determine its complexity level and testing importance.
Define behaviors to test: For each high-priority component, list specific test scenarios. Do not generate code, but describe the behavior.
Examples:
- "The `register(userData)` method in `UserService` calls `UserRepository.save()` only if the email is unique and valid."
- "The `calculateTotal(price, tax, discount)` function returns the correct value for valid inputs, for zero taxes, and for maximum discounts."
- "The `/api/register` endpoint with valid data saves a new user in the database and returns status 201."

Suggest test files to create: For each source file requiring tests, indicate the corresponding test file according to language conventions.
- `UserService.java` → `UserServiceTest.java`
- `user.service.ts` → `user.service.spec.ts`
- `user_service.py` → `test_user_service.py`

Link Tests to Acceptance Criteria: Ensure that for each functional acceptance criterion, there is at least one test scenario that verifies it. This step is critical for guaranteeing traceability.
Present task structure to the user only if major restructuring, optional tasks, or scope gaps were detected. Otherwise generate the files directly and summarize the resulting plan.
Goal: Generate the task list markdown file and individual task files with technical details
Actions:
Generate a unique task ID for each task (e.g., TASK-001, TASK-002)
Extract feature name from folder (remove ID prefix, e.g., 001-hotel-search-aggregation → hotel-search-aggregation)
Create tasks directory: docs/specs/[id]/tasks/
For each task, create an individual task file with technical details from codebase analysis:
IMPORTANT: Always include test files in "Files to Create" section for any class that contains business logic, state management, validation, or complex behavior. Test files should be listed alongside source files with clear descriptions of what to test (e.g., "test state transitions", "test validation logic").
---
id: TASK-XXX
title: "[Task Title]"
spec: [resolved spec file path]
lang: [java|spring|typescript|nestjs|react|python|general]
dependencies: [TASK-YYY if applicable]
---
# TASK-XXX: [Task Title]
**Functional Description**: [Functional description of what this task covers]
## Technical Context (from Codebase Analysis)
- **Existing Patterns to Follow**: [patterns from codebase analysis]
- **APIs to Integrate With**: [existing APIs or services]
- **Shared Components**: [existing utilities, services, or modules to use]
- **Conventions**: [coding conventions, naming, structure, framework-specific patterns]
## Implementation Details (File names only, no code)
**Files to Create**:
- `[path/source/1]` - [brief description of its purpose]
- `[path/source/2]` - [brief description of its purpose]
- `[path/test/1]` - [e.g., user.service.spec.ts]
- `[path/test/2]` - [e.g., user.controller.integration.spec.ts]
**Files to Modify** (if applicable):
- `[path/existing/1]` - [what modifications are needed]
## Test Instructions
This section describes **what** to test, not **how** to implement test code.
**1. Mandatory Unit Tests:**
- `[Source Class/File Name 1]`:
- [ ] Verify that [method/unit] correctly handles [success scenario].
- [ ] Verify that [method/unit] throws an exception/error when [error scenario].
- [ ] Verify that the [specific business rule] logic works as described in the specification.
- `[Source Class/File Name 2]`:
- [ ] Test validation of [specific field] with valid, invalid, and borderline values.
**2. Mandatory Integration Tests:**
- `[Flow/Component Name]`:
- [ ] Verify that the `[API endpoint]` endpoint with valid data correctly interacts with the database and returns the expected response (e.g., status 201, correct body).
- [ ] Verify that a call to the `[API endpoint]` endpoint with invalid data **does not** modify the database state and returns an appropriate error (e.g., status 400).
**3. Edge Cases and Error Conditions to Test:**
- [ ] Send missing or malformed data.
- [ ] Simulate timeout or failure of an external service.
- [ ] Test race conditions (if relevant, e.g., double booking).
- [ ] Test with high data loads or boundary values (e.g., maximum length strings).
**Test Acceptance Criteria**:
- [ ] All tests described above are implemented and pass.
- [ ] Test coverage for classes with business logic is >= 80%.
**Dependencies**: [TASK-YYY if applicable, otherwise "None"]
**Implementation Command**:
/developer-kit:devkit.task-implementation --lang=[language] --task="docs/specs/[id]/tasks/TASK-XXX.md"
Generate the task list file `docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md`:
# Task List: [Feature Name]
**Specification**: [resolved spec file path]
**Generated**: [current date]
**Language**: [language]
## Codebase Analysis Summary
- **Project Structure**: [summary from codebase analysis]
- **Key Patterns**: [patterns identified]
- **Integration Points**: [APIs/services to integrate with]
## Task Index
| Task ID | Title | Technical Focus | Status | Dependencies |
|---------|-------|-----------------|--------|--------------|
| [TASK-001](tasks/TASK-001.md) | Task title | [files/components] | [ ] | - |
| [TASK-002](tasks/TASK-002.md) | Task title | [files/components] | [ ] | TASK-001 |
## Tasks
Each task has its own detailed file with technical context:
- [TASK-001](tasks/TASK-001.md): Task title
- [TASK-002](tasks/TASK-002.md): Task title
Goal: Generate traceability matrix mapping requirements to tasks
Prerequisite: Phase 2 (Requirement Extraction with REQ-IDs) and Phase 4 (Technical Task Decomposition) completed
Actions:
Map REQ-IDs to tasks:
Generate traceability matrix file: Create docs/specs/[id]/traceability-matrix.md:
# Traceability Matrix: [Feature Name]
**Spec**: [resolved spec file path]
**Generated**: YYYY-MM-DD
**Last Updated**: YYYY-MM-DD
## Coverage Summary
- **Requirements**: N total
- **Covered by Tasks**: N/N (100%)
- **With Tests**: N/N (X%)
- **Implemented**: N/N (X%)
## Matrix
| REQ ID | Requirement | Task(s) | Test Files | Code Files | Status |
|--------|-------------|---------|------------|------------|--------|
| REQ-001 | User can search by destination | TASK-001, TASK-003 | - | - | Pending |
| REQ-002 | Results paginated | TASK-005 | - | - | Pending |
Initialize matrix columns:
Calculate coverage summary:
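The coverage-summary calculation can be sketched as follows; the row field names (`tasks`, `test_files`, `status`) are illustrative, while the percentages correspond to the Coverage Summary section above:

```python
# Sketch: compute the traceability-matrix coverage summary.
# Row field names are illustrative assumptions.
def coverage_summary(matrix):
    total = len(matrix)
    covered = sum(1 for row in matrix if row["tasks"])
    tested = sum(1 for row in matrix if row["test_files"])
    implemented = sum(1 for row in matrix if row["status"] == "Implemented")
    pct = lambda n: round(100 * n / total) if total else 0
    return {"requirements": total, "covered_pct": pct(covered),
            "tested_pct": pct(tested), "implemented_pct": pct(implemented)}

summary = coverage_summary([
    {"req": "REQ-001", "tasks": ["TASK-001", "TASK-003"], "test_files": [], "status": "Pending"},
    {"req": "REQ-002", "tasks": ["TASK-005"], "test_files": [], "status": "Pending"},
])
print(summary)  # → {'requirements': 2, 'covered_pct': 100, 'tested_pct': 0, 'implemented_pct': 0}
```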
Goal: Verify the task list quality
Actions:
Present the generated task structure to the user:
- `docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md`
- `docs/specs/[id]/tasks/TASK-XXX.md`

Ask for confirmation via AskUserQuestion:
If modifications needed, return to Phase 3
Goal: Document what was accomplished
Actions:
Mark all todos complete
Summarize:
- `docs/specs/[id]/YYYY-MM-DD--feature-name--tasks.md`
- `docs/specs/[id]/tasks/TASK-XXX.md` (with technical context)

Provide example commands for implementing tasks:
# Example: Implement a specific task
/developer-kit:devkit.task-implementation --lang=[language] --task="docs/specs/001-feature-name/tasks/TASK-001.md"
# Example: List available tasks in the folder
ls docs/specs/001-feature-name/tasks/
When tasks have dependencies, the workflow is:
docs/specs/001-user-auth/tasks/TASK-001.md (no dependencies)
↓
docs/specs/001-user-auth/tasks/TASK-002.md (depends on TASK-001)
↓
docs/specs/001-user-auth/tasks/TASK-003.md (depends on TASK-001)
↓
docs/specs/001-user-auth/tasks/TASK-004.md (depends on TASK-002)
The --lang parameter also determines the Implementation Command embedded in each task file:
| Language | Implementation Command |
|---|---|
java | devkit.task-implementation --lang=java --task="docs/specs/[id]/tasks/TASK-XXX.md" |
spring | devkit.task-implementation --lang=spring --task="docs/specs/[id]/tasks/TASK-XXX.md" |
typescript | devkit.task-implementation --lang=typescript --task="docs/specs/[id]/tasks/TASK-XXX.md" |
nestjs | devkit.task-implementation --lang=nestjs --task="docs/specs/[id]/tasks/TASK-XXX.md" |
react | devkit.task-implementation --lang=react --task="docs/specs/[id]/tasks/TASK-XXX.md" |
python | devkit.task-implementation --lang=python --task="docs/specs/[id]/tasks/TASK-XXX.md" |
php | devkit.task-implementation --lang=php --task="docs/specs/[id]/tasks/TASK-XXX.md" |
general | devkit.task-implementation --lang=general --task="docs/specs/[id]/tasks/TASK-XXX.md" |
# Convert specification to tasks
/developer-kit:devkit.spec-to-tasks --lang=spring docs/specs/001-user-auth/
Output structure:
docs/specs/001-user-auth/
├── 2026-03-07--user-auth-specs.md
├── 2026-03-07--user-auth--tasks.md
└── tasks/
├── TASK-001.md
├── TASK-002.md
├── TASK-003.md
├── TASK-004.md
├── TASK-005.md
└── TASK-006.md
Sample task file (TASK-001.md):
---
id: TASK-001
title: "User registration endpoint"
spec: docs/specs/001-user-auth/2026-03-07--user-auth-specs.md
lang: spring
dependencies: []
---
# TASK-001: User registration endpoint
**Functional Description**: Implement user registration with email validation
## Technical Context (from Codebase Analysis)
- **Existing Patterns to Follow**: REST controllers in src/main/java/.../controller/
- **APIs to Integrate With**: Existing UserRepository
- **Conventions**: @RestController, @Valid annotations
## Implementation Details (File names only, no code)
**Files to Create**:
- `src/main/java/.../controller/AuthController.java` - Controller for registration
- `src/main/java/.../service/UserService.java` - Business logic service
- `src/test/java/.../controller/AuthControllerTest.java` - Controller tests
- `src/test/java/.../service/UserServiceTest.java` - Service tests
**Files to Modify**:
- `src/main/java/.../config/SecurityConfig.java` - Add public endpoint
## Test Instructions
This section describes **what** to test, not **how** to implement test code.
**1. Mandatory Unit Tests:**
- `UserService`:
- [ ] Verify that the `register(userData)` method calls `UserRepository.save()` only if the email is unique.
- [ ] Verify that `EmailAlreadyExistsException` is thrown when the email is already registered.
- [ ] Verify that the password is encoded before saving.
- `AuthController`:
- [ ] Test email validation with valid, invalid, and missing formats.
- [ ] Verify that the controller returns status 201 for successful registration.
**2. Mandatory Integration Tests:**
- `Registration Flow`:
- [ ] Verify that a POST request to the `/api/v1/users/register` endpoint with valid data saves a new user in the database and returns status 201.
- [ ] Verify that a request with duplicate email returns status 409 and does not modify the database.
**3. Edge Cases and Error Conditions to Test:**
- [ ] Send malformed email (e.g., without @).
- [ ] Send too short password (e.g., less than 8 characters).
- [ ] Send malformed JSON payload.
**Test Acceptance Criteria**:
- [ ] All tests described above are implemented and pass.
- [ ] Test coverage for UserService is >= 80%.
**Implementation Command**:
/developer-kit:devkit.task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"
/developer-kit:devkit.spec-to-tasks --lang=typescript docs/specs/005-checkout-flow/
/developer-kit:devkit.spec-to-tasks --lang=python docs/specs/010-payment-integration/
# Step 1: Generate tasks from specification
/developer-kit:devkit.spec-to-tasks --lang=nestjs docs/specs/003-notification-system/
# Step 2: Implement tasks in dependency order
/developer-kit:devkit.task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-001.md"
/developer-kit:devkit.task-implementation --lang=nestjs --task="docs/specs/003-notification-system/tasks/TASK-002.md"
The task list generated by this command feeds directly into /developer-kit:devkit.task-implementation:
# After generating tasks, implement each one:
# Option 1: Implement all tasks (sequentially)
/developer-kit:devkit.task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-001.md"
# (complete, then)
/developer-kit:devkit.task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-002.md"
# ...
# Option 2: Implement tasks in dependency order
# Start with tasks that have no dependencies
# Progress through the dependency graph
# Option 3: Pick specific task to work on
/developer-kit:devkit.task-implementation --lang=spring --task="docs/specs/001-user-auth/tasks/TASK-003.md"
Throughout the process, maintain a todo list like:
[ ] Phase 1: Specification Analysis
[ ] Phase 2: Requirement Extraction
[ ] Phase 3: Codebase Analysis
[ ] Phase 4: Technical Task Decomposition
[ ] Phase 5: Task List Generation
[ ] Phase 6: Review and Confirmation
[ ] Phase 7: Summary
Update the status as you progress through each phase.
Note: This command follows the "divide et impera" (divide and conquer) principle — splitting complex problems into simpler, manageable tasks. Each task can be implemented independently, with clear dependencies and acceptance criteria.