From ring-pm-team
Decomposes PRD/TRD into value-driven implementation tasks delivering working increments with measurable success criteria, dependencies, risks, and sizing (<2 weeks). Use after TRD/dependency map, before subtasks.
```shell
npx claudepluginhub lerianstudio/ring --plugin ring-pm-team
```

This skill uses the workspace's default tool permissions.
**Every task must deliver working software that provides measurable user value.**
Creating technical-only or oversized tasks creates hidden complexity, unclear value, and work that cannot be planned or verified.
Tasks answer: What working increment will be delivered? Tasks never answer: How to implement that increment (that's Subtasks).
| Phase | Activities |
|---|---|
| 1. Task Identification | Load PRD (Gate 1, required), TRD (Gate 3, required); optional: Feature Map, API Design, Data Model, Dependency Map; identify value streams |
| 2. Decomposition | Per component/feature: define deliverable, set success criteria, map dependencies, estimate effort via AI analysis (max 16 AI-agent-hours), plan testing, identify risks |
| 3. Gate 7 Validation | All TRD components covered; every task delivers working software; measurable success criteria; correct dependencies; no task >2 weeks; testing strategy defined; risks with mitigations; delivery sequence optimizes value |
Task ID, title, type (Foundation/Feature/Integration/Polish), deliverable (what ships), user value (what users can do), technical value (what it enables), success criteria (testable/measurable), dependencies (blocks/requires/optional), effort estimate (AI-agent-hours with confidence), testing strategy, risk identification with mitigations, Definition of Done checklist
Implementation details (file paths, code examples), step-by-step instructions (those go in subtasks), technical-only tasks with no user value, tasks exceeding 2 weeks (break them down), vague success criteria ("improve performance"), missing dependency information, undefined testing approach
| Size | AI-agent-hours | Calendar Duration* | Scope |
|---|---|---|---|
| Small (S) | 1-4h | 1-2 days | Single component |
| Medium (M) | 4-8h | 2-4 days | Few dependencies |
| Large (L) | 8-16h | 1-2 weeks | Multiple components |
| XL (>16h) | BREAK IT DOWN | Too large | Not atomic |
*Calendar duration assumes 1.5x multiplier (standard validation), 90% capacity, and 1 developer
See shared-patterns/ai-agent-baseline.md for baseline definition.
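The footnote's conversion can be sketched in a few lines. This is a minimal reading of the stated assumptions (1.5x multiplier, 90% capacity, one developer); the function name and the 8-hour day are illustrative, and the table's calendar ranges are broader, presumably folding in review and handoff overhead, so treat this arithmetic as a floor:

```python
def calendar_days(ai_agent_hours, multiplier=1.5, capacity=0.9, hours_per_day=8):
    """Apply the footnote's assumptions: 1.5x validation multiplier,
    90% capacity, 1 developer working hours_per_day hours."""
    return ai_agent_hours * multiplier / (capacity * hours_per_day)

print(round(calendar_days(4), 2))   # 0.83 — Small/Medium boundary
print(round(calendar_days(16), 2))  # 3.33 — Large upper bound
```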
After defining task scope and success criteria, the system automatically estimates implementation time:

1. **Tech Stack Detection:** identify the project type from the TRD
2. **Scope Analysis:** a specialized agent analyzes each scope item
3. **Time Calculation:** the agent estimates effort per scope item
4. **Output:** total in AI-agent-hours
For detailed baseline definition and capacity explanation, see shared-patterns/ai-agent-baseline.md.
| Level | Criteria | Example |
|---|---|---|
| High | Standard patterns, libs available, clear scope | CRUD API with lib-commons |
| Medium | Some custom logic, partial lib support | Payment integration |
| Low | Novel algorithms, no lib support, vague scope | ML feature, R&D work |
**Effort Estimate:**
- **Baseline:** AI Agent via ring:dev-cycle
- **AI Estimate:** 4.5 AI-agent-hours
- **Estimation Method:** ring:backend-engineer-golang analysis
- **Confidence:** High (standard CRUD, lib-commons available)
**Breakdown:**
- Database schema + migrations: 0.5h
- Repository layer (CRUD): 0.5h
- Service layer (business logic): 0.5h
- HTTP handlers (4 endpoints): 1.0h
- Input validation: 0.3h
- Error handling: 0.2h
- Unit tests (TDD, 85% coverage): 0.8h
- Integration tests: 0.5h
- OpenAPI documentation: 0.2h
**Total: 4.5 AI-agent-hours**
**Assumptions:**
- lib-commons/http, lib-commons/postgres, lib-commons/validator available
- Standard CRUD patterns (no complex algorithms)
- PostgreSQL database configured
- ring:dev-cycle will execute implementation
**Team Type:** Backend Engineer (Go)
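A breakdown is only trustworthy if its items sum to the stated total. A quick sanity check of the example above (dictionary keys are shorthand for the breakdown lines):

```python
# Per-item hours from the effort breakdown above.
breakdown = {
    "schema+migrations": 0.5, "repository": 0.5, "service": 0.5,
    "handlers": 1.0, "validation": 0.3, "error handling": 0.2,
    "unit tests": 0.8, "integration tests": 0.5, "openapi docs": 0.2,
}
total = round(sum(breakdown.values()), 1)
assert total == 4.5, total  # matches the stated 4.5 AI-agent-hours
```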
| Excuse | Reality |
|---|---|
| "This 3-week task is fine" | Tasks >2 weeks hide complexity. Break it down. |
| "Setup tasks don't need value" | Setup enables value. Define what it enables. |
| "Success criteria are obvious" | Obvious to you ≠ testable. Document explicitly. |
| "Dependencies will be clear later" | Later is too late. Map them now. |
| "We don't need detailed estimates" | Without estimates, no planning possible. Size them. |
| "Technical tasks can skip user value" | Even infrastructure enables users. Define the connection. |
| "Testing strategy can be decided during" | Testing affects design. Plan it upfront. |
| "Risks aren't relevant at task level" | Risks compound across tasks. Identify them early. |
| "DoD is the same for all tasks" | Different tasks need different criteria. Specify. |
| "We can combine multiple features" | Combining hides value delivery. Keep tasks focused. |
| "Skip AI estimation, use story points" | Story points are abstract, AI hours are concrete |
| "Manual estimate is faster" | Fast ≠ accurate. AI analyzes full scope consistently |
| "AI estimate too low, inflate it" | Inflation happens in multiplier (Gate 9), not here |
| "Confidence is always High" | Confidence reflects scope clarity and complexity |
| "Skip breakdown, just give total" | Breakdown enables validation and learning |
| "AI can't estimate this, too complex" | Complex = lower confidence, not impossible |
If you catch yourself writing any of these rationalizations in a task, STOP. Refine the task until it's concrete, valuable, and testable.
| Category | Requirements |
|---|---|
| Task Completeness | All TRD components have tasks; all PRD features have tasks; each task appropriately sized (no XL+); task boundaries clear |
| Delivery Value | Every task delivers working software; user value explicit; technical value clear; sequence optimizes value |
| Technical Clarity | Success criteria measurable/testable; dependencies correctly mapped; testing approach defined; DoD comprehensive |
| Team Readiness | Skills match capabilities; estimates realistic; capacity available; handoffs minimized |
| Risk Management | Risks identified per task; mitigations defined; high-risk tasks scheduled early; fallback plans exist |
| Multi-Module (if applicable) | All tasks have target: field; all tasks have working_directory:; per-module files generated (if doc_organization: per-module) |
Gate Result: ✅ PASS → Subtasks | ⚠️ CONDITIONAL (refine oversized/vague) | ❌ FAIL (re-decompose)
If TopologyConfig exists in research.md frontmatter (from Gate 0):
```yaml
# From research.md frontmatter
topology:
  scope: fullstack
  structure: monorepo | multi-repo
  modules:
    backend:
      path: packages/api
      language: golang
    frontend:
      path: packages/web
      framework: nextjs
  doc_organization: unified | per-module
```
Each task MUST have target: and working_directory: fields when topology is multi-module.
Agent assignment depends on both target and api_pattern:
| Target | API Pattern | Task Type | Agent |
|---|---|---|---|
| backend | any | API endpoints, services, data layer, CLI | ring:backend-engineer-golang or ring:backend-engineer-typescript |
| frontend | direct | UI components, pages, forms, Server Components | ring:frontend-engineer |
| frontend | direct | Server Actions, data fetching hooks | ring:frontend-engineer |
| frontend | bff | API routes, data aggregation, transformation | ring:frontend-bff-engineer-typescript |
| frontend | bff | UI components, pages, forms | ring:frontend-engineer |
| shared | any | CI/CD, configs, docs, cross-module utilities | DevOps or general |
Read api_pattern from research.md frontmatter:
```yaml
# From research.md
topology:
  scope: fullstack
  api_pattern: direct | bff | other
```
Decision Flow:
```text
Is task target: frontend?
├─ NO → Use backend-engineer-* based on language
└─ YES → Check api_pattern
    ├─ direct → ALL frontend tasks use frontend-engineer
    └─ bff → Split tasks:
        ├─ API routes, aggregation, transformation → frontend-bff-engineer-typescript
        └─ UI components, pages, forms → frontend-engineer
```
## T-003: User Login API Endpoint
**Target:** backend
**Working Directory:** packages/api
**Agent:** ring:backend-engineer-golang
**Deliverable:** Working login API that validates credentials and returns JWT token.
...rest of task...
## T-004: User Dashboard Data Aggregation
**Target:** frontend
**Working Directory:** packages/web
**Agent:** ring:frontend-bff-engineer-typescript # Because api_pattern: bff
**Deliverable:** BFF endpoint that aggregates user profile, recent activity, and notifications.
...rest of task...
## T-005: User Dashboard UI
**Target:** frontend
**Working Directory:** packages/web
**Agent:** ring:frontend-engineer # UI task, even with BFF pattern
**Deliverable:** Dashboard page component consuming aggregated data from BFF.
...rest of task...
| Check | Requirement |
|---|---|
| All tasks have Agent: field | MANDATORY |
| Agent matches api_pattern rules | If frontend + bff → check task type |
| BFF tasks clearly separated | Data aggregation vs UI clearly split |
| No mixed responsibilities | One task = one agent |
Document placement depends on topology.structure:
All tasks in one file:
```text
docs/pre-dev/{feature}/
└── tasks.md    # All tasks with target tags
```
Index at root, filtered tasks in module directories:
```text
docs/pre-dev/{feature}/
└── tasks.md    # Index with ALL tasks (target tags included)

{backend.path}/docs/pre-dev/{feature}/
└── tasks.md    # Backend tasks only (target: backend)

{frontend.path}/docs/pre-dev/{feature}/
└── tasks.md    # Frontend tasks only (target: frontend)
```
Tasks distributed to respective repositories:
```text
{backend.path}/docs/pre-dev/{feature}/
└── tasks.md    # Backend tasks only

{frontend.path}/docs/pre-dev/{feature}/
└── tasks.md    # Frontend tasks only
```
Note: For multi-repo, there is no central index. Each repo contains only its relevant tasks.
```python
def split_tasks_by_module(all_tasks: list, topology: dict, feature: str) -> dict:
    """
    Split tasks into module-specific files.
    Returns a dict with keys among: 'index', 'backend', 'frontend'.
    """
    structure = topology.get('structure', 'single-repo')
    modules = topology.get('modules', {})
    backend_path = modules.get('backend', {}).get('path', '.')
    frontend_path = modules.get('frontend', {}).get('path', '.')

    backend_tasks = [t for t in all_tasks if t.get('target') == 'backend']
    frontend_tasks = [t for t in all_tasks if t.get('target') == 'frontend']
    shared_tasks = [t for t in all_tasks if t.get('target') == 'shared']

    if structure == 'single-repo':
        return {
            'index': {
                'path': f"docs/pre-dev/{feature}/tasks.md",
                'tasks': all_tasks,
            }
        }
    if structure == 'monorepo':
        return {
            'index': {
                'path': f"docs/pre-dev/{feature}/tasks.md",
                'tasks': all_tasks,
            },
            'backend': {
                'path': f"{backend_path}/docs/pre-dev/{feature}/tasks.md",
                'tasks': backend_tasks + shared_tasks,
            },
            'frontend': {
                'path': f"{frontend_path}/docs/pre-dev/{feature}/tasks.md",
                'tasks': frontend_tasks + shared_tasks,
            },
        }
    if structure == 'multi-repo':
        return {
            'backend': {
                'path': f"{backend_path}/docs/pre-dev/{feature}/tasks.md",
                'tasks': backend_tasks + shared_tasks,
            },
            'frontend': {
                'path': f"{frontend_path}/docs/pre-dev/{feature}/tasks.md",
                'tasks': frontend_tasks + shared_tasks,
            },
        }
    raise ValueError(f"Unknown topology structure: {structure}")
```
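A minimal, self-contained sketch of the monorepo split, using hypothetical inputs that mirror the task and topology shapes described above (IDs, paths, and the feature name are illustrative):

```python
# Hypothetical topology and tasks for a monorepo feature.
topology = {"structure": "monorepo",
            "modules": {"backend": {"path": "packages/api"},
                        "frontend": {"path": "packages/web"}}}
tasks = [{"id": "T-001", "target": "backend"},
         {"id": "T-002", "target": "frontend"},
         {"id": "T-003", "target": "shared"}]
feature = "user-auth"

# Backend file gets backend + shared tasks, per the split rules.
backend_path = topology["modules"]["backend"]["path"]
backend_file = f"{backend_path}/docs/pre-dev/{feature}/tasks.md"
backend_tasks = [t["id"] for t in tasks if t["target"] in ("backend", "shared")]

print(backend_file)   # packages/api/docs/pre-dev/user-auth/tasks.md
print(backend_tasks)  # ['T-001', 'T-003']
```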
Each module-specific tasks.md should include:
```markdown
---
feature: {feature-name}
module: backend | frontend
filtered_from: docs/pre-dev/{feature}/tasks.md   # (monorepo only)
total_tasks: N
---

# {Feature Name} - {Module} Tasks

This file contains tasks filtered for the **{module}** module.

**Full task list:** {link to index if monorepo, or note "distributed" if multi-repo}
```
---
| Check | Requirement |
|---|---|
| All tasks have target: | If topology is monorepo or multi-repo |
| All tasks have working_directory: | If topology is monorepo or multi-repo |
| Target matches task content | Backend tasks have backend work, etc. |
| Working directory resolves correctly | Path exists or will be created |
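The first two checks above are mechanical enough to automate. A hedged sketch (function name and error strings are illustrative; it assumes tasks are dicts with `id`, `target`, and `working_directory` keys):

```python
def validate_multi_module(tasks, topology):
    """Return a list of violations for the multi-module checks above."""
    errors = []
    if topology.get("structure") not in ("monorepo", "multi-repo"):
        return errors  # checks only apply to multi-module topologies
    for t in tasks:
        if t.get("target") not in ("backend", "frontend", "shared"):
            errors.append(f"{t.get('id', '?')}: missing or invalid target")
        if not t.get("working_directory"):
            errors.append(f"{t.get('id', '?')}: missing working_directory")
    return errors
```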
Output path depends on topology — see Output & After Approval for the full topology-dependent rules. The file starts with two summary sections followed by the full task details.
MUST open with two summary tables before the individual task details.
A quick-reference table for the engineering team. The Status column is initialized by ring:pre-dev-task-breakdown and updated by ring:dev-cycle during execution; every Status cell is set to ⏸️ Pending at task creation time.

## Summary
| Task | Title | Type | Hours | Confidence | Blocks | Status |
|------|-------|------|-------|------------|--------|--------|
| T-001 | Project Foundation | Foundation | 3.0 | High | All | ⏸️ Pending |
| T-002 | ... | Feature | 6.5 | Medium | T-004, T-008 | ⏸️ Pending |
| | **TOTAL** | | **85.0h** | | | |
MUST leave the Status cell of the TOTAL row empty. CANNOT apply ⏸️ Pending or any status value to the TOTAL row.
Status lifecycle (managed by ring:dev-cycle):
| Value | Meaning | Set by |
|---|---|---|
| ⏸️ Pending | Not started | ring:pre-dev-task-breakdown at task creation |
| 🔄 Doing | Execution started (Gate 0 began) | ring:dev-cycle |
| ✅ Done | Gate 9 approved | ring:dev-cycle |
| ❌ Failed | Execution terminated with unresolved blocker | ring:dev-cycle |
MUST appear immediately after Summary Table 1. A plain-language view for product managers, stakeholders, and the team to understand what value each task delivers.
## Business Deliverables
| Task | Deliverable (business view) |
|------|-----------------------------|
| T-001 | The team can develop and test locally from day one — **every contributor gets a working environment without manual setup**. |
| T-002 | **Transactions reach their destination** — messages conform to the agreed standard and counterparties accept every one sent. |
| ... | _(additional tasks omitted for brevity)_ |
Writing rules for Business Deliverables View:
| Rule | Correct | Wrong |
|---|---|---|
| Language | Plain business language | Technical jargon (endpoints, migrations, repositories) |
| Perspective | What the business/user gains | What the developer implements |
| Voice | Active — "The product can...", "Users gain..." | Passive — "It is implemented...", "It will be created..." |
| Length | 1-3 sentences max | Bullet lists, long paragraphs |
| Emphasis | Bold the core value proposition | No bold or no emphasis at all |
| Source | Derived from each task's Deliverable field | Invented separately |
What to include:
What to exclude:
Validation for Business Deliverables View:
| Check | Requirement |
|---|---|
| Language | No technical jargon (no API, REST, handler, migration, repository) |
| Length | 1-3 sentences per row; no bullet lists |
| Voice | Active and capability-focused; no passive constructions |
| Source | Each row derived from the task's Deliverable field, not invented |
| Formatting | Core value proposition bolded; no other inline formatting |
| Exclusions | No file/class names, architecture terms, infrastructure terms, or implementation verbs |
Each task includes:
| Section | Content |
|---|---|
| Header | T-[XXX]: [Task Title - What It Delivers] |
| Target | backend \| frontend \| shared (if multi-module) |
| Working Directory | Path from topology config (if multi-module) |
| Agent | Recommended agent: `ring:backend-engineer-*`, `ring:frontend-engineer`, `ring:frontend-bff-engineer-typescript`, etc. |
| Deliverable | One sentence: what working software ships |
| Scope | Includes (specific capabilities), Excludes (future tasks with IDs) |
| Success Criteria | Testable items: Functional, Technical, Operational, Quality |
| User/Technical Value | What users can do; what this enables |
| Technical Components | From TRD, From Dependencies |
| Dependencies | Blocks (T-AAA), Requires (T-BBB), Optional (T-CCC) |
| Effort Estimate | AI Estimate: X AI-agent-hours, Confidence: [High/Medium/Low], Estimation Method: [Agent Name], Team type |
| Risks | Per risk: Impact, Probability, Mitigation, Fallback |
| Testing Strategy | Unit, Integration, E2E, Performance, Security |
| Definition of Done | Code reviewed, tests passing, docs updated, security clean, performance met, deployed to staging, PO acceptance, monitoring configured |
When AI estimation fails or is unavailable:
AI estimation is considered failed when the estimation agent cannot run (for example, the API is unavailable) or returns no usable breakdown.
Who can approve: PM Team Lead or designated backup
Required evidence for override:
How to record:
**Effort Estimate:**
- AI Estimate: [FAILED - API unavailable]
- Manual Override: X hours (approved by: [Name], date: YYYY-MM-DD)
- Estimation Method: Historical comparison with Task T-XXX
- Confidence: Medium (manual estimation, subject to higher variance)
- Evidence: [Link to similar task or rationale document]
When manual estimation is used:
Example:
- Manual estimate: 6 hours
- Adjusted estimate: 6h × 1.3 = 7.8 hours
- Confidence: Medium → Low (due to estimation method)
- Re-estimation scheduled: [Date when AI available]
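The adjustment arithmetic from the example, as a one-liner sanity check (the 1.3 multiplier is the uncertainty buffer the example applies to manual estimates; variable names are illustrative):

```python
manual_estimate = 6.0   # hours, from historical comparison
multiplier = 1.3        # uncertainty buffer for manual estimates
adjusted = round(manual_estimate * multiplier, 1)
print(adjusted)  # 7.8
```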
Align with rationalization table:
| Violation | Wrong | Correct |
|---|---|---|
| Technical-Only Tasks | "Setup PostgreSQL Database" with install/configure steps | "User Data Persistence Foundation" with deliverable (working DB layer <100ms), user value (enables T-002/T-003), success criteria (users table, pooling, migrations) |
| Oversized Tasks | "Complete User Management System" (6 weeks) with all auth features combined | Split into: T-005 Basic Auth (L), T-006 Password Mgmt (M), T-007 2FA (M), T-008 Permissions (L) |
| Vague Success Criteria | "Feature works, Tests pass, Code reviewed" | Functional (upload 100MB, formats), Technical (<2s response), Operational (99.5% success rate), Quality (90% coverage) |
Optimize task order by sprint/phase with goals, critical path identification, and parallel work opportunities.
| Factor | Points | Criteria |
|---|---|---|
| Task Decomposition | 0-30 | All appropriately sized: 30, Most well-scoped: 20, Too large/vague: 10 |
| Value Clarity | 0-25 | Every task delivers working software: 25, Most clear: 15, Unclear: 5 |
| Dependency Mapping | 0-25 | All documented: 25, Most clear: 15, Ambiguous: 5 |
| Estimation Quality | 0-20 | Based on past work: 20, Educated guesses: 12, Speculation: 5 |
Action: 80+ autonomous | 50-79 present options | <50 ask about velocity
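The score-to-action thresholds can be read as a simple mapping. A sketch (function name illustrative; the boundaries follow the ranges stated above, with 80 and 50 inclusive at the upper tier):

```python
def gate_action(score):
    """Map the confidence score to the stated action thresholds."""
    if score >= 80:
        return "autonomous"
    if score >= 50:
        return "present options"
    return "ask about velocity"
```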
Output to (depends on topology.structure):
| Structure | Files Generated |
|---|---|
| single-repo | docs/pre-dev/{feature}/tasks.md |
| monorepo | Index + {backend.path}/docs/pre-dev/{feature}/tasks.md + {frontend.path}/docs/pre-dev/{feature}/tasks.md |
| multi-repo | {backend.path}/docs/pre-dev/{feature}/tasks.md + {frontend.path}/docs/pre-dev/{feature}/tasks.md |
After approval, proceed to ring:pre-dev-subtask-creation.

If you created tasks that don't deliver working software, rewrite them.
Tasks are not technical activities. Tasks are working increments.
"Setup database" is not a task. "User data persists correctly" is a task. "Implement OAuth" is not a task. "Users can log in with Google" is a task. "Write tests" is not a task. Tests are part of Definition of Done for other tasks.
Every task must answer: "What working software can I demo to users?"
If you can't demo it, it's not a task. It's subtask implementation detail.
Deliver value. Ship working software. Make tasks demoable.
This skill is a task decomposition skill and does NOT require WebFetch of language-specific standards.
Purpose: Task Breakdown defines WHAT value increments to deliver, not HOW to implement them. Language-specific standards apply during subtask creation and implementation.
However, MUST load PRD (Gate 1), TRD (Gate 3), and research.md to ensure tasks align with requirements and architecture.
| Condition | Action | Severity |
|---|---|---|
| PRD (Gate 1) not validated | STOP and complete Gate 1 first | CRITICAL |
| TRD (Gate 3) not validated | STOP and complete Gate 3 first | CRITICAL |
| Task exceeds 2 weeks (XL size) | STOP and break down further | HIGH |
| Task has no measurable success criteria | STOP and define testable criteria | HIGH |
| Task has no user or technical value | STOP and redefine as value delivery | HIGH |
| AI estimation failed | Follow fallback procedure in skill | MEDIUM |
| Dependencies form circular loop | STOP and resolve dependency cycle | HIGH |
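The "circular loop" check in the last row is straightforward to mechanize with a depth-first search over each task's Requires list. A self-contained sketch (the dict-of-lists task shape is illustrative):

```python
def find_dependency_cycle(tasks):
    """Return a path forming a 'requires' cycle, or None if acyclic.
    tasks maps each task ID to the list of task IDs it requires."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {t: WHITE for t in tasks}

    def visit(t, path):
        color[t] = GRAY
        for dep in tasks.get(t, []):
            if color.get(dep) == GRAY:
                return path + [dep]          # back edge: cycle found
            if color.get(dep, BLACK) == WHITE:
                found = visit(dep, path + [dep])
                if found:
                    return found
        color[t] = BLACK
        return None

    for t in tasks:
        if color[t] == WHITE:
            cycle = visit(t, [t])
            if cycle:
                return cycle
    return None
```

In practice Python's stdlib `graphlib.TopologicalSorter` raises `CycleError` for the same situation, but the explicit DFS makes the offending path easy to report back in the gate output.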
These requirements are NON-NEGOTIABLE:
- Status is initialized to ⏸️ Pending for every task row at task creation

| Severity | Definition | Example |
|---|---|---|
| CRITICAL | Cannot create valid tasks | PRD/TRD not validated, no requirements to decompose |
| HIGH | Task violates sizing or value rules | XL task, no success criteria, no value defined |
| MEDIUM | Task incomplete but usable | Missing one dependency, unclear testing strategy |
| LOW | Minor documentation gaps | Definition of Done could be more detailed |
| User Says | Your Response |
|---|---|
| "This 3-week task is fine" | "Cannot accept 3-week tasks. Tasks >2 weeks hide complexity. I'll break it into smaller deliverables." |
| "Setup tasks don't need user value" | "Cannot create valueless tasks. Setup ENABLES value. I'll define what this setup enables." |
| "Success criteria are obvious" | "Cannot assume obvious criteria. Obvious to you ≠ testable. I'll document explicit, measurable criteria." |
| "Skip AI estimation, use story points" | "Cannot skip AI estimation. Story points are abstract; AI hours are concrete. I'll run AI analysis." |
| "We can figure out dependencies later" | "Cannot defer dependencies. Later is too late. I'll map dependencies now." |
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "This 3-week task is fine" | Tasks >2 weeks hide complexity | Break down into ≤2 week tasks |
| "Setup tasks don't need value" | Setup enables value. Define what it enables | Connect to user/technical value |
| "Success criteria are obvious" | Obvious to you ≠ testable. Document explicitly | Define measurable criteria |
| "Dependencies will be clear later" | Later is too late. Map them now | Document all dependencies |
| "Skip AI estimation, use story points" | Story points are abstract; AI hours are concrete | Run AI estimation |
| "Technical tasks can skip user value" | Even infrastructure enables users. Define connection | Connect to user impact |
| "Testing strategy can be decided during" | Testing affects design. Plan it upfront | Define testing strategy now |