From hieutrtr-ai1-skills
Decompose high-level objectives into atomic implementation tasks for Python/React projects. Use when breaking down large features, multi-file changes, or tasks requiring more than 3 steps. Produces independently-verifiable tasks with done-conditions, file paths, complexity estimates, and explicit ordering. Creates persistent task files (task_plan.md, progress.md) to track state across context windows. Does NOT cover high-level planning (use project-planner) or architecture decisions (use system-architecture).
```shell
npx claudepluginhub joshuarweaver/cascade-code-testing-misc --plugin hieutrtr-ai1-skills
```

This skill is limited to using the following tools:
Other skills in this plugin:
- Creates isolated Git worktrees for feature branches with prioritized directory selection, gitignore safety checks, auto project setup for Node/Python/Rust/Go, and baseline verification.
- Executes implementation plans in the current session by dispatching fresh subagents per independent task, with two-stage reviews: spec compliance, then code quality.
- Dispatches parallel agents to independently tackle 2+ tasks, such as separate test failures or subsystems without shared state or dependencies.
Activate this skill when:
- project-planner output (module map, risks, acceptance criteria) needs to be broken into atomic, executable tasks

**Expected input:** Read plan.md (or plan-<feature-name>.md) produced by project-planner. This file contains the module map, risks, and acceptance criteria. If no plan file exists, accept a high-level objective directly and work from that.

The project-planner skill produces the strategic plan (which modules are affected and why); this skill turns that plan into ordered, executable atomic tasks with persistent tracking.
Do NOT use this skill for:
- High-level planning (use project-planner)
- Architecture decisions (use system-architecture)
- Implementation work (use python-backend-expert or react-frontend-expert)
- Testing patterns (use pytest-patterns or react-testing-patterns)

Every task produced by this skill MUST follow these rules:
- The done-condition must be a concrete, runnable verification command (e.g., `pytest tests/unit/test_user.py -x`, `npm test -- --grep "Component"`).

Use this format for every task:
```markdown
### Task [N]: [Short descriptive title]
- **Files:** [list of files to create or modify]
- **Preconditions:** [task IDs that must be done first, or "None"]
- **Steps:**
  1. [Specific, unambiguous action]
  2. [Specific, unambiguous action]
- **Done when:** [verification command] → [expected result]
- **Complexity:** [trivial / small / medium / large]
```
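As a sketch of how this template can be consumed programmatically, a minimal parser for the task headings (the `list_tasks` helper and its regex are illustrative assumptions, not part of the skill):

```python
import re

# Matches the '### Task N: Title' heading used in the task template.
TASK_RE = re.compile(r"^### Task (\d+): (.+)$", re.MULTILINE)

def list_tasks(plan_text: str) -> list[tuple[int, str]]:
    """Extract (task id, title) pairs from a task plan's headings."""
    return [(int(num), title.strip()) for num, title in TASK_RE.findall(plan_text)]
```

Titles are captured verbatim, so any trailing status marker stays part of the title string.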
| Size | Files | Lines Changed | Verification | Typical Duration |
|---|---|---|---|---|
| Trivial | 1 | <20 | Quick check | Single action |
| Small | 1-2 | 20-100 | Unit test | Few steps |
| Medium | 2-3 | 100-200 | Unit + integration | Multiple steps |
| Large | 3+ | >200 | Full test suite | Split further |
If any task is sized "large", it MUST be decomposed into smaller tasks.
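The sizing table's file counts overlap at three files; this hypothetical helper resolves that tie by line count, purely to illustrate the rules above:

```python
def classify_complexity(files: int, lines_changed: int) -> str:
    """Map a task's footprint to a size bucket per the sizing table."""
    if lines_changed > 200 or files > 3:
        return "large"    # must be split into smaller tasks
    if lines_changed < 20 and files == 1:
        return "trivial"
    if lines_changed <= 100 and files <= 2:
        return "small"
    return "medium"
```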
When ordering tasks, follow this priority sequence:
Create these files to maintain state across context windows:
**task_plan.md** — the complete task list with status tracking:
```markdown
# Task Plan: [Feature Name]

## Status: IN_PROGRESS
## Total Tasks: [N]
## Completed: [M] / [N]

### Task 1: [Title] ✅ DONE
[task details]

### Task 2: [Title] 🔄 IN PROGRESS
[task details]

### Task 3: [Title] ⏳ PENDING
[task details]
```

Update status markers as tasks complete:
- ⏳ PENDING — Not yet started
- 🔄 IN PROGRESS — Currently being worked on
- ✅ DONE — Completed and verified
- ❌ BLOCKED — Cannot proceed (list the reason)

**progress.md** — the current state for resuming after a context window reset:
```markdown
# Progress: [Feature Name]

## Current State
- **Last completed task:** Task [N]: [Title]
- **Current task:** Task [M]: [Title]
- **Next task:** Task [P]: [Title]

## What's Been Done
- [Summary of completed work]

## What's Next
- [Immediate next steps]

## Blockers
- [Any issues preventing progress]
```
**findings.md** — notes, decisions, and discoveries made during work:
```markdown
# Findings: [Feature Name]

## Decisions Made
- [Decision 1: context and rationale]

## Discoveries
- [Unexpected finding 1]

## Blockers Encountered
- [Blocker 1: description and resolution]
```
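The status markers in task_plan.md can be updated mechanically when a task's state changes; a minimal sketch against the heading format above (the `set_task_status` helper is an assumption, not part of the skill):

```python
STATUS = {
    "pending": "⏳ PENDING",
    "in_progress": "🔄 IN PROGRESS",
    "done": "✅ DONE",
    "blocked": "❌ BLOCKED",
}

def set_task_status(plan_text: str, task_id: int, status: str) -> str:
    """Rewrite the status marker on the matching '### Task N:' heading line."""
    lines = plan_text.splitlines()
    prefix = f"### Task {task_id}:"
    for i, line in enumerate(lines):
        if line.startswith(prefix):
            for marker in STATUS.values():   # strip any existing marker
                line = line.replace(marker, "").rstrip()
            lines[i] = f"{line} {STATUS[status]}"
            break
    return "\n".join(lines)
```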
After decomposing, produce a text-based dependency graph showing task ordering:
```
Task 1 (migration) ──→ Task 2 (schema)
                            ↓
Task 3 (service) ──→ Task 4 (route) ──→ Task 6 (frontend)
      ↓
Task 5 (tests)
```
Verify there are no circular dependencies. If found, restructure tasks to break the cycle.
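Ordering and cycle detection can be checked mechanically with Python's standard `graphlib`; the `order_tasks` wrapper below is illustrative, not part of the skill:

```python
from graphlib import TopologicalSorter, CycleError

def order_tasks(preconditions: dict[int, set[int]]) -> list[int]:
    """Return a valid execution order from a task id -> precondition ids map,
    raising if the dependency graph contains a cycle."""
    try:
        return list(TopologicalSorter(preconditions).static_order())
    except CycleError as err:
        cycle = " -> ".join(map(str, err.args[1]))
        raise ValueError(f"Circular dependency, restructure these tasks: {cycle}") from err
```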
Follow these steps to decompose any objective:
See references/decomposition-examples.md for complete worked examples including:
Objective: Add email verification to user registration.
Task 1: Add email_verified field to User model + migration
Task 2: Create email verification token schema and service
Task 3: Add /verify-email endpoint
Task 4: Modify registration to send verification email
Task 5: Add frontend verification page
Task 6: Write tests for verification flow
Dependency graph:
```
Task 1 ──→ Task 2 ──→ Task 3 ──→ Task 4 ──→ Task 6
                        ↓
                      Task 5 ──→ Task 6
```
**Circular dependencies:** If Task A requires Task B and Task B requires Task A, restructure by extracting the shared dependency into a new Task C that both depend on.

**Tasks that are hard to verify in isolation:** Add a lightweight integration test as the verification step. If no automated test is possible, document a manual verification procedure.

**Context window running out:** Save the current state to progress.md immediately, before the window resets. Include the last completed task, the current task's state, any in-progress changes, and the next step to take when resuming.

**Scope creep during decomposition:** If decomposition reveals the feature is larger than expected, flag it. Consider splitting into multiple phases with separate task plans rather than creating one unmanageable plan.

**Cross-cutting concerns:** When a task affects a horizontal layer (auth middleware, logging, error handling), make it a root task that all subsequent tasks depend on. Do not scatter cross-cutting changes across multiple tasks.

**Partially completed tasks on resume:** When resuming from progress.md, verify the current task's "Done when" condition. If it passes, mark the task done and move to the next task. If it fails, continue from where the progress notes indicate.
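The resume check in the last point can be automated; a minimal sketch that treats exit code 0 as the "Done when" condition passing (the `done_when_passes` helper name is an assumption):

```python
import shlex
import subprocess

def done_when_passes(command: str) -> bool:
    """Run a task's 'Done when' verification command and report success."""
    result = subprocess.run(shlex.split(command), capture_output=True)
    return result.returncode == 0
```

On resume, run this against the current task's recorded verification command: mark the task done if it returns True, otherwise continue from the progress notes.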