Creates well-formed tasks following a template that engineers can implement. Triggers on: 'create tasks', 'define work items', 'break this down', creating tasks from PRD, converting requirements into actionable tasks, feature breakdown, sprint planning.
!`cat .tasks/_template.md 2>/dev/null || echo 'No task template found at .tasks/_template.md'`
Creates well-formed tasks with enough context that engineers who weren't in the original conversations can implement them without prior knowledge and without asking questions.

Tasks should follow the tooling and documentation conventions of the project this skill is applied to. If the conventions are unclear, ask the user to clarify them, then document them.
All tasks are stored in .tasks/ with the following lifecycle directories:
| Directory | Purpose |
|---|---|
| `.tasks/backlog/` | Tasks not yet started |
| `.tasks/in-progress/` | Tasks currently being worked on |
| `.tasks/done/` | Completed tasks (passed task-check) |
| `.tasks/cancelled/` | Cancelled tasks (with a reason noted) |
File naming: `NNNN-short-title.md` (zero-padded number, lowercase, hyphenated).
To find the next number, check across all subdirectories for the highest NNNN prefix and increment by 1.
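That lookup can be sketched in shell. This is a minimal sketch, not part of the skill itself; the demo builds a throwaway `.tasks/` tree with hypothetical file names so it is self-contained:

```shell
# Demo setup: a throwaway .tasks/ tree (file names are hypothetical).
root=$(mktemp -d)
mkdir -p "$root/.tasks/backlog" "$root/.tasks/in-progress" \
         "$root/.tasks/done" "$root/.tasks/cancelled"
touch "$root/.tasks/done/0007-add-login.md" \
      "$root/.tasks/backlog/0012-add-search.md"

# Highest NNNN prefix across ALL lifecycle directories...
highest=$(find "$root/.tasks" -name '[0-9][0-9][0-9][0-9]-*.md' \
  | sed 's|.*/||' | cut -c1-4 | sort -n | tail -n 1)
# ...incremented by 1 and zero-padded back to four digits
# (10# forces base-10 so "0012" is not read as octal).
next=$(printf '%04d' $((10#${highest:-0} + 1)))
echo "$next"
```

The `10#` base prefix matters: without it, bash would treat a prefix like `0012` as octal and miscount.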
When creating tasks, use .tasks/_template.md as the base and save them to .tasks/backlog/.
Before writing any task, observe two hard rules:
🚨 NEVER create a task without validating its size first. A PRD deliverable is NOT automatically a task—it may be an epic that needs splitting.
🚨 Never copy PRD bullets verbatim. Use Example Mapping to transform them into executable specifications.
| Card | What You Do |
|---|---|
| 🟡 Story | State the deliverable in one specific sentence |
| 🔵 Rules | List every business rule/constraint (3-4 max per task) |
| 🟢 Examples | For EACH rule: happy path + edge cases + error cases |
| 🔴 Questions | Surface unknowns → resolve or spike first |
The Examples (🟢) ARE your acceptance criteria. Write them in Given-When-Then format:

```
Given [context/precondition]
When [action/trigger]
Then [expected outcome]
```
Edge case checklist - for each rule, systematically consider:
| Category | Check For |
|---|---|
| Input | Empty, null, whitespace, boundaries, invalid format, special chars, unicode, too long |
| State | Concurrent updates, race conditions, invalid sequences, already exists, doesn't exist |
| Errors | Network failure, timeout, partial failure, invalid permissions, quota exceeded |
Example: PRD says "User can search products"
Rules identified: (1) Search by title, (2) Pagination, (3) Empty state
For Rule 1 alone, edge case thinking yields:
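Illustrative criteria drawn from the Input categories above (a sketch, not an exhaustive list):

```
Given a product titled "Café Crème", when I search "café crème", then it appears in results
Given an empty search box, when I submit, then all products are shown
Given a whitespace-only query, when I submit, then it is treated as empty
Given a 500-character query, when I submit, then it is rejected with a clear message
Given a query matching no titles, when I submit, then the empty state is shown (Rule 3)
```

One PRD bullet became five testable criteria; that is the transformation Example Mapping exists to force.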
If ANY of these are true, STOP and split:

- Estimated effort is more than 1 day
- More than 3-4 business rules
- The title contains "and" or names more than one deliverable
- You cannot size it confidently
When you need to split, use the SPIDR techniques:
| Technique | Split By | Example |
|---|---|---|
| Paths | Different user flows | "Pay with card" vs "Pay with PayPal" |
| Interfaces | Different UIs/platforms | "Desktop search" vs "Mobile search" |
| Data | Different data types | "Upload images" vs "Upload videos" |
| Rules | Different business rules | "Basic validation" vs "Premium validation" |
| Spikes | Unknown areas | "Research payment APIs" before "Implement payments" |
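As a hedged illustration, SPIDR applied to a hypothetical "Implement payments" epic:

```
Spikes:     "Research payment provider APIs" (resolve 🔴 questions first)
Paths:      "Pay with card" / "Pay with PayPal"
Interfaces: "Checkout on desktop web" / "Checkout in mobile app"
Data:       "Support USD payments" before "Support multi-currency"
Rules:      "Basic card validation" before "Fraud-score validation"
```

Each resulting line is a candidate task; each must still pass the size and INVEST checks below.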
Every task must be a vertical slice—cutting through all layers needed for ONE specific thing:
✅ VERTICAL (correct):
"Add search by title" → touches UI + API + DB for ONE search type
❌ HORIZONTAL (wrong):
"Build search UI" → "Build search API" → "Build search DB"
Task titles follow this formula: [Action verb] [specific object] [outcome/constraint].
🚨 NEVER use these—they signal an epic, not a task:
| Pattern | Why It's Wrong |
|---|---|
| "Full implementation of X" | Epic masquerading as task |
| "Build the X system" | Too vague, no specific deliverable |
| "Complete X feature" | Undefined scope |
| "Implement X" (alone) | Missing specificity |
| "X and Y" | Two tasks combined |
| "Set up X infrastructure" | Horizontal slice |
If you catch yourself writing one of these, STOP and apply SPIDR.
Every task MUST pass INVEST before creation:
| Criterion | Question | Fail = Split |
|---|---|---|
| Independent | Does it deliver value alone? | Depends on other incomplete tasks |
| Negotiable | Can scope be discussed? | Rigid, all-or-nothing |
| Valuable | Does user/stakeholder see benefit? | Only technical benefit |
| Estimable | Can you size it confidently? | "Uh... maybe 3 days?" |
| Small | Fits in 1 day? | More than 1 day |
| Testable | Has concrete acceptance criteria? | Vague or missing criteria |
See .tasks/_template.md for the full template. Key sections:
```markdown
---
status: backlog
created: YYYY-MM-DD
updated: YYYY-MM-DD
---
# NNNN-short-title

## Deliverable
[What the user/stakeholder sees when this is done]

## Context and Motivation
[WHY this matters: PRD path, bug report URL, conversation context]

## Key Decisions
- [Decision/Principle] - [rationale]

## Acceptance Criteria
- [ ] Given [context], when [action], then [outcome]

## Out of Scope
[What this task does NOT cover]

## Dependencies
- [Dependency] - [why needed]

## Related Code
- `path/to/file` - [what pattern to follow]

## Verification
[Commands/tests that prove it works]
```
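For illustration, a filled-in task for the earlier search example might look like this (every concrete detail below, including paths, dates, and numbers, is hypothetical):

```markdown
---
status: backlog
created: 2025-01-15
updated: 2025-01-15
---
# 0013-add-product-search-by-title

## Deliverable
Users can type a title into the search box and see matching products.

## Context and Motivation
PRD `docs/prd/search.md` requires product search; this slice covers title
matching only (pagination and empty state are separate tasks).

## Key Decisions
- Case-insensitive matching - matches user expectations for product names

## Acceptance Criteria
- [ ] Given a product "Blue Mug", when I search "blue", then it appears in results
- [ ] Given a whitespace-only query, when I submit, then it is treated as empty

## Out of Scope
Pagination, empty-state UI, search by description.

## Dependencies
- Product listing endpoint - search results reuse its response shape

## Related Code
- `src/api/products.ts` - follow its handler/validation pattern

## Verification
`npm test -- search` passes; manual check of the search box on the listing page.
```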
Before finalizing any task, verify ALL of these:
| Check | Question | If No |
|---|---|---|
| Size | Is this ≤1 day of work? | Split using SPIDR |
| Name | Is the title specific and action-oriented? | Rewrite using formula |
| Vertical | Does it cut through all layers for ONE thing? | Restructure as vertical slice |
| INVEST | Does it pass all 6 criteria? | Fix the failing criterion |
| Context | Can an engineer implement without asking questions? | Add what's missing |
🚨 If the PRD says "full implementation" or similar, you MUST split it. Creating such a task is a critical failure.
This skill uses ${CLAUDE_SESSION_ID} to track task creation sessions:
```javascript
// Each task creation is logged with session context
const sessionId = process.env.CLAUDE_SESSION_ID;
console.log(`[${sessionId}] Created task: ${taskNumber}-${taskTitle}.md`);
```
This allows correlation between:

- Task files as they move through `.tasks/backlog/`, `.tasks/in-progress/`, and `.tasks/done/`
- The task-check agent's verification runs

Use the session ID to trace the full lifecycle of a task from requirements to completion.
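A minimal sketch of that correlation in shell. The log file, its location, and the session IDs are all hypothetical; real log destinations depend on your setup:

```shell
# Hypothetical log lines tagged with session IDs, matching the format
# of the logging snippet above.
log=$(mktemp)
cat > "$log" <<'EOF'
[abc-123] Created task: 0013-add-product-search-by-title.md
[def-456] Created task: 0014-add-search-pagination.md
[abc-123] task-check passed: 0013-add-product-search-by-title.md
EOF

# All lifecycle events for one session, creation through task-check.
matches=$(grep -cF '[abc-123]' "$log")
grep -F '[abc-123]' "$log"
echo "events for abc-123: $matches"
```

`grep -F` treats `[abc-123]` as a literal string rather than a bracket expression, which is what you want when session IDs may contain regex metacharacters.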