Task creation for issue trackers — clear descriptions, acceptance criteria, proper categorization. Invoke when creating tasks, bug reports, or feature requests in any tracker.
Create well-formed tasks in an issue tracker. The input is either a decomposition document (from the pipeline) or a standalone request from a user. The output is a tracked task with enough context for an implementer to start work without asking questions.
A task is the atomic unit of tracked work. It's the handoff point — where planning ends and implementation begins. Everything upstream in the pipeline (design documents, technical designs, decomposition) converges here into a concrete assignment that someone picks up and completes.
Bad tasks waste more time than they save. A vague title with no description forces the implementer to reverse-engineer the intent from chat history, commit logs, or the person who created it — if that person is still available. A task that's too detailed prescribes implementation and removes the judgment that makes skilled developers effective. The sweet spot is a description that communicates intent, scope, and completion criteria without dictating how to get there.
Tasks also serve as institutional memory. Months later, when someone asks "why did we change the authentication flow?", the task description, its linked design document, and its acceptance criteria reconstruct the reasoning. If the task was just a title, that context is lost.
From the pipeline: After task-decomposition produces a decomposition document with task descriptions, this skill handles creating those tasks in the issue tracker with proper fields, categorization, and linking.
Standalone: When a user requests a task directly — "create a bug for the pagination issue" or "add a task to upgrade the dependency." No decomposition document needed; gather context from the conversation.
This is an active discovery step — use available tools, don't rely on inference.
Write the title. The title is what people see in lists, boards, and notifications.
Write the description. Structure depends on task type, but every description needs:
Context — Why this task exists. Connect it to the larger effort. Link to the design document or technical design if one exists. One or two sentences.
What to do — The specific work. Concrete enough that the implementer knows the scope, abstract enough that they choose the approach. Describe the change, not the code.
Acceptance criteria — How to know it's done. See the dedicated section below.
References — Links to design documents, technical designs, relevant code paths, mockups, or similar implementations.
Set fields based on what you discovered in Step 1. Carry over estimates from the decomposition document if available. Only set fields that the tracker supports and the project uses — don't invent metadata.
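The "only set fields the project uses" rule can be sketched as filtering a draft against the project's supported fields. The draft shape, field names, and `build_task_payload` helper below are hypothetical, not any particular tracker's API:

```python
def build_task_payload(draft: dict, supported_fields: set) -> dict:
    """Filter a task draft down to the fields this project actually uses.

    `draft` and `supported_fields` are hypothetical shapes; every tracker
    has its own field schema, so treat this as a sketch of the rule, not
    a client for a real API.
    """
    payload = {
        "title": draft["title"],
        "description": draft["description"],
    }
    # "Don't invent metadata": silently drop any field the project
    # configuration does not support.
    payload["fields"] = {
        name: value
        for name, value in draft.get("fields", {}).items()
        if name in supported_fields
    }
    return payload
```

Under this sketch, an estimate carried over from the decomposition document survives only when the project has an estimate field; anything else is dropped rather than invented.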
## Task Draft
**Project:** [project name]
**Type:** [value from tracker]
**Title:** [title]
**Description:**
[full description with context, what to do, acceptance criteria, references]
**Fields:**
[list each field you plan to set with its value, based on project config]
After approval, create the task in the tracker with the drafted title, description, and fields.
Acceptance criteria are the most important part of a task description after context. They transform vague intent ("make search faster") into verifiable conditions ("search returns results within 200ms for queries under 50 characters").
Testable — Each criterion has a clear pass/fail outcome. "User experience is improved" fails this test. "Page loads in under 2 seconds on 3G" passes.
Outcome-focused — Describe what the system does, not how it's built. "Search returns partial matches for inputs of 3+ characters" — not "implement a LIKE query with wildcard prefix."
Independent — Each criterion can be verified on its own. If criterion B can only be tested after criterion A, consider whether they're really one criterion.
Measurable — Quantify when possible. Response times, character limits, item counts, error rates. Numbers eliminate ambiguity.
Rule-oriented (checklist) — Best for most engineering tasks. Simple, scannable, directly translatable to test cases:
- [ ] Search field appears in the top navigation bar
- [ ] Search triggers on button click or Enter key
- [ ] Input accepts up to 200 characters
- [ ] Results display within 500ms for datasets under 10,000 records
- [ ] Empty query shows recent items instead of empty state
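As one illustration of that translatability, the character-limit criterion above might map to a test roughly like this. The `normalize_query` handler is a hypothetical sketch, not code from any real system:

```python
MAX_QUERY_LENGTH = 200  # taken directly from the acceptance criterion

def normalize_query(raw: str) -> str:
    """Hypothetical input handler for the criterion
    'Input accepts up to 200 characters'."""
    query = raw.strip()
    if len(query) > MAX_QUERY_LENGTH:
        raise ValueError(f"query exceeds {MAX_QUERY_LENGTH} characters")
    return query
```

Each checklist item becomes one or two assertions with an unambiguous pass/fail outcome, which is exactly what makes the rule-oriented format scannable.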
Scenario-oriented (Given/When/Then) — Best for complex user flows where preconditions and sequences matter:
Given the user is on the search page with an empty query
When they type 3 or more characters
Then results appear as a dropdown below the search field
Given the user has selected a filter
When they clear the search field
Then the filter remains applied and results update accordingly
Use rule-oriented by default. Switch to scenario-oriented when the behavior depends on specific preconditions or multi-step sequences.
Too vague: "Works correctly" or "handles edge cases" — what does correctly mean? Which edge cases?
Too prescriptive: "Use a Redis cache with TTL of 300 seconds" — that's implementation, not acceptance. Say "repeated queries return cached results" and let the implementer choose the mechanism.
Missing negative cases: Only describing the happy path. Include: what happens with invalid input, empty states, error conditions, permission boundaries.
Untestable conditions: "Should be fast" or "user-friendly interface." Replace with measurable thresholds.
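To make the "too prescriptive" fix concrete: "repeated queries return cached results" stays verifiable without naming a mechanism. The `CachingSearch` class below is a hypothetical sketch of the outcome, not a prescribed design:

```python
class CachingSearch:
    """Sketch of the outcome 'repeated queries return cached results'.

    The cache mechanism (a plain dict here; Redis, memcached, or anything
    else in production) is deliberately the implementer's choice; the
    criterion only pins down observable behavior.
    """

    def __init__(self, backend):
        self._backend = backend  # callable: query -> results
        self._cache = {}

    def search(self, query):
        if query not in self._cache:
            self._cache[query] = self._backend(query)
        return self._cache[query]
```

A reviewer can verify the criterion by counting backend calls for a repeated query, with no opinion about TTLs or storage.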
Different types of work call for different description structures. These patterns can be combined.
For broken functionality. The reader needs to understand what's wrong, how to see it, and what should happen instead.
## Context
[What feature is affected and how it relates to the system]
## Problem
[What's broken — observable symptoms, not root cause speculation]
## Steps to Reproduce
1. [Step one]
2. [Step two]
3. [Step three]
## Expected Behavior
[What should happen]
## Actual Behavior
[What happens instead]
## Acceptance Criteria
- [ ] [Verifiable fix condition]
- [ ] [Regression prevention]
Include screenshots, error messages, or log excerpts when they clarify the problem. Omit when the description is self-explanatory.
For building or changing functionality. The standard pattern — most tasks from a decomposition document follow this structure.
## Context
[Why this task exists, link to design/technical design]
## What to Do
- [Specific work items]
## Acceptance Criteria
- [ ] [Verifiable conditions]
## References
- [Links to design docs, code, similar implementations]
For work where the outcome is understanding, not code. Debugging, research, feasibility analysis.
## Context
[What prompted the investigation]
## Symptoms / Evidence
[What was observed — logs, error messages, metrics, user reports]
## Goal
[What answer or decision this investigation should produce]
## Acceptance Criteria
- [ ] Root cause identified and documented
- [ ] Recommended fix or next steps proposed
For code improvements that don't change behavior. The reader needs to understand what's being restructured and why, despite no user-visible change.
## Context
[What motivates the refactoring — upcoming feature, tech debt, performance]
## What to Do
- [Specific restructuring work]
## Constraints
- [Behavior must not change]
- [Backward compatibility requirements]
## Acceptance Criteria
- [ ] Existing tests pass without modification
- [ ] [Specific structural improvements verifiable in code review]
Two rules from the decomposition skill apply equally here:
Descriptions are plans, not reports. Write every description as if the work has not started. "Add pagination to the search results" — not "Added pagination" or "Pagination was implemented." The implementer reads this fresh. Tell them what to do, not what was done.
No implementation in descriptions. Describe WHAT should change and what "done" looks like. No production code, no function signatures, no class names to create. Pseudocode is acceptable for complex logic. Configuration samples are acceptable when configuration is the deliverable.
Before creating a task in the tracker, check it against these anti-patterns:
Title-only tasks: "Fix the bug" or "Update the API" with no description. After a day, nobody — including the author — remembers the context. Every task needs a description.
Copy-pasted chat messages: Dumping a conversation thread into a task description. Chat messages lack structure, contain tangents, and age poorly. Extract the intent, structure it, and link to the conversation if someone needs the raw thread.
Implementation prescriptions: "Create a file called cache_handler.go with a struct CacheHandler that has methods Get and Set." This is code review territory, not task description. Describe the capability needed, not the code to write.
Missing acceptance criteria: A description that says what to do but never defines done. The implementer finishes "when it seems right" — which may not match what was intended.
Stale descriptions: Tasks written weeks ago with outdated assumptions. If the context changed between creation and implementation, update the description. Don't let implementers discover stale requirements mid-work.
Relationships in descriptions instead of native links: "Subtask of PROJ-456. See also PROJ-123." When the tracker supports linking, use it — native links are visible in dedicated UI and stay current. Description-based references are only acceptable when the tracker's tools lack linking capabilities.
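That preference order can be sketched with a hypothetical tracker client; `create_link`, `get_task`, and `update_description` are illustrative method names, not a real API:

```python
def add_relationship(tracker, task_id: str, related_id: str,
                     link_type: str = "subtask of") -> str:
    """Prefer native links; fall back to a description reference only
    when the tracker client has no linking capability.

    `tracker` is a hypothetical client object used purely to illustrate
    the decision, not a wrapper around any real tracker.
    """
    if hasattr(tracker, "create_link"):
        tracker.create_link(task_id, related_id, link_type)
        return "native"
    # Last resort: a textual reference appended to the description.
    task = tracker.get_task(task_id)
    tracker.update_description(
        task_id,
        task["description"] + f"\n\nRelated: {related_id} ({link_type})",
    )
    return "description"
```

The native branch keeps the relationship visible in the tracker's own UI; the fallback exists only for trackers whose tooling cannot express links.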
When all tasks are created, report each task's identifier and link back to the user.