Defines conventions for TASKS.md files: structure with optional sections, status symbols ([ ] todo, [/] ongoing, [x] done, [-] backlog), task descriptions, and testable acceptance criteria. Use for creating, editing, updating tasks or tracking progress.
Install: `npx claudepluginhub wbopan/cotask-marketplace --plugin cotask`

This skill uses the workspace's default tool permissions.
Conventions and workflow for maintaining a `TASKS.md` file. This skill tells you how to read, write, and update tasks correctly.
`TASKS.md` lives at the project root. This keeps it visible and easy to access, and avoids the permission prompts that occur when writing inside `.claude/`.
Tasks in `TASKS.md` can optionally be grouped under `## Section Name` headers. Sections are a lightweight grouping mechanism: they help organize tasks when a project has distinct areas of work, but they're entirely optional. A `TASKS.md` with no `##` headers at all is perfectly valid.

Each section can have an optional `Description:` paragraph after its header, explaining the section's purpose.
The hierarchy is: `# TASKS` → optional `## Section Name` → optional `Description:` paragraph → task list. Each task is a `- [ ]` line ending with a `#slug` ID, with an indented description and `AC:` line underneath.

Tasks can appear directly under `# TASKS` without any section header; this is the simplest form. When sections are used, each `##` header starts a new group.
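If you need to process task title lines programmatically, the format above can be matched with a regex. This is an illustrative sketch based on the conventions in this document, not part of the plugin; the pattern (in particular the loose slug check) is an assumption:

```python
import re

# Matches a task title line such as "- [ ] Fix Login Page Auth Bugs #fix-auth-bug".
# The status character is one of space, /, x, or -; the slug check is deliberately loose.
TASK_LINE = re.compile(r"^- \[([ /x-])\] (.+?) #([a-z0-9]+(?:-[a-z0-9]+)+)$")

def parse_task_line(line: str):
    """Return (status, title, slug) for a task title line, or None if it isn't one."""
    m = TASK_LINE.match(line)
    return m.groups() if m else None
```

Indented description and `AC:` lines would be collected separately; only the title line carries status and ID.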
When creating a new TASKS.md from scratch, copy references/template.md to TASKS.md at the project root and fill in the placeholders.
| Symbol | Meaning | When to use |
|---|---|---|
| `[ ]` | Todo | Task hasn't been started |
| `[/]` | Ongoing | Work has begun but isn't complete |
| `[x]` | Done | Acceptance criteria met |
| `[-]` | Backlog | Decided not to do this now; keep it for the record |
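For tooling that reads or updates TASKS.md, the four symbols map naturally to a small lookup table. This is an illustrative sketch, not part of the plugin:

```python
# The four status symbols as they appear between the brackets of a task line.
STATUS = {
    " ": "todo",
    "/": "ongoing",
    "x": "done",
    "-": "backlog",
}

def mark_ongoing(task_line: str) -> str:
    """Flip a '- [ ]' (todo) task line to '- [/]' (ongoing); leave other lines untouched."""
    if task_line.startswith("- [ ]"):
        return "- [/]" + task_line[len("- [ ]"):]
    return task_line
```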
Mark `[/]` when you start working on a task. Before marking `[x]`, report what you've done and ask the user for confirmation; never mark a task done on your own.

You MUST mark `[/]` in two situations:

- When you start working on a task, set it to `[/]`, not `[ ]`.
- When you pick up an existing `[ ]` task, mark it `[/]` before doing the actual work. Don't wait until you're done.

Each task has three parts: a title (the `- [ ]` line), a description (indented lines explaining context), and an `AC:` (acceptance criteria) line.
The AC defines "done" before work begins.
AC should be black-box: it describes observable behavior of the final deliverable, not internal implementation details. "Model produces correct answers on the test split" is a good AC. "Unit tests pass" or "code compiles without errors" are not — they verify internals, not whether the thing actually works when you use it.
Good AC:

```markdown
- [ ] Run GEPA baseline #run-gepa-baseline
  Set up official GEPA, configured to match our experimental setup with the same data splits and a comparable memory module.
  AC: GEPA's optimized prompt and corresponding test score on LoCoMo, evaluated on the same train/val/test split as Engram.
```
Bad — AC specifies implementation:

```markdown
- [ ] Run GEPA baseline #run-gepa-baseline
  AC: Clone repo, apply patch to config.yaml, run `python main.py --dataset locomo`, collect scores.
```
Bad — AC depends on experimental outcome:

```markdown
- [ ] Add WebArena benchmark #add-webarena-benchmark
  AC: Engram achieves +10pp advantage over seed programs.
```
Fixed:

```markdown
- [ ] Add WebArena benchmark #add-webarena-benchmark
  AC: Benchmark integrated; all configs (No Memory, Vanilla RAG, Engram) produce test scores on the hosted instance.
```
A task should be a complete, meaningful unit of work with a tangible deliverable — not a small research step or preparatory action. If the title could be paraphrased as "look into X" or "figure out Y", it's too small. Fold it into a larger task that produces something concrete.
Too small:

```markdown
- [ ] Research ALFWorld macOS compatibility #research-alfworld-macos
- [ ] Read GEPA paper and summarize approach #read-gepa-paper
- [ ] Check if WebArena Docker images work #check-webarena-docker
```
Right size:

```markdown
- [ ] Add ALFWorld benchmark support #add-alfworld-benchmark
- [ ] Run GEPA baseline on LoCoMo #run-gepa-locomo
- [ ] Deploy WebArena-Verified environments #deploy-webarena-envs
```
Research, investigation, and exploration are steps within a task, not tasks themselves. The task title should describe the end result, not the process.
If the user asks you to add a task without providing an AC, warn them and suggest one. Don't refuse the edit, but flag the gap:
"Added the task. It doesn't have an acceptance criterion yet — here's a suggestion: AC: .... Want me to add it?"
Keep each AC on a single line starting with `AC:` for easy scanning.

Every task carries a short, human-readable ID appended to the title line as a `#slug` tag. The slug is 3–4 lowercase words joined by hyphens, e.g. `#fix-auth-bug`. It must be unique within the file.
```markdown
- [ ] Fix Login Page Auth Bugs #fix-auth-bug
  Users intermittently get 403 when logging in with SSO.
  AC: SSO login succeeds on all tested providers; no 403 in logs.
```
IDs are useful for referencing tasks in commits, conversations, and branch names. When creating a task, always generate an ID from the title. When the user specifies an ID, use it as-is.
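A candidate slug can be derived mechanically from the title. The helper below is a hypothetical sketch, not part of the plugin; it just keeps the first few lowercase words, whereas a hand-picked slug like `#fix-auth-bug` may drop filler words:

```python
import re

def slugify(title: str, max_words: int = 4) -> str:
    """Derive a candidate #slug ID: up to four lowercase words joined by hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words[:max_words])
```

Treat the result as a starting point and shorten it by hand if the title is wordy.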
Sections can optionally open with a `Description:` paragraph. This explains what the section covers and provides context for its tasks. It's purely informational; there's no "gate" or completion condition.
When you finish working on a task, follow this sequence:

1. Add a `CM:` line (completion memo) under the task description: one or two sentences recording what was actually done, key decisions made, or unexpected findings. This turns TASKS.md into a lightweight record of outcomes.
2. Report the result to the user; don't mark `[x]` yourself.
3. Mark `[x]` only after the user confirms.

```markdown
- [x] Fix Login Page Auth Bugs #fix-auth-bug
  Users intermittently get 403 when logging in with SSO.
  AC: SSO login succeeds on all tested providers; no 403 in logs.
  CM: Root cause was stale CSRF tokens after IdP redirect. Added token refresh on the callback route. Tested with Google, Okta, and Azure AD.
```
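In script form, adding a memo means inserting one indented line after the task's existing block. A minimal sketch under the assumptions of this document (two-space indent, `#slug` at end of the title line):

```python
def add_completion_memo(lines: list[str], slug: str, memo: str, indent: str = "  ") -> list[str]:
    """Insert a CM: line after the last indented line of the task tagged #slug."""
    out = list(lines)
    for i, line in enumerate(out):
        if line.startswith("- [") and line.rstrip().endswith(f"#{slug}"):
            j = i + 1
            # Skip the task's existing indented description / AC lines.
            while j < len(out) and out[j].startswith(indent):
                j += 1
            out.insert(j, f"{indent}CM: {memo}")
            return out
    raise ValueError(f"no task with id #{slug}")
```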
When a task is deferred, mark it [-] rather than deleting it. Backlog items are recognized, worthwhile work that hasn't been pulled into the current focus yet — they're expected to be picked up later. If a task is truly obsolete or superseded, delete it — backlog is not a graveyard.
The plugin includes an interactive kanban dashboard. Use the `/dashboard` skill inside Claude Code to start and open it.

The dashboard runs at `http://localhost:3847`. The index page lists all projects (discovered from `~/.claude/projects/`) that have a `TASKS.md` file. Click a project to open its dashboard.