From todoist-manager
Review 1-3 Todoist tasks for clarity, actionability, and execution readiness using a structured scoring rubric
```bash
npx claudepluginhub infiquetra/infiquetra-claude-plugins --plugin todoist-manager
```

This skill uses the workspace's default tool permissions.
Structured review of 1-3 Todoist tasks for clarity, actionability, and execution readiness. Scores each task across 5 dimensions and provides specific improvement recommendations.
Use this skill for focused reviews of 1-3 tasks when the user wants a structured readiness assessment with specific improvement recommendations.
For batch reviews of many tasks, use the todoist-manager agent instead.
Prerequisites:
- `TODOIST_TOKEN` set in environment
- `todoist-api-python` SDK installed (auto-installs if missing)

Script: `plugins/todoist-manager/skills/todoist-manage/scripts/todoist_client.py`

The input can be a task ID, task content, or a project name. Resolve in this order:
1a. Task ID provided — fetch directly:

```bash
python3 <script_path> tasks get --task-id <ID>
```
1b. Looks like a project name — check the projects list first, then use `##` to fetch all tasks including sub-projects:

```bash
# List projects to find a name match
python3 <script_path> projects list

# If matched, fetch all tasks in project + sub-projects
python3 <script_path> tasks filter --query "##<Project Name>"
```
1c. Task content provided — fall back to text search:

```bash
python3 <script_path> tasks filter --query "search: <task content>"
```
Resolution priority: task ID > project name match > text search. Always prefer project-based lookup when the input matches a project name, because text search only finds tasks where the search string appears in the task title.
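The resolution order can be sketched in Python. This is an illustrative helper, not part of the skill: `resolve_command` is a hypothetical name, and the assumption that task IDs are purely numeric is a simplification (adjust the check to match real Todoist task IDs):

```python
import re

def resolve_command(user_input, project_names):
    """Return the todoist_client.py subcommand for the resolution order:
    task ID > project name match > text search."""
    if re.fullmatch(r"\d+", user_input):              # 1a. looks like a numeric task ID
        return ["tasks", "get", "--task-id", user_input]
    if user_input in project_names:                   # 1b. project match; ## includes sub-projects
        return ["tasks", "filter", "--query", f"##{user_input}"]
    # 1c. fall back to text search (only matches the task title)
    return ["tasks", "filter", "--query", f"search: {user_input}"]
```

The project check runs before text search because, as noted above, text search only finds tasks whose title contains the search string.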
Score each dimension using the readiness rubric (references/readiness-rubric.md):
| Dimension | Max Score | Key Question |
|---|---|---|
| Clarity | 5 | Is the task content unambiguous? |
| Actionability | 5 | Does it start with a clear action verb? |
| Scope | 5 | Can it be done in one session (<2 hours)? |
| Context | 5 | Does it have the right metadata? |
| Outcome | 5 | Is "done" clearly defined? |

Total: 25 points
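Totalling and classifying a review can be sketched as follows. The score bands are assumptions inferred from the example outputs in this document (17/25 reads as "Needs Improvement", 22/25 as "Ready"), not thresholds defined by the rubric file:

```python
DIMENSIONS = ("clarity", "actionability", "scope", "context", "outcome")

def review_total(scores):
    """Sum the five dimension scores (each 0-5) and classify readiness.
    Threshold values are illustrative assumptions."""
    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 20:
        verdict = "Ready to Execute"
    elif total >= 12:
        verdict = "Needs Improvement"
    else:
        verdict = "Not Ready"
    return total, verdict
```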
Format:

```
📋 Task Review: "[Task Content]"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Clarity       ██████░░░░ 3/5 ⚠️
Actionability ██████████ 5/5 ✅
Scope         ████░░░░░░ 2/5 ❌
Context       ████████░░ 4/5 ✅
Outcome       ██████░░░░ 3/5 ⚠️

Overall: 17/25 — Needs Improvement

Issues Found:
❌ Scope too large — estimate suggests 3-4 hours
⚠️ Missing due date for time-sensitive work
⚠️ Outcome not clearly defined

Recommendations:
1. Break into 2-3 subtasks (30-60 min each)
2. Add due date: --due-string "friday"
3. Add success criterion to description

Want me to apply these improvements? [Y/N]
```
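The ten-cell score bars in the format above can be rendered from a 0-5 dimension score; a minimal sketch (`score_bar` is a hypothetical helper name):

```python
def score_bar(score, max_score=5, cells=10):
    """Render a 0..max_score value as filled/empty cells, two cells per point."""
    filled = round(cells * score / max_score)
    return "█" * filled + "░" * (cells - filled)
```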
Based on review, offer to:
Update task content:

```bash
python3 <script_path> tasks update \
  --task-id <ID> \
  --content "Verb + specific action + outcome" \
  --description "Done when: [criterion]" \
  --due-string "friday" \
  --labels focus computer
```
Break into subtasks:

```bash
# Create subtasks from parent
python3 <script_path> tasks add \
  --content "Step 1: Research X" \
  --parent-id <ID> \
  --duration 30 \
  --duration-unit minute
```
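When breaking a task into several steps, the `tasks add` calls above can be generated in a loop. This sketch only builds the argument lists rather than invoking the script; `subtask_commands` and its parameters are illustrative names, not part of the CLI:

```python
def subtask_commands(parent_id, steps, minutes=30):
    """Build one `tasks add` argument list per subtask (illustrative helper)."""
    return [
        ["tasks", "add",
         "--content", f"Step {i}: {step}",
         "--parent-id", parent_id,
         "--duration", str(minutes),
         "--duration-unit", "minute"]
        for i, step in enumerate(steps, start=1)
    ]
```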
Add context labels:

```bash
python3 <script_path> tasks update \
  --task-id <ID> \
  --labels focus computer
```
See references/readiness-rubric.md for the complete 5-dimension scoring rubric with detailed criteria.
See references/task-templates.md for 7 task type templates (research, writing, coding, meeting, admin, communication, planning).
```
✅ Task Ready: "Write unit tests for auth module"
Score: 22/25 — Ready to Execute

Minor suggestions:
- Consider adding @focus label for deep work
- Due date is 5 days out — still realistic?
```
```
📋 Task Review: "Work on the project"
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
❌ NOT READY — Score: 8/25

Critical Issues:
1. Clarity (1/5): "Work on the project" is too vague
   → Which project? What aspect? What outcome?
2. Actionability (1/5): No action verb
   → Start with: Research / Write / Review / Build / Fix / Test
3. Scope (2/5): "Project" implies undefined scope
   → What's the smallest meaningful next step?
4. Context (2/5): No project, labels, or due date
   → Add context to make it filterable
5. Outcome (2/5): No success criterion
   → When is this "done"?

Revised task suggestion:
"Draft project requirements document for Q2 feature"
+ Description: "Done when: 1-page requirements doc reviewed by stakeholder"
+ Due: next Tuesday
+ Labels: @focus @computer
+ Duration: 90 min

Apply this revision? [Y/N]
```
```
📋 Reviewing 3 Tasks for Readiness
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Task 1: "Write unit tests for auth module" — 22/25 Ready
⚠️ Task 2: "Update documentation" — 14/25 Needs Work
❌ Task 3: "Handle the client thing" — 6/25 Not Ready

Detailed reviews follow...
```
| Before | After |
|---|---|
| "Work on project" | "Write API endpoint for user login" |
| "Do the report" | "Draft Q2 metrics report (2 pages)" |
| "Fix the bug" | "Fix null pointer in user.profile.get()" |
| "Call someone" | "Call Sarah re: contract renewal" |
Strong action verbs: Research, Write, Review, Build, Fix, Test, Deploy, Update, Draft, Design, Schedule, Send, Call, Analyze, Create, Document
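The action-verb rule can be checked mechanically against the verb list above; a minimal sketch, assuming a simple first-word match is sufficient:

```python
ACTION_VERBS = {
    "research", "write", "review", "build", "fix", "test", "deploy", "update",
    "draft", "design", "schedule", "send", "call", "analyze", "create", "document",
}

def starts_with_action_verb(content):
    """True if the task content begins with one of the strong action verbs."""
    words = content.strip().split()
    return bool(words) and words[0].lower() in ACTION_VERBS
```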
If a task is estimated at more than 2 hours, break it into 2-3 subtasks of 30-60 minutes each.
Energy context:
- `@focus` — requires deep concentration
- `@quick` — under 15 minutes
- `@routine` — habitual, low-energy

Location context:
- `@computer` — requires desktop/laptop
- `@phone` — can do from phone
- `@office` — requires physical presence
- `@home` — home environment needed

Status context:
- `@waiting` — blocked on someone else
- `@blocked` — has an explicit blocker