Interactive triage and cleanup for your Todoist task list, designed with **Executive Function (EF) and ADHD support** in mind. Syncs data to a local SQLite database for efficient analysis, detects issues, suggests improvements, and coaches you toward better task management habits.
/plugin marketplace add caseyg/caseys-claude
/plugin install todoist-triage@cag-skills

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill acts as an Executive Function coach, not just a task cleaner. It's built around principles that help people who struggle with starting tasks, vague or overwhelming to-dos, and decision fatigue.

Core Principles:
- Momentum first: starting is the hardest part, so quick wins come before deep cleanup
- Concrete next actions: vague tasks create paralysis
- Clarity enables starting: unclear tasks cause avoidance
- Reduce cognitive load: automate repetitive decisions where possible
Invoke with /todoist-triage. Requires the thefuzz library for fuzzy matching (pip install thefuzz).

This skill operates in 5 phases:
plugins/todoist-triage/
├── TRIAGE_MEMORY.md # User preferences & session history (git-tracked)
└── data/ # Runtime data (gitignored)
├── todoist.db # SQLite database
└── sync_state.json # Sync token & timestamps
Create the SQLite database if it doesn't exist:
PLUGIN_DIR="$PLUGIN_ROOT"
DB_PATH="$PLUGIN_DIR/data/todoist.db"
# Create data directory if needed
mkdir -p "$PLUGIN_DIR/data"
# Initialize database with schema
sqlite3 "$DB_PATH" << 'EOF'
-- Sync state tracking
CREATE TABLE IF NOT EXISTS sync_state (
id INTEGER PRIMARY KEY CHECK (id = 1),
sync_token TEXT NOT NULL DEFAULT '*',
last_full_sync DATETIME,
last_incremental_sync DATETIME
);
INSERT OR IGNORE INTO sync_state (id, sync_token) VALUES (1, '*');
-- Projects
CREATE TABLE IF NOT EXISTS projects (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
parent_id TEXT,
color TEXT,
is_inbox BOOLEAN DEFAULT 0,
is_archived BOOLEAN DEFAULT 0,
is_favorite BOOLEAN DEFAULT 0,
view_style TEXT,
child_order INTEGER,
synced_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Sections
CREATE TABLE IF NOT EXISTS sections (
id TEXT PRIMARY KEY,
project_id TEXT NOT NULL,
name TEXT NOT NULL,
section_order INTEGER,
synced_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Labels
CREATE TABLE IF NOT EXISTS labels (
id TEXT PRIMARY KEY,
name TEXT NOT NULL,
color TEXT,
is_favorite BOOLEAN DEFAULT 0,
synced_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Items (tasks)
CREATE TABLE IF NOT EXISTS items (
id TEXT PRIMARY KEY,
content TEXT NOT NULL,
description TEXT,
project_id TEXT,
section_id TEXT,
parent_id TEXT,
labels TEXT, -- JSON array
priority INTEGER DEFAULT 1, -- 1=p4, 2=p3, 3=p2, 4=p1
due_date TEXT,
due_datetime TEXT,
due_string TEXT,
due_is_recurring BOOLEAN DEFAULT 0,
deadline_date TEXT,
is_completed BOOLEAN DEFAULT 0,
completed_at DATETIME,
added_at DATETIME,
child_order INTEGER,
synced_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
-- Computed fields for analysis
content_normalized TEXT,
content_hash TEXT
);
-- Completed items archive
CREATE TABLE IF NOT EXISTS completed_items (
id TEXT PRIMARY KEY,
content TEXT NOT NULL,
project_id TEXT,
completed_at DATETIME,
synced_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
-- Triage findings (per session)
CREATE TABLE IF NOT EXISTS triage_findings (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
finding_type TEXT NOT NULL, -- duplicate_exact, duplicate_fuzzy, stale, missing_due, inbox_old, etc.
item_id TEXT,
related_item_id TEXT,
similarity_score REAL,
details TEXT, -- JSON
suggested_action TEXT,
user_decision TEXT,
decided_at DATETIME,
applied_at DATETIME
);
-- Learned rules
CREATE TABLE IF NOT EXISTS triage_rules (
id INTEGER PRIMARY KEY AUTOINCREMENT,
rule_type TEXT NOT NULL,
pattern TEXT,
action TEXT NOT NULL,
confidence REAL DEFAULT 1.0,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
last_applied DATETIME,
apply_count INTEGER DEFAULT 0
);
-- Indices
CREATE INDEX IF NOT EXISTS idx_items_project ON items(project_id);
CREATE INDEX IF NOT EXISTS idx_items_content_hash ON items(content_hash);
CREATE INDEX IF NOT EXISTS idx_items_added ON items(added_at);
CREATE INDEX IF NOT EXISTS idx_items_due ON items(due_date);
CREATE INDEX IF NOT EXISTS idx_items_completed ON items(is_completed);
CREATE INDEX IF NOT EXISTS idx_findings_session ON triage_findings(session_id);
CREATE INDEX IF NOT EXISTS idx_findings_type ON triage_findings(finding_type);
EOF
echo "Database initialized at $DB_PATH"
TODOIST_TOKEN=$(op item get "Todoist" --fields "API Token" --reveal)
SYNC_TOKEN=$(sqlite3 "$DB_PATH" "SELECT sync_token FROM sync_state WHERE id = 1;")
echo "Current sync token: ${SYNC_TOKEN:0:20}..."
# Fetch all resources (full or incremental based on token)
RESPONSE=$(curl -s -X POST "https://api.todoist.com/sync/v9/sync" \
-H "Authorization: Bearer $TODOIST_TOKEN" \
-d "sync_token=$SYNC_TOKEN" \
-d 'resource_types=["all"]')
# Save response for parsing
echo "$RESPONSE" > "$PLUGIN_DIR/data/sync_response.json"
# Check if full sync
IS_FULL_SYNC=$(echo "$RESPONSE" | python3 -c "import json,sys; d=json.load(sys.stdin); print(d.get('full_sync', False))")
echo "Full sync: $IS_FULL_SYNC"
# Extract new sync token
NEW_SYNC_TOKEN=$(echo "$RESPONSE" | python3 -c "import json,sys; d=json.load(sys.stdin); print(d.get('sync_token', ''))")
Use this Python script to parse the sync response and populate SQLite:
#!/usr/bin/env python3
"""Parse Todoist sync response and populate SQLite database."""
import json
import sqlite3
import hashlib
import re
from pathlib import Path
from datetime import datetime
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
SYNC_RESPONSE = PLUGIN_DIR / "data" / "sync_response.json"
def normalize_content(content: str) -> str:
"""Normalize task content for comparison."""
# Lowercase, remove extra whitespace, strip punctuation
text = content.lower().strip()
text = re.sub(r'[^\w\s]', '', text)
text = re.sub(r'\s+', ' ', text)
return text
def hash_content(content: str) -> str:
"""Create hash of normalized content."""
normalized = normalize_content(content)
return hashlib.md5(normalized.encode()).hexdigest()
def main():
with open(SYNC_RESPONSE) as f:
data = json.load(f)
conn = sqlite3.connect(DB_PATH)
cur = conn.cursor()
now = datetime.now().isoformat()
# Update sync token
new_token = data.get('sync_token', '*')
is_full = data.get('full_sync', False)
if is_full:
cur.execute("""
UPDATE sync_state SET sync_token = ?, last_full_sync = ? WHERE id = 1
""", (new_token, now))
else:
cur.execute("""
UPDATE sync_state SET sync_token = ?, last_incremental_sync = ? WHERE id = 1
""", (new_token, now))
# Upsert projects
for proj in data.get('projects', []):
cur.execute("""
INSERT OR REPLACE INTO projects
(id, name, parent_id, color, is_inbox, is_archived, is_favorite, view_style, child_order, synced_at)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
proj['id'], proj['name'], proj.get('parent_id'),
proj.get('color'), proj.get('inbox_project', False),
proj.get('is_archived', False), proj.get('is_favorite', False),
proj.get('view_style'), proj.get('child_order'), now
))
# Upsert sections
for sec in data.get('sections', []):
cur.execute("""
INSERT OR REPLACE INTO sections (id, project_id, name, section_order, synced_at)
VALUES (?, ?, ?, ?, ?)
""", (sec['id'], sec['project_id'], sec['name'], sec.get('section_order'), now))
# Upsert labels
for label in data.get('labels', []):
cur.execute("""
INSERT OR REPLACE INTO labels (id, name, color, is_favorite, synced_at)
VALUES (?, ?, ?, ?, ?)
""", (label['id'], label['name'], label.get('color'), label.get('is_favorite', False), now))
# Upsert items (tasks)
for item in data.get('items', []):
due = item.get('due') or {}
content = item.get('content', '')
cur.execute("""
INSERT OR REPLACE INTO items
(id, content, description, project_id, section_id, parent_id, labels,
priority, due_date, due_datetime, due_string, due_is_recurring, deadline_date,
is_completed, added_at, child_order, synced_at, content_normalized, content_hash)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
item['id'], content, item.get('description'),
item.get('project_id'), item.get('section_id'), item.get('parent_id'),
json.dumps(item.get('labels', [])),
item.get('priority', 1),
due.get('date'), due.get('datetime'), due.get('string'),
due.get('is_recurring', False),
item.get('deadline', {}).get('date') if item.get('deadline') else None,
item.get('checked', False) or item.get('is_completed', False),
item.get('added_at'), item.get('child_order'), now,
normalize_content(content), hash_content(content)
))
# Handle deleted items
if item.get('is_deleted'):
cur.execute("DELETE FROM items WHERE id = ?", (item['id'],))
# Handle completed items info
for completed in data.get('completed_info', []):
# This contains aggregates, not individual items
pass
conn.commit()
# Print summary
cur.execute("SELECT COUNT(*) FROM projects WHERE NOT is_archived")
proj_count = cur.fetchone()[0]
cur.execute("SELECT COUNT(*) FROM items WHERE NOT is_completed")
item_count = cur.fetchone()[0]
cur.execute("SELECT COUNT(*) FROM labels")
label_count = cur.fetchone()[0]
print(f"Sync complete: {proj_count} projects, {item_count} active tasks, {label_count} labels")
conn.close()
if __name__ == "__main__":
main()
Save and run:
python3 "$PLUGIN_DIR/data/sync_parser.py"
SESSION_ID=$(date +%Y%m%d_%H%M%S)
echo "Triage session: $SESSION_ID"
-- Find tasks with identical normalized content
INSERT INTO triage_findings (session_id, finding_type, item_id, related_item_id, details, suggested_action)
SELECT
'{SESSION_ID}' as session_id,
'duplicate_exact' as finding_type,
i1.id as item_id,
i2.id as related_item_id,
json_object(
'content', i1.content,
'project1', p1.name,
'project2', p2.name,
'added1', i1.added_at,
'added2', i2.added_at
) as details,
'delete_older' as suggested_action
FROM items i1
JOIN items i2 ON i1.content_hash = i2.content_hash AND i1.id < i2.id
LEFT JOIN projects p1 ON i1.project_id = p1.id
LEFT JOIN projects p2 ON i2.project_id = p2.id
WHERE i1.is_completed = 0 AND i2.is_completed = 0
AND i1.parent_id IS NULL AND i2.parent_id IS NULL; -- Skip subtasks
Use Python with thefuzz for similarity detection:
#!/usr/bin/env python3
"""Detect fuzzy duplicate tasks using Levenshtein similarity."""
import sqlite3
import json
from pathlib import Path
from thefuzz import fuzz
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
SIMILARITY_THRESHOLD = 80 # Minimum similarity percentage
def find_fuzzy_duplicates(session_id: str):
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# Get all active non-subtasks
cur.execute("""
SELECT i.id, i.content, i.content_normalized, i.project_id, i.added_at,
p.name as project_name
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0 AND i.parent_id IS NULL
ORDER BY i.added_at
""")
tasks = cur.fetchall()
# Compare each pair (skip already found exact duplicates)
found_pairs = set()
cur.execute("""
SELECT item_id, related_item_id FROM triage_findings
WHERE session_id = ? AND finding_type = 'duplicate_exact'
""", (session_id,))
for row in cur.fetchall():
found_pairs.add((row[0], row[1]))
found_pairs.add((row[1], row[0]))
duplicates = []
for i, t1 in enumerate(tasks):
for t2 in tasks[i+1:]:
if (t1['id'], t2['id']) in found_pairs:
continue
# Calculate similarity
ratio = fuzz.ratio(t1['content_normalized'], t2['content_normalized'])
if ratio >= SIMILARITY_THRESHOLD:
duplicates.append({
'item_id': t1['id'],
'related_item_id': t2['id'],
'similarity': ratio,
'content1': t1['content'],
'content2': t2['content'],
'project1': t1['project_name'],
'project2': t2['project_name']
})
# Insert findings
for dup in duplicates:
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, item_id, related_item_id, similarity_score, details, suggested_action)
VALUES (?, 'duplicate_fuzzy', ?, ?, ?, ?, 'review')
""", (
session_id, dup['item_id'], dup['related_item_id'], dup['similarity'],
json.dumps({
'content1': dup['content1'],
'content2': dup['content2'],
'project1': dup['project1'],
'project2': dup['project2']
})
))
conn.commit()
print(f"Found {len(duplicates)} fuzzy duplicates")
conn.close()
if __name__ == "__main__":
import sys
session_id = sys.argv[1] if len(sys.argv) > 1 else "test"
find_fuzzy_duplicates(session_id)
Read thresholds from TRIAGE_MEMORY.md, defaulting to 60 days for tasks with no due date, 14 days overdue, and 7 days in the Inbox:
-- Stale: No due date and older than 60 days
INSERT INTO triage_findings (session_id, finding_type, item_id, details, suggested_action)
SELECT
'{SESSION_ID}',
'stale_no_due',
i.id,
json_object(
'content', i.content,
'project', p.name,
'added_at', i.added_at,
'age_days', CAST(julianday('now') - julianday(i.added_at) AS INTEGER)
),
'review'
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0
AND i.due_date IS NULL
AND i.due_is_recurring = 0
AND julianday('now') - julianday(i.added_at) > 60
AND i.parent_id IS NULL;
-- Stale: Overdue by more than 14 days
INSERT INTO triage_findings (session_id, finding_type, item_id, details, suggested_action)
SELECT
'{SESSION_ID}',
'stale_overdue',
i.id,
json_object(
'content', i.content,
'project', p.name,
'due_date', i.due_date,
'days_overdue', CAST(julianday('now') - julianday(i.due_date) AS INTEGER)
),
'review'
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0
AND i.due_date IS NOT NULL
AND i.due_is_recurring = 0
AND julianday('now') - julianday(i.due_date) > 14;
-- Stale: In Inbox for more than 7 days
INSERT INTO triage_findings (session_id, finding_type, item_id, details, suggested_action)
SELECT
'{SESSION_ID}',
'inbox_old',
i.id,
json_object(
'content', i.content,
'added_at', i.added_at,
'age_days', CAST(julianday('now') - julianday(i.added_at) AS INTEGER)
),
'assign_project'
FROM items i
JOIN projects p ON i.project_id = p.id AND p.is_inbox = 1
WHERE i.is_completed = 0
AND julianday('now') - julianday(i.added_at) > 7;
-- High priority tasks without due dates
INSERT INTO triage_findings (session_id, finding_type, item_id, details, suggested_action)
SELECT
'{SESSION_ID}',
'missing_due_high_priority',
i.id,
json_object(
'content', i.content,
'project', p.name,
'priority', i.priority
),
'set_due_date'
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0
AND i.priority >= 3 -- p1 or p2
AND i.due_date IS NULL
AND i.due_is_recurring = 0;
-- Projects near 300-task limit
SELECT p.name, COUNT(i.id) as task_count
FROM projects p
LEFT JOIN items i ON p.id = i.project_id AND i.is_completed = 0
WHERE p.is_archived = 0
GROUP BY p.id
HAVING task_count > 250
ORDER BY task_count DESC;
-- Empty projects (candidates for deletion)
SELECT p.name, p.id
FROM projects p
LEFT JOIN items i ON p.id = i.project_id AND i.is_completed = 0
WHERE p.is_archived = 0 AND p.is_inbox = 0
GROUP BY p.id
HAVING COUNT(i.id) = 0;
Tasks created within a short time window (e.g., 10 minutes) are likely related - people often add a batch of tasks about the same topic (work meeting notes, shopping list, project planning).
#!/usr/bin/env python3
"""Detect temporal clusters - tasks created in rapid succession."""
import sqlite3
import json
from pathlib import Path
from datetime import datetime, timedelta
from collections import defaultdict
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
CLUSTER_WINDOW_MINUTES = 10 # Tasks within this window are considered a cluster
MIN_CLUSTER_SIZE = 3 # Minimum tasks to form a meaningful cluster
def find_temporal_clusters(session_id: str):
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# Get all active tasks with timestamps, ordered by creation time
cur.execute("""
SELECT i.id, i.content, i.added_at, i.project_id, p.name as project_name
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0
AND i.added_at IS NOT NULL
AND i.parent_id IS NULL
ORDER BY i.added_at
""")
tasks = cur.fetchall()
# Find clusters of tasks created within the time window
clusters = []
current_cluster = []
for task in tasks:
if not task['added_at']:
continue
task_time = datetime.fromisoformat(task['added_at'].replace('Z', '+00:00'))
if not current_cluster:
current_cluster = [task]
else:
last_time = datetime.fromisoformat(
current_cluster[-1]['added_at'].replace('Z', '+00:00')
)
# Check if within window of the last task
if (task_time - last_time) <= timedelta(minutes=CLUSTER_WINDOW_MINUTES):
current_cluster.append(task)
else:
# End current cluster if it's big enough
if len(current_cluster) >= MIN_CLUSTER_SIZE:
clusters.append(current_cluster)
current_cluster = [task]
# Don't forget the last cluster
if len(current_cluster) >= MIN_CLUSTER_SIZE:
clusters.append(current_cluster)
# Analyze each cluster - check if tasks span multiple projects (disorganized)
for cluster in clusters:
projects = set(t['project_name'] for t in cluster if t['project_name'])
task_ids = [t['id'] for t in cluster]
contents = [t['content'] for t in cluster]
# Only flag clusters that are scattered across projects (potential for grouping)
if len(projects) > 1:
first_time = cluster[0]['added_at']
last_time = cluster[-1]['added_at']
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, item_id, details, suggested_action)
VALUES (?, 'temporal_cluster', ?, ?, 'group')
""", (
session_id,
task_ids[0], # Reference first task
json.dumps({
'task_ids': task_ids,
'task_contents': contents[:5], # First 5 for display
'total_tasks': len(cluster),
'projects': list(projects),
'time_window': f"{first_time} to {last_time}",
'window_minutes': CLUSTER_WINDOW_MINUTES
})
))
conn.commit()
print(f"Found {len(clusters)} temporal clusters")
conn.close()
if __name__ == "__main__":
import sys
session_id = sys.argv[1] if len(sys.argv) > 1 else "test"
find_temporal_clusters(session_id)
Analyze task content for recognizable patterns that could help with future organization:
#!/usr/bin/env python3
"""Extract patterns from task content for user confirmation."""
import sqlite3
import json
import re
from pathlib import Path
from collections import Counter, defaultdict
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
# Common action verbs to identify patterns
ACTION_VERBS = [
'call', 'email', 'text', 'message', 'contact', 'meet', 'schedule',
'review', 'check', 'update', 'fix', 'write', 'send', 'buy', 'get',
'book', 'order', 'pay', 'cancel', 'renew', 'submit', 'prepare',
'research', 'plan', 'organize', 'clean', 'finish', 'start'
]
def extract_patterns(session_id: str):
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
cur.execute("""
SELECT i.id, i.content, i.project_id, p.name as project_name
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0 AND i.parent_id IS NULL
""")
tasks = cur.fetchall()
# Track patterns
word_frequency = Counter()
capitalized_words = Counter() # Likely names or proper nouns
action_targets = defaultdict(list) # "call" -> ["John", "dentist", "mom"]
for task in tasks:
content = task['content']
words = content.split()
# Count all words (excluding very common ones)
for word in words:
clean = re.sub(r'[^\w]', '', word.lower())
if len(clean) > 2:
word_frequency[clean] += 1
# Find capitalized words (potential names/proper nouns)
# Skip first word as it's always capitalized
for word in words[1:]:
if word and word[0].isupper():
clean = re.sub(r'[^\w]', '', word)
if len(clean) > 1 and clean.lower() not in ACTION_VERBS:
capitalized_words[clean] += 1
# Extract action patterns: "Call John" -> action="call", target="John"
content_lower = content.lower()
for verb in ACTION_VERBS:
if content_lower.startswith(verb + ' '):
# Get the word after the verb
rest = content[len(verb)+1:].split()
if rest:
target = rest[0].strip('.,!?')
action_targets[verb].append({
'target': target,
'task_id': task['id'],
'full_content': content
})
# Identify potential patterns worth asking about
patterns = {
'potential_people': [],
'potential_concepts': [],
'action_patterns': []
}
# People: Capitalized words appearing 2+ times
for word, count in capitalized_words.most_common(20):
if count >= 2:
# Find all tasks mentioning this word
matching_tasks = [t for t in tasks if word in t['content']]
patterns['potential_people'].append({
'name': word,
'count': count,
'example_tasks': [t['content'] for t in matching_tasks[:3]],
'projects': list(set(t['project_name'] for t in matching_tasks if t['project_name']))
})
# Concepts: Frequent words that appear across multiple projects
for word, count in word_frequency.most_common(50):
if count >= 3 and word not in ACTION_VERBS:
matching_tasks = [t for t in tasks if word in t['content'].lower()]
projects = set(t['project_name'] for t in matching_tasks if t['project_name'])
# Only interesting if appears in multiple projects
if len(projects) >= 2:
patterns['potential_concepts'].append({
'concept': word,
'count': count,
'projects': list(projects),
'example_tasks': [t['content'] for t in matching_tasks[:3]]
})
# Action patterns: Same verb with multiple targets
for verb, targets in action_targets.items():
if len(targets) >= 2:
patterns['action_patterns'].append({
'action': verb,
'targets': [t['target'] for t in targets],
'count': len(targets),
'examples': [t['full_content'] for t in targets[:3]]
})
# Store patterns for interactive questioning
if any(patterns.values()):
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, details, suggested_action)
VALUES (?, 'discovered_patterns', ?, 'ask_user')
""", (session_id, json.dumps(patterns)))
conn.commit()
total = (len(patterns['potential_people']) +
len(patterns['potential_concepts']) +
len(patterns['action_patterns']))
print(f"Discovered {total} potential patterns to ask about")
conn.close()
return patterns
if __name__ == "__main__":
import sys
session_id = sys.argv[1] if len(sys.argv) > 1 else "test"
extract_patterns(session_id)
Identify tasks that are too large (need breakdown) or too small (quick wins / automation candidates).
Size indicators (vague verbs, multiple bundled actions, long task names, and automation patterns) are encoded in the script below:
#!/usr/bin/env python3
"""Analyze task sizing - identify tasks needing breakdown or quick wins."""
import sqlite3
import json
import re
from pathlib import Path
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
# Indicators of task complexity
VAGUE_VERBS = [
'plan', 'organize', 'figure out', 'deal with', 'handle', 'work on',
'think about', 'look into', 'research', 'explore', 'consider',
'decide', 'sort out', 'get started on', 'begin', 'start'
]
QUICK_VERBS = [
'call', 'email', 'text', 'send', 'buy', 'order', 'schedule',
'book', 'cancel', 'confirm', 'reply', 'respond', 'check',
'download', 'upload', 'print', 'sign', 'pay', 'renew'
]
AUTOMATION_PATTERNS = [
r'remind.*to', # "Remind me to..."
r'check.*status', # "Check status of..."
r'follow up.*on', # "Follow up on..."
r'look up', # "Look up..."
r'find.*number', # "Find phone number for..."
r'get.*address', # "Get address for..."
r'schedule.*recurring', # Recurring scheduling
]
def analyze_task_size(content: str, description: str = "", has_subtasks: bool = False) -> dict:
"""Analyze a single task and return sizing assessment."""
content_lower = content.lower()
word_count = len(content.split())
result = {
'size': 'medium', # small, medium, large, project
'issues': [],
'suggestions': []
}
# Check for "too large" indicators
large_signals = 0
# Vague verbs suggest unclear scope
for verb in VAGUE_VERBS:
if content_lower.startswith(verb + ' ') or f' {verb} ' in content_lower:
large_signals += 1
result['issues'].append(f"Vague verb: '{verb}'")
# Multiple actions in one task
and_count = content_lower.count(' and ')
if and_count >= 2:
large_signals += 2
result['issues'].append(f"Multiple actions ({and_count + 1} things)")
elif and_count == 1:
large_signals += 1
result['issues'].append("Two actions combined")
# Very long task names suggest complexity
if word_count > 12:
large_signals += 1
result['issues'].append(f"Long task name ({word_count} words)")
# No clear action verb at start
first_word = content.split()[0].lower() if content else ""
if first_word not in QUICK_VERBS and first_word not in VAGUE_VERBS:
if not any(content_lower.startswith(v) for v in QUICK_VERBS + VAGUE_VERBS):
result['issues'].append("No clear action verb")
# Classify size
if large_signals >= 3:
result['size'] = 'project'
result['suggestions'].append("This looks like a project, not a task. Break it into 3-5 concrete next actions.")
elif large_signals >= 2:
result['size'] = 'large'
result['suggestions'].append("Consider breaking this into 2-3 smaller tasks with specific outcomes.")
# Check for "quick win" indicators
if result['size'] == 'medium':
quick_signals = 0
for verb in QUICK_VERBS:
if content_lower.startswith(verb + ' '):
quick_signals += 2
break
if word_count <= 5:
quick_signals += 1
if and_count == 0 and word_count <= 8:
quick_signals += 1
if quick_signals >= 2:
result['size'] = 'small'
result['suggestions'].append("Quick win! This could be done in 2-5 minutes.")
# Check for automation candidates
for pattern in AUTOMATION_PATTERNS:
if re.search(pattern, content_lower):
result['automation_candidate'] = True
result['suggestions'].append("This might be automatable (reminder, lookup, or recurring task).")
break
return result
def analyze_all_tasks(session_id: str):
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
# Get all active tasks
cur.execute("""
SELECT i.id, i.content, i.description, i.project_id, p.name as project_name,
(SELECT COUNT(*) FROM items sub WHERE sub.parent_id = i.id) as subtask_count
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0 AND i.parent_id IS NULL
""")
tasks = cur.fetchall()
quick_wins = []
needs_breakdown = []
automation_candidates = []
for task in tasks:
analysis = analyze_task_size(
task['content'],
task['description'] or "",
task['subtask_count'] > 0
)
if analysis['size'] == 'small':
quick_wins.append({
'id': task['id'],
'content': task['content'],
'project': task['project_name'],
'suggestions': analysis['suggestions']
})
elif analysis['size'] in ('large', 'project'):
needs_breakdown.append({
'id': task['id'],
'content': task['content'],
'project': task['project_name'],
'size': analysis['size'],
'issues': analysis['issues'],
'suggestions': analysis['suggestions']
})
if analysis.get('automation_candidate'):
automation_candidates.append({
'id': task['id'],
'content': task['content'],
'project': task['project_name']
})
# Store findings
if quick_wins:
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, details, suggested_action)
VALUES (?, 'quick_wins', ?, 'show_user')
""", (session_id, json.dumps(quick_wins)))
if needs_breakdown:
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, details, suggested_action)
VALUES (?, 'needs_breakdown', ?, 'break_down')
""", (session_id, json.dumps(needs_breakdown)))
if automation_candidates:
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, details, suggested_action)
VALUES (?, 'automation_candidates', ?, 'suggest_automation')
""", (session_id, json.dumps(automation_candidates)))
conn.commit()
print(f"Found {len(quick_wins)} quick wins, {len(needs_breakdown)} need breakdown, {len(automation_candidates)} automation candidates")
conn.close()
if __name__ == "__main__":
import sys
session_id = sys.argv[1] if len(sys.argv) > 1 else "test"
analyze_all_tasks(session_id)
Evaluate tasks against SMART criteria to identify tasks needing clarification:
#!/usr/bin/env python3
"""Analyze tasks against SMART criteria."""
import sqlite3
import json
import re
from pathlib import Path
from datetime import datetime, timedelta
import os

PLUGIN_DIR = Path(os.environ.get("PLUGIN_ROOT", "."))  # PLUGIN_ROOT is expected in the environment
DB_PATH = PLUGIN_DIR / "data" / "todoist.db"
# Words that indicate vague/non-specific tasks
VAGUE_INDICATORS = [
'stuff', 'things', 'etc', 'whatever', 'somehow', 'maybe',
'probably', 'might', 'could', 'should', 'would', 'try to',
'attempt', 'look into', 'figure out', 'deal with'
]
# Words that suggest measurable outcomes
MEASURABLE_WORDS = [
'complete', 'finish', 'submit', 'send', 'deliver', 'publish',
'launch', 'deploy', 'release', 'ship', 'post', 'file',
'schedule', 'book', 'confirm', 'cancel', 'pay', 'order'
]
def analyze_smart(task: dict) -> dict:
"""Analyze a task against SMART criteria."""
content = task['content']
content_lower = content.lower()
description = (task.get('description') or '').lower()
score = {
'specific': {'score': 0, 'max': 2, 'issues': [], 'suggestions': []},
'measurable': {'score': 0, 'max': 2, 'issues': [], 'suggestions': []},
'attainable': {'score': 0, 'max': 2, 'issues': [], 'suggestions': []},
'relevant': {'score': 0, 'max': 2, 'issues': [], 'suggestions': []},
'timebound': {'score': 0, 'max': 2, 'issues': [], 'suggestions': []},
}
# SPECIFIC: Clear outcome?
has_vague = any(v in content_lower for v in VAGUE_INDICATORS)
has_clear_verb = content.split()[0].lower() in MEASURABLE_WORDS if content else False
if has_vague:
score['specific']['issues'].append("Contains vague language")
score['specific']['suggestions'].append("Replace vague words with specific outcomes")
else:
score['specific']['score'] += 1
if has_clear_verb:
score['specific']['score'] += 1
else:
score['specific']['suggestions'].append("Start with a clear action verb (send, complete, schedule...)")
# MEASURABLE: Can you tell when it's done?
word_count = len(content.split())
if any(w in content_lower for w in MEASURABLE_WORDS):
score['measurable']['score'] += 2
elif word_count <= 6 and not has_vague:
score['measurable']['score'] += 1
score['measurable']['suggestions'].append("Add a clear completion criterion")
else:
score['measurable']['issues'].append("Unclear when this is 'done'")
score['measurable']['suggestions'].append("Define what 'complete' looks like")
# ATTAINABLE: Single task or project?
and_count = content_lower.count(' and ')
or_count = content_lower.count(' or ')
if and_count >= 2 or (and_count >= 1 and word_count > 10):
score['attainable']['issues'].append("Multiple actions bundled together")
score['attainable']['suggestions'].append("Split into separate tasks")
elif and_count == 1:
score['attainable']['score'] += 1
score['attainable']['suggestions'].append("Consider splitting into two tasks")
else:
score['attainable']['score'] += 2
# RELEVANT: Has a project assigned?
if task.get('is_inbox'):
score['relevant']['issues'].append("Still in Inbox - needs a home")
score['relevant']['suggestions'].append("Move to an appropriate project")
elif task.get('project_name'):
score['relevant']['score'] += 2
else:
score['relevant']['score'] += 1
# TIMEBOUND: Has a due date?
if task.get('due_date'):
score['timebound']['score'] += 2
elif task.get('priority', 1) >= 3: # High priority but no date
score['timebound']['issues'].append("High priority but no due date")
score['timebound']['suggestions'].append("Add a due date to create urgency")
else:
score['timebound']['score'] += 1
score['timebound']['suggestions'].append("Consider adding a due date")
# Calculate overall score
total_score = sum(c['score'] for c in score.values())
max_score = sum(c['max'] for c in score.values())
return {
'total_score': total_score,
'max_score': max_score,
'percentage': round(total_score / max_score * 100),
'criteria': score,
'needs_improvement': total_score < max_score * 0.6 # Below 60%
}
def analyze_all_smart(session_id: str):
conn = sqlite3.connect(DB_PATH)
conn.row_factory = sqlite3.Row
cur = conn.cursor()
cur.execute("""
SELECT i.id, i.content, i.description, i.project_id, i.due_date, i.priority,
p.name as project_name, p.is_inbox
FROM items i
LEFT JOIN projects p ON i.project_id = p.id
WHERE i.is_completed = 0 AND i.parent_id IS NULL
""")
tasks = cur.fetchall()
needs_improvement = []
well_formed = 0
for task in tasks:
analysis = analyze_smart(dict(task))
if analysis['needs_improvement']:
# Collect issues and suggestions
all_issues = []
all_suggestions = []
for criterion, data in analysis['criteria'].items():
if data['issues']:
all_issues.extend(data['issues'])
if data['suggestions'] and data['score'] < data['max']:
all_suggestions.extend(data['suggestions'][:1]) # Top suggestion per criterion
needs_improvement.append({
'id': task['id'],
'content': task['content'],
'project': task['project_name'],
'smart_score': analysis['percentage'],
'issues': all_issues[:3], # Top 3 issues
'suggestions': all_suggestions[:3], # Top 3 suggestions
'criteria_scores': {k: f"{v['score']}/{v['max']}" for k, v in analysis['criteria'].items()}
})
else:
well_formed += 1
# Store findings
if needs_improvement:
# Sort by score (worst first)
needs_improvement.sort(key=lambda x: x['smart_score'])
cur.execute("""
INSERT INTO triage_findings
(session_id, finding_type, details, suggested_action)
VALUES (?, 'smart_issues', ?, 'improve_clarity')
""", (session_id, json.dumps(needs_improvement)))
conn.commit()
print(f"SMART analysis: {well_formed} well-formed, {len(needs_improvement)} need improvement")
conn.close()
if __name__ == "__main__":
import sys
session_id = sys.argv[1] if len(sys.argv) > 1 else "test"
analyze_all_smart(session_id)
-- Count findings by type
SELECT finding_type, COUNT(*) as count
FROM triage_findings
WHERE session_id = '{SESSION_ID}'
GROUP BY finding_type
ORDER BY count DESC;
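A sketch for pulling these counts into Python to assemble the summary message (the function name is illustrative):

```python
"""Collect per-type finding counts for the triage summary (sketch)."""
import sqlite3

def finding_counts(db_path: str, session_id: str) -> dict:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        """SELECT finding_type, COUNT(*) FROM triage_findings
           WHERE session_id = ? GROUP BY finding_type ORDER BY COUNT(*) DESC""",
        (session_id,),
    ).fetchall()
    conn.close()
    return dict(rows)  # e.g. {'duplicate_exact': 4, 'stale_no_due': 15, ...}
```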
After analysis, present findings to user:
Use AskUserQuestion with:
- Header: "Triage Summary"
- Question: "Analysis complete! Found:\n• Q quick wins (2-5 min tasks)\n• B tasks need breakdown\n• X exact duplicates\n• Y fuzzy duplicates\n• Z stale tasks\n• W tasks need clarity (SMART)\n• N task clusters\n• P patterns to learn\n\nWhat would you like to tackle?"
- Options:
- "Quick wins (Q)" - Build momentum with easy tasks
- "Break down large (B)" - Split overwhelming tasks
- "Duplicates (X+Y)" - Handle duplicate tasks
- "Stale tasks (Z)" - Review old/overdue tasks
- "Improve clarity (W)" - Make tasks SMART
- "Task clusters (N)" - Group related tasks
- "Full triage" - Work through all categories
For exact duplicates, offer batch handling:
Use AskUserQuestion with:
- Header: "Exact Duplicates"
- Question: "Found X tasks with identical content:\n• 'Task A' (Project1, Project2)\n• 'Task B' (Inbox, Work)\n...\n\nHow should I handle these?"
- Options:
- "Keep newest, delete old" - Automatic resolution
- "Keep oldest, delete new" - Preserve original
- "Merge (combine notes)" - Keep one, merge descriptions
- "Review each" - Manual decision per duplicate
Present fuzzy matches individually or in small batches:
Use AskUserQuestion with (batch up to 4 questions):
- Question 1:
- Header: "Similar 1/N"
- Question: "'Buy groceries' vs 'Get groceries'\n85% similar | Both in Personal"
- Options: "Keep first", "Keep second", "Keep both", "Merge"
- Question 2: ...
Group stale tasks by category and offer batch actions:
Use AskUserQuestion with:
- Header: "Stale Tasks"
- Question: "Found 15 tasks older than 60 days with no due date:\n• 'Learn Spanish' (145 days)\n• 'Organize photos' (89 days)\n...\n\nQuick action?"
- Options:
- "Review list" - See all and decide individually
- "Move to Someday" - Defer all to backlog
- "Complete all" - Mark as done (they're probably done)
- "Delete all" - Remove from system
- "Skip" - Leave as-is
Use AskUserQuestion with:
- Header: "Missing Dates"
- Question: "'Finish quarterly report'\nPriority: P1 | Project: Work\n\nWhen should this be due?"
- Options:
- "Today"
- "Tomorrow"
- "This week"
- "Next week"
- "Custom..." (user types date)
- "Skip"
Identify tasks with common keywords and suggest organization:
Use AskUserQuestion with:
- Header: "Grouping"
- Question: "Found 6 tasks about 'taxes' scattered across projects:\n• 'Gather tax documents' (Finances)\n• 'File state taxes' (Inbox)\n...\n\nCreate a 'Taxes' project?"
- Options:
- "Create project" - New project, move all
- "Create section" - Add section in existing project
- "Add label" - Tag with #taxes
- "Skip" - Leave scattered
Tasks created in rapid succession (within 10 minutes) are often related. Present these clusters for grouping:
Use AskUserQuestion with:
- Header: "Task Cluster"
- Question: "Found 5 tasks created together on Jan 5 at 2:30 PM:\n• 'Review Q1 budget' (Work)\n• 'Update forecast spreadsheet' (Inbox)\n• 'Email Sarah re: projections' (Inbox)\n• 'Schedule finance meeting' (Work)\n• 'Check last year's numbers' (Inbox)\n\nThese look related. Group them?"
- Options:
- "Create project" - New project for all (ask for name)
- "Move to Work" - All belong in existing Work project
- "Add label" - Tag all with a common label
- "Leave as-is" - They're organized correctly
- "Review individually" - Decide per task
If user says "Create project", follow up:
Use AskUserQuestion with:
- Header: "Project Name"
- Question: "What should I call this project?\nBased on the tasks, suggestions:\n• 'Q1 Budget Review'\n• 'Finance Planning'\n• 'Budget 2026'"
- Options:
- "Q1 Budget Review" - Use suggested name
- "Finance Planning" - Use suggested name
- "Custom..." - I'll type my own name
Ask the user about discovered patterns to build organizational rules for future sessions:
People Names:
Use AskUserQuestion with:
- Header: "Person: Sarah"
- Question: "I noticed 'Sarah' appears in 4 tasks:\n• 'Email Sarah re: projections'\n• 'Call Sarah about timeline'\n• 'Sarah's feedback on draft'\n• 'Lunch with Sarah'\n\nWho is Sarah? This helps me organize related tasks."
- Options:
- "Work colleague" - Tasks mentioning Sarah → Work project
- "Personal friend/family" - Tasks mentioning Sarah → Personal
- "Client/customer" - Create @sarah label for tracking
- "Multiple contexts" - Sarah appears in different areas
- "Skip" - Don't create a rule
Recurring Concepts:
Use AskUserQuestion with:
- Header: "Concept: website"
- Question: "The word 'website' appears in 6 tasks across Work, Inbox, and Side Project:\n• 'Update website copy'\n• 'Fix website bug'\n• 'Website analytics review'\n\nShould tasks about 'website' have special handling?"
- Options:
- "Move to specific project" - All website tasks → one project
- "Add label" - Tag with @website
- "It's too broad" - Website relates to multiple things
- "Skip" - Don't create a rule
Action Patterns:
Use AskUserQuestion with:
- Header: "Pattern: Call tasks"
- Question: "You have 8 tasks starting with 'Call':\n• Call dentist\n• Call mom\n• Call insurance company\n• Call John about project\n\nWant to organize 'Call' tasks specially?"
- Options:
- "Add @phone label" - Tag all call tasks
- "Add @calls label" - Tag all call tasks
- "Group by context" - Work calls vs personal calls
- "No pattern" - They're unrelated
Store Learned Patterns:
After user confirms patterns, record them in TRIAGE_MEMORY.md:
# In TRIAGE_MEMORY.md - Known Patterns section
known_people:
Sarah:
context: work
action: move_to_project
project: "Work"
Mom:
context: personal
action: add_label
label: "@family"
known_concepts:
website:
action: add_label
label: "@website"
taxes:
action: move_to_project
project: "Finances"
action_rules:
call:
action: add_label
label: "@phone"
email:
action: none # Too common to be useful
Based on all decisions made during triage, suggest auto-rules:
Use AskUserQuestion with:
- Header: "Auto-Rules"
- Question: "Based on today's decisions, should I remember these rules?\n\n• Tasks mentioning 'Sarah' → Work project\n• Tasks about 'taxes' → Finances project\n• Tasks starting with 'Call' → Add @phone label\n\nThese will apply automatically in future sessions."
- Options:
- "Save all rules" - Remember everything
- "Review each" - Let me approve individually
- "Save none" - Don't auto-organize
EF Principle: Starting is the hardest part. Quick wins build dopamine and momentum.
Present small, actionable tasks the user can knock out quickly:
Use AskUserQuestion with:
- Header: "🚀 Quick Wins"
- Question: "Here are 5 tasks you could finish in the next 15 minutes:\n\n1. 'Call dentist' (Personal)\n2. 'Reply to John's email' (Work)\n3. 'Order printer paper' (Shopping)\n4. 'Schedule haircut' (Personal)\n5. 'Pay electric bill' (Finances)\n\nWant to knock some out now? I can help you batch similar ones."
- Options:
- "Show me all quick wins" - See the full list
- "Batch by type" - Group calls, emails, etc.
- "Start with #1" - Let's do it now
- "Add to Today" - Mark all as due today
- "Skip momentum" - Move to other triage
If user chooses "Batch by type":
Use AskUserQuestion with:
- Header: "Batch Tasks"
- Question: "I grouped your quick wins by action type:\n\n📞 Calls (3): dentist, insurance, mom\n📧 Emails (2): reply to John, send invoice\n🛒 Orders (2): printer paper, vitamins\n\nWhich batch do you want to tackle?"
- Options:
- "📞 Calls first" - 3 tasks, ~10 min total
- "📧 Emails first" - 2 tasks, ~5 min total
- "🛒 Orders first" - 2 tasks, ~5 min total
- "Do all today" - Mark all as due today
After completing quick wins, celebrate progress:
Use AskUserQuestion with:
- Header: "Progress! 🎉"
- Question: "You just cleared 3 tasks in one session!\n\nTotal today: 3 tasks completed\nMomentum: Building\n\nKeep going or take a break?"
- Options:
- "Keep going" - More quick wins
- "Bigger tasks" - Ready to tackle something larger
- "Take a break" - Done for now
EF Principle: Vague tasks create paralysis. Concrete next actions enable progress.
For tasks flagged as "too large" or "project-sized":
Use AskUserQuestion with:
- Header: "Break Down Task"
- Question: "'Plan vacation to Japan'\n\nThis looks like a project, not a task. It has:\n• Vague verb: 'plan'\n• Multiple implied steps\n• No clear completion point\n\nLet me help you break it down. What's the VERY NEXT physical action?"
- Options:
- "Research flights" - Search dates/prices
- "Pick dates" - Check calendar first
- "Ask travel buddy" - Coordinate with someone
- "Help me think" - Walk me through it
If user chooses "Help me think":
Use AskUserQuestion with:
- Header: "Next Action"
- Question: "Let's find the next action for 'Plan vacation to Japan'.\n\nImagine you're about to work on this. You sit down and...\n\nWhat's the first thing you'd do? (Be specific - 'research' is too vague)"
- Options:
- "Open browser" - Search for flight prices
- "Open calendar" - Check available dates
- "Text friend" - Ask about their schedule
- "Other..." - I'll describe it
After identifying subtasks:
Use AskUserQuestion with:
- Header: "Create Subtasks"
- Question: "Great! Here's your breakdown for 'Plan vacation to Japan':\n\n1. Check calendar for possible weeks in March\n2. Text Alex about her availability\n3. Search flights on Google Flights\n4. Research Tokyo neighborhoods\n5. Make list of must-see places\n\nShould I create these as subtasks?"
- Options:
- "Create all" - Add as subtasks under main task
- "Create as project" - Make 'Japan Vacation' project
- "Edit list first" - Let me adjust
- "Just first one" - Only create the next action
EF Principle: Clear tasks are easier to start. Vague tasks cause avoidance.
Present tasks with low SMART scores for improvement:
Use AskUserQuestion with:
- Header: "Improve Task"
- Question: "'Work on website stuff'\n\nSMART Score: 30% (needs work)\n• ❌ Specific: 'stuff' is vague\n• ❌ Measurable: No clear completion\n• ⚠️ Attainable: Might be too broad\n• ✓ Relevant: In correct project\n• ❌ Timebound: No due date\n\nLet's make this clearer. What specific outcome do you want?"
- Options:
- "Fix the contact form" - Specific bug to fix
- "Update homepage copy" - Specific page to edit
- "Add new blog post" - Specific content to create
- "It's actually multiple things" - Help me split it
After clarifying:
Use AskUserQuestion with:
- Header: "Rewrite Task"
- Question: "How about this rewrite?\n\nOld: 'Work on website stuff'\nNew: 'Fix contact form not sending emails'\n\nSMART Score: 90%\n• ✓ Specific: Clear what to fix\n• ✓ Measurable: Form works or doesn't\n• ✓ Attainable: Single task\n• ✓ Relevant: In Website project\n• ⚠️ Timebound: Add due date?\n\nUpdate the task?"
- Options:
- "Update + add due date" - Fix it and set deadline
- "Just update text" - Change task name only
- "Different wording" - Let me suggest something
- "Keep original" - Leave as-is
For batch SMART improvements:
Use AskUserQuestion with:
- Header: "Clarity Review"
- Question: "Found 8 tasks below 60% SMART score. Want to improve them?\n\nWorst offenders:\n• 'Deal with taxes' (25%)\n• 'Figure out insurance' (30%)\n• 'Think about career' (35%)\n• 'Handle house stuff' (40%)\n\nThese are hard to start because they're unclear."
- Options:
- "Improve all 8" - Walk through each one
- "Just top 3" - Focus on worst ones
- "Quick rewrites" - I'll suggest, you approve
- "Skip clarity" - Leave them vague
EF Principle: Reduce cognitive load by automating repetitive decisions.
For tasks flagged as automation candidates:
Use AskUserQuestion with:
- Header: "Automate?"
- Question: "These tasks might not need to be tasks at all:\n\n• 'Remind me to take vitamins' → Use phone reminder\n• 'Check flight status' → Airline app notifications\n• 'Follow up on invoice' → Email scheduled send\n\nWant me to help set up automations?"
- Options:
- "Show me options" - Explain automation for each
- "Complete these" - They're done or not needed
- "Keep as tasks" - I prefer manual tracking
- "Skip" - Move on
If user wants automation help:
Use AskUserQuestion with:
- Header: "Vitamins Reminder"
- Question: "'Remind me to take vitamins'\n\nBetter handled by:\n• iPhone reminder (recurring daily)\n• Habit app (streaks for motivation)\n• Delete task (if reminder exists)\n\nWhat works for you?"
- Options:
- "Delete task" - I already have a reminder
- "Convert to recurring" - Make it a Todoist recurring task
- "Keep one-time" - This is a one-time reminder
- "Next task" - Show me the next automation candidate
Query all findings with user decisions:
SELECT
CASE user_decision
WHEN 'delete' THEN 'Delete'
WHEN 'complete' THEN 'Complete'
WHEN 'update' THEN 'Update'
WHEN 'move' THEN 'Move'
WHEN 'create' THEN 'Create'
END as action,
COUNT(*) as count
FROM triage_findings
WHERE session_id = '{SESSION_ID}' AND user_decision IS NOT NULL
GROUP BY user_decision;
Use AskUserQuestion with:
- Header: "Confirm"
- Question: "Ready to apply changes:\n• Delete: 8 duplicate tasks\n• Complete: 3 stale tasks\n• Move: 12 tasks to Someday\n• Update: 6 tasks with due dates\n• Create: 'Taxes' project\n\nProceed?"
- Options:
- "Apply all" - Execute changes
- "Dry run" - Show what would happen
- "Review again" - Go back to triage
- "Cancel" - Discard all
Group changes by operation type and execute in batches:
Complete tasks:
Use mcp__todoist__complete-tasks with ids array (up to 50 per call)
Update tasks (due dates, projects, priorities):
Use mcp__todoist__update-tasks with tasks array containing:
- id: task ID
- dueString: "tomorrow", "next week", etc.
- projectId: new project ID
- priority: "p1", "p2", "p3", "p4"
Create projects:
Use mcp__todoist__add-projects with projects array
Delete tasks (with caution):
Use mcp__todoist__delete-object with type="task" and id
Only after explicit user confirmation
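Since complete-tasks accepts at most 50 ids per call, batch ids before applying. A minimal chunking sketch:

```python
def chunk(ids: list, size: int = 50):
    """Yield successive batches of at most `size` task ids."""
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

task_ids = [f"task_{n}" for n in range(120)]  # illustrative ids
print(f"{len(task_ids)} tasks -> {len(list(chunk(task_ids)))} calls")  # 120 tasks -> 3 calls
```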
| Error | Recovery |
|---|---|
| 403 Forbidden | Project at 300-task limit - suggest sub-project |
| 429 Too Many Requests | Wait 60 seconds, retry |
| Task not found | Skip, already deleted |
After successful MCP operations, update SQLite to reflect changes:
-- Mark applied findings
UPDATE triage_findings
SET applied_at = datetime('now')
WHERE session_id = '{SESSION_ID}' AND user_decision IS NOT NULL;
-- Update or delete items based on actions
-- (handled per-operation in the apply loop)
Append session results to memory file:
### Session {SESSION_ID} - {DATE}
**Changes applied:**
- Deleted N duplicates
- Completed N stale tasks
- Updated N tasks with due dates
- Created "Project Name" project
**User decisions:**
- "Keep both 'Review budget' tasks" (Work and Personal separate)
- Preferred threshold for stale: 60 days
**Patterns learned:**
- Tasks matching `tax.*` → move to Finances
The skill reads configuration from TRIAGE_MEMORY.md:
# Thresholds
stale_thresholds:
no_due_date_days: 60
overdue_days: 14
inbox_days: 7
# Exclusions - never flag these
excluded_projects:
- "Someday/Maybe"
- "Reference"
excluded_labels:
- "@waiting"
- "@recurring"
# Duplicate handling default
duplicate_default: "keep_newest" # keep_newest, keep_oldest, merge, review
# Fuzzy match threshold (0-100)
similarity_threshold: 80
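Since the config is YAML embedded in a markdown file, it needs a light extraction pass. A minimal sketch, assuming the block sits in a fenced yaml section of TRIAGE_MEMORY.md and PyYAML is installed:

```python
"""Extract and parse the config block from TRIAGE_MEMORY.md (sketch)."""
import re
from pathlib import Path

import yaml  # pip install pyyaml

def load_triage_config(memory_path: str) -> dict:
    text = Path(memory_path).read_text()
    match = re.search(r"`{3}yaml\n(.*?)`{3}", text, re.DOTALL)
    if not match:
        return {}  # fall back to the defaults baked into the analysis SQL
    return yaml.safe_load(match.group(1)) or {}

config = load_triage_config("TRIAGE_MEMORY.md")
stale_days = config.get("stale_thresholds", {}).get("no_due_date_days", 60)
```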
| Error | Cause | Recovery |
|---|---|---|
| 1Password 401 | Session expired | Run op signin |
| Todoist API 401 | Token invalid | Check 1Password item |
| Todoist API 429 | Rate limit | Wait 60s, retry |
| SQLite locked | Concurrent access | Retry with backoff |
| Empty sync response | No changes | Normal for incremental |
| MCP 403 | Project limit (300) | Suggest sub-project |
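For the "retry with backoff" recovery, a generic helper (a sketch; sqlite3.OperationalError is what SQLite raises when the database is locked):

```python
"""Open the triage database, retrying with exponential backoff if locked (sketch)."""
import sqlite3
import time

def connect_with_backoff(db_path: str, attempts: int = 5) -> sqlite3.Connection:
    delay = 0.5
    for attempt in range(attempts):
        try:
            conn = sqlite3.connect(db_path, timeout=5)
            conn.execute("SELECT 1")  # force the connection to touch the file
            return conn
        except sqlite3.OperationalError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
            delay *= 2  # 0.5s, 1s, 2s, 4s between attempts
```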