From code-review-graph
Plan and track implementation work using the Task DAG system. Create structured task trees linked to real code, enforce single-pipeline discipline, and generate handoff context for coders.
npx claudepluginhub demon24ru/code-review-graph

This skill uses the workspace's default tool permissions.
Use the Task DAG system to plan implementation work with full traceability to the code graph.
One root task at a time. Before creating a new root task, call task_get_active_root to check if a pipeline is already running. Only create a new root task when the current pipeline is fully done or archived.
Workflow phases — strictly in order:
- task_validate must pass (0 errors) before implementation
- implement in task_execution_order
- task_check_rollup to propagate completion up the tree

All creation calls use list-based batch mode — always pass a list, even for one item.
task_get_active_root() # confirm pipeline is idle
# Create root task
task_create(tasks=[{"title": "My Feature", "description": "..."}])
# Decompose in one call — all subtasks sharing the same parent
task_create(parent_id=root_id, tasks=[
{"title": "Auth module"},
{"title": "Token service", "description": "JWT-based"},
{"title": "Login endpoint"},
])
# Decompose + wire dependencies atomically — edges use 0-based task indices
task_create(parent_id=root_id, tasks=[
{"title": "OAuth interface"}, # index 0
{"title": "Google OAuth impl"}, # index 1
{"title": "JWT service"}, # index 2
{"title": "Login endpoint"}, # index 3
], edges=[
{"from": 1, "to": 0, "type": "depends_on"}, # Google OAuth needs interface
{"from": 3, "to": 0, "type": "depends_on"}, # Login needs interface
{"from": 3, "to": 2, "type": "depends_on"}, # Login needs JWT
])
# → tasks + edges created atomically; "edges" key in response shows created edges
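The `edges` indices are positions in the `tasks` list of the same call. A minimal sketch of how such indices resolve to real IDs (the function name and IDs here are illustrative, not the plugin's internals):

```python
def resolve_edges(created_ids, edges):
    """Translate 0-based task indices into the IDs of the tasks
    created in the same task_create call."""
    resolved = []
    for edge in edges:
        for key in ("from", "to"):
            if not 0 <= edge[key] < len(created_ids):
                raise IndexError(f"edge index out of range: {edge}")
        resolved.append({
            "source_id": created_ids[edge["from"]],
            "target_id": created_ids[edge["to"]],
            "type": edge.get("type", "depends_on"),
        })
    return resolved

# Same shape as the decomposition above: index 1 depends on index 0
ids = ["t10", "t11", "t12", "t13"]
print(resolve_edges(ids, [{"from": 1, "to": 0, "type": "depends_on"}]))
```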
Rules:
- split a task when its check_isolation score is < 0.5

Add multiple notes in one call — always pass a list, even for one note.
# All notes for a task in one call (typical after structured brainstorm interview)
note_add(task_id=task_id, notes=[
{"note_type": "decision", "content": "Use JWT", "status": "resolved",
"resolution": "JWT tokens", "rationale": "stateless"},
{"note_type": "question", "content": "WebSocket or polling?"},
{"note_type": "assumption", "content": "User model already exists"},
{"note_type": "constraint", "content": "self-hosted only"},
{"note_type": "risk", "content": "Token refresh race condition"},
])
# Single note
note_add(task_id=task_id, notes=[
{"note_type": "constraint", "content": "No external SaaS dependencies"}
])
- note_list(task_id, include_parent=True) — include the parent task's notes
- note_list(task_id, include_children=True) — include notes from child tasks

When you have open questions or unverified assumptions that require human input, use the Questions UI instead of waiting in the chat session. This decouples LLM work from human response time.
Note lifecycle:
open → answered → resolved / rejected / deferred
(user in UI) (LLM validates, updates DAG)
- open — question or assumption added, no response yet
- answered — user wrote a response in the web UI (LLM not yet involved)
- resolved — LLM validated and accepted; DAG may have been updated
- rejected — LLM or user determined the note was invalid/incorrect
- deferred — postponed, not blocking current work

# Auto-detect active root task
code-review-graph questions
# Specify task and port
code-review-graph questions --task t1 --port 6234 --no-browser
The server opens http://localhost:6234 with three sections.
Answers are persisted to SQLite instantly. The session can close — answers survive.
At the start of every session, call task_roadmap(). The attention.answered_notes field lists notes that need processing:
task_roadmap()
→ attention.answered_notes: [
{ id: "n3", note_type: "question",
content: "WebSocket or polling?",
resolution: "Let's use SSE — simpler than WS, more reliable than polling",
task_id: "t1" },
{ id: "n7", note_type: "assumption",
content: "User model has email field",
resolution: "No! Users have phone only, no email",
task_id: "t1" }
]
For each answered note, you MUST:
note_update(note_id, status="resolved"/"rejected", rationale="...")

Decision table:
| Answer type | Example | LLM action |
|---|---|---|
| Simple choice | "Use JWT" | note_update(resolved) + add decision note |
| Design clarification | "SSE not WebSocket" | note_update(resolved) + possibly rename/edit tasks |
| Rejected assumption | "No email field" | note_update(rejected) + task_search for affected tasks + task_archive dead paths + add new questions |
| Direction change | "No templates, hardcode" | note_update(resolved) + task_archive(subtree) + simplify dependencies |
| Scope expansion | "Also needs offline PWA" | note_update(resolved) + task_create new subtasks |
task_validate() will warn if there are unprocessed answered notes. Process all answered notes before marking work ready for implementation.
note_list(task_id=root_id, status="answered", include_children=True)
# → process each, then mark resolved/rejected
note_update(note_id, status="resolved", resolution="Accepted: SSE", rationale="...")
note_update(note_id, status="rejected", resolution="User has no email", rationale="Invalidates t2, t8")
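The decision table can be condensed into a dispatch map. The keys and action labels below are our shorthand for the tool calls in the table, purely illustrative — the strings are not real APIs:

```python
# Shorthand labels for the follow-up tool calls in the decision table above.
TRIAGE = {
    "simple_choice":        ["note_update:resolved", "note_add:decision"],
    "design_clarification": ["note_update:resolved", "task_update:rename_or_edit"],
    "rejected_assumption":  ["note_update:rejected", "task_search:affected",
                             "task_archive:dead_paths", "note_add:new_questions"],
    "direction_change":     ["note_update:resolved", "task_archive:subtree",
                             "task_remove_edge:simplify"],
    "scope_expansion":      ["note_update:resolved", "task_create:subtasks"],
}

def plan_actions(answer_type):
    """Return the follow-up actions for one answered note."""
    if answer_type not in TRIAGE:
        raise ValueError(f"unknown answer type: {answer_type!r}")
    return TRIAGE[answer_type]
```

Note that only a rejected assumption flips the note to `rejected`; every other answer type resolves the note and then touches the DAG.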
Rule: 1 task = 1 coherent logical change, NOT 1 task = 1 code node.
Too coarse: "Add OAuth" → 50+ nodes → noise
Too fine: "Add expires_at" → 1 node → 50 tasks → management hell
Sweet spot: "JWT token service" → 3-8 nodes → useful, manageable
Target 3–8 code nodes per leaf task, across the ref types (creates / modifies for code being written or changed, reads for dependencies).

Code links go only on leaf tasks. Parent/mid-level tasks are grouping containers — no direct code refs (task_validate warns if violated).
Handoff levels:
- task_export(mid_task_id) — sees all leaf subtasks, contracts, notes, plus subtask_code_refs_summary (rollup: how many unique code nodes all leaves collectively touch, which leaves have no code refs yet)
- task_export(leaf_task_id) — sees exact nodes, line ranges, acceptance criteria

# Single symbol
semantic_search_nodes_tool(query="create_task", kind="Function")
→ { id: 655, qualified_name: "code_review_graph/tasks.py::create_task",
line_start: 100, line_end: 162, params: "(...)", ... }
# Multiple symbols in one call — multi-word = FTS5 OR, single round-trip
semantic_search_nodes_tool(query="create_task add_task_edge move_task archive_task add_note")
# Filter to specific file
semantic_search_nodes_tool(query="create", file_path="tasks.py")
# All functions in a file (structural, not text-based)
query_graph_tool(pattern="children_of", target="code_review_graph/tasks.py")
Use line_end to read function bodies efficiently.
MCP search returns line_start and line_end for every node. Use them to make
a targeted read instead of loading the whole file:
# GOOD: read only the function you need
Read(filePath="tasks.py", offset=line_start, limit=line_end - line_start + 1)
# BAD: reading the whole file to find where a function ends bloats input context
# significantly while output context stays the same — avoid on large codebases
Read(filePath="tasks.py")
MCP does NOT replace grep for body content. semantic_search_nodes_tool only
indexes declarations (name, signature, params). To search inside function bodies —
use Grep or ripgrep directly.
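The targeted-read pattern above is plain line slicing. A self-contained sketch (the helper name is ours, not a plugin API; the temp file stands in for tasks.py):

```python
import tempfile

def read_span(path, line_start, line_end):
    """Return only lines line_start..line_end (1-based, inclusive), mirroring
    Read(offset=line_start, limit=line_end - line_start + 1)."""
    with open(path) as f:
        return "".join(f.readlines()[line_start - 1:line_end])

# Demo on a throwaway file standing in for a large source file
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as tmp:
    tmp.write("line1\nline2\nline3\nline4\n")

print(read_span(tmp.name, 2, 3))  # only lines 2-3, not the whole file
```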
Always pass a list, even for one node. Both code_node_id (int) and qualified_name (str) can be mixed.
Use values directly from semantic_search_nodes_tool — no extra lookup needed.
# Single node
task_link_code(task_id=task_id, links=[
{"ref_type": "modifies", "code_node_id": 1791}
])
# Typical leaf task (3-8 nodes in one call)
task_link_code(task_id=task_id, links=[
{"ref_type": "modifies", "code_node_id": 1791},
{"ref_type": "modifies", "qualified_name": "src/auth.py::TokenModel"},
{"ref_type": "reads", "qualified_name": "src/config.py::JWTConfig"},
{"ref_type": "creates", "qualified_name": "src/auth.py::TokenResponse",
"description": "new response schema"},
])
# Returns: {task_id, success_count, error_count, total, linked[], errors[]}
# Partial failures do NOT abort — errors collected, valid items linked.
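The partial-failure contract can be sketched as follows; `known_nodes` stands in for the code graph and is our invention, not the plugin's logic:

```python
def link_batch(known_nodes, links):
    """Collect errors instead of aborting: valid links land, invalid ones are
    reported, and the return value mirrors the response shape documented above."""
    linked, errors = [], []
    for link in links:
        ref = link.get("code_node_id", link.get("qualified_name"))
        if ref in known_nodes:
            linked.append(link)
        else:
            errors.append({"link": link, "error": f"unknown code node: {ref}"})
    return {
        "success_count": len(linked),
        "error_count": len(errors),
        "total": len(links),
        "linked": linked,
        "errors": errors,
    }
```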
ref_type guide:
| ref_type | When to use |
|---|---|
| modifies | Changing existing function/class |
| creates | Adding new function/class |
| reads | Reading/querying only |
| deletes | Removing code |
| tests | Adding/updating tests |
Contracts capture data structures, interfaces, and APIs — even before they exist in code.
# New design entity (doesn't exist yet)
contract_add(
name="OAuthToken",
contract_type="schema", # schema | interface | api | event | data_format
definition="{ access_token: str, refresh_token: str, expires_at: datetime }",
scope_task_id=root_id,
provider_task_id=t3_id, # who creates this
consumer_task_ids=[t5_id, t6_id] # who uses it
)
# Modifying existing code (link to real code node)
contract_add(
name="User_extended",
contract_type="schema",
definition="Add oauth_provider: str, oauth_id: str",
scope_task_id=root_id,
qualified_name="code_review_graph/graph.py::GraphStore" # existing code
)
# Add participants after decomposition
contract_link(contract_id, task_id, role="consumer")
contract_unlink(contract_id, task_id)
# contract_link returns changed:True (new) or changed:False + reason (already linked)
# contract_unlink returns changed:True (removed) or changed:False + reason (wasn't linked)
Find all contracts in a brainstorm:
contract_list(scope_task_id=root_id) # all including orphans
contract_list(task_id=leaf_id) # contracts for specific task
contract_list(name="OAuthToken") # find by name
When to use inline edges vs task_add_edge:
- edges= in task_create — use at decomposition time (60% of cases). Atomic: tasks + edges in one call.
- task_add_edge — use post-factum when tasks already exist ("turns out t7 depends on t4").
- task_remove_edge — use when a dependency changes ("t5 no longer needs t3 after design change").
- To inspect edges: task_export(task_id), which returns the edges field; task_get_dag for the full subtree.

# Post-factum edge (tasks already exist) — always a list, even for one edge
task_add_edge(edges=[{"source_id": child_id, "target_id": blocker_id}],
edge_type="depends_on")
# Pattern A — one task depends on many (all children depend on the base interface)
task_add_edge(edge_type="depends_on", edges=[
{"source_id": google_oauth_id, "target_id": oauth_interface_id},
{"source_id": github_oauth_id, "target_id": oauth_interface_id},
{"source_id": login_endpoint_id, "target_id": oauth_interface_id},
])
# Pattern B — mixed edge types in one call (per-item edge_type overrides default)
task_add_edge(edge_type="depends_on", edges=[
{"source_id": t5_id, "target_id": t2_id},
{"source_id": t6_id, "target_id": t7_id, "edge_type": "shares_context"},
])
Edge types: depends_on | blocks | shares_context | conflicts_with | informs
Cycle detection is automatic — depends_on/blocks edges cannot form cycles (checked atomically).
Duplicate edges are idempotent — adding the same edge twice returns already_exists: true (no overwrite).
blocks edges are cross-checked against depends_on to prevent mutual-wait deadlocks.
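The reachability check behind cycle detection can be sketched in a few lines — a generic DFS under the convention "edge (source, target) means source depends_on target", not the plugin's actual implementation:

```python
from collections import defaultdict

def would_create_cycle(edges, new_edge):
    """Return True if adding new_edge=(source, target) would close a cycle,
    i.e. the new source is already reachable from the new target."""
    adj = defaultdict(list)
    for src, dst in edges:
        adj[src].append(dst)
    # DFS from the new target; reaching the new source means the edge closes a loop.
    stack, seen = [new_edge[1]], set()
    while stack:
        node = stack.pop()
        if node == new_edge[0]:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(adj[node])
    return False
```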
task_find_conflicts(root_task_id) # tasks sharing same code nodes
task_check_isolation(task_id) # isolation_score < 0.5 → too coupled, split
task_blast_radius(task_id, depth=2) # code impact radius (affected_nodes_count + uncovered_nodes)
task_execution_order(root_task_id) # parallelism-aware order (levels)
task_suggest_contracts(root_task_id) # hidden code dependencies without contracts
task_find_for_impact(file_paths) # open tasks in blast radius of changed files
Interpret check_isolation:
- status: "not_applicable" → task has no code refs yet (link code first)
- Summary includes both external_dependencies (callees this task calls) and external_dependents (callers of this task)
task_blast_radius response:
- affected_nodes_count — scalar count of BFS-reachable nodes (always present)
- uncovered_nodes — actionable list: nodes in blast radius not covered by any task
- include_affected_nodes=True — opt-in to get full affected_nodes list (can be large)
- status: "not_applicable" → task has no code refs linked yet

task_validate()  # auto-detects active root, runs 9 checks
Required: 0 errors before handing off to coder. Warnings are advisory.
Common errors to fix:
- Missing acceptance_criteria on leaf tasks → add via task_update
- Unprocessed answered notes → note_update(note_id, status="resolved", resolution="...")
- Unagreed contracts → contract_update(id, status="agreed")
- Leaf tasks without linked code → task_link_code(...) or justify in notes

task_export()  # full context for active root
task_export(task_id, include_analysis=True) # + isolation, conflicts, pipeline_state
task_roadmap() # progress snapshot + attention block
CLI equivalent (generates markdown file for humans + LLM):
code-review-graph task-report
task_update(task_id, status="in_progress")
task_update(task_id, status="done")
task_check_rollup(task_id) # check if parent can be closed
After all leaves are done:
task_validate() # confirm 0 errors still
task_archive(task_ids=[root_task_id], reason="Completed successfully")
Selective archiving — when changing approach mid-brainstorm (archive only what's no longer needed):
task_archive(reason="Switching to in-app only", task_ids=["t2", "t3", "t6"])
Preview before destructive operations — dry_run=True returns what WOULD be affected without executing:
task_delete(task_id, cascade=True, dry_run=True) # → {would_delete: [{id, title}], count: N}
task_archive(task_ids=[...], reason="...", dry_run=True) # → {would_archive: [{id, title}], count: N}
Restructuring — move a group of tasks to a new parent in one call:
task_move(new_parent_id=auth_group_id, task_ids=["t2", "t3", "t4"])
# "Which tasks touch AuthService?" (open tasks only by default)
task_find_by_code_node(code_node_id) # open_only=True by default
task_find_by_code_node(code_node_id, open_only=False) # include done/archived
# "What open tasks are in blast radius of my changes?"
task_find_for_impact(file_paths=["src/auth.py", "src/user.py"])
# "Which task pairs need contracts?"
task_suggest_contracts(root_task_id)
# "Is this task safe to implement alone?"
task_check_isolation(task_id)
# "What code will this task touch transitively?"
task_blast_radius(task_id, depth=2)
# "In what order should I implement leaves?"
task_execution_order(root_task_id) # returns parallel levels
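The parallelism-aware ordering can be sketched with Kahn-style leveling: tasks with no unmet dependencies form level 0, tasks they unblock form level 1, and so on. This is illustrative, not the plugin's code; the task names reuse the OAuth decomposition above:

```python
from collections import defaultdict

def execution_levels(tasks, depends_on):
    """Group tasks into levels; tasks inside one level can run in parallel.
    depends_on is a list of (source, target) pairs: source depends on target."""
    indegree = {t: 0 for t in tasks}
    dependents = defaultdict(list)
    for src, dst in depends_on:
        indegree[src] += 1
        dependents[dst].append(src)
    levels = []
    current = [t for t in tasks if indegree[t] == 0]
    while current:
        levels.append(sorted(current))
        nxt = []
        for done in current:
            for dep in dependents[done]:
                indegree[dep] -= 1
                if indegree[dep] == 0:
                    nxt.append(dep)
        current = nxt
    return levels

print(execution_levels(
    ["iface", "google", "jwt", "login"],
    [("google", "iface"), ("login", "iface"), ("login", "jwt")]))
```

Here the interface and the JWT service have no dependencies, so they land in level 0 together; the Google impl and the login endpoint become unblocked only once those are done.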
- task_create(tasks=[...], parent_id=...) — decompose
- task_add_edge(edges=[...], edge_type=...) — add dependencies
- task_link_code(task_id=.., links=[...]) — link code nodes
- task_move(task_ids=[...], new_parent_id=...) — restructure tree
- task_archive(task_ids=[...], reason=...) — selective archiving
- note_add(task_id=.., notes=[...]) — add brainstorm notes
- Handoff: mid-level review gets task_export(mid_task_id), coder gets task_export(leaf_task_id, include_analysis=True)
- task_validate — unlisted code refs are a warning
- note_list(include_children=True) to search decisions across the whole brainstorm
- task_suggest_code_links(task_id) — scored by keyword match count, already-linked nodes excluded, limit=20 default
- task_search(root_task_id, query="auth") finds tasks by title/description text
- task_get_dag(root_task_id) returns the full tree with all edges; use compact=True for {id, title, status, depth, parent_id} nodes (faster for large trees)
- task_delete(task_id) response includes deleted_tasks: [{id, title}] — confirm what was deleted
- A blocked task_create includes blocking_task_id for direct navigation to the blocking root
- code-review-graph questions lets users answer open questions asynchronously — survives session restarts; answers stored in SQLite
- Answered notes: after the user answers via UI, task_roadmap() shows them in attention.answered_notes; process each with note_update before task_validate passes cleanly
- Contract status auto-upgrades when the provider task status changes (draft→proposed→acknowledged→implemented)