Use when investigating bugs, errors, failing tests, or unexpected behavior. Enforces systematic root-cause investigation BEFORE attempting any fix. Invoke this skill before writing any fix code.
Install: `npx claudepluginhub alexiolan/craft-skills --plugin craft-skills`. This skill uses the workspace's default tool permissions.
Systematic root-cause debugging. No fixes without investigation first.
Performs root cause analysis for bugs by tracing errors through code, analyzing stack traces, forming and testing hypotheses, then hands off to fix. Auto-triggers on stack traces.
Mandates invoking relevant skills via tools before any response in coding sessions. Covers access, priorities, and adaptations for Claude Code, Copilot CLI, Gemini CLI.
Do NOT attempt any fix until you have completed Phase 1 (Root Cause Investigation) and have a clear hypothesis for what is wrong and why. Guessing at fixes wastes time and can introduce new bugs.

Before touching any code:
- `git log --oneline -20` — what changed recently?
- `git diff HEAD~5` — any suspicious changes?

Use the graph → LLM → manual priority:
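The recency check above can be sketched as a self-contained snippet. The throwaway repository and commit messages below are invented purely so the example runs anywhere; in practice you would run the two git commands directly in the project you are debugging:

```shell
set -e
# Build a disposable repo so the snippet is runnable in isolation.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "feat: add cache layer"
git -c user.email=dev@example.com -c user.name=dev commit -q --allow-empty -m "fix: tweak invalidation"

# What changed recently? Newest commits first.
git log --oneline -20

# Any suspicious changes in the last commits? (empty here: the demo commits touch no files)
git diff HEAD~1
```

A commit whose message or timing lines up with when the bug first appeared is your strongest initial lead.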
Step 1 — Graph maps the territory (if code-review-graph available): First, ensure the graph is fresh — run build_or_update_graph_tool (incremental, fast if already current). Then use get_impact_radius_tool or query_graph_tool with callers_of/callees_of/imports_of on the suspect file. This instantly returns the full dependency chain — all callers, callees, and impacted files — without reading a single file. Do NOT use get_architecture_overview_tool, list_communities_tool, or detect_changes_tool — all three can overflow context (90-300K chars). Use targeted queries only.
Step 2 — LLM reads the code (MANDATORY). Check LM Studio first (Bash tool, wait for result):
CRAFT_SCRIPTS=$(find ~/.claude/plugins -name "llm-agent.sh" -path "*/craft-skills/*" -exec dirname {} \; 2>/dev/null | head -1) && curl -s --max-time 2 ${LLM_URL:-http://127.0.0.1:1234} > /dev/null 2>&1 && echo "LLM_AVAILABLE:$CRAFT_SCRIPTS" || echo "LLM_UNAVAILABLE"
If LLM_AVAILABLE, run with Bash tool (run_in_background: true, timeout 300000ms):
bash "$CRAFT_SCRIPTS/llm-agent.sh" "Read these files and find where data breaks: [2-3 key files from graph chain]. Report the data flow and any anomalies." <project-root>
Then unload the model: `bash "$CRAFT_SCRIPTS/llm-unload.sh"`. If LLM_UNAVAILABLE, read the key files directly. Filter out any false-positive findings about the plugin/skill files themselves.
Scoping rule: Always list specific file paths — never ask the agent to "explore" or "search the whole project." Broad prompts cause max-iteration failures.
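To illustrate the scoping rule, here is what a well-scoped prompt looks like next to a broad one. The file paths are hypothetical examples, not part of the skill:

```shell
# Good: explicit file list, bounded task. Paths below are invented for illustration.
PROMPT="Read these files and find where data breaks: \
src/api/client.ts, src/cache/store.ts, src/views/OrderList.tsx. \
Report the data flow and any anomalies."
echo "$PROMPT"

# Bad (do not do this): open-ended exploration triggers max-iteration failures.
# PROMPT="Explore the project and find the bug."
```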
The graph provides the map (which files matter); the agent provides the understanding (what the code does). Together they handle complex multi-service traces that neither could do alone: the graph keeps the agent from wandering, while the agent supplies code-level insight the graph can't. Claude's role is to interpret the findings, not to read the files.
Fallback — if graph unavailable: Use Grep to trace imports/exports of the suspect file manually, then pass those files to the LLM agent. Fallback — if LLM unavailable: Use graph results to identify the key 2-3 files, then read them directly. Fallback — if both unavailable: Manual trace with Grep + Read.
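A minimal sketch of the Grep fallback. The tiny fake project below exists only so the trace is self-contained; the file names and import style are invented:

```shell
# Stand-in project: a suspect module and one caller.
proj=$(mktemp -d)
mkdir -p "$proj/src"
cat > "$proj/src/cache.js" <<'EOF'
export function getCached(key) { return undefined; }
EOF
cat > "$proj/src/orders.js" <<'EOF'
import { getCached } from './cache';
EOF

# Who imports the suspect file? These callers join the file list handed to the agent.
grep -rn "from './cache'" "$proj/src"

# What does the suspect file itself export?
grep -n "export" "$proj/src/cache.js"
```

The same two greps, repeated on each caller found, reconstruct the dependency chain the graph would have given you in one query.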
Also check directly:
Follow the data from source to symptom:
API Response → Service Layer → Data Cache → Component/Handler → Rendered Output
Where in this chain does the data go wrong?
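One way to answer that question is to log the payload at every boundary: the first stage whose log looks wrong is where the data breaks. A minimal shell sketch of the idea, with stage names mirroring the chain above and a fabricated payload (the "component bug" is deliberate):

```shell
tmp=$(mktemp -d)

api_response()  { echo '{"price":"12.50"}'; }
service_layer() { tee "$tmp/after_service.log"; }   # pass data through, log a copy
data_cache()    { tee "$tmp/after_cache.log"; }
component()     { tr -d '"'; }                      # deliberate bug: strips the quotes

api_response | service_layer | data_cache | component > "$tmp/rendered.txt"

# Diff adjacent stages to localize the break.
diff -q "$tmp/after_service.log" "$tmp/after_cache.log" && echo "service -> cache: data intact"
grep -q '"price"' "$tmp/rendered.txt" || echo "data breaks at the component stage"
```

In a real codebase the `tee` calls become temporary log statements at each layer boundary; the localization logic is identical.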
Is this a known pattern?
| Pattern | Check |
|---|---|
| Architecture boundary violation | Import from another business module? |
| Data cache stale | Missing cache invalidation after mutation? |
| Validation gap | Schema doesn't match input fields? |
| Type mismatch | API model differs from app-side type? |
| Missing loading state | Data undefined during fetch? |
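The "data cache stale" row, for example, is easy to reproduce in miniature. A file-based sketch of the pattern and its fix; all names are invented:

```shell
tmp=$(mktemp -d)
echo "price=10" > "$tmp/db"       # source of truth
cp "$tmp/db" "$tmp/cache"         # populate the cache

echo "price=12" > "$tmp/db"       # mutation WITHOUT cache invalidation

# Symptom: reads served from the cache disagree with the source of truth.
if ! diff -q "$tmp/cache" "$tmp/db" >/dev/null; then
  echo "stale cache: cache=$(cat "$tmp/cache") db=$(cat "$tmp/db")"
fi

# Fix pattern: invalidate (or refresh) the cache after every mutation.
cp "$tmp/db" "$tmp/cache"
diff -q "$tmp/cache" "$tmp/db" >/dev/null && echo "cache consistent after invalidation"
```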
If rejected, return to Phase 1 with new information.
Only now do you write the fix:
After 3+ failed fix attempts:
Do NOT: