Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. Enforces root cause investigation through four phases: investigation, pattern analysis, hypothesis testing, and implementation. Prevents guess-and-check thrashing.
PURPOSE: Find and fix the root cause of bugs through disciplined investigation. Random fixes waste time and create new bugs. Quick patches mask underlying issues.
Core principle: ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
Violating the letter of this process is violating the spirit of debugging.
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
If you have not completed Phase 1, you cannot propose fixes.
Use for ANY technical issue:
Use this ESPECIALLY when:
Do not skip when:
You MUST complete each phase before proceeding to the next.
BEFORE attempting ANY fix:
Read Error Messages Carefully
Reproduce Consistently
Check Recent Changes
Gather Evidence in Multi-Component Systems
WHEN the system has multiple components (CI -> build -> signing, API -> service -> database, frontend -> API -> cache -> database):
BEFORE proposing fixes, add diagnostic instrumentation:
For EACH component boundary:
- Log what data enters the component
- Log what data exits the component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify the failing component
THEN investigate that specific component
Example (multi-layer system):
```
# Layer 1: Entry point
log("=== Request received ===")
log("Input:", sanitize(input))

# Layer 2: Processing
log("=== Processing layer ===")
log("Config loaded:", config.isValid)
log("Dependencies available:", checkDeps())

# Layer 3: External call
log("=== External call ===")
log("Request:", sanitize(request))
log("Response status:", response.status)
log("Response body:", sanitize(response.body))

# Layer 4: Output
log("=== Final output ===")
log("Result:", sanitize(result))
```
This reveals: Which layer fails (e.g., entry -> processing OK, processing -> external FAILS).
Trace Data Flow
WHEN the error is deep in a call stack:
Backward tracing technique:
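One way to sketch backward tracing: start at the frame that raised the error and add a cheap check at each caller, working toward the entry point, until you find the frame where the data is already wrong. The functions and data below are invented for illustration, not part of this skill.

```python
# Backward tracing sketch (all names are hypothetical).
# The crash surfaces in parse_price, but checks placed at each caller,
# working backward, show the bad data enters one frame earlier.

def parse_price(raw):                     # frame 3: where the error surfaces
    assert raw.strip(), f"empty price reached parser: {raw!r}"
    return float(raw)

def load_row(row):                        # frame 2: where the data goes wrong
    assert "price" in row, f"row missing price: {row}"
    return parse_price(row["price"])

def load_rows(rows):                      # frame 1: entry point, data still fine
    assert rows, "no rows reached loader"
    return [load_row(r) for r in rows]

rows = [{"price": "3.50"}, {"name": "widget"}]
try:
    load_rows(rows)
except AssertionError as e:
    # The first failing check marks the real boundary: a row without a
    # "price" key, not the parser that would eventually raise.
    print("first bad frame:", e)
```

The error alone pointed at the parser; tracing backward shows the root cause is a malformed row upstream.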
Find the pattern before fixing:
Find Working Examples
Compare Against References
Identify Differences
Understand Dependencies
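Comparing against a working reference can be done mechanically rather than by eye. A minimal sketch, using Python's standard `difflib` on two invented config snippets:

```python
# Diff a broken config against a known-working reference so every
# difference is listed explicitly. File contents here are invented.
import difflib

working = [
    "host = db.internal",
    "port = 5432",
    "timeout = 30",
]
broken = [
    "host = db.internal",
    "port = 5432",
    "timeout = 3",
]

# unified_diff marks removed reference lines with "-" and the broken
# version's lines with "+".
diff = list(difflib.unified_diff(working, broken,
                                 fromfile="working", tofile="broken",
                                 lineterm=""))
print("\n".join(diff))
```

Every `-`/`+` pair is a candidate difference to understand before touching anything.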
Scientific method:
Form Single Hypothesis
Test Minimally
Verify Before Continuing
When You Do Not Know
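A single hypothesis can be tested with the smallest possible input change. The sketch below uses an invented bug ("the formatter crashes on an empty list") and varies exactly one thing, with a control case alongside:

```python
# Hypothesis: "format_summary crashes when the list is empty."
# The function and bug are invented for illustration.
def format_summary(items):
    # buggy: assumes at least one item
    return f"{len(items)} items, first: {items[0]}"

# Minimal test of exactly one variable: empty input vs. one item.
try:
    format_summary([])
    confirmed = False
except IndexError:
    confirmed = True

# Control case: the non-empty input still works, so the empty list is
# the single variable that matters.
assert format_summary(["a"]) == "1 items, first: a"
print("hypothesis confirmed:", confirmed)
```

If the hypothesis is not confirmed, form a new one; do not stack a second change onto the same experiment.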
Fix the root cause, not the symptom:
Create Failing Test Case
Use the tdd skill for writing proper failing tests.
Implement Single Fix
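The create-test, single-fix, verify sequence can be sketched in miniature. The bug and names below are invented; the point is that the test fails before the fix and passes after it:

```python
# Phase 4 sketch (hypothetical bug: None entries in a price list).
def total(prices):
    return sum(prices)            # root cause: sum() chokes on None

def test_total_skips_missing_prices():
    assert total([1.0, None, 2.0]) == 3.0

# Step 1: the failing test reproduces the root cause.
failed_before = False
try:
    test_total_skips_missing_prices()
except TypeError:
    failed_before = True          # confirmed failing before the fix

# Step 2: one fix, aimed at the root cause (missing data), not the symptom.
def total(prices):
    return sum(p for p in prices if p is not None)

# Step 3: the same test now passes, with no other changes made.
test_total_skips_missing_prices()
```

One test, one fix, one verification: if the test still failed, that would be a rejected hypothesis, not a cue to pile on more changes.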
Verify Fix
Use the verify skill before claiming the fix works.
If Fix Does Not Work
If 3+ Fixes Failed: Question Architecture
Pattern indicating architectural problem:
STOP and question fundamentals:
Discuss with the user before attempting more fixes.
This is NOT a failed hypothesis -- this is a wrong architecture.
If you catch yourself thinking any of these, STOP and return to Phase 1:
| Red Flag | What It Means |
|---|---|
| "Quick fix for now, investigate later" | You are skipping root cause |
| "Just try changing X and see if it works" | You are guessing, not investigating |
| "Add multiple changes, run tests" | You cannot isolate what worked |
| "Skip the test, I'll manually verify" | Manual verification is unreliable |
| "It's probably X, let me fix that" | "Probably" means you do not know |
| "I don't fully understand but this might work" | Partial understanding = wrong fix |
| "Pattern says X but I'll adapt it differently" | Deviation without understanding = bugs |
| "Here are the main problems: [lists fixes]" | Proposing solutions before investigating |
| Proposing solutions before tracing data flow | You skipped the investigation |
| "One more fix attempt" (when already tried 2+) | 3+ failures = architectural problem |
| Each fix reveals new problem in different place | Architecture is wrong, not the code |
Watch for these redirections from the user:
When you see these: STOP. Return to Phase 1.
| Excuse | Reality |
|---|---|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes do not stick. Test first proves it. |
| "Multiple fixes at once saves time" | Cannot isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms does not equal understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, do not fix again. |
| Phase | Key Activities | Success Criteria |
|---|---|---|
| 1. Root Cause | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare, identify differences | Differences listed and understood |
| 3. Hypothesis | Form theory, test minimally, one variable at a time | Confirmed hypothesis or new one formed |
| 4. Implementation | Create failing test, single fix, verify | Bug resolved, tests pass, no regressions |
If systematic investigation reveals the issue is truly environmental, timing-dependent, or external:
But: 95% of "no root cause" cases are incomplete investigation.
Called by:
kickoff -- routes bugs through a light brainstorm, then here
Related skills:
tdd -- for creating the failing test case (Phase 4, Step 1)
verify -- for verifying the fix worked before claiming success
From debugging sessions: