Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
Systematically debugs issues by enforcing root cause investigation before proposing any fixes.
Install: `npx claudepluginhub harmaalbers/claude-requirements-framework`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
References:
- references/condition-based-waiting.md
- references/defense-in-depth.md
- references/root-cause-tracing.md

Random fixes waste time and create new bugs. Quick patches mask underlying issues.
Core principle: ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
Violating the letter of this process is violating the spirit of debugging.
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
If you haven't completed Phase 1, you cannot propose fixes.
Use for ANY technical issue:
Use this ESPECIALLY when:
Don't skip when:
You MUST complete each phase before proceeding to the next.
BEFORE attempting ANY fix:
Read Error Messages Carefully
Reproduce Consistently
Check Recent Changes
Gather Evidence in Multi-Component Systems
WHEN system has multiple components (CI → build → signing, API → service → database):
BEFORE proposing fixes, add diagnostic instrumentation:
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify failing component
THEN investigate that specific component
Example (multi-layer Python system):

```python
# Layer 1: Entry point
logger.debug("=== Request received ===")
logger.debug(f"Headers: {dict(request.headers)}")

# Layer 2: Service layer
logger.debug(f"=== Service called with: {params}")
logger.debug(f"Config state: {config.as_dict()}")

# Layer 3: Data layer
logger.debug(f"=== Query: {query}")
logger.debug(f"Connection: {conn.status}")
```
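A more self-contained way to instrument component boundaries is a small decorator that logs what enters and exits each layer. This is a sketch, not part of the skill's references; `service_layer` and its payload are hypothetical:

```python
import functools
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("boundary")

def trace_boundary(func):
    """Log arguments entering and the value exiting a component boundary."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logger.debug("enter %s args=%r kwargs=%r", func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logger.debug("exit  %s -> %r", func.__name__, result)
        return result
    return wrapper

@trace_boundary
def service_layer(params):  # hypothetical component
    return {"ok": True, "params": params}

service_layer({"user_id": 7})
```

Decorating each boundary function this way yields a single log run that shows where data first goes wrong, instead of guessing from the final error.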
Trace Data Flow
WHEN error is deep in call stack:
See root-cause-tracing.md in this skill's references for the complete backward tracing technique.
Quick version:
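The quick version of backward tracing can be sketched like this (all function and variable names are hypothetical):

```python
# Symptom: TypeError deep in the stack because `price` is None.
# Trace backward: at each frame ask "where did this value come from?"
# until you reach the frame that first produced the bad value.

def load_price(item):          # Frame 3: the original trigger --
    return item.get("price")   # silently returns None when the key is missing

def apply_discount(price):     # Frame 2: passes the bad value along
    return price * 0.9         # the TypeError surfaces here

def checkout(item):            # Frame 1: entry point
    return apply_discount(load_price(item))

try:
    checkout({"name": "widget"})   # no "price" key
except TypeError:
    # The error appears in apply_discount, but the root cause is
    # load_price returning None. Fix the source, not the symptom.
    pass
```

The fix belongs in `load_price` (validate or fail loudly), not in `apply_discount` where the symptom happened to surface.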
Find the pattern before fixing:
Find Working Examples
Compare Against References
Identify Differences
Understand Dependencies
Scientific method:
Form Single Hypothesis
Test Minimally
Verify Before Continuing
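To illustrate the form-one-hypothesis, test-minimally loop: isolate the suspect function and feed it exactly one input, nothing else. The function and input here are hypothetical stand-ins:

```python
from datetime import datetime

def parse_date(s):
    # Hypothetical function under suspicion.
    return datetime.strptime(s, "%Y-%m-%d")

# Hypothesis: parse_date() fails on date strings carrying an offset suffix.
# Minimal test: one direct call with one such input.
try:
    parse_date("2024-01-15+00:00")
    print("hypothesis rejected")
except ValueError:
    print("hypothesis confirmed: offset suffix breaks parsing")
```

One hypothesis, one input, one observation; if the hypothesis is rejected, form a new one rather than stacking speculative fixes.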
When You Don't Know
Fix the root cause, not the symptom:
Create Failing Test Case
See the requirements-framework:test-driven-development skill for writing proper failing tests.
Implement Single Fix
Verify Fix
If Fix Doesn't Work
If 3+ Fixes Failed: Question Architecture
Pattern indicating architectural problem:
STOP and question fundamentals:
Discuss with your human partner before attempting more fixes
If you catch yourself thinking:
ALL of these mean: STOP. Return to Phase 1.
If 3+ fixes failed: Question the architecture (see Phase 4, Step 5)
pdb / breakpoint():

```python
# Add a breakpoint in code
breakpoint()  # Drops into interactive debugger
```

```shell
# Or use pytest with debugger
pytest tests/test_file.py -s --pdb  # Break on failure
```
pytest verbose output:

```shell
pytest tests/test_file.py -v --tb=long   # Full tracebacks
pytest tests/test_file.py -v --tb=short  # Compact tracebacks
pytest tests/test_file.py -x             # Stop on first failure
```
traceback analysis:

```python
import traceback

try:
    risky_operation()
except Exception:
    traceback.print_exc()  # Full stack trace to stderr
```
| Excuse | Reality |
|---|---|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
| Phase | Key Activities | Success Criteria |
|---|---|---|
| 1. Root Cause | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY |
| 2. Pattern | Find working examples, compare | Identify differences |
| 3. Hypothesis | Form theory, test minimally | Confirmed or new hypothesis |
| 4. Implementation | Create test, fix, verify | Bug resolved, tests pass |
These techniques are part of systematic debugging and available in this skill's references:
- root-cause-tracing.md — Trace bugs backward through the call stack to find the original trigger
- defense-in-depth.md — Add validation at multiple layers after finding the root cause
- condition-based-waiting.md — Replace arbitrary timeouts with condition polling

Related skills:
- requirements-framework:test-driven-development — For creating the failing test case (Phase 4, Step 1)
- requirements-framework:verification-before-completion — Verify the fix worked before claiming success

When this skill completes, it auto-satisfies the debugging_systematic requirement (if enabled in your project). This is an opt-in requirement for projects that want debugging discipline enforced.