Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes. Applies a four-phase framework (root cause investigation with quantitative tracking, pattern analysis, hypothesis testing, and implementation) to ensure understanding before attempting solutions.
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Random fixes waste time and create new bugs. Quick patches mask underlying issues.
Core principle: ALWAYS find root cause before attempting fixes. Symptom fixes are failure.
Violating the letter of this process is violating the spirit of debugging.
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
If you haven't completed Phase 1, you cannot propose fixes.
Use for ANY technical issue:
Use this ESPECIALLY when:
Don't skip when:
You MUST complete each phase before proceeding to the next.
BEFORE attempting ANY fix:
Shannon tracking: Record reproduction steps in Serena:
serena.write_memory(f"debugging/{bug_id}/reproduction", {
    "steps": ["Step 1", "Step 2", "Step 3"],
    "reproducible": True,  # or False
    "frequency": "100%",  # or "50%", "intermittent"
    "timestamp": ISO_timestamp
})
Shannon integration: Use git log with metrics:
git log --since="1 week ago" --oneline --stat
# Track: files_changed, lines_added, complexity_diff
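One way to turn that log into the tracked numbers, as a minimal sketch assuming git is on PATH (complexity_diff needs a separate tool and is omitted here):
import subprocess

def recent_change_metrics(since="1 week ago"):
    # --numstat prints "added<TAB>removed<TAB>path" per file; --format= drops commit headers
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    files, added = set(), 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) == 3 and parts[0].isdigit():  # skips binary files, which show "-"
            added += int(parts[0])
            files.add(parts[2])
    return {"files_changed": len(files), "lines_added": added}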
WHEN system has multiple components (CI → build → signing, API → service → database):
BEFORE proposing fixes, add diagnostic instrumentation:
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify failing component
THEN investigate that specific component
Example (multi-layer system):
# Layer 1: Workflow
echo "=== Secrets available in workflow: ==="
echo "IDENTITY: $([ -n "${IDENTITY:-}" ] && echo SET || echo UNSET)"
# Layer 2: Build script
echo "=== Env vars in build script: ==="
env | grep IDENTITY || echo "IDENTITY not in environment"
# Layer 3: Signing script
echo "=== Keychain state: ==="
security list-keychains
security find-identity -v
# Layer 4: Actual signing
codesign --sign "$IDENTITY" --verbose=4 "$APP"
This reveals which layer fails (secrets → workflow ✓, workflow → build ✗).
Shannon tracking: Save diagnostic results:
serena.write_memory(f"debugging/{bug_id}/diagnostics", {
"layer_results": {
"layer1": {"status": "PASS", "evidence": "..."},
"layer2": {"status": "FAIL", "evidence": "..."},
},
"failing_layer": "layer2",
"timestamp": ISO_timestamp
})
WHEN error is deep in call stack:
REQUIRED SUB-SKILL: Use shannon:root-cause-tracing for the backward-tracing technique
Quick version:
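A minimal illustration of the idea, with hypothetical layer names: log the value each layer receives, then read the output from the deepest frame upward; the first layer whose input is already wrong is where the root cause lives.
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("trace")

def validate_input(raw):
    log.debug("validate_input received: %r", raw)  # deepest layer: the error surfaces here
    return raw.strip()

def service(raw):
    log.debug("service received: %r", raw)
    return validate_input(raw)

def handler(raw):
    log.debug("handler received: %r", raw)  # outermost layer: the bad value entered here
    return service(raw)

# AttributeError surfaces in validate_input, but the logs show None
# entering at handler: the root cause is the caller, not the validator.
handler(None)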
Shannon enhancement: Quantitative trace tracking:
serena.write_memory(f"debugging/{bug_id}/trace", {
"trace_depth": 5, # How many layers traced
"root_cause_layer": "user_input_validation",
"trace_path": ["handler", "service", "validator", "input", "user"],
"time_to_trace": "15 minutes",
"timestamp": ISO_timestamp
})
Find the pattern before fixing:
Shannon integration: Use codebase_search:
# Find working examples automatically
working_examples = codebase_search(
query="working implementation of authentication",
target_directories=["src/"]
)
Shannon requirement: Must use forced-reading-protocol for reference docs
Shannon tracking: Quantify differences:
differences = {
"config_differences": 3,
"code_differences": 7,
"dependency_differences": 2,
"total_diff_lines": 45
}
Shannon integration: Use mcp-discovery skill to find relevant MCPs:
# Need database debugging? Find db MCPs
relevant_mcps = mcp_discovery.find_for_domain("database debugging")
Scientific method:
Shannon requirement: Quantify confidence:
hypothesis = {
"theory": "Missing environment variable causes auth failure",
"reasoning": "Layer 2 diagnostics show IDENTITY unset",
"confidence": 0.85, # 85% confidence (0.00-1.00)
"evidence_strength": "STRONG", # Based on diagnostic evidence
"timestamp": ISO_timestamp
}
serena.write_memory(f"debugging/{bug_id}/hypothesis", hypothesis)
Shannon tracking: Record test result:
test_result = {
    "hypothesis_id": hypothesis_id,
    "change_made": "Set IDENTITY env var in workflow",
    "outcome": "SUCCESS",  # or "FAILURE"
    "confidence_update": 0.95,  # raise toward 1.0 on success, drop toward ~0.10 on failure
    "timestamp": ISO_timestamp
}
Shannon enhancement: Track attempt count:
attempts = get_attempt_count(bug_id) # From Serena
if attempts >= 3:
# CRITICAL: Question architecture (see Phase 4.5 below)
alert_architectural_smell(bug_id)
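get_attempt_count and alert_architectural_smell are not defined above; a sketch of what they might look like, assuming the serena.query_memory API used elsewhere in this document:
def get_attempt_count(bug_id):
    history = serena.query_memory(f"debugging/{bug_id}/*")
    return len([h for h in history if h.get("type") == "fix_attempt"])

def alert_architectural_smell(bug_id):
    # Stub: surface the Phase 4.5 stop signal however your workflow reports it
    print(f"[{bug_id}] 3+ failed fixes: stop and question the architecture (Phase 4.5)")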
Shannon integration: Use research MCPs:
# Use Tavily for best practices
research = tavily.search("best practices for X")
# Use Context7 for library docs
docs = context7.get_library_docs("library/name")
# Save research to Serena
serena.write_memory(f"debugging/{bug_id}/research", research)
Fix the root cause, not the symptom:
Shannon requirement: Test must use real systems:
// ✅ GOOD: Real system test
test('auth fails with missing env var', async () => {
// DON'T mock - use real auth system
const result = await realAuthSystem.authenticate(credentials);
expect(result.error).toBe('Missing IDENTITY');
});
// ❌ BAD: Mocked test
test('auth fails', () => {
const mock = jest.fn().mockRejectedValue(new Error('error'));
// This doesn't prove real system behavior
});
Shannon tracking: Record fix details:
fix = {
"root_cause": "Missing IDENTITY env var",
"fix_applied": "Added IDENTITY to workflow secrets",
"files_changed": 1,
"lines_changed": 3,
"fix_type": "configuration",
"timestamp": ISO_timestamp
}
Shannon requirement: Use verification-before-completion skill:
MANDATORY: Run all 3 validation tiers
- Tier 1 (Flow): Code compiles
- Tier 2 (Artifacts): Tests pass
- Tier 3 (Functional): Real system verification
NO claims without verification evidence.
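The gate can be scripted; a minimal sketch where `make build`, `make test`, and a smoke script stand in for the three tiers (all three commands are assumptions about the project, not part of the skill):
import subprocess

def run_tier(name, cmd):
    result = subprocess.run(cmd, shell=True)
    status = "PASS" if result.returncode == 0 else "FAIL"
    print(f"Tier {name}: {status}")
    return result.returncode == 0

# Run every tier even if an earlier one fails, so the evidence is complete
results = [run_tier(name, cmd) for name, cmd in [
    ("1 (Flow)", "make build"),
    ("2 (Artifacts)", "make test"),
    ("3 (Functional)", "./smoke_test.sh"),
]]
print(f"fix_verification: {sum(results)}/3 tiers PASS")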
STOP
Count: How many fixes have you tried?
If < 3: Return to Phase 1, re-analyze with new information
If ≥ 3: STOP and question the architecture (step 5 below)
DON'T attempt Fix #4 without architectural discussion
Pattern indicating architectural problem:
Shannon tracking: Architectural smell detection:
debugging_history = serena.query_memory(f"debugging/{bug_id}/*")
attempts = len([h for h in debugging_history if h["type"] == "fix_attempt"])
if attempts >= 3:
pattern = analyze_failure_pattern(debugging_history)
    architectural_smell = {
        "attempts": attempts,
        "pattern": pattern,  # e.g., "each_fix_reveals_new_problem"
        "recommendation": "REFACTOR_ARCHITECTURE",
        "confidence": 0.95,
        "timestamp": ISO_timestamp
    }
serena.write_memory(f"debugging/{bug_id}/architectural_smell", architectural_smell)
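analyze_failure_pattern is referenced but not defined; one possible sketch, assuming each recorded fix attempt carries a symptom field (the pattern names are illustrative):
from collections import Counter

def analyze_failure_pattern(history):
    symptoms = [h.get("symptom", "unknown") for h in history if h.get("type") == "fix_attempt"]
    counts = Counter(symptoms)
    if len(symptoms) > 1 and len(counts) == len(symptoms):
        return "each_fix_reveals_new_problem"  # every attempt surfaced something different
    if counts and counts.most_common(1)[0][1] > 1:
        return "same_symptom_recurs"  # fixes are not reaching the root cause
    return "insufficient_data"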
STOP and question fundamentals:
Discuss with your human partner before attempting more fixes
This is NOT a failed hypothesis - this is a wrong architecture.
If you catch yourself thinking:
ALL of these mean: STOP. Return to Phase 1.
If 3+ fixes failed: Question the architecture (see Phase 4.5)
Watch for these redirections:
When you see these: STOP. Return to Phase 1.
| Excuse | Reality |
|---|---|
| "Issue is simple, don't need process" | Simple issues have root causes too. Process is fast for simple bugs. |
| "Emergency, no time for process" | Systematic debugging is FASTER than guess-and-check thrashing. |
| "Just try this first, then investigate" | First fix sets the pattern. Do it right from the start. |
| "I'll write test after confirming fix works" | Untested fixes don't stick. Test first proves it. |
| "Multiple fixes at once saves time" | Can't isolate what worked. Causes new bugs. |
| "Reference too long, I'll adapt the pattern" | Partial understanding guarantees bugs. Read it completely. |
| "I see the problem, let me fix it" | Seeing symptoms ≠ understanding root cause. |
| "One more fix attempt" (after 2+ failures) | 3+ failures = architectural problem. Question pattern, don't fix again. |
Track debugging metrics in Serena:
debugging_session = {
"bug_id": bug_id,
"start_time": start_timestamp,
"end_time": end_timestamp,
"duration_minutes": 45,
# Phase completion
"phases_completed": ["investigation", "analysis", "hypothesis", "implementation"],
# Quantitative metrics
"reproduction_attempts": 3,
"reproduction_success": True,
"trace_depth": 5,
"hypotheses_tested": 2,
"fix_attempts": 1,
"success": True,
# Pattern recognition
"similar_bugs_resolved": 3, # From Serena history
"root_cause_category": "configuration",
"architectural_smell": False,
# Evidence quality
"diagnostic_evidence_strength": "STRONG",
"hypothesis_confidence": 0.95,
"fix_verification": "3/3 tiers PASS",
"timestamp": ISO_timestamp
}
serena.write_memory(f"debugging/sessions/{session_id}", debugging_session)
Pattern learning over time:
# Query historical debugging data
auth_bugs = serena.query_memory("debugging/*/root_cause_category:auth")
# Learn patterns
pattern = {
    "category": "authentication",
    "total_bugs": len(auth_bugs),
    "avg_resolution_time": sum(b["duration_minutes"] for b in auth_bugs) / len(auth_bugs),
    "common_root_causes": {
        "missing_env_vars": 12,  # 80% of auth bugs
        "expired_tokens": 2,
        "config_mismatch": 1
    },
    "architectural_refactors_triggered": 0,  # no refactors yet
    "recommendation": "Add env var validation at startup"
}
| Phase | Key Activities | Success Criteria | Shannon Enhancement |
|---|---|---|---|
| 1. Root Cause | Read errors, reproduce, check changes, gather evidence | Understand WHAT and WHY | Quantitative tracking in Serena |
| 2. Pattern | Find working examples, compare | Identify differences | Automated pattern search |
| 3. Hypothesis | Form theory, test minimally | Confirmed or new hypothesis | Confidence scoring (0.00-1.00) |
| 4. Implementation | Create test, fix, verify | Bug resolved, tests pass | 3-tier validation gates |
If systematic investigation reveals the issue is truly environmental, timing-dependent, or external:
But: 95% of "no root cause" cases are incomplete investigation.
This skill requires using:
Complementary skills:
Shannon integration:
From debugging sessions:
Shannon tracking proves this: Query debugging_sessions with success:true vs failed attempts.
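A sketch of that query, assuming the session records written above:
sessions = serena.query_memory("debugging/sessions/*")
systematic = [s for s in sessions if s.get("success")]
thrashing = [s for s in sessions if not s.get("success")]

def avg_minutes(group):
    return sum(s["duration_minutes"] for s in group) / len(group) if group else 0

print(f"systematic: {len(systematic)} sessions, {avg_minutes(systematic):.0f} min average")
print(f"guess-and-check: {len(thrashing)} sessions, {avg_minutes(thrashing):.0f} min average")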
Systematic > Random. Evidence > Guessing. Root Cause > Symptom.
Shannon's quantitative tracking turns debugging from art into science.
Measure everything. Learn from patterns. Never fix without understanding.