Systematically capture, analyze, and track errors with the HtmlGraph spike-based investigation workflow, following a research-first debugging methodology.
Installation:
```bash
/plugin marketplace add Shakes-tzd/htmlgraph
/plugin install htmlgraph@htmlgraph
```
Usage:
```bash
/htmlgraph:error-analysis [error_context]
```

Arguments:
- `error_context` (optional): Brief description of the error or error message

Examples:
- `/htmlgraph:error-analysis "PreToolUse hook failing with 'No such file'"` - Capture and analyze a hook error with HtmlGraph tracking
- `/htmlgraph:error-analysis` - Interactive error capture workflow
**CRITICAL:** This command implements systematic error investigation using HtmlGraph spikes.
This command follows the research-first debugging methodology from .claude/rules/debugging.md. It ensures errors are properly documented, investigated systematically, and tracked in HtmlGraph.
**DO THIS:**
1. **Capture error details:**
   - If `error_context` is provided, use it as the starting point
   - Otherwise, use AskUserQuestion to gather:
     - Exact error message
     - When it occurred (what operation)
     - What changed recently (code, config, plugins)
     - Whether it can be reproduced consistently
     - Expected vs actual behavior
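   As a sketch, the details gathered here map onto a simple record like the following (the `ErrorReport` type is illustrative only, not part of the HtmlGraph SDK):

   ```python
   from dataclasses import dataclass

   @dataclass
   class ErrorReport:
       """Illustrative container for the details gathered in step 1."""
       message: str          # exact error message
       operation: str        # what was running when the error occurred
       recent_changes: str   # recent code, config, or plugin changes
       reproducible: bool    # whether it recurs consistently
       expected: str         # expected behavior
       actual: str           # actual behavior
   ```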
2. **Categorize the error:**
   Identify the error type (keyword patterns for each are listed under Error Category Mappings below):
- **Hook Error** - PreToolUse, PostToolUse, SessionStart failures
- **API Error** - Network, authentication, rate limits
- **Build Error** - Compilation, linting, type checking
- **Runtime Error** - Exceptions, crashes, unexpected behavior
- **Configuration Error** - Plugin, settings, environment issues
- **Integration Error** - External services, databases, APIs
3. **Gather relevant context:**
   Collect diagnostic information based on the error type.

   For Hook Errors:
   ```bash
   /hooks                    # List all active hooks
   /hooks PreToolUse         # Show specific hook type
   claude --debug <command>  # Verbose output
   ```

   For Build Errors:
   ```bash
   uv run ruff check  # Linting errors
   uv run mypy src/   # Type errors
   uv run pytest -v   # Test failures
   ```
   For Runtime Errors:
   Capture the full traceback and the exact steps that reproduce it.
4. **Create HtmlGraph spike for investigation:**
   ```python
   from htmlgraph import SDK

   sdk = SDK(agent='claude-code')

   # Determine spike title based on error category
   title = f"Error Investigation: {error_category} - {brief_description}"

   spike = sdk.start_planning_spike(
       title=title,
       context=f"""
   ## Error Details
   **Type:** {error_category}
   **Message:** {error_message}
   **Occurred:** {when_occurred}
   **Reproducible:** {is_reproducible}

   ## Context
   {relevant_context}

   ## Expected Behavior
   {expected_behavior}

   ## Actual Behavior
   {actual_behavior}

   ## Recent Changes
   {recent_changes}
   """,
       timebox_hours=2.0,  # Default 2-hour investigation timebox
   )
   ```
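   The 2-hour default keeps investigations bounded so a stuck spike surfaces for review rather than running open-ended; pass a larger `timebox_hours` explicitly when a deeper investigation is expected.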
5. **Provide systematic investigation prompts:** Based on the error category, guide the investigation:
   - Hook Errors
   - API Errors
   - Build Errors:
     ```bash
     uv run ruff check --fix
     uv run mypy src/
     uv run pytest -v
     ```
   - Runtime Errors
   - Configuration Errors
6. **Offer debugging agent integration:** Based on investigation needs:
## Debugging Resources Available
### Research Agent
Use when you need to understand unfamiliar concepts or APIs:
- Research Claude Code hook behavior
- Look up library documentation
- Find best practices for error handling
### Debugger Agent
Use for systematic error analysis:
- Reproduce errors consistently
- Isolate root causes
- Test hypotheses systematically
### Test Runner Agent
Use to validate fixes:
- Run quality gates (lint, type, test)
- Verify error is resolved
- Prevent regression
7. **Document investigation workflow:**
   ```python
   # Add investigation steps to spike
   investigation_steps = [
       "Gather diagnostic information",
       "Research root cause (if unfamiliar)",
       "Form hypothesis about cause",
       "Test hypothesis systematically",
       "Implement minimal fix",
       "Validate fix resolves error",
       "Document learning",
   ]

   with sdk.spikes.edit(spike.id) as s:
       for step in investigation_steps:
           s.add_step(step)
   ```
8. **Output structured investigation plan:** Show the spike details and next steps.
### Output Format:
```
Spike ID:  {spike.id}
Title:     {spike.title}
Category:  {error_category}
Timebox:   2 hours
```
{formatted_error_details}
{category_specific_checklist}
Next steps:
- `/hooks` - List active hooks
- `claude --debug <command>` - Verbose output
- `/doctor` - System diagnostics
- `uv run ruff check && uv run mypy src/ && uv run pytest` - Quality gates
- `/htmlgraph:spike {spike.id}` - View the spike

Use these commands to begin:
```bash
# View spike in dashboard
uv run htmlgraph serve
# Open: http://localhost:8080

# Research if needed (unfamiliar error)
/htmlgraph:research "{error topic}"

# Document findings as you investigate
# Findings are auto-tracked in the spike
```
**Remember:** Research first, implement second. Don't make trial-and-error attempts.
### Error Category Mappings
Use these patterns to categorize errors:
```python
error_categories = {
    "hook": [
        "PreToolUse", "PostToolUse", "SessionStart", "SessionEnd",
        "hook", "plugin", "marketplace",
    ],
    "api": [
        "API", "authentication", "rate limit", "network", "timeout",
        "HTTP", "request failed", "connection",
    ],
    "build": [
        "ruff", "mypy", "pytest", "lint", "type error", "test failed",
        "compilation", "syntax error",
    ],
    "runtime": [
        "Exception", "Error:", "Traceback", "crash", "failed",
        "unexpected", "assertion",
    ],
    "config": [
        "configuration", "settings", "environment", "missing",
        "invalid", "not found", ".env", "credentials",
    ],
}

def categorize_error(error_text: str) -> str:
    error_lower = error_text.lower()
    for category, keywords in error_categories.items():
        if any(kw.lower() in error_lower for kw in keywords):
            return category
    return "unknown"
```
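For example:

```python
print(categorize_error("PreToolUse hook failing with 'No such file'"))  # hook
print(categorize_error("Traceback (most recent call last): ..."))       # runtime
```

Note that categories are checked in dictionary insertion order, so a message matching multiple categories resolves to the first match.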
This command implements the systematic debugging workflow:
1. /htmlgraph:error-analysis "error message" → Capture & categorize
2. [Complete investigation checklist] → Gather evidence
3. /htmlgraph:research "topic" (if needed) → Research unfamiliar concepts
4. [Test hypothesis systematically] → Debug root cause
5. [Implement minimal fix] → Fix the issue
6. [Run quality gates] → Validate fix
7. [Document learning in spike] → Capture knowledge
Before marking investigation complete, verify:
- Root cause identified (not just the symptom)
- Fix validated by the quality gates (lint, type check, tests)
- Learning documented in the spike
ALWAYS use for:
- Hook, API, or integration failures
- Errors you cannot immediately explain
- Recurring or hard-to-reproduce errors

SKIP for:
- Obvious typos or one-line fixes with a known cause
**Scenario:** Hook error "No such file"
```bash
# Step 1: Capture error
/htmlgraph:error-analysis "PreToolUse hook failing with 'No such file'"
# Creates spike with:
# - Error category: hook
# - Investigation checklist
# - Debugging resources

# Step 2: Research (if unfamiliar with hooks)
/htmlgraph:research "Claude Code hook loading and file paths"
# Finds:
# - Hooks load from .claude/hooks/ and plugin directories
# - File paths must be absolute or relative to hook location
# - Common issue: incorrect ${CLAUDE_PLUGIN_ROOT} usage

# Step 3: Debug systematically
/hooks PreToolUse  # List all PreToolUse hooks
# Shows duplicate hooks from plugin and .claude/settings.json

# Step 4: Fix
# Remove duplicate hook definition

# Step 5: Validate
claude --debug <command>  # Test with verbose output
# Error resolved!

# Step 6: Document
# Add finding to spike: "Hook duplication caused conflict"
# Mark investigation complete
```
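Step 6 can be scripted with the same SDK surface shown earlier. A minimal sketch, assuming findings are recorded through the `spikes.edit`/`add_step` API demonstrated above (a dedicated finding or completion method, if the SDK has one, may differ):

```python
from htmlgraph import SDK

sdk = SDK(agent='claude-code')

# spike_id is the ID printed when /htmlgraph:error-analysis created the spike
with sdk.spikes.edit(spike_id) as s:
    s.add_step("Finding: hook duplication caused the conflict")
    s.add_step("Fix: removed the duplicate hook definition")
```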
**CRITICAL:** Always research before implementing fixes.
❌ Wrong approach: guess at a fix and retry until something appears to work.

✅ Correct approach:
1. `/htmlgraph:error-analysis` to capture the error
2. `/htmlgraph:research` to understand the root cause

Impact: the root cause gets fixed once and the learning is captured in the spike, instead of being rediscovered through trial and error.