Systematic, hypothesis-driven debugging workflow with triage-based track routing. Use when asked to "fix this bug", "debug this", "why is this failing", "this is broken", "investigate this error", "track down this issue", or any debugging situation. Supports --deep flag to force full investigation.
Executes systematic debugging workflows with hypothesis journals and parallel investigation agents to find root causes before fixing.
Install:

```
/plugin marketplace add sequenzia/agent-alchemy
/plugin install agent-alchemy-dev-tools@agent-alchemy
```

This skill is limited to using the following tools:

Reference files:

- references/general-debugging.md
- references/python-debugging.md
- references/typescript-debugging.md

Execute a systematic debugging workflow that enforces investigation before fixes. Every bug gets a hypothesis journal, evidence gathering, and root cause confirmation before any code changes.
CRITICAL: Complete ALL 5 phases. The workflow is not complete until Phase 5: Wrap-up & Report is finished. After completing each phase, immediately proceed to the next phase without waiting for user prompts.
## Phase 1: Triage

Goal: Understand the bug, reproduce it, and decide the investigation track.
Extract from $ARGUMENTS and conversation context:
If `--deep` is present, skip triage and go directly to the deep track (jump to Phase 2, deep track).

Attempt to reproduce before investigating:
```
# Run the specific test to confirm the failure
<test-runner> <test-file>::<test-name>
```
Capture the exact error output — this is your primary evidence.
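The reproduce-and-capture step can be sketched in Python; the command below is a stand-in for your project's actual test runner:

```python
import subprocess
import sys

def reproduce(cmd: list[str]) -> str:
    """Run the failing test command; return its output as evidence, or '' if it passes."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode == 0:
        return ""  # could not reproduce the failure
    # stdout + stderr together form the primary evidence for the first hypothesis
    return (result.stdout + result.stderr).strip()

# Stand-in for "<test-runner> <test-file>::<test-name>"
evidence = reproduce([sys.executable, "-c", "raise ValueError('boom')"])
```

The captured text goes verbatim into the hypothesis journal's "Evidence for" field.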
If the bug cannot be reproduced:
Based on the error message and context, form your first hypothesis:
### H1: [Title]
- Hypothesis: [What you think is causing the bug]
- Evidence for: [What supports this — error message, stack trace, etc.]
- Evidence against: [Anything that contradicts it — if none yet, say "None yet"]
- Test plan: [Specific steps to confirm or reject]
- Status: Pending
Quick-fix signals (ALL must be true):
Deep-track signals (ANY one triggers deep track):
Present your assessment via AskUserQuestion:
Track escalation rule: If during quick track execution, 2 hypotheses are rejected, automatically escalate to deep track. Preserve all hypothesis journal entries when escalating.
## Phase 2: Investigation

Goal: Gather evidence systematically, guided by language-specific techniques.
Detect the primary language of the bug's context and load the appropriate reference:
| Language | Reference File |
|---|---|
| Python | Read ${CLAUDE_PLUGIN_ROOT}/skills/bug-killer/references/python-debugging.md |
| TypeScript / JavaScript | Read ${CLAUDE_PLUGIN_ROOT}/skills/bug-killer/references/typescript-debugging.md |
| Other / Multiple | Read ${CLAUDE_PLUGIN_ROOT}/skills/bug-killer/references/general-debugging.md |
Always also load general-debugging.md as a supplement when using a language-specific reference.
For quick-track bugs, investigate directly:
Run `git log --oneline -5 -- <file>` for the affected files.

Proceed to Phase 3 (quick track).
For deep-track bugs, use parallel exploration agents:
Plan exploration areas — identify 2-3 focus areas based on the bug:
Launch code-explorer agents:
Spawn 2-3 code-explorer agents from core-tools:
Use Task tool with subagent_type: "agent-alchemy-core-tools:code-explorer"
Prompt for each agent:
```
Bug context: [description of the bug and error]
Focus area: [specific area for this agent]

Investigate this focus area in relation to the bug:
- Find all relevant files
- Trace the execution/data path
- Identify where behavior diverges from expected
- Note any suspicious patterns, recent changes, or known issues
- Report structured findings
```
Launch agents in parallel for independent focus areas.
Synthesize exploration results:
Proceed to Phase 3 (deep track).
## Phase 3: Root Cause Confirmation

Goal: Confirm the root cause through systematic hypothesis testing.
For quick-track bugs:
Verify the hypothesis:
If confirmed (Status → Confirmed):
If rejected (Status → Rejected):
For deep-track bugs:
Prepare hypotheses for testing:
Launch bug-investigator agents:
Spawn 1-3 bug-investigator agents to test hypotheses in parallel:
Use Task tool with subagent_type: "bug-investigator"
Prompt for each agent:
```
Bug context: [description of the bug and error]
Hypothesis to test: [specific hypothesis]

Test plan:
1. [Step 1 — e.g., run this specific test with these arguments]
2. [Step 2 — e.g., check git blame for this function]
3. [Step 3 — e.g., trace the data from input to error site]

Report your findings with verdict (confirmed/rejected/inconclusive),
evidence, and recommendations.
```
Launch agents in parallel when they test independent hypotheses.
Evaluate results:
- Update the hypothesis journal with each agent's findings
- If one hypothesis is confirmed → proceed to Phase 4
- If all are rejected/inconclusive → apply the 5 Whys technique:
Take the strongest "inconclusive" finding and ask "why?" iteratively:
```
Observed: [what actually happens]
Why? → [first-level cause]
Why? → [second-level cause]
Why? → [root cause]
```
Form new hypotheses from 5 Whys analysis and repeat investigation
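The 5 Whys loop can be modeled as walking a chain of causes until none deeper is known (the example causes are hypothetical):

```python
def five_whys(observed: str, ask_why) -> list[str]:
    """Iteratively ask 'why?'; the last entry in the chain is the root cause."""
    chain = [observed]
    while (cause := ask_why(chain[-1])) is not None:
        chain.append(cause)
    return chain

# Hypothetical cause map standing in for the investigation's answers
causes = {
    "test fails with stale data": "cache not invalidated on write",
    "cache not invalidated on write": "write path bypasses the cache layer",
}
chain = five_whys("test fails with stale data", causes.get)
root_cause = chain[-1]  # "write path bypasses the cache layer"
```

Each intermediate cause in the chain is a candidate for a new journal hypothesis.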
If stuck after 2 rounds of investigation:
## Phase 4: Fix & Validation

Goal: Fix the root cause and prove the fix works.
Before writing any code:
Run the originally failing test — it should now pass:
```
<test-runner> <test-file>::<test-name>
```
Run related tests — tests in the same file and nearby test files:
```
<test-runner> <test-directory>
```
If tests fail:
Write a test that would have caught this bug:
Deep track only — skip on quick track.
Load code-quality skill:
Read ${CLAUDE_PLUGIN_ROOT}/skills/code-quality/SKILL.md
Review the fix against code quality principles.
Check for related issues:
Grep for the pattern that caused the bug
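The related-issue scan can be sketched as a recursive search for the bug-causing pattern (the pattern and file suffix here are placeholders):

```python
import re
from pathlib import Path

def scan_for_pattern(root: Path, pattern: str, suffix: str = ".py") -> list[tuple[str, int, str]]:
    """Find other occurrences of the bug-causing pattern as (file, line number, line text)."""
    regex = re.compile(pattern)
    hits = []
    for path in sorted(root.rglob(f"*{suffix}")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if regex.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Each hit is a candidate for the same fix; report them in the wrap-up even if you don't change them.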
## Phase 5: Wrap-up & Report

Goal: Document the investigation trail and capture learnings.
Present to the user:
```
## Bug Fix Summary

### Bug
[One-line description of the bug]

### Root Cause
[What was actually wrong and why]

### Fix Applied
[What was changed, with file:line references]

### Tests
- [Originally failing test]: Now passing
- [Regression test added]: [test name and location]
- [Related tests]: All passing

### Track
[Quick / Deep] [Escalated from quick: Yes/No]
```
Present the complete hypothesis journal showing the investigation trail:
```
### Investigation Trail

#### H1: [Title]
- Status: Confirmed / Rejected
- [Key evidence summary]

#### H2: [Title] (if applicable)
- Status: Confirmed / Rejected
- [Key evidence summary]

[... additional hypotheses ...]
```
Load the project-learnings skill to evaluate whether this bug reveals project-specific knowledge worth capturing:
Read ${CLAUDE_PLUGIN_ROOT}/skills/project-learnings/SKILL.md
Follow its workflow to evaluate the finding. Common debugging discoveries that qualify:
Deep track only:
If the investigation revealed broader concerns, present recommendations:
Offer the user options via AskUserQuestion:
## Hypothesis Journal Format

The hypothesis journal is the core artifact of this workflow. Maintain it throughout all phases.
```
## Hypothesis Journal — [Bug Title]

### H1: [Descriptive Title]
- **Hypothesis:** [What's causing the bug — be specific]
- **Evidence for:** [Supporting observations with file:line references]
- **Evidence against:** [Contradicting observations]
- **Test plan:** [Concrete steps to confirm or reject]
- **Status:** Pending / Confirmed / Rejected
- **Notes:** [Additional context, timestamps, agent findings]

### H2: [Descriptive Title]
[Same format]
```
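If you maintain the journal programmatically during an investigation, the entry format above maps to a small record type. This is a sketch; the skill itself only prescribes the markdown format:

```python
from dataclasses import dataclass, field

VALID_STATUSES = {"Pending", "Confirmed", "Rejected"}

@dataclass
class Hypothesis:
    """One journal entry, mirroring the markdown fields above."""
    title: str
    hypothesis: str
    evidence_for: list[str] = field(default_factory=list)
    evidence_against: list[str] = field(default_factory=list)
    test_plan: list[str] = field(default_factory=list)
    status: str = "Pending"
    notes: str = ""

    def set_status(self, status: str) -> None:
        if status not in VALID_STATUSES:
            raise ValueError(f"invalid status: {status}")
        self.status = status

h1 = Hypothesis(title="Stale cache read", hypothesis="Reader sees the pre-invalidation value")
h1.set_status("Confirmed")
```

Restricting status to the three journal states keeps the escalation count ("2 rejected") trivially computable.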
| Aspect | Quick Track | Deep Track |
|---|---|---|
| Investigation | Read error location + 1-2 callers | 2-3 code-explorer agents in parallel |
| Hypotheses | Minimum 1 | Minimum 2-3 |
| Root cause testing | Manual verification | 1-3 bug-investigator agents in parallel |
| Fix validation | Run failing + related tests | Tests + code-quality skill + related issue scan |
| Auto-escalation | After 2 rejected hypotheses | N/A |
| Typical complexity | Off-by-one, typo, wrong argument, missing null check | Race condition, state corruption, multi-file logic error |
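The routing logic implied by the table can be sketched as a predicate over triage signals (the signal names are illustrative placeholders, not the skill's actual criteria):

```python
def choose_track(signals: dict, deep_flag: bool = False) -> str:
    """Quick track only when every quick signal holds and no deep signal fires."""
    if deep_flag:  # --deep forces the full investigation
        return "deep"
    quick_ok = all([
        signals.get("reproducible", False),
        signals.get("single_file", False),
        signals.get("clear_error", False),
    ])
    deep_trigger = any([
        signals.get("race_condition", False),
        signals.get("state_corruption", False),
        signals.get("multi_file", False),
    ])
    return "quick" if quick_ok and not deep_trigger else "deep"
```

The asymmetry matches the triage rules: quick requires ALL of its signals, while ANY deep signal routes to the deep track.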
Use cross-plugin agent reference:
subagent_type: "agent-alchemy-core-tools:code-explorer"
These are Sonnet-model read-only agents that explore codebase areas. Give each a distinct focus area related to the bug. They report structured findings.
Use same-plugin agent reference:
subagent_type: "bug-investigator"
These are Sonnet-model agents with Bash access for running tests and git commands, but no Write/Edit — they investigate and report evidence, they don't fix code. Give each a specific hypothesis to test.
If any phase fails: