Systematic debugging workflow that investigates bugs with tools before guesses and evidence before hypotheses. Use when the user reports something broken, shares a stack trace, encounters a test failure, sees unexpected output, says "this doesn't work", "something is wrong", "why is X happening", or pastes an error message. Also use when the user asks to debug, diagnose, trace, or find the root cause of a problem. This skill investigates only -- it finds the root cause and documents it as a br bug issue with evidence and reproduction steps. Do NOT use for applying fixes (use fix-bug after investigation), quick config tweaks where the solution is already known, performance profiling without a specific defect, or feature design (use brainstorm).
npx claudepluginhub sqve/cape --plugin cape

This skill uses the workspace's default tool permissions.
<skill_overview> Investigate bugs to find root causes. Uses tools before guesses, evidence before
hypotheses. Produces a br bug issue with root cause analysis, evidence trail, and reproduction
steps -- ready for handoff to fix-bug.
Core contract: no hypothesis without evidence. No br issue without a confirmed root cause or a clear "cause unknown" with documented dead ends. </skill_overview>
<rigidity_level> MEDIUM FREEDOM -- Adapt investigation depth and tool choices to the bug's complexity. Rigid rules: always reproduce before hypothesizing, always gather evidence before concluding, always create a br bug issue with findings, always confirm before creating the issue. </rigidity_level>
<when_to_use>
Use when the user reports something broken, shares a stack trace, hits a test failure, sees unexpected output, says "this doesn't work" or "why is X happening", pastes an error message, or asks to debug, diagnose, trace, or find the root cause of a problem.

Don't use for:
- Applying fixes (use fix-bug after investigation)
- Quick config tweaks where the solution is already known
- Performance profiling without a specific defect
- Feature design (use brainstorm)
</when_to_use>
<critical_rules>
- No hypothesis without evidence; no br issue without a confirmed root cause or a clear "cause unknown" with documented dead ends.
- Always reproduce the symptom (or document the inability to reproduce) before hypothesizing.
- Always gather evidence before concluding.
- Always get explicit user approval before creating the br bug issue.
</critical_rules>
<the_process>
Clarify the symptom:
If the user provided a stack trace or error message, parse it for file paths, line numbers, and error types.
Reproduce the bug:
Do not proceed to hypotheses until the symptom is confirmed or the inability to reproduce is documented.
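Reproduction is most useful for handoff when the failing command and its output are captured verbatim. A minimal sketch, where failing_check.sh is a stand-in for the project's real test or repro command:

```shell
# Capture the repro command's exit status and output for the evidence trail.
# failing_check.sh is a made-up stand-in for the real failing test or command.
cd "$(mktemp -d)"
cat > failing_check.sh <<'EOF'
echo "TypeError: Cannot read property 'id' of undefined"
exit 1
EOF
output="$(sh failing_check.sh)"
status=$?
printf 'exit=%s\noutput=%s\n' "$status" "$output"
```

Recording the exact exit status and message keeps the "confirmed reproduction" claim verifiable later.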
Checkpoint gate (only when a br bug already exists): Read .beads/<bug-id>/verify.json. If the
key evidence records a SHA that matches git rev-parse HEAD, skip evidence gathering and report:
"Evidence gathering already passed at HEAD — skipping." If no br bug exists yet (first
run), skip this gate. If the file is missing or malformed, proceed normally.
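The gate can be sketched in shell. The bug id and SHA below are stand-ins; a real run would compare against `git rev-parse HEAD`:

```shell
# Checkpoint gate sketch: skip evidence gathering when verify.json already records HEAD.
cd "$(mktemp -d)"
bug_id="example-bug"                           # stand-in for the real br bug id
verify=".beads/${bug_id}/verify.json"
mkdir -p ".beads/${bug_id}"
printf '{"evidence":"abc123"}\n' > "$verify"   # pretend a previous run recorded this SHA
head_sha="abc123"                              # real run: head_sha="$(git rev-parse HEAD)"
recorded="$(sed -n 's/.*"evidence" *: *"\([^"]*\)".*/\1/p' "$verify")"
if [ "$recorded" = "$head_sha" ]; then
  echo "Evidence gathering already passed at HEAD — skipping."
fi
```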
Use tools, not intuition.
Dispatch cape:bug-tracer to trace the error through the code path and check the recent history
of the implicated files (git log --oneline -20 -- <files>, git blame).

If broader code understanding is needed (architecture, patterns, unrelated modules), dispatch
cape:codebase-investigator as a secondary agent.

If the error involves external APIs, libraries, or unfamiliar behavior, dispatch
cape:internet-researcher to research the documented behavior.
If agents are unavailable, investigate manually with Glob/Grep/Read, git log, and WebSearch/WebFetch.
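When investigating manually, the same evidence comes straight from the shell. A sketch on a throwaway repo, with made-up file and symbol names:

```shell
# Manual investigation sketch: git history plus grep on a throwaway repo.
cd "$(mktemp -d)"
git init -q
printf 'function refreshToken() {}\n' > auth.ts
git add auth.ts
git -c user.email=dev@example.com -c user.name=dev commit -qm "add auth helper"
git log --oneline -5 -- auth.ts    # recent changes to the implicated file
grep -rn "refreshToken" .          # where the suspect symbol appears
```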
Build an evidence trail. As you investigate, maintain a running list:
Evidence:
1. [file:line] - [what you found, why it matters]
2. [command output] - [what it reveals]
3. [git log entry] - [relevant change]
Each piece of evidence should either support or eliminate a hypothesis. Evidence without interpretation is noise -- always note what each finding means.
After evidence gathering completes, record the SHA in .beads/<bug-id>/verify.json under key
evidence. Read the existing file (or start from {}), set the key to the current HEAD SHA, and
write it back. Create the directory with mkdir -p ".beads/<bug-id>" if needed.
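A minimal sketch of the recording step. The bug id and SHA are stand-ins; a real run would take the SHA from `git rev-parse HEAD` and merge into the existing JSON rather than overwrite other keys:

```shell
# Record the evidence checkpoint SHA in .beads/<bug-id>/verify.json.
cd "$(mktemp -d)"
bug_id="example-bug"               # stand-in for the real br bug id
mkdir -p ".beads/${bug_id}"
sha="abc123"                       # real run: sha="$(git rev-parse HEAD)"
printf '{"evidence":"%s"}\n' "$sha" > ".beads/${bug_id}/verify.json"
cat ".beads/${bug_id}/verify.json"
```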
Form hypotheses from evidence, not from guesses.
For each hypothesis, note the evidence that supports it and run a check that can confirm or refute it.
Narrow the search systematically: isolate one variable at a time and bisect the code path or commit history.
When a hypothesis is refuted, document it as a dead end and move to the next. Dead ends narrow the search space and prevent re-investigation.
Distinguish symptoms from causes.
A NullPointerException is a symptom -- the root cause is why the value is null. A failing test is
a symptom -- the root cause is the code change or logic error. An incorrect output is a symptom --
the root cause is the flawed logic or data.
Keep asking "why" until you reach a cause that, if fixed, prevents the symptom from recurring.
Symptom: test_auth fails with 401
Why? Token is expired
Why? Token refresh isn't called
Why? Refresh condition checks `<` instead of `<=` at auth.ts:47
Root cause: off-by-one in token expiry comparison
If the root cause cannot be determined, document "cause unknown" along with the hypotheses that were ruled out and why.
You MUST stop here and get user approval before creating the br bug.
Present findings for approval:
## Investigation summary
**Symptom:** [What the user observed]
**Root cause:** [The underlying reason, with file:line reference]
**Evidence:** [Key findings that confirm the root cause]
**Reproduction:** [Steps to trigger the bug]
I'll create a br bug issue with these findings. Proceed?
Do not call br create until the user responds. Wait for explicit approval, then create the issue:
br create "Bug: [Concise root cause description]" \
--type bug \
--priority <0-4> \
--labels "debug-issue" \
--description "$(cat <<'EOF'
## Finding
[Root cause with file:line references]
## Evidence
1. [file:line] - [what was found]
2. [command output] - [what it revealed]
3. [git log] - [relevant change]
## Reproduction steps
1. [Step to trigger]
2. [Observe: symptom]
## Expected behavior
[What should happen]
## Actual behavior
[What happens instead]
## Dead ends investigated
- [Hypothesis] - [why refuted]
## Suggested fix
[Direction for fix-bug skill]
## Success criteria
- [ ] [Root cause addressed]
- [ ] [Regression test added]
EOF
)"
cape br validate <bug-id>
Priority assessment:
| Priority | Criteria |
|---|---|
| P0 | Security vulnerability, data loss, production down |
| P1 | Broken core functionality, blocking other work |
| P2 | Broken non-critical functionality, test failures (default) |
| P3 | Cosmetic issues, edge cases with workarounds |
| P4 | Nice-to-have, backlog |
After creation:
Created br-N: "Bug: [title]"
Priority: P[N] | Label: debug-issue
Ready for fix-bug when you want to address it.
</the_process>
<agent_references>
- cape:bug-tracer -- traces the error through the code path and recent git history
- cape:codebase-investigator -- broader code understanding (architecture, patterns, unrelated modules)
- cape:internet-researcher -- external APIs, libraries, and unfamiliar behavior
</agent_references>
Example: user pastes a stack trace

User: "Getting this error: TypeError: Cannot read property 'id' of undefined at handlers/order.ts:42"

Wrong: "The issue is that order is undefined. Let me add a null check at line 42." Jumps to a
fix without understanding WHY the order is undefined. The null check masks the real bug and the
problem resurfaces elsewhere.

Right:
- Trace where order is populated -- find the database query at services/order.ts:28
- The query uses findOne without joining the items relation, but line 42 accesses order.items[0].id

Example: flaky test in CI

User: "test_session_cleanup fails about 30% of the time in CI"

Wrong: "Flaky tests are usually timing issues. Let me add a retry or increase the timeout." Treats the symptom without investigating the cause. Retries mask the real bug.

Right:
- The cleanup is scheduled with setInterval -- the first run happens AFTER 100ms, not AT 100ms

Example: stale data after an update

User: "The user profile page shows the wrong email after updating it"

Wrong: "Let me check the profile rendering template for display bugs." Starts at the symptom instead of tracing the data flow.

Right:
- Trace the data flow from the update endpoint to the query that renders the email
<key_principles>
- Tools before guesses, evidence before hypotheses.
- Distinguish symptoms from causes -- keep asking "why" until fixing the answer would prevent the symptom.
- Document dead ends; they narrow the search space and prevent re-investigation.
- Investigate only -- hand fixes off to fix-bug.
</key_principles>