Use when documenting a bug fix, planning a hotfix, or performing root cause analysis before implementation. Also triggers on 'bug spec', 'fix plan', 'root cause analysis', 'hotfix spec', 'fix this bug', 'debug plan', or 'incident report'.
From the `pm` plugin — install via `npx claudepluginhub etusdigital/etus-plugins --plugin pm`. This skill uses the workspace's default tool permissions.
Template: knowledge/template.md
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (required — auto-invoke if missing):
- docs/ets/projects/{project-slug}/discovery/opportunity-pack.md — Bug mode now starts from a failure-understanding ideation package, so the fix plan is grounded in reproduction, impact, anti-journeys, and fallback understanding.

ENRICHES (improves output — use if available):

- docs/ets/projects/{project-slug}/state/coverage-matrix.yaml — Helps verify failure states, affected actors, and fallback expectations.
- docs/ets/projects/{project-slug}/architecture/architecture-diagram.md — Helps identify which components are affected.
- docs/ets/projects/{project-slug}/architecture/tech-spec.md — Existing NFRs help assess whether the fix needs to maintain specific targets.
- docs/ets/projects/{project-slug}/data/database-spec.md — Helps understand the data model when the bug involves data corruption or incorrect queries.

Resolution protocol:
Use full version when:
Use short version when:
Why this matters: A documented fix plan prevents hasty patches that introduce regressions. The tech spec serves as the record of what was wrong, why, and how it was fixed — useful for future debugging and post-mortems.
Write the artifact to docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md (run mkdir -p for the bugs/ directory if needed). If the Write fails: report the error to the user and do not proceed.
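The save step can be sketched as follows (a minimal illustration in Python; the actual Write tool handles this internally, and the function name here is a hypothetical stand-in):

```python
from pathlib import Path


def save_tech_spec(project_slug: str, bug_slug: str, content: str) -> Path:
    """Write the bug tech spec, creating the bugs/ directory if needed."""
    out_dir = Path("docs/ets/projects") / project_slug / "bugs"
    out_dir.mkdir(parents=True, exist_ok=True)  # equivalent of `mkdir -p`
    out_path = out_dir / f"tech-spec-{bug_slug}.md"
    out_path.write_text(content, encoding="utf-8")
    return out_path
```

If the write raises an OSError, surface it to the user rather than continuing with an unsaved spec.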
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — suggest alternatives, challenge assumptions, and explore what-ifs instead of only extracting information.
One question per message — Ask one question, wait for the answer, then ask the next. Bug investigation benefits from methodical questioning — each answer narrows the search space. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When presenting fix approaches, always offer 2-3 alternatives with tradeoffs. Highlight your recommendation based on the risk/effort balance.
Propose approaches before generating — Before proposing a fix, present 2-3 approaches with pros, cons, effort estimate, and risk level. Let the user choose the direction before documenting the fix plan.
Present output section-by-section — Present each section (problem, root cause, fix approach, test plan, rollback plan) individually. Ask for approval before moving to the next.
Track outstanding questions — If something cannot be determined without more investigation, record it as an outstanding question in the spec instead of guessing.
Multiple handoff options — At completion, present 3-4 next steps as options (see CLOSING SUMMARY).
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing tech-spec at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):

project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually. Record new memory entries with:

- `python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"`
- `python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"`
- `python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"`

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
Create a single-document specification for a bug fix or hotfix. This is the Bug mode equivalent of the entire Product mode pipeline, but it now derives from an upstream failure-understanding ideation step rather than a raw symptom-only conversation.
The standalone tech spec answers five questions:
Load context in this order of priority:
1. docs/ets/projects/{project-slug}/discovery/opportunity-pack.md first.
2. docs/ets/projects/{project-slug}/state/coverage-matrix.yaml if it exists.

This interview is 5 core questions, asked one at a time. The goal is to build a complete picture of the bug before proposing fixes.
"What's happening that shouldn't be happening? Describe the bug from the user's perspective."
Follow-up probes (ask one at a time only if needed):
"How do you reproduce this bug? Walk me through the exact steps."
Follow-up probes:
"What should happen (expected behavior) vs. what actually happens (actual behavior)?"
Follow-up probes:
"How critical is this? How many users are affected, and what's the business impact?"
Present severity options:
"Based on what you've described, I'd assess this as:
- Critical — System down, data loss, or security vulnerability. Fix immediately.
- High — Major feature broken, significant user impact. Fix this sprint.
- Medium — Feature partially broken, workaround exists. Fix soon.
- Low — Cosmetic or edge case. Fix when convenient.
I'm leaning toward [severity] because [reason]. Does that feel right?"
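The severity rubric above can be sketched as a small helper. This is an illustrative mapping only — the function name, flags, and exact precedence are assumptions, not part of the skill:

```python
def assess_severity(system_down: bool, data_loss: bool, security_issue: bool,
                    major_feature_broken: bool, workaround_exists: bool) -> str:
    """Map the bug rubric to one of the four severity levels."""
    if system_down or data_loss or security_issue:
        return "Critical"  # fix immediately
    if major_feature_broken:
        return "High"      # significant user impact, fix this sprint
    if workaround_exists:
        return "Medium"    # feature partially broken, fix soon
    return "Low"           # cosmetic or edge case, fix when convenient
```

The agent should still present its leaning and reason, and let the user confirm the final severity.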
"Do you have any clues about the root cause? Suspicious code paths, recent changes, error stack traces?"
Follow-up probes:
After the interview, propose a root cause analysis:
"Based on what you've described, here's my analysis:
Most likely root cause: [description]
Confidence: [HIGH/MEDIUM/LOW]
Evidence: [what supports this]
Does this match your intuition, or should we explore other possibilities?"
After root cause is confirmed, propose 2-3 fix approaches:
For each approach, provide:
Highlight your recommendation and explain why.
"I see [2-3] ways to fix this:
Approach A: [Name] (Recommended)
[description]
Pros: [why it's best] | Cons: [tradeoffs] | Effort: [estimate] | Risk: Low

Approach B: [Name]
[description]
Pros: [advantages] | Cons: [tradeoffs] | Effort: [estimate] | Risk: Medium
I recommend Approach A because [reason]. Which approach do you want to go with?"
The generated docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md follows the template in knowledge/template.md.
See knowledge/template.md for the tech-spec-standalone document template and standard structure.

Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
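The DRAFT marking step can be sketched as follows (a minimal sketch assuming the spec is plain markdown text; the helper name is hypothetical):

```python
DRAFT_MARKER = "<!-- STATUS: DRAFT -->"


def mark_as_draft(spec_text: str) -> str:
    """Prepend the DRAFT status comment if any validation check failed."""
    if spec_text.startswith(DRAFT_MARKER):
        return spec_text  # already marked, don't duplicate
    return f"{DRAFT_MARKER}\n{spec_text}"
```

The marker sits on the first line so downstream tooling can detect DRAFT status without parsing the whole document.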
After saving and validating, display the summary and offer multiple next steps:
tech-spec-{slug}.md saved to `docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md`
Status: [COMPLETE | DRAFT]
Severity: [Critical | High | Medium | Low]
Fix approach: [chosen approach name]
What would you like to do next?
1. Implement the fix (Recommended) — Use /ce:work to start fixing
2. Create a Linear issue — Track this fix in your project management tool
3. Refine this spec — Adjust root cause analysis or fix approach
4. Pause for now — Save and return later
Wait for the user's choice before proceeding. Do not auto-advance.
Inputs: $ARGUMENTS or user description + ENRICHES documents (if available).
Output directory: docs/ets/projects/{project-slug}/bugs/ — create if missing.
Artifact: docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md, written using the Write tool.
Handoff: path to the artifact (docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md) + paths to upstream documents (none — bug specs have no BLOCKS dependencies).

"Document saved to docs/ets/projects/{project-slug}/bugs/tech-spec-{slug}.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed."

Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| Can't reproduce the bug | Medium | Document known conditions, mark as intermittent | Proceed with best available info |
| Root cause unclear | Medium | Document hypotheses with confidence levels, recommend investigation | Mark root cause as HYPOTHESIS, proceed |
| User has no context about the system | Medium | Ask focused architecture questions inline | Proceed with user-provided info only |
| Output validation fails | High | Mark as DRAFT, flag gaps | Proceed with DRAFT status |
| Multiple bugs discovered during investigation | Medium | Suggest splitting into separate specs | Focus on the primary bug first |
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness >= 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
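The exit conditions in the table can be sketched as a loop. The completeness scores here are hypothetical reviewer outputs; the thresholds come from the table:

```python
def quality_loop(scores, target=90.0, min_gain=5.0, max_iters=3):
    """Decide when to exit the quality loop.

    scores: completeness percentages after each review iteration.
    Returns (status, iterations_run).
    """
    prev = None
    for iteration, score in enumerate(scores[:max_iters], start=1):
        if score >= target:
            return ("COMPLETE", iteration)
        if prev is not None and score - prev < min_gain:
            return ("DRAFT", iteration)  # diminishing returns, exit with notes
        prev = score
    return ("DRAFT", min(len(scores), max_iters))  # max iterations reached
```

A DRAFT exit should carry the iteration log or notes so a later session can pick up where the loop stopped.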