From dev-workflow
Use when the user reports an error with stack trace or screenshot, describes unexpected behavior, or build/test failures occur. Systematically diagnoses through reproduction, hypothesis, value domain tracing, and parallel path detection.
npx claudepluginhub n0rvyn/indie-toolkit --plugin dev-workflow
This skill uses the workspace's default tool permissions.
Trigger this command when:
If input is incomplete, ask for:
0. Parse input and read the GitHub Issue (if a reference is provided)
If input contains #N or issue N (e.g., /fix-bug #5, /fix-bug issue 5):
Run gh issue view N --json title,body,labels,milestone.
If gh issue view returns an error (issue not found, gh not installed, no network): inform the user of the error and fall through to Step 0.5 without prior hypotheses.
If the issue body contains a ### Prior Hypotheses section, extract its hypotheses for Step 3.
If input does not contain an issue reference, skip to Step 0.5.
0.5. Retrieve historical context
search(query="<keywords from bug description>", source_type=["error","lesson"], project_root="<current working directory>")
1. Reproduce first
2. Understand the error
2.5. Understand intent (trigger: bug location involves non-trivial logic — conditional branches, state machines, multi-step transformations)
Before generating assertions, answer these questions by reading code + git history:
[Intent Analysis]
- Original intent: what was this code trying to achieve?
(cite: comments, function name, commit message, surrounding context)
- Actual behavior: what does it actually do? (cite: code trace)
- Gap type: intentional workaround / unintentional bug /
incomplete implementation / outdated assumption
- Evidence for gap type: {git blame date, TODO comment, related issue, etc.}
- Current goal: what do we want it to do now?
- Delta: {original intent} -> {current goal} — same or different?
- If different: upstream/downstream impact assessment required before proceeding
If gap type = "intentional workaround": Stop. Do not treat as bug. Present finding to user: "This appears to be an intentional workaround from {date/commit}. Confirm: fix it or preserve the workaround?"
3. BV (backward verification): Generate falsifiable assertions
Based on error symptoms and code context, generate 3-5 specific, falsifiable assertions. Each assertion must include: hypothesis + file location + verification method + expected outcome for both cases.
If Step 0 extracted prior hypotheses from a GitHub Issue: prepend them to the assertion list as highest-priority items, marked with source: "Prior hypothesis from issue #N". Verify these first before generating additional assertions.
[Bug Assertion 1] {specific hypothesis}
Location: {file:line}
Verify: read {file:line}; if correct, expect {X}; if wrong, expect {Y}
Assertions must cover different dimensions (pick the 3-5 most suspicious):
Principle: backward verification (testing specific assertions) is significantly more accurate than forward generation (guessing a single cause). Even if all assertions are falsified, the verification process exposes reasoning paths that reveal the root cause.
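The backward-verification loop described above can be sketched as data plus checks; the class and field names here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class BugAssertion:
    hypothesis: str            # specific, falsifiable claim
    location: str              # file:line to inspect
    check: Callable[[], bool]  # returns True if the hypothesis holds
    source: str = "generated"  # or "Prior hypothesis from issue #N"

def verify(assertions: list[BugAssertion]) -> list[tuple[str, bool]]:
    # Prior hypotheses extracted in Step 0 are verified first; the sort is
    # stable, so generated assertions keep their original order.
    ordered = sorted(assertions, key=lambda a: a.source == "generated")
    return [(a.hypothesis, a.check()) for a in ordered]
```

Even a run in which every check returns False is useful: each falsified assertion prunes one reasoning path.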
4. Verify assertions systematically
4.5. Hypothesis reset (trigger: user negates a core assumption underlying the confirmed assertion)
Active self-check: After each user response during Step 4 verification, ask yourself: "Does the user's response contradict any premise of the current assertions?" If yes, invoke this step.
When the user provides information that invalidates the foundation of the current diagnosis (e.g., "the data is user-provided, not AI-generated"):
Do NOT patch the old hypothesis. A negated foundation requires new assertions. Do NOT skip ahead to implementation — reset means the full gate chain (Step 3 → 4 → 7) restarts.
5. Trace value domain — MANDATORY GATE for value-related bugs
⛔ If the bug symptom is value-related, this step is MANDATORY. Do not proceed to Step 7 without producing the [Value Domain Check] table below.
A bug is value-related if: a wrong number/string/enum appears where a different one was expected, OR a field displays data from the wrong source, OR a computed result is incorrect. When in doubt, treat it as value-related.
[Value Domain Check] {file:line} — {producer/consumer} — assumed value domain {X} — ✅ consistent / ❌ same-class issue
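As a concrete illustration of a value-domain mismatch (all file paths and names invented): a producer emits one enum spelling while a consumer tests for another, so the check table would flag both sites:

```python
# Producer (api/status.py:12, hypothetical): emits "enabled"/"disabled".
def fetch_status(user_id: int) -> str:
    return "enabled" if user_id % 2 == 0 else "disabled"

# Consumer (ui/badge.py:8, hypothetical): assumes "active"/"inactive".
def badge_color(status: str) -> str:
    return "green" if status == "active" else "gray"

# Value-domain trace: producer domain {enabled, disabled} vs consumer
# assumption {active, inactive}; the badge is always gray. ❌ same-class issue.
```

No single line here is "wrong" in isolation, which is why the gate demands tracing both the producing and consuming sides.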
6. Check for parallel paths (trigger: Step 5 finds multiple producers, or the same core function has multiple upstream callers)
[Path Check] {core function}
- Path A: {file:line} → {file:line} → {core function}
- Path B: {file:line} → {core function}
- Coordination mechanism: {specific code location / none}
7. Plan the fix — MANDATORY GATE
⛔ DO NOT write any fix code until this step is completed and the user has approved the plan.
Pre-check: If the bug symptom involves value display/transfer, verify that the Step 5 [Value Domain Check] table was produced. If not → return to Step 5 before proceeding.
Consumer impact (mandatory for any fix that changes a field's value or source):
Before presenting the plan, list all consumers of the modified field:
[Consumer Impact]
- {consumer file:line} — currently reads: {X} — after fix reads: {Y} — behavior change: {description}
Cannot produce this list = have not traced the data flow = return to Step 5.
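One way to enumerate consumers mechanically is a grep-style scan; this sketch assumes a Python codebase, and the function name is invented:

```python
from pathlib import Path

def find_consumers(root: str, field: str) -> list[str]:
    """List file:line sites that mention `field`, as raw input for the
    [Consumer Impact] list. Each hit still needs manual confirmation that
    it actually reads the field rather than merely naming it."""
    hits = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if field in line:
                hits.append(f"{path}:{lineno}")
    return hits
```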
Classify fix complexity:
Simple — ALL must be true:
Complex — ANY is true:
Required actions:
→ If Simple: enter Claude Code native plan mode (EnterPlanMode).
Present diagnosis context (confirmed assertions, consumer impact) in the plan.
User reviews and approves within plan mode, then proceed to Step 8.
→ If Complex: invoke /write-plan with the diagnosis context
(confirmed assertions, value domain trace, parallel path analysis) as input.
Wait for plan approval before proceeding.
Proceeding to Step 8 without a user-approved plan is a violation of this skill's protocol.
8. Fix the root cause
Run /collect-lesson to record this bug pattern for future retrieval.
Verify the fix
Run gh issue close N and display the closed issue URL.
Tradeoff Report
After verification, produce a fix report. Format depends on complexity (same classification as Step 7):
Simple fix (1-2 locations):
[Fix Summary] Fixed X/Y items (skipped: {items} — {reason})
[Tradeoff] {what was fixed} — behavior change: {before -> after} — cost: {known cost} — verification: {status}
Complex fix (3+ locations):
[Fix Summary] Fixed X/Y items (skipped: {items} — {reason})
| Issue | Behavior before | Behavior after | Benefit | Cost | Verification status | Regression risk |
|-------|-----------------|----------------|---------|------|---------------------|-----------------|
| ... | ... | ... | ... | ... | ... | ... |
Mandatory fields:
- verified (ran the test/command and saw the expected output), or
- needs-device-verification: {specific steps} (cannot verify in the current environment)
A passing build alone is compile-time verification only. Runtime behavioral changes (conditional rendering, data-dependent logic, network failure paths) require either a test or an explicit needs-device-verification annotation.
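For instance, an inverted condition compiles cleanly; only a behavioral assertion catches it (the function and field names below are hypothetical):

```python
def show_banner(user: dict) -> bool:
    # Conditional rendering under test: the banner should appear only for
    # users who have not yet verified their account.
    return not user.get("verified", False)

# Behavioral assertions; a successful build alone would not catch an
# accidentally inverted condition here.
assert show_banner({"verified": False}) is True
assert show_banner({"verified": True}) is False
```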
If you have attempted 3+ fixes and none resolved the issue:
Stop fixing. Question the architecture.
Signals of an architectural problem:
Action: Discuss with the user before attempting more fixes. This is not a failed hypothesis; this is likely a wrong architecture.
If the user rejects 2 consecutive fix proposals:
Stop proposing. Return to diagnosis.
A rejection is any user response that does not approve proceeding with the proposed fix. Partial approvals ("direction is right but...") count as rejections; they indicate the proposal was insufficient.
Produce new [Value Domain Check] or [Path Check] output before making another proposal. Continuing to propose without deeper investigation = repeating the same mistake with different words.
When the bug spans multiple components (e.g., CI -> build -> signing, API -> service -> database):
Before proposing any fix, add diagnostic instrumentation at each component boundary:
For EACH component boundary:
- Log what data enters the component
- Log what data exits the component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE the break occurs. Then narrow investigation to the specific failing component. Do not guess which layer is broken.
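The per-boundary logging can be sketched as a decorator applied at each component's entry point; the logger setup and component names are illustrative:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("boundary")

def trace_boundary(component: str):
    """Log what enters and exits a component, to locate WHERE the break occurs."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            log.info("%s <- in: args=%r kwargs=%r", component, args, kwargs)
            result = fn(*args, **kwargs)
            log.info("%s -> out: %r", component, result)
            return result
        return inner
    return wrap

# Hypothetical component in a CI -> build -> signing chain.
@trace_boundary("build")
def build(config: dict) -> dict:
    return {"artifact": "app.bin", "signed": config.get("sign", False)}
```

One instrumented run then shows, in the log, the last boundary where the data was still correct; investigation narrows to the component just after it.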
When the root cause isn't obvious from assertions alone: