Fix bugs using test-driven debugging and verification
Systematically fixes bugs using test-driven debugging with comprehensive verification and quality checks.
Install: `npx claudepluginhub mgiovani/cc-arsenal`
Usage: `/dev:fix-bug [bug_description_or_issue_id] [--branch name] [--interactive]`
Fix bugs systematically using test-driven development, root cause analysis, and comprehensive verification across any project type.
CRITICAL: Bug fixes must be based on ACTUAL code and VERIFIED test results:
$ARGUMENTS
Use TodoWrite to track progress through each phase.
Phase 0: Discover Project Workflow
Before any debugging, discover how this project works:
Use Task tool with Explore agent:
- prompt: "Discover the development workflow for this project:
1. Read CLAUDE.md if it exists - extract debugging and testing conventions
2. Check for task runners: Makefile, justfile, package.json scripts, pyproject.toml scripts
3. Identify the test command (e.g., make test, just test, npm test, pytest, bun test)
4. Identify how to run a single test or test file
5. Identify the lint command (e.g., make lint, npm run lint, ruff check)
6. Identify the type-check command if applicable
7. Identify the dev server command if this is a web app
8. Check for debugging tools (pytest -v, npm run test:debug, etc.)
9. Note any pre-commit hooks or quality gates
Return a structured summary of all available commands."
- subagent_type: "Explore"
Store discovered commands for use in later phases. Example output:
Project Commands:
- Test All: `make test` or `pytest`
- Test Single: `pytest path/to/test.py::test_name` or `npm test -- path/to/test.spec.ts`
- Test Watch: `pytest --watch` or `npm run test:watch`
- Lint: `make lint` or `ruff check`
- Type Check: `make type-check` or `tsc --noEmit`
- Dev Server: `make dev` or `npm run dev`
- Debug: `pytest -vv -s` or `npm run test:debug`
IMPORTANT: Never assume which test framework or tools are available. Use only the discovered commands.
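To make Phase 0 concrete, here is a sketch of the kind of marker-file heuristic the Explore agent applies. The file-to-command mapping is illustrative, not exhaustive, and real discovery should prefer CLAUDE.md and task-runner definitions over these guesses.

```python
import tempfile
from pathlib import Path

def discover_test_command(project_root: str):
    """Heuristic sketch: infer a test command from common marker files.
    Real discovery reads CLAUDE.md and task-runner files first (Phase 0)."""
    root = Path(project_root)
    if (root / "Makefile").exists():
        return "make test"
    if (root / "justfile").exists():
        return "just test"
    if (root / "package.json").exists():
        return "npm test"
    if (root / "pyproject.toml").exists():
        return "pytest"
    return None  # no recognizable runner; ask the user

# Demonstration on a throwaway project directory.
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "pyproject.toml").touch()
    cmd = discover_test_command(d)
print(cmd)  # → pytest
```

Note the precedence: a task runner (Makefile, justfile) wins over a raw framework guess, since projects usually wrap test invocation with extra setup.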
Phase 1: Analysis & Reproduction
Goal: Understand the bug, locate it in code, and reproduce it reliably.
Use TodoWrite to track analysis:
TodoWrite:
- [ ] Understand bug symptoms and expected behavior
- [ ] Locate failing test or create reproduction test
- [ ] Run test to confirm failure (with evidence)
- [ ] Identify bug location in codebase
- [ ] Analyze root cause with evidence
Step 1.1: Understand Bug Symptoms
If the user provided an issue ID or bug description:
Fetch the details (`gh issue view` or `jira issue view`, if available). If the user didn't provide clear symptoms, use AskUserQuestion:
Ask user:
1. What is the expected behavior?
2. What actually happens (error messages, incorrect output)?
3. How to reproduce (steps, inputs, conditions)?
4. Is there an existing failing test?
Step 1.2: Locate or Create Failing Test
Search for existing test coverage:
Use Grep tool:
- pattern: [relevant test pattern based on bug description]
- output_mode: "files_with_matches"
- path: "tests/" or "test/" or "__tests__/" (discovered in Phase 0)
If no test exists for this bug, write a new failing test that encodes the expected behavior before changing any code.
Step 1.3: Reproduce the Bug
Run the specific test using discovered test command:
# Example (use actual discovered command from Phase 0):
pytest tests/test_feature.py::test_bug_case -vv
# or
npm test -- path/to/test.spec.ts
CRITICAL: Verify the test actually fails. Capture error output.
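Capturing the failure as evidence can be as simple as running the discovered command through a subprocess and keeping both the exit code and the combined output. The command below is a stand-in; substitute the one discovered in Phase 0.

```python
import subprocess
import sys

def run_and_capture(cmd):
    """Run a test command; return exit code plus combined output as evidence."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode, result.stdout + result.stderr

# Stand-in for e.g. `pytest tests/test_feature.py::test_bug_case -vv`.
code, evidence = run_and_capture(
    [sys.executable, "-c", "print('boom'); raise SystemExit(1)"]
)
print(code)  # a nonzero exit code confirms the test actually fails
```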
Step 1.4: Root Cause Analysis
Use parallel subagents for comprehensive analysis:
Agent 1 - Bug Location:
- prompt: "Based on the failing test output and error message:
[PASTE ERROR OUTPUT]
Use Grep and Read tools to:
1. Find the exact location of the bug (file path, line numbers)
2. Read the buggy code and surrounding context
3. Identify why the code produces the wrong behavior
4. Provide evidence (stack trace, variable values, control flow)
Return: File paths, line numbers, root cause explanation with evidence."
- subagent_type: "Explore"
Agent 2 - Impact Analysis:
- prompt: "For the bug in [LOCATION], search the codebase to find:
1. Other code that might be affected by the same issue
2. Similar patterns that might have the same bug
3. Related tests that might also fail
4. Dependencies or callers of the buggy code
Return: List of potentially affected files and patterns."
- subagent_type: "Explore"
Agent 3 - Research Similar Bugs (if external library involved):
- prompt: "Search for documented solutions or patterns for this type of bug:
[BUG DESCRIPTION]
Focus on: official docs, known issues, recommended fixes.
Return: Best practices and recommended solutions."
- subagent_type: "general-purpose"
Step 1.5: Confirm Root Cause
Before proceeding, verify your analysis: the root cause should explain every observed symptom, not just the failing test.
If uncertain, use AskUserQuestion to validate your understanding.
Phase 2: Fix Planning
Goal: Design a minimal, focused fix that addresses the root cause without side effects.
TodoWrite:
- [ ] Design fix approach
- [ ] Identify files to modify
- [ ] Plan test coverage for fix
- [ ] Consider edge cases and side effects
- [ ] Get user approval if fix is non-trivial
Step 2.1: Design Fix Strategy
Use a subagent for fix design:
Agent - Fix Design:
- prompt: "Design a fix for this bug:
Root Cause: [EXPLANATION FROM PHASE 1]
Bug Location: [FILE:LINE]
Failing Test: [TEST DESCRIPTION]
Design a fix that:
1. Addresses ONLY the root cause (no refactoring)
2. Follows existing code patterns in this project
3. Has minimal scope (fewest lines changed)
4. Doesn't introduce breaking changes
5. Handles edge cases identified
Consider multiple approaches and recommend the best one.
Return: Proposed fix with explanation and trade-offs."
- subagent_type: "general-purpose"
Step 2.2: Plan Test Coverage
Ensure the fix will be properly tested: the reproduction test from Phase 1 must cover the root cause, and new tests should cover any edge cases surfaced by the impact analysis.
Step 2.3: Get Approval for Non-Trivial Fixes
If the fix involves multiple files, public API or behavioral changes, or non-obvious trade-offs:
Use AskUserQuestion to present the plan and get approval:
Ask user:
"I've identified the root cause: [EXPLANATION]
Proposed fix:
[DESCRIPTION]
This will modify:
- [FILE 1]: [CHANGES]
- [FILE 2]: [CHANGES]
Alternatives considered:
- [ALTERNATIVE 1]: [WHY NOT CHOSEN]
Proceed with this approach? (y/n/suggest alternative)"
Phase 3: Implementation
Goal: Implement the fix following the plan, ensuring tests pass.
TodoWrite:
- [ ] Implement the fix
- [ ] Verify failing test now passes
- [ ] Run full test suite
- [ ] Fix any new failures
- [ ] Verify no linting errors
Step 3.1: Implement the Fix
Use the Edit tool to make minimal, focused changes:
For each file identified in Phase 2:
1. Read the file to confirm current state
2. Apply the fix using Edit tool
3. Verify the edit matches the plan
Step 3.2: Verify the Fix Locally
Run the specific failing test using discovered command:
# Use actual command from Phase 0
pytest tests/test_feature.py::test_bug_case -vv
CRITICAL: The test must now PASS. If it doesn't, return to root cause analysis rather than layering patches on symptoms.
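As a hypothetical illustration of "the test must now pass" (the `slugify` function and its bug are invented for this sketch): a minimal fix touches only the faulty expression, and re-running the reproduction test confirms it.

```python
import re

def slugify(title):
    # Fixed: collapse any run of whitespace into a single hyphen.
    # The root cause was a naive single-space replace that left
    # doubled hyphens for inputs with repeated spaces.
    return re.sub(r"\s+", "-", title.strip()).lower()

def test_slugify_collapses_repeated_spaces():
    assert slugify("Hello  World") == "hello-world"

test_slugify_collapses_repeated_spaces()
print("PASS")  # → PASS
```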
Step 3.3: Test for Side Effects
Check if the fix broke anything else:
For projects with test suites:
# Use discovered test command from Phase 0
make test
# or
npm test
# or
pytest
For projects with browser/UI testing capability, follow the browser testing workflow described below.
Step 3.4: Fix Any New Failures
If tests fail, determine whether the fix caused the regression or exposed a related bug, and resolve it before moving on.
Phase 4: Quality Verification
Goal: Ensure the fix meets all quality standards before committing.
TodoWrite:
- [ ] All tests pass (including the previously failing test)
- [ ] No linting errors
- [ ] No type errors (if applicable)
- [ ] Code follows project conventions
- [ ] No unnecessary changes
Run all quality checks using discovered commands from Phase 0:
# Example (use actual discovered commands):
make test && make lint && make type-check
# or
npm test && npm run lint && npm run build
# or
pytest && ruff check . && pyright
Quality Gates Checklist:
- All tests pass, including the previously failing test
- No linting or type errors
- Diff is minimal and follows project conventions
If any check fails: Fix the issue before proceeding. Do not commit broken code.
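The gate-then-commit rule can be sketched as a runner that executes each discovered command in order and stops at the first failure. The two stand-in commands below are placeholders for whatever Phase 0 discovered (e.g. `make test`, `make lint`).

```python
import subprocess
import sys

def run_quality_gates(commands):
    """Run each quality-gate command in order; stop at the first failure."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, cmd  # do not commit; fix this gate first
    return True, None

# Placeholders for the discovered commands from Phase 0.
gates = [
    [sys.executable, "-c", "print('tests ok')"],
    [sys.executable, "-c", "print('lint ok')"],
]
ok, failed_cmd = run_quality_gates(gates)
print(ok)  # → True
```

Stopping at the first failure mirrors the checklist above: a later gate's result is meaningless while an earlier one is red.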
Phase 5: Commit
Goal: Create a proper conventional commit documenting the fix.
If /cc-arsenal:git:commit skill is available:
/cc-arsenal:git:commit
Otherwise, create a conventional commit manually:
git add [files modified]
git commit -m "fix: [concise description of what was fixed]
- [Detail about root cause]
- [Detail about solution approach]
- [Reference to issue/ticket if applicable]
Closes #[ISSUE_NUMBER]"
Commit Message Guidelines:
- Type: `fix:` (for bug fixes)
- Scope: `(module)` if applicable
Example:
fix(auth): prevent token expiration on page reload
The authentication middleware was not refreshing tokens when
the user reloaded the page, causing premature logouts.
This fix adds token refresh logic to the middleware that
runs on every page load, checking token expiration and
refreshing when needed.
Closes #123
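A quick sanity check of the subject line against the conventional-commit shape can be sketched as follows; the accepted types here are an assumption, so match them to the project's actual convention.

```python
import re

# Assumed set of commit types; adjust to the project's convention.
CONVENTIONAL = re.compile(r"^(fix|feat|docs|refactor|test|chore)(\([a-z0-9-]+\))?: .+")

def is_conventional(subject: str) -> bool:
    """Check a commit subject against the `type(scope): description` shape."""
    return bool(CONVENTIONAL.match(subject))

print(is_conventional("fix(auth): prevent token expiration on page reload"))  # → True
print(is_conventional("fixed the auth bug"))  # → False
```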
Report to the user:
## Bug Fix Complete ✅
**Bug**: [DESCRIPTION]
**Root Cause**: [EXPLANATION with file:line references]
**Solution**: [WHAT WAS CHANGED]
**Files Modified**:
- [FILE 1]: [CHANGE DESCRIPTION]
- [FILE 2]: [CHANGE DESCRIPTION]
**Test Results**:
- Previously failing test: ✅ PASS
- Full test suite: ✅ [X/X tests passed]
- Linting: ✅ No errors
- Type checking: ✅ No errors
**Commit**: [COMMIT SHA] - [COMMIT MESSAGE]
**Next Steps**:
- [ ] Push to remote: `git push`
- [ ] Create PR: `/cc-arsenal:git:create-pr` (if available)
- [ ] Manual testing: [STEPS if applicable]
Parse optional arguments from the command invocation:
Optional Flags:
- `--branch name` or `-b name`: Create fix on a specific branch (default: current branch)
- `--interactive` or `-i`: Prompt for confirmation at each phase
- `--test-only`: Only reproduce and analyze the bug, don't implement a fix
Examples:
/dev:fix-bug User login fails with invalid session error
/dev:fix-bug #123 --branch fix/auth-session-bug
/dev:fix-bug "Payment webhook returns 500 error" --interactive
/dev:fix-bug JIRA-456 --test-only
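The flag handling above can be sketched as a small parser; the option names come from the list above, while the helper function itself is hypothetical.

```python
def parse_fix_bug_args(tokens):
    """Sketch: split a /dev:fix-bug invocation into flags and a description."""
    opts = {"branch": None, "interactive": False, "test_only": False}
    words = []
    it = iter(tokens)
    for tok in it:
        if tok in ("--branch", "-b"):
            opts["branch"] = next(it, None)  # flag consumes the next token
        elif tok in ("--interactive", "-i"):
            opts["interactive"] = True
        elif tok == "--test-only":
            opts["test_only"] = True
        else:
            words.append(tok)  # free text becomes the bug description
    opts["description"] = " ".join(words)
    return opts

print(parse_fix_bug_args(["#123", "--branch", "fix/auth-session-bug"]))
```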
If the bug affects a web UI and the agent-browser skill is available:
After Phase 3 (Implementation), before Phase 4:
# Use discovered dev command from Phase 0
make dev
# or
npm run dev
Use agent-browser skill to:
1. Navigate to the affected page/component
2. Reproduce the original bug scenario
3. Verify the bug is now fixed
4. Test edge cases
5. Take screenshots as evidence
When to Skip Browser Testing: the bug is backend-only, no dev server is available, or the agent-browser skill is not installed.
If you encounter issues during any phase:
Test Failures After Fix: determine whether the fix caused the regression or exposed a related bug; address it before committing rather than reverting blindly.
Ambiguous Bug Description: use AskUserQuestion to clarify expected behavior, reproduction steps, and environment.
Cannot Reproduce Bug: report exactly what was attempted, with command output as evidence, and ask the user for more precise reproduction steps.
Multiple Root Causes: fix one root cause at a time, each with its own test and commit, and report the others as follow-ups.
Example 1 - plain description:
/dev:fix-bug User profile page throws 404 on refresh
Process: reproduce with a failing test, apply the fix, verify with `npm test` and `npm run lint`, commit as `fix(routing): add profile route configuration`.
Example 2 - issue ID on a dedicated branch:
/dev:fix-bug #789 --branch fix/payment-webhook
Process: fetch details with `gh issue view 789`, create the branch, fix and verify with `make test` and `make lint`, commit as `fix(webhooks): validate payment signatures` with `Closes #789` in the body.
Example 3 - interactive mode:
/dev:fix-bug "Auth tokens expire too quickly" --interactive
Process: same workflow, pausing for user confirmation at the end of each phase.
Example 4 - analysis only:
/dev:fix-bug Database query timeout in reports --test-only
Process: reproduce the bug, write the failing test, and report the root cause analysis without implementing a fix.
Before reporting completion, verify that every TodoWrite item across all phases is checked off and all quality gates pass.
Reference: Based on /dev:implement-feature pattern with test-driven debugging workflow
Output: Bug fix with passing tests, quality checks, and conventional commit