Use when debugging regressions, finding which commit introduced a bug, or answering "when did this break" / "this used to work" questions. Provides systematic git bisect workflow with automated test scripts, manual verification, or hybrid approaches. Activates for performance regressions, test failures that appeared recently, or any issue known to have worked at a previous commit. Can be invoked from systematic-debugging or used standalone. Not for general debugging without a known-good commit or regression history.
Systematically identifies which commit introduced a bug using git bisect with automated, manual, or hybrid verification workflows.
Install: `npx claudepluginhub sjungling/sjungling-claude-plugins`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Systematically identify which commit introduced a bug or regression using git bisect. This skill provides a structured workflow for automated, manual, and hybrid bisect approaches.
Core principle: Binary search through commit history to find the exact commit that introduced the issue. Main agent orchestrates, subagents execute verification at each step.
Announce at start: "I'm using git-bisect-debugging to find which commit introduced this issue."
| Phase | Key Activities | Output |
|---|---|---|
| 1. Setup & Verification | Identify good/bad commits, verify clean state | Confirmed commit range |
| 2. Strategy Selection | Choose automated/manual/hybrid approach | Test script or verification steps |
| 3. Execution | Run bisect with subagents | First bad commit hash |
| 4. Analysis & Handoff | Show commit details, analyze root cause | Root cause understanding |
This skill focuses on straightforward scenarios. It does NOT handle:
- Complex merge histories (may require `git bisect start --first-parent`)

For these scenarios, manual git bisect with user guidance is recommended.
These are non-negotiable. No exceptions for time pressure, production incidents, or "simple" cases:
1. ANNOUNCE skill usage at start: "I'm using git-bisect-debugging to find which commit introduced this issue."
2. CREATE TodoWrite checklist immediately (before Phase 1).
3. VERIFY safety checks (Phase 1 - no skipping), including a clean working tree (`git status`).
4. USE AskUserQuestion for strategy selection (Phase 2).
5. LAUNCH subagents for verification (Phase 3).
6. HANDOFF to systematic-debugging (Phase 4).
If you catch yourself thinking ANY of these, you're about to violate the skill:
- "The working tree is probably clean": run `git status` to verify.

All of these mean: STOP. Follow the 4-phase workflow exactly.
Copy this checklist to track progress:
Git Bisect Progress:
- [ ] Phase 1: Setup & Verification (good/bad commits identified)
- [ ] Phase 2: Strategy Selection (approach chosen, script ready)
- [ ] Phase 3: Execution (first bad commit found)
- [ ] Phase 4: Analysis & Handoff (root cause investigation complete)
Purpose: Ensure git bisect is appropriate and safe to run.
Steps:

1. Verify prerequisites: clean working tree (`git status`).
2. Identify commit range:
   - Good commit: find candidates with `git log --oneline`, `git tag`, or `git log --since="last week"`
   - Bad commit: `HEAD` or a specific commit where the issue is confirmed
3. Verify the range:
Safety checks:
Output: Confirmed good commit hash, bad commit hash, estimated steps
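The Phase 1 checks above can be sketched end-to-end. This is a minimal, self-contained demonstration on a fabricated throwaway repository (the commits, refs, and file names are invented for illustration, not taken from a real project): it resolves both endpoints of the range and estimates the bisect step count from the number of commits in between.

```shell
#!/bin/sh
# Sketch only: Phase 1 range verification on a throwaway repo.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email bisect@example.com
git config user.name bisect-demo
# Fabricate 8 commits; the oldest stands in for the known-good ref.
for i in 1 2 3 4 5 6 7 8; do
  echo "$i" > f.txt
  git add f.txt
  git commit -qm "commit $i"
done
GOOD=$(git rev-list --max-parents=0 HEAD)            # oldest commit = known good
BAD=HEAD                                             # issue confirmed here
git rev-parse --verify "$GOOD^{commit}" >/dev/null   # both refs must resolve
git rev-parse --verify "$BAD^{commit}" >/dev/null
count=$(git rev-list --count "$GOOD..$BAD")          # commits strictly after GOOD
echo "commits in range: $count"                      # 7 commits -> ~3 bisect steps
```

Because bisect is a binary search, the step estimate is roughly log2 of the commit count, which is why 47 commits takes only about 6 verification steps.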
Purpose: Choose the most efficient bisect approach.
Assessment: Can we write an automated test script that deterministically identifies good vs bad?
MANDATORY: Use AskUserQuestion to present three approaches:
If automated or hybrid selected:
Write test script following this template:
#!/bin/bash
# Exit codes: 0 = good, 1 = bad, 125 = skip (can't test)
# Setup/build (required for each commit)
npm install --silent 2>/dev/null || exit 125
# Run the actual test
npm test -- path/to/specific-test.js
exit $?
Script guidelines:
If manual selected:
Write specific verification steps for subagent:
Good example:
1. Run `npm start`
2. Open browser to http://localhost:3000
3. Click the "Login" button
4. Check if it redirects to /dashboard
5. Respond 'good' if redirect happens, 'bad' if it doesn't
Bad example:
See if the login works
Output: Selected approach, test script (if automated/hybrid), or verification steps (if manual)
Architecture: Main agent orchestrates bisect, subagents verify each commit in isolated context.
Main agent responsibilities:
- Run all `git bisect` commands (`start`, `good`, `bad`, `reset`)

Subagent responsibilities:
Execution flow:
Main agent: Run git bisect start <bad> <good>
Loop until bisect completes:
a. Git checks out a commit to test
b. Main agent launches subagent using Task tool:
For automated:
Prompt: "Run this test script and report the result:
<script content>
Report 'good' if exit code is 0, 'bad' if exit code is 1, 'skip' if exit code is 125.
Include the output of the script in your response."
For manual:
Prompt: "We're testing commit <hash> (<message>).
Follow these verification steps:
<verification steps>
Report 'good' if the issue doesn't exist, 'bad' if it does exist.
Explain what you observed."
For hybrid:
Prompt: "Run this test script:
<script content>
If exit code is 0 or 1, report that result.
If exit code is 125 or script is ambiguous, perform manual verification:
<verification steps>
Report 'good', 'bad', or 'skip' with explanation."
c. Subagent returns: Result ("good", "bad", or "skip") with explanation
d. Main agent: Run git bisect good|bad|skip based on result
e. Main agent: Update progress (e.g. `git bisect log | grep "# .*step" | tail -1`)
f. Repeat until git bisect identifies the first bad commit
Main agent: Run git bisect reset to cleanup
Main agent: Return to original branch/commit
Error handling during execution:
- If a commit can't be tested: `git bisect skip`
- Ensure `git bisect reset` runs in cleanup
- Always `git bisect reset` on success or failure, then return to the original branch

Output: First bad commit hash, bisect log showing the path taken
Purpose: Present findings and analyze root cause.
Steps:
Present the identified commit:
Found first bad commit: <hash>
Author: <author>
Date: <date>
<commit message>
Files changed:
<list of files from git show --stat>
Show how to view details:
- View full diff: `git show <hash>`
- View file at that commit: `git show <hash>:<file>`
Handoff to root cause analysis:
"Having identified <hash>, I'm using systematic-debugging to analyze why this change caused the issue."

Output: Root cause understanding of why the commit broke functionality
- Use `git log --since="2 weeks ago"` to find a starting point

| Issue Type | Recommended Approach | Script/Steps Example |
|---|---|---|
| Test failure | Automated | npm test -- failing-test.spec.js |
| Crash/error | Automated | `node app.js 2>&1 \| …` |
| Performance | Automated | `time npm run benchmark \| awk '{if ($1 > 5.0) exit 1}'` |
| UI/UX change | Manual | "Click X, verify Y appears" |
| Behavior change | Manual or Hybrid | Script to check, manual to confirm subjective aspects |
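For the performance row, a hedged sketch of the test-script shape follows. Here `sleep 1` stands in for the real benchmark command (such as the table's `npm run benchmark`, which is an example, not a required script name); measuring elapsed seconds directly tends to be more robust in a bisect script than piping `time`, since `time` writes to stderr.

```shell
#!/bin/sh
# Sketch only: a performance bisect check with a 5-second threshold.
start=$(date +%s)
sleep 1                        # stand-in for the benchmark under test
end=$(date +%s)
elapsed=$((end - start))
echo "elapsed: ${elapsed}s"
if [ "$elapsed" -le 5 ]; then
  result=good                  # a real bisect script would: exit 0
else
  result=bad                   # a real bisect script would: exit 1
fi
echo "$result"
```

The threshold (5 seconds here) is an assumption; pick one comfortably between the known-good and known-bad timings so measurement noise does not flip a bisect step.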
User: "The login test started failing sometime in the last 50 commits."
[Phase 1] git status -> clean. Good: v1.2.0 tag, Bad: HEAD. Verified both. 47 commits, ~6 steps.
[Phase 2] AskUserQuestion -> User selects Automated.
Script: npm install --silent 2>/dev/null || exit 125 && npm test -- tests/login.spec.js
[Phase 3] Subagent tests at each bisect step:
abc123 -> bad (~3 left), def456 -> good (~2 left), ghi789 -> bad (~1 left), jkl012 -> good
Result: ghi789 is first bad commit
[Phase 4] ghi789: "feat: update authentication middleware" (src/auth/middleware.js)
-> Handoff to systematic-debugging for root cause analysis
User: "The dashboard layout looks wrong, but I'm not sure when it broke."
[Phase 1] git status -> clean. Good: 2 weeks ago, Bad: HEAD. 89 commits, ~7 steps.
[Phase 2] AskUserQuestion -> User selects Manual.
Steps: Run `npm run dev`, check sidebar/content layout at localhost:3000/dashboard
[Phase 3] Subagent presents verification steps at each commit, user reports good/bad:
abc123 -> good, def456 -> bad, ... narrows to mno345
Result: mno345 is first bad commit
[Phase 4] mno345: "refactor: migrate to CSS Grid layout" (Dashboard.css)
-> Handoff to systematic-debugging for root cause analysis
- If bisect goes wrong: `git bisect reset`, then `git checkout main`. Restart with a better range/script.

When to use: Historical bugs, regressions, "when did this break" questions
Key strengths:
Remember:
- Always finish with `git bisect reset`