Analyzes Git history for code hotspots, temporal coupling, and applies Adam Tornhill's code-as-crime-scene methodology in code reviews.
```
npx claudepluginhub incubyte/ai-plugins --plugin bee
```

This skill uses the workspace's default tool permissions.
Identifies the riskiest files in a codebase using git churn analysis, complexity metrics, coupling, and lenskit risk scores to surface technical debt hotspots.
Analyzes git history to find code hotspots, temporal coupling between files, contributor knowledge distribution, and bus factor risks. Useful for queries on code ownership, frequent changes, or evolution.
Identifies and explains churn hotspots and unstable code clusters in git repos. Groups files by directory, filename stem (e.g. foo.go + foo_test.go), and co-changes; classifies clusters as unstable, buggy, tightly coupled, etc.
Not all tech debt is equal. Code that changes often and is hard to understand costs you every sprint. Code that's ugly but stable and rarely touched costs you nothing. Focus where it hurts.
This methodology combines two lenses:
A hotspot is a file that is both frequently changed AND complex. Frequent changes alone don't mean trouble (a config file that gets updated often is fine). Complexity alone doesn't mean trouble (a complex algorithm that never changes is stable). The intersection is where bugs breed.
Use `git log --format=format: --name-only` over a meaningful time window (3-6 months is typical) to count how often each file appears in commits. Rank by frequency. The top 10-20% are your high-churn files.
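A minimal sketch of that counting step, assuming a local repository and Python at hand (the six-month window and the 20% cutoff are illustrative defaults, not fixed by the skill):

```python
import subprocess
from collections import Counter

def churn(since="6 months ago", path="."):
    """Count how often each file appears in commits since the given date."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", "--format=format:", "--name-only", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each commit contributes one (empty) header line plus its touched file paths.
    return Counter(line for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    ranked = churn().most_common()
    cutoff = max(1, len(ranked) // 5)  # top ~20% are the high-churn candidates
    for file, count in ranked[:cutoff]:
        print(f"{count:4d}  {file}")
```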
For scoped reviews (a folder, a module), run the analysis on the scoped path but also check whether the scoped files appear in the repo-wide top 20%.
Full cyclomatic complexity analysis requires language-specific tooling. As a practical proxy, use simple language-agnostic signals such as file length and indentation depth (deeply nested code is almost always hard-to-follow code).
Combine change frequency and complexity into a simple risk score:
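The exact formula matters less than the ranking it produces. One workable formulation, offered as an illustration rather than the skill's own prescription: normalize churn and a cheap complexity proxy (total indentation, in this sketch) to a 0-1 range and multiply them, so only files that score high on both rank as hotspots.

```python
from collections import Counter

def complexity_proxy(path):
    """Cheap complexity proxy: total leading whitespace across all lines."""
    try:
        with open(path, encoding="utf-8", errors="ignore") as f:
            return sum(len(line) - len(line.lstrip()) for line in f)
    except OSError:
        return 0  # file was deleted or renamed since the start of the window

def risk_scores(churn: Counter):
    """risk = normalized churn * normalized complexity, reusing counts from the sketch above."""
    max_churn = max(churn.values(), default=1)
    complexity = {f: complexity_proxy(f) for f in churn}
    max_cx = max(complexity.values(), default=0) or 1
    scores = {f: (n / max_churn) * (complexity[f] / max_cx) for f, n in churn.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```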
Files that consistently appear in the same commits — especially files in different modules — reveal hidden dependencies. The files don't import each other, yet a change in one requires a change in the other.
Analyze commit history for file co-occurrence:
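One way to sketch that, assuming the same local-repository setup as above: walk the log commit by commit, record the set of files in each, and count how often pairs co-occur. The commit-size cap, the minimum shared-commit count, and the coupling ratio below are illustrative choices, not prescribed by this skill.

```python
import subprocess
from collections import Counter
from itertools import combinations

MARKER = "@@commit@@"  # unique separator so commits can be split apart reliably

def co_changes(since="6 months ago", max_files_per_commit=30, min_shared=5):
    """Count how often each pair of files is touched in the same commit."""
    log = subprocess.run(
        ["git", "log", f"--since={since}", f"--pretty=format:{MARKER}", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout
    pair_counts, file_counts = Counter(), Counter()
    for block in log.split(MARKER):
        files = sorted({line for line in block.splitlines() if line.strip()})
        if not files or len(files) > max_files_per_commit:
            continue  # skip empty commits and bulk sweeps (formatting passes, renames)
        file_counts.update(files)
        pair_counts.update(combinations(files, 2))
    # Coupling ratio: shared commits divided by the commit count of the less-changed file.
    return sorted(
        ((a, b, n, n / min(file_counts[a], file_counts[b]))
         for (a, b), n in pair_counts.items() if n >= min_shared),
        key=lambda row: row[3],
        reverse=True,
    )
```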
Same-directory co-occurrence is expected (related files change together). Cross-directory co-occurrence often reveals:
Beyond temporal coupling (behavioral), analyze structural coupling:
Files/modules with many dependents are high-impact change targets. A change here ripples outward. These should be stable and well-tested.
Files/modules that depend on many others are fragile — any dependency changing can break them. These are candidates for simplification.
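Import statements can stand in for the dependency graph on a first pass. A sketch for a Python codebase follows; the regex-based matching is deliberately crude and purely illustrative, where real analysis would lean on a proper parser or a dependency tool for the language at hand.

```python
import re
from collections import Counter, defaultdict
from pathlib import Path

IMPORT_RE = re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE)

def dependency_graph(root="."):
    """Crude module graph: which local modules does each .py file mention in its imports?"""
    files = list(Path(root).rglob("*.py"))
    local = {p.stem for p in files if p.stem != "__init__"}
    graph = defaultdict(set)
    for path in files:
        text = path.read_text(encoding="utf-8", errors="ignore")
        for match in IMPORT_RE.finditer(text):
            for segment in match.group(1).split("."):
                if segment in local and segment != path.stem:
                    graph[path.stem].add(segment)
    return graph

def fan_in_out(graph):
    """Fan-out: modules a file depends on. Fan-in: modules that depend on it."""
    fan_in = Counter(dep for deps in graph.values() for dep in deps)
    every = set(graph) | set(fan_in)
    return {m: {"fan_out": len(graph.get(m, ())), "fan_in": fan_in[m]} for m in every}
```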
A single logical change (e.g., "add a new user role") that requires touching 5+ files is a design smell. The concept of "user role" is scattered instead of centralized. Look for duplicated conditionals such as `if (role === 'admin')` repeated in multiple places.
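As a hypothetical illustration (the role check and helper name below are invented for this sketch, not drawn from a real codebase), the usual fix is to give the scattered concept a single home:

```python
# Before: the "admin" concept is scattered; adding a new role means editing every call site.
def delete_user(actor, user):
    if actor.role == "admin":
        ...

def export_report(actor):
    if actor.role == "admin":
        ...

# After: the policy lives in one place; adding a new role touches only this function.
def can_administer(actor) -> bool:
    return actor.role in {"admin", "owner"}

def delete_user(actor, user):
    if can_administer(actor):
        ...
```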
Issues that should be fixed before the next release. Evidence of active harm.
Issues worth addressing in the next few sprints. They slow the team down or make the codebase harder to work with.
Issues that would make the code nicer but aren't causing active problems. Address opportunistically.
Every finding gets an effort tag so the team can plan:
Good commit messages are a form of documentation. They explain WHY a change was made, not just WHAT changed. A codebase with good commit messages is easier to debug (`git bisect`), easier to review (`git log`), and easier to onboard into.
Red flags:
Good patterns:
`feat(auth): add rate limiting to login endpoint`

Code review is how teams learn from each other. Rubber-stamp reviews ("LGTM", approval with no comments) provide zero value — they're a process checkbox, not a learning opportunity.
Red flags:
What good reviews look like:
The review should end with an actionable roadmap, ordered by impact-to-effort ratio:
Quick wins with high impact — do these first. They're cheap and make the codebase meaningfully better. Examples: rename a confusing function in a hotspot file, add a missing test for a critical path, extract duplicated business logic.
Moderate efforts with high impact — schedule these. They need focused time but pay for themselves quickly. Examples: refactor a god class that's a change-frequency hotspot, add integration tests for an untested critical flow.
Significant investments — plan these as stories. They're expensive but address structural problems. Examples: decouple temporally-coupled modules, introduce an architectural boundary, redesign a subsystem that's a persistent hotspot.
Skip these (for now) — tech debt that's not actively hurting. Mention it for awareness but explicitly recommend NOT prioritizing it. The team's time is better spent on items 1-3.