From superpowers-plus
Proactively hunts the worst latent bugs in a codebase via an adversarial sub-agent, verifies candidates to reduce false positives, and ranks the top N by severity with file, line, mechanism, and failure mode. For audits, not for known failures.
`npx claudepluginhub bordenet/superpowers-plus --plugin superpowers-plus`

This skill uses the workspace's default tool permissions.
> **Wrong skill?** Known failure to debug → `sp-debug`. Security secrets/vulns → `sp-scan`. Code review of a diff → `code-review-battery`. PR inline review → `sp-review`.
Proactively find the highest-severity latent bugs in a codebase — bugs that cause silent failures, data corruption, incorrect behavior, or security issues — without waiting for them to surface in production.
NOT for: debugging a known failure (sp-debug), reviewing a PR diff (code-review-battery),
security credential scanning (sp-scan).
| Parameter | Default | Description |
|---|---|---|
| N | 2 | Number of worst bugs to return |
| scope | current repo | Directory or file glob to search |
| focus | all | `logic`, `security`, `data-loss`, `performance`, or `all` |
Parse from user message. If ambiguous, use defaults and note them.
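As a rough illustration of parsing these parameters with the defaults above, here is a minimal sketch. The names (`AuditParams`, `parse_audit_params`) and the keyword heuristics are hypothetical, not part of the skill itself:

```python
# Hypothetical sketch: applying the table's defaults when the user
# message is ambiguous. Heuristics here are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class AuditParams:
    n: int = 2            # default: return the 2 worst bugs
    scope: str = "."      # default: current repo
    focus: str = "all"    # logic, security, data-loss, performance, or all

def parse_audit_params(message: str) -> AuditParams:
    params = AuditParams()
    # "top 3", "worst 5", etc. override N
    if m := re.search(r"\b(?:top|worst)\s+(\d+)\b", message, re.IGNORECASE):
        params.n = int(m.group(1))
    # first focus keyword mentioned wins; otherwise keep "all"
    for f in ("logic", "security", "data-loss", "performance"):
        if f in message.lower():
            params.focus = f
            break
    return params
```

Anything not overridden falls back to a default, which should then be noted back to the user.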
Dispatch a single explore sub-agent with this instruction template:
```
You are doing an adversarial bug audit of <SCOPE>.
Find the worst <N> bugs — bugs that cause silent failures, data corruption,
incorrect behavior, or security issues. Focus on <FOCUS>.

For each candidate:
- Read the relevant code carefully (do not skim)
- Explain the exact lines involved
- Explain WHY it is a bug (not a style issue)
- Describe the failure mode (what actually goes wrong)
- Rate severity: CRITICAL / HIGH / MEDIUM / LOW

Rank all candidates by severity. Return the top <N> with exact file paths
and line numbers. Be thorough — read as many files as you need.
Do NOT attempt to verify candidates yourself during this phase — just collect.
```
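To make the target concrete, here is an illustrative (invented) example of the kind of latent bug the audit hunts for: a silent failure rather than a style issue. The function and scenario are hypothetical:

```python
# Illustrative HIGH-severity latent bug (hypothetical code, not from any
# real project): a broad `except` swallows corruption errors, so a damaged
# preferences file is silently treated as "no preferences" — the user's
# settings vanish with no error surfaced anywhere.
import json

def load_user_prefs(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:   # BUG: catches JSONDecodeError as well as FileNotFoundError
        return {}       # corrupt file and missing file are indistinguishable
```

The mechanism (over-broad exception handling) and the failure mode (silent data loss of user settings) are exactly the pair the sub-agent is asked to articulate for each candidate.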
For EACH candidate the sub-agent returned: re-read the cited code yourself, confirm the exact lines and mechanism, and mark the candidate ✅ confirmed or ❌ false positive. Reject anything without a concrete failure mode.
Present findings ranked by verified severity. For each confirmed bug:
```
### Bug #N — <title> (<VERIFIED_SEVERITY>)
**File:** `<path>`, `<function>()`, line <N>
**Mechanism:** <one sentence: what the code does wrong>
**Failure mode:** <what actually happens to the user/system>
**Evidence:** <exact code snippet, ≤8 lines>
**Fix sketch:** <one-paragraph description of the correct approach>
```
Include a brief note for each ❌ false positive explaining why it was rejected.
| Failure | Symptom | Recovery |
|---|---|---|
| Accepted sub-agent output without verification | Reported wrong line numbers or non-bugs | Re-read actual code; mark as unverified until confirmed |
| Searched only obvious files | Missed bugs in utility code or error paths | Expand scope; check all callers of the suspicious function |
| Confused style issues for bugs | Low-severity "findings" crowd out real bugs | Re-apply severity rubric: must have a concrete failure mode |
| Sub-agent timed out or missed files | Incomplete exploration | Manually read the high-risk files from Phase 1's list |
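The "style issue vs. real bug" distinction in the table above can be sketched with an invented pair of examples. Both functions are hypothetical; only the second has a concrete failure mode and would survive the rubric:

```python
# Style issue (NOT a bug): shadowing the builtin `list` is ugly,
# but the function behaves correctly for every input. No failure mode.
def count_items(list):
    return len(list)

# Real bug (HIGH): a mutable default argument is created once and shared
# across calls, so tags from one call leak into the next — a concrete,
# reproducible failure mode, which is what qualifies it as a finding.
def append_tag(tag, tags=[]):
    tags.append(tag)
    return tags
```

A finding is reportable only when, like the second case, you can state what actually goes wrong for the user or system.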