Reviews GitHub PRs for bugs with 5 parallel AI passes, majority voting, Opus validation, inline comments, autofixes, and resolution rate tracking. Use /bug-review to run PR bug checks.
Multi-pass PR review agent with 5 parallel review passes, majority voting, independent Opus validation, and resolution rate learning. Posts inline PR comments and optionally generates autofix commits. Tracks whether findings get resolved at merge time and uses that signal to improve future reviews.
- `/bug-review <PR-number-or-URL>` to run a review
- `/bug-review:resolve <PR>` to classify resolutions after merge
- `/bug-review:report` for resolution rate statistics

On first run, verify:
- `gh` CLI is installed and authenticated (`gh auth status`)
- `jq` is installed (for JSON processing)
- `bc` is installed (for resolution rate calculations; pre-installed on most systems)

Read `config.json` for configuration (passes, vote threshold, models, category weights).
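A minimal sketch of that first-run check, assuming `gh`, `jq`, and `bc` are the only hard prerequisites (the names come from the list above; the real plugin may check differently):

```shell
# Report which of the required tools are missing from PATH.
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

check_tools gh jq bc || echo "install the tools above before running /bug-review"
```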
```
/bug-review <PR>
    |
    v
Fetch PR context + gather-context.sh
    |
    v
5 parallel passes (shuffled diffs, Sonnet) --> Aggregate & vote (3/5 majority)
    |
    v
Independent Opus validator --> Dedup --> Present findings --> Post + store
    |
 (later, after merge)
    v
/bug-review:resolve <PR> --> Classify resolutions --> Update category weights
```
- Check `${CLAUDE_PLUGIN_DATA}/bug-review/cache/pr-{N}/` — if a cache exists for the same head commit, offer to resume from the last checkpoint.
- Run `scripts/fetch-pr.sh <pr-identifier>` to get the PR diff + metadata as JSON.
- Run `scripts/gather-context.sh <changed-files-json>` to get prioritized context (callers, types, tests, repo rules).
- Read `.bug-review.md` from the repo root if it exists.
- Store the gathered context at `${CLAUDE_PLUGIN_DATA}/bug-review/cache/pr-{N}/context.json`.

For each pass (1-5), prepare a shuffled diff:
```
scripts/shuffle-diff.sh <pass-number> < pr.diff > pass-<N>.diff
```
Launch 5 Agent subprocesses in parallel. Read review-passes.md for the exact prompt for each pass.
Use the model configured as agent_model in config.json (default: "sonnet").
Each agent returns a JSON array of findings.
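One such finding might look like the sketch below; the field names are inferred from the findings table and the validator output described elsewhere in this document, not the plugin's actual schema:

```json
[
  {
    "id": "pass2-001",
    "file": "src/auth.ts",
    "line": 42,
    "severity": "high",
    "category": "null-deref",
    "title": "req.user dereferenced before the auth check",
    "description": "…",
    "trigger_scenario": "…",
    "suggested_fix": "…"
  }
]
```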
Save checkpoint: Write all pass results to ${CLAUDE_PLUGIN_DATA}/bug-review/cache/pr-{N}/pass-results.json
Aggregate findings across the passes and keep those whose vote count meets the majority threshold (`votes >= vote_threshold`). Score each surviving finding as `final_score = votes × severity_weight × category_weight`. If only 1-2 passes found bugs and the others found none, present the findings but note they lack consensus.
Save checkpoint: Write voted findings to cache.
Launch a separate Agent using validator_model from config.json (default: "opus").
This agent has NOT seen the review passes. It receives only the voted findings and the original code. Read the Validator section in review-passes.md for the prompt.
For each finding, the validator outputs: {id, verdict: "KEEP"|"DISCARD", confidence, reasoning}
Remove DISCARDed findings. Multiply each finding's score by the validator's confidence.
Compute each finding's final confidence field:
confidence = (votes / total_passes) × validator_confidence
Findings with confidence < 0.5 are shown with a "low confidence" warning.
Save checkpoint: Write validated findings to cache.
Run scripts/dedup.sh <pr-number> to get existing [bug-review] comments.
Match by location proximity (file + line within +/-10) and category — not text similarity.
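The proximity rule might look like this sketch (the +/-10 window and the file + category match come from the line above; the helper name and argument order are hypothetical):

```shell
# Findings match when file and category are equal and their line
# numbers differ by at most 10.
same_finding() {
  [ "$1" = "$4" ] || return 1   # file
  [ "$3" = "$6" ] || return 1   # category
  delta=$(( $2 - $5 ))          # line distance
  [ "${delta#-}" -le 10 ]
}

if same_finding src/auth.ts 42 null-deref src/auth.ts 48 null-deref; then
  echo "duplicate: do not repost"
fi
```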
Display a table:
| # | Severity | Confidence | File | Line | Title | Votes |
|---|----------|------------|------|------|-------|-------|
For each finding, show full description, trigger scenario, suggested fix, and validator reasoning.
Ask the user (using AskUserQuestion with multiSelect) which findings to approve for posting and which to autofix.
If no findings survived voting + validation: "No bugs found across 5 review passes. The changes look clean."
Write approved findings to a temporary JSON file, then run:
```
scripts/post-review.sh <pr-number> <findings-json-file>
```
Then persist findings for resolution tracking:
```
scripts/store-findings.sh <pr-number> <findings-json-file> <head-commit-sha>
```
For each finding selected for autofix:
- Apply the fix, then run `git diff --stat` — verify only the finding's file was modified and the diff is under 20 lines. If exceeded, revert and warn.
- Run the project's test suite (`npm test`, `go test ./...`, `pytest`, etc.).
- Commit with the message `fix: {title} [bug-review]`.
- If tests fail, revert the fix (`git checkout -- <file>`) and report to the user.

Safety: one commit per fix, run tests between fixes, never force-push, scope-validate every fix.
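The scope-validation step can be sketched as below, assuming it runs in the repo working tree right after a fix is applied (the single-file and 20-line limits come from the step above):

```shell
# Pass only when the working tree touches exactly the expected file
# and the total changed lines stay under 20.
scope_ok() {
  expected_file=$1
  [ "$(git diff --name-only)" = "$expected_file" ] || return 1
  changed=$(git diff --numstat | awk '{s += $1 + $2} END {print s + 0}')
  [ "$changed" -lt 20 ]
}
```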
Run after a PR is merged to classify whether findings were resolved.
```
scripts/classify-resolutions.sh <pr-number>
```
Read stored findings from `${CLAUDE_PLUGIN_DATA}/bug-review/findings/pr-{N}.json`, then run `scripts/update-weights.sh` to adjust category weights.

Display resolution rate statistics across all tracked PRs.
Run `scripts/resolution-report.sh` to generate the report.
Teams can create .bug-review.md at their repo root:
```markdown
## Focus Areas
- Pay special attention to authentication flows
- Check all database queries for SQL injection

## Ignore
- Don't flag issues in generated files (*.generated.ts)
- Ignore style-only concerns

## Invariants
- All API endpoints must check req.user before accessing user data
- Database migrations must be reversible

## Severity Overrides
- Treat any auth bypass as CRITICAL regardless of category default
```
Read workflow.md for the detailed step-by-step workflow with error handling. Read review-passes.md for all 5 review-pass prompts and the validator prompt. Read categories.md for bug categories and learned weights.