Performs a multi-agent code review of the current git branch against main: detects bugs via specialist agents, verifies findings, ranks severity, and generates a persistent report before push/merge.
Install with `npx claudepluginhub ovid/paad --plugin paad`. This skill uses the workspace's default tool permissions.
Multi-agent bug-hunting review of the current branch against main. Dispatches specialist agents in parallel, verifies findings to filter false positives, ranks by severity, and produces a persistent report.
This is a technique skill. Follow the phases in order. Do not skip verification.
/paad:agentic-review accepts optional $ARGUMENTS:
- `/paad:agentic-review` — review all changes on the current branch against main
- `/paad:agentic-review develop` — review against a different base branch (e.g., develop instead of main)
- `/paad:agentic-review main src/auth/` — review against main, but only for files under `src/auth/`

When a base branch is provided, use it instead of main in all git diff commands. When a path is provided, filter the diff and manifest to only include files within that scope.
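The argument handling can be sketched as plain shell (a minimal sketch — the skill runs inside the agent, not as a script, and the variable names here are illustrative):

```shell
# Map optional arguments onto the git invocation:
# $1 = base branch (defaults to main), $2 = optional path scope.
base="${1:-main}"
scope="${2:-}"

cmd="git diff ${base}...HEAD"
if [ -n "$scope" ]; then
  cmd="$cmd -- $scope"    # limit the diff to files under the scope
fi
echo "$cmd"               # the command the review would run
```

With no arguments this yields `git diff main...HEAD`; with `develop src/auth/` it yields `git diff develop...HEAD -- src/auth/`.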
digraph preflight {
"Conversation has history?" [shape=diamond];
"On main/master?" [shape=diamond];
"Uncommitted changes?" [shape=diamond];
"Proceed to Phase 1" [shape=box];
"STOP: recommend new session" [shape=box, style=bold];
"STOP: nothing to review" [shape=box, style=bold];
"WARN: ask user" [shape=box];
"Conversation has history?" -> "STOP: recommend new session" [label="yes"];
"Conversation has history?" -> "On main/master?" [label="no"];
"On main/master?" -> "STOP: nothing to review" [label="yes"];
"On main/master?" -> "Uncommitted changes?" [label="no"];
"Uncommitted changes?" -> "WARN: ask user" [label="yes"];
"Uncommitted changes?" -> "Proceed to Phase 1" [label="no"];
"WARN: ask user" -> "Proceed to Phase 1" [label="user decides"];
}
If the conversation already has history, tell the user: "Start a fresh session and run `/paad:agentic-review` to avoid context rot." Stop and wait.

Run these commands and collect results:
- `git diff --stat <base>...HEAD` — files and line counts
- `git diff <base>...HEAD` — full diff content
- Plan/design docs: look in `docs/plans/`, `aidlc-docs/`, or similar
- Steering files: `CLAUDE.md`, `AGENTS.md`, etc.

**Steering file caveat:** Include in every agent prompt: "Steering files (CLAUDE.md, etc.) describe conventions but may be stale. If you find a contradiction between steering files and actual code, flag it as a finding."
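The path-scope filtering of the manifest amounts to plain prefix filtering on the changed-file list. A small sketch, where the file names are made up and the list stands in for `git diff --name-only <base>...HEAD` output:

```shell
# Hypothetical manifest of changed files.
manifest="src/auth/login.ts
src/auth/token.ts
src/ui/button.ts"

scope="src/auth/"   # the optional path argument
scoped=$(printf '%s\n' "$manifest" | grep "^${scope}")
printf '%s\n' "$scoped"   # only the files under src/auth/ remain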
Dispatch these agents simultaneously using the Agent tool. Each receives: the diff, manifest of files to review, steering file contents, and their specialist focus.
| Agent | Lens | Scope |
|---|---|---|
| Logic & Correctness | Wrong conditions, off-by-one, null paths, state transitions, algorithm errors, new code paths that skip processing/validation/cleanup present in sibling paths | Changed code + surrounding functions |
| Error Handling & Edge Cases | Missing catches, swallowed exceptions, boundary validation, silent failures | Changed code + error paths in callers |
| Contract & Integration | Signature vs callers, type mismatches, broken API contracts, data shape drift, logic duplication | Changed code + callers/callees one level |
| Concurrency & State | Races, shared mutable state, cache invalidation, ordering assumptions | Changed code + shared state access |
| Security | Injection, auth gaps, data exposure, OWASP top 10 | Changed code + input/output boundaries |
Conditionally (if plan/design docs found):
| Agent | Lens | Input |
|---|---|---|
| Plan Alignment | Changes vs plan, deviations, partial completion | Diff + plan docs |
Plan Alignment must use neutral tone for unimplemented items — partial implementation is expected.
Agent prompt template:
Each specialist agent prompt must include:
Error Handling & Edge Cases additional instruction: "When code parses external output (API responses, LLM completions, user input) using exact string matching (equals, switch, regex), check whether realistic output variations — trailing punctuation, extra whitespace, mixed casing, surrounding formatting — would cause silent misclassification or wrong defaults."
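A tiny illustration of the failure mode this instruction targets (the verdict string and branch labels are made-up examples):

```shell
# Exact-match parsing of external output: a realistic trailing period
# makes the match miss and the code silently takes the wrong default.
verdict="APPROVED."
case "$verdict" in
  APPROVED) result=accept ;;
  *)        result=reject ;;   # silent misclassification lands here
esac
echo "$result"
```

The fix the reviewer should suggest is normalizing before matching (trim whitespace and punctuation, lowercase) or matching a tolerant pattern instead of an exact string.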
Contract & Integration additional instruction: "Also flag: new code that reimplements logic already available in the codebase (check for existing utilities, helpers, or services that do the same thing). Flag duplicated code blocks within the diff that could be parameterized into a single function. Frame these as integration issues — duplicated logic diverges over time and causes bugs."
Scaling for large diffs (500+ lines): Partition files across 2 instances of each specialist (e.g., Logic-A gets half the files, Logic-B gets the other half).
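The partitioning can be sketched as a simple split of the changed-file list (file names are illustrative stand-ins for `git diff --name-only <base>...HEAD` output):

```shell
files="a.py
b.py
c.py
d.py
e.py"

total=$(printf '%s\n' "$files" | wc -l)
half=$(( (total + 1) / 2 ))                       # first instance takes the larger half
printf '%s\n' "$files" | head -n "$half"          # Logic-A's slice: a.py b.py c.py
printf '%s\n' "$files" | tail -n +"$((half + 1))" # Logic-B's slice: d.py e.py
```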
After all specialists complete, dispatch a single Verifier agent with all findings. The verifier reads the code behind each finding and filters out anything it cannot confirm.
Verifier prompt must include: "You are verifying bug reports. For each finding, read the actual code and confirm the bug exists. Be skeptical — reject anything you cannot confirm by reading the code. A finding reported by multiple specialists is more likely real."
Write verified findings to paad/code-reviews/<branch>-<YYYY-MM-DD-HH-MM-SS>-<short-sha>.md.
Create the paad/code-reviews/ directory if it doesn't exist.
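Path construction, sketched with example values for the dynamic parts (in practice the branch comes from `git rev-parse --abbrev-ref HEAD`, the timestamp from `date +%Y-%m-%d-%H-%M-%S`, and the short SHA from `git rev-parse --short HEAD`):

```shell
branch="feature-login"            # example: git rev-parse --abbrev-ref HEAD
ts="2025-01-15-14-30-00"          # example: date +%Y-%m-%d-%H-%M-%S
sha="a1b2c3d"                     # example: git rev-parse --short HEAD

report="paad/code-reviews/${branch}-${ts}-${sha}.md"
mkdir -p paad/code-reviews        # create the directory if it doesn't exist
echo "$report"
```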
Report template:
# Agentic Code Review: <branch-name>
**Date:** YYYY-MM-DD HH:MM:SS
**Branch:** <branch> -> <base>
**Commit:** <full-sha>
**Files changed:** N | **Lines changed:** +X / -Y
**Diff size category:** Small / Medium / Large
## Executive Summary
2-3 sentences: overall assessment, highest-severity finding if any, general confidence level.
## Critical Issues
### [C1] <title>
- **File:** `path/to/file:line`
- **Bug:** What's wrong
- **Impact:** Why it matters
- **Suggested fix:** Concrete recommendation
- **Confidence:** High/Medium
- **Found by:** <specialist name(s)>
(Repeat for each critical issue, or "None found.")
## Important Issues
(Same structure as Critical, or "None found.")
## Suggestions
One-line entries only. Omit section if none.
## Plan Alignment
(Only if plan/design docs were found)
- **Implemented:** Plan items reflected in this diff
- **Not yet implemented:** Remaining items (neutral — partial is expected)
- **Deviations:** Anything contradicting the plan
## Review Metadata
- **Agents dispatched:** <list with focus areas>
- **Scope:** <files reviewed — changed + adjacent>
- **Raw findings:** N (before verification)
- **Verified findings:** M (after verification)
- **Filtered out:** N - M
- **Steering files consulted:** <list or "none found">
- **Plan/design docs consulted:** <list or "none found">
These patterns produce low-quality reviews. Avoid them:
| Mistake | What to do instead |
|---|---|
| Single-agent review (no parallel dispatch) | Always dispatch 5+ specialist agents in parallel via Agent tool |
| Skipping verification | Always run verifier — unverified findings have high false positive rates |
| Reporting style/quality nits | Specialists hunt bugs, not code style. "Missing test" is a suggestion at best, not a bug. |
| Not tracing callers/callees | The best bugs hide at integration boundaries. Always trace one level deep. |
| Not reading adjacent test files | Tests that pass accidentally (via catch-all mocks, wrong stubs) are real bugs. Check sibling tests. |
| Skipping steering files | Read CLAUDE.md etc. for context, but flag contradictions rather than trusting blindly |
| Reporting without file:line references | Every finding must reference exact code location — unanchored findings are not actionable |
| Ignoring logic duplication | New code reimplementing existing helpers is a bug waiting to happen — Contract & Integration agent must check for this |
| Ignoring test infrastructure | When production infrastructure changes (schema migrations, build configs, environment templates), check if parallel test infrastructure exists and needs matching updates |
After writing the report:
Suggest the next step to the user: "Use the receiving-code-review skill and point it at this report for a guided workflow."