Validates code changes against original plan using git diffs and full file reads, then launches parallel subagents for quality, security, and test coverage checks. Use post-implementation or /recheck.
You've finished implementing a plan. Your job is to validate that the code matches the plan and passes multi-perspective quality checks by dispatching parallel review subagents.
Announce: "Reviewing the implementation against the plan with parallel subagents..."
Figure out what changed and what the plan was.
Try these in order:
Run `git log --oneline -10` to understand recent history, then:
1. `git rev-parse --abbrev-ref --symbolic-full-name @{u} 2>/dev/null` to find the upstream tracking branch. If that fails, try `git remote show origin 2>/dev/null | grep 'HEAD branch'` to find the default branch. Fall back to checking whether `origin/main` or `origin/master` exists. If none of these work, ask the user for the base branch.
2. `git diff` and `git diff --cached` for unstaged and staged changes.
3. `git diff <detected-base>...HEAD` for committed changes against the base.
4. If more than 30 files changed, narrow the list (e.g., `git log --name-only --format='' | awk 'NF' | awk '!seen[$0]++' | head -30`) and note the truncation in the Scope section of the synthesis output.

Before branching to either the single-agent or multi-agent path, always gather context first: the full plan text, the file diffs, and the full contents of the changed files.
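The base-detection fallbacks above can be sketched as a small POSIX shell function. This is a sketch of the detection logic only; the `detect_base` helper name is hypothetical:

```shell
#!/bin/sh
# Sketch of the base-branch fallback chain; each step mirrors
# one fallback described above.
detect_base() {
  # 1. Upstream tracking branch of the current branch.
  git rev-parse --abbrev-ref --symbolic-full-name '@{u}' 2>/dev/null && return 0
  # 2. Default branch as reported by the remote.
  base=$(git remote show origin 2>/dev/null | sed -n 's/.*HEAD branch: //p')
  [ -n "$base" ] && { echo "origin/$base"; return 0; }
  # 3. Conventional default branches, if they exist locally.
  for b in origin/main origin/master; do
    git rev-parse --verify --quiet "$b" >/dev/null && { echo "$b"; return 0; }
  done
  # 4. Nothing detected: the caller should ask the user.
  return 1
}
```

If every fallback fails, the function returns non-zero, which is the cue to ask the user for the base branch rather than guessing.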
You are performing a comprehensive review of a small code change against an implementation plan. Since this is a trivial change, you're covering plan compliance, code quality, and security in a single pass.
## The Original Plan
[FULL PLAN TEXT]
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[CHANGED FILES IN FULL]
## Your Review Mandate
1. **Plan compliance**: Does this change match what the plan specified? Any plan items missing or only partially done?
2. **Code quality**: DRY, naming, error handling, unnecessary complexity?
3. **Security**: Any injection, auth, secrets, or validation concerns?
Be precise — reference specific plan items and file:line locations.
## Output Format
### Single-Agent Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Plan Compliance:**
- [x] or [ ] [Plan item] — [status]
**Critical Issues** (must fix):
- [file:line]: [issue] → [category: plan/quality/security] → [suggested fix]
**Important Issues** (should fix):
- [file:line]: [issue] → [category: plan/quality/security] → [suggested fix]
**Minor Issues** (informational):
- [file:line]: [observation]
Based on what changed, select which review perspectives are needed. The plan compliance agent is always primary — this is what makes /recheck different from /code-review.
Pick from these perspectives:
| Perspective | When to include | What it checks |
|---|---|---|
| Plan compliance | Always (primary agent) | Did implementation match the plan? Missing steps? Scope creep? Deviations? Were all plan items addressed? |
| Code quality | Always | DRY, dead code, complexity, reuse opportunities, unnecessary abstractions, naming clarity |
| Security | Code touches user input, auth, file I/O, exec, eval, network | OWASP top 10: injection, broken auth, sensitive data exposure, XXE, broken access control, misconfig, XSS, insecure deserialization, known vulnerabilities, insufficient logging |
| Test coverage | Always for code changes | Are new code paths tested? Edge cases covered? Meaningful assertions? Missing test scenarios? |
| Performance | Loops, DB queries, API calls, data processing | N+1 queries, unnecessary allocations, missing caching, O(n²) algorithms, unbounded operations |
| Fresh perspective | Always | Reads the code cold: "Does this actually work? What breaks first in production? What did the implementer assume that isn't guaranteed?" |
| Project standards | CLAUDE.md / linting config / conventions exist | Naming, file structure, patterns, error handling conventions, import ordering, test structure |
Rules for agent count:
Launch all review agents simultaneously using the Agent tool. Each agent gets:
You are reviewing a code implementation to verify it matches the original plan. This is the PRIMARY review — plan compliance is the unique value of this check.
## The Original Plan
[FULL PLAN TEXT]
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
Go through the plan point by point:
1. For each plan item/task/step, verify it was implemented
2. Flag any plan items that were NOT implemented or were only partially done
3. Flag any code that was added but was NOT in the plan (scope creep)
4. Check that the implementation approach matches what the plan specified
5. Verify any specific requirements the plan called out (error handling, edge cases, etc.)
Be precise — reference specific plan items and specific code locations (file:line).
## Output Format
### Plan Compliance Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Plan Items Status:**
- [ ] or [x] [Plan item] — [status: implemented / missing / partial / deviated] — [file:line if implemented]
**Critical Issues** (plan violations that must be addressed):
- [issue]: [which plan item] → [what's wrong] → [what should be done]
**Scope Creep** (code added beyond the plan):
- [file:line]: [what was added] — [is this justified?]
**Recommendations** (improvements aligned with plan intent):
- [recommendation]: [why] → [suggested approach]
**Observations** (informational):
- [observation]
You are reviewing code changes for quality issues. Focus on the CHANGED code, not pre-existing patterns.
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
Check the changed code for:
1. DRY violations — is code duplicated that should be extracted?
2. Dead code — anything added but never called?
3. Unnecessary complexity — over-engineering, premature abstractions?
4. Naming — do names accurately describe what things do?
5. Error handling — are errors caught, propagated, and handled appropriately?
6. Code organization — are things in the right files/modules?
Only flag issues in the CHANGED code. Do not review unchanged surrounding code.
## Output Format
### Code Quality Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Critical Issues** (bugs, logic errors, broken functionality):
- [file:line]: [issue] → [why it matters] → [suggested fix]
**Recommendations** (should fix):
- [file:line]: [issue] → [why it matters] → [suggested approach]
**Observations** (informational):
- [observation]
You are reviewing code changes for security vulnerabilities. Apply OWASP top 10 and general security best practices.
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
Check for:
1. Injection (SQL, command, LDAP, XPath) — any user input reaching queries/commands without sanitization?
2. Broken authentication — weak session handling, credential exposure?
3. Sensitive data exposure — secrets in code, unencrypted sensitive data, excessive logging?
4. XML External Entities — unsafe XML parsing?
5. Broken access control — missing authorization checks, IDOR?
6. Security misconfiguration — debug enabled, default credentials, unnecessary features?
7. Cross-Site Scripting — unescaped user content in output?
8. Insecure deserialization — untrusted data deserialized?
9. Dependency changes — flag any newly added dependencies AND dependency version changes (in lockfiles, package manifests, etc.) and recommend they be audited (e.g., `npm audit`, `pip-audit`)
10. Insufficient logging — security events not logged?
Also check: path traversal, race conditions, SSRF, open redirects.
Only flag issues you're confident about. False positives waste time.
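To make the injection check (item 1) concrete, here is a minimal shell sketch; the `run_unsafe`/`run_safe` names are invented for illustration:

```shell
#!/bin/sh
# Command injection in miniature: the same untrusted input, once
# re-parsed by a shell and once passed as a single literal argument.
run_unsafe() { sh -c "ls $1"; }  # input is re-parsed: a `; cmd` suffix runs cmd
run_safe()   { ls -- "$1"; }     # input stays one literal argument to ls
```

Given the input `nonexistent; echo INJECTED`, the unsafe variant executes the injected command, while the safe variant only reports that no such file exists.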
## Output Format
### Security Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Critical Issues** (exploitable vulnerabilities):
- [file:line]: [vulnerability type] — [how it could be exploited] → [fix]
**Recommendations** (hardening):
- [file:line]: [concern] → [suggested mitigation]
**Observations** (informational):
- [observation]
You are reviewing whether the code changes have adequate test coverage.
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
1. Identify all new code paths, branches, and error cases in the changed code
2. Check if corresponding tests exist for these paths
3. Evaluate test quality — are assertions meaningful or just smoke tests?
4. Identify missing edge case tests
5. Check that error paths are tested, not just happy paths
6. Verify test naming describes the scenario being tested
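A minimal shell illustration of item 3 (meaningful assertions vs smoke tests); `slugify` and both test names are hypothetical:

```shell
#!/bin/sh
# Hypothetical function under test: lowercases a title and
# replaces spaces with hyphens.
slugify() { printf '%s' "$1" | tr 'A-Z ' 'a-z-'; }

# Smoke test: only proves the function ran without crashing.
smoke_test() { slugify "Hello World" >/dev/null; }

# Meaningful test: pins the exact output, so a regression
# (e.g., dropping the space replacement) actually fails.
real_test() { [ "$(slugify 'Hello World')" = "hello-world" ]; }
```

Both tests pass today, but only the second one would catch a behavioral regression; that is the distinction to flag.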
## Output Format
### Test Coverage Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Untested Code Paths:**
- [file:line]: [code path] — [suggested test scenario]
**Weak Tests:**
- [test file:line]: [issue with test] → [how to improve]
**Missing Edge Cases:**
- [scenario]: [why it matters] → [suggested test]
**Observations** (informational):
- [observation]
You are reviewing code changes for performance issues.
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
Check for:
1. N+1 query patterns — loops that trigger individual DB/API calls
2. Unnecessary memory allocation — creating large objects/arrays unnecessarily
3. Missing caching — repeated expensive computations or fetches
4. O(n²) or worse algorithms — nested loops over potentially large collections
5. Unbounded operations — no limits on result sets, file reads, or iterations
6. Blocking operations — sync I/O in async contexts, missing concurrency
Only flag issues where performance impact is material, not micro-optimizations.
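A shell analogue of the N+1 pattern (item 1); the `lookup_*` names and file layout are invented for this sketch:

```shell
#!/bin/sh
# N+1 analogue: forks one grep per id in the ids file ($1),
# rescanning the CSV ($2) from scratch each time.
lookup_naive() { while read -r id; do grep "^$id," "$2"; done < "$1"; }

# Batched: one awk pass loads the ids, then joins the CSV in a
# single scan. Same rows, O(ids + rows) instead of O(ids * rows).
lookup_batched() { awk -F, 'NR==FNR { want[$1]; next } $1 in want' "$1" "$2"; }
```

Both produce the same output; the difference is the per-iteration subprocess and full rescan, which is exactly the shape to flag in loops that issue individual DB or API calls.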
## Output Format
### Performance Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Critical Issues** (will cause visible performance problems):
- [file:line]: [issue] — [expected impact] → [fix]
**Recommendations** (should improve):
- [file:line]: [concern] → [suggested optimization]
**Observations** (informational):
- [observation]
You are reviewing this code with completely fresh eyes. You haven't been part of any discussion about it. Look at the code cold and ask yourself:
- Does this code actually do what the variable names and function names suggest?
- What breaks first when this hits production?
- Are there assumptions baked in that aren't guaranteed?
- What happens with unexpected input, network failures, or concurrent access?
- Is there a simpler way to achieve the same thing?
- What will the person maintaining this code in 6 months curse about?
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Your Review Mandate
Be constructively critical. You're the last line of defense. The goal is to find what everyone else missed — the "oh no" moments that happen in production, not in code review.
Focus on:
1. Logic errors that would pass a surface read
2. Hidden assumptions (hardcoded values, expected state, timing dependencies)
3. Error scenarios that aren't handled
4. Concurrency or race conditions
5. Things that work in dev but break at scale or in prod
## Output Format
### Fresh Perspective Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Critical Issues** (will break in production):
- [file:line]: [what will happen] → [why] → [fix]
**Concerns** (might break, depends on context):
- [file:line]: [concern] → [what to verify]
**Observations** (not bugs, but worth noting):
- [observation]
If everything looks solid, say PASS and explain what gives you confidence.
You are reviewing code changes for compliance with project conventions and standards.
## Changed Files
[LIST OF FILES WITH DIFFS]
## Full File Contents
[KEY CHANGED FILES IN FULL]
## Project Conventions
[CLAUDE.md CONTENTS AND/OR DETECTED CONVENTIONS]
## Your Review Mandate
Check that changed code follows project conventions:
1. Naming conventions (variables, functions, files, classes)
2. File/directory structure patterns
3. Error handling patterns used elsewhere in the codebase
4. Import ordering and module structure
5. Test file naming and organization
6. Documentation/comment style
7. Any explicit rules from CLAUDE.md or project configuration
Only flag deviations from established patterns. Don't invent new standards.
## Output Format
### Project Standards Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Issues** (convention violations):
- [file:line]: [violation] — [convention] → [fix]
**Observations** (informational):
- [observation]
Once all agents return:
## Implementation Review Results
### Scope
[If scope was truncated (e.g., >30 files narrowed to 30), note it here. Otherwise omit this section.]
### Agents Dispatched
| Agent | Verdict |
|---|---|
| Plan Compliance | PASS / ISSUES FOUND / CONCERNS |
| Code Quality | ... |
| ... | ... |
### Critical Issues
1. [file:line]: [issue] — found by [agent]
### Important Issues
1. [file:line]: [issue] — found by [agent]
### Minor Issues
1. [file:line]: [issue] — found by [agent]
### What's Solid
- [positive finding from agents — what was done well]
### Plan Compliance Summary
- [X/Y plan items fully implemented]
- [any deviations or scope creep noted]
If the user wants fixes:
Note: Fix agents are subagents and can use Edit/Write tools even though this skill's allowed-tools excludes Edit. This is intentional — the confirmation gate above is the safety mechanism, not tool restrictions.
/code-review checks code for bugs and CLAUDE.md compliance. /recheck's unique value is plan compliance — verifying that what was built matches what was planned, while also catching quality/security issues through parallel multi-perspective review.