# replan
Reviews and validates plans (implementation, research, design, migration, etc.) using parallel subagents for codebase alignment, best practices, standards, feasibility, and fresh perspectives.
Install: `npx claudepluginhub kojott/claude-replan`
You have a plan in the current conversation (or at a file path the user provides). Your job is to validate it thoroughly by dispatching parallel review subagents, then update the plan based on their findings.
Announce: "Reviewing the plan with parallel subagents..."
Read the plan carefully and identify what it covers: the files and systems it touches, the domains involved (code, UI, data, infrastructure), and any external dependencies it assumes.
Based on what the plan actually covers, decide which review perspectives are needed. The goal is full coverage of the plan's scope — not a fixed set of agents.
Pick from these perspectives (and invent new ones if the plan demands it):
| Perspective | When to use | What it checks |
|---|---|---|
| Codebase alignment | Always for code-touching plans | Do referenced files/functions/APIs actually exist? Does the plan match current code structure and patterns? Are there existing utilities it should reuse? |
| Best practices & architecture | Always for code-touching plans | SOLID, DRY, YAGNI, separation of concerns, error handling, security (OWASP top 10), performance implications |
| Project standards | When CLAUDE.md / linting / conventions exist | Naming conventions, file structure patterns, test conventions, commit style, tech stack alignment |
| Feasibility & risks | Always | Missing steps, implicit dependencies between tasks, ordering issues, underestimated complexity, things that could block execution |
| Fresh perspective | Always — this is the wildcard | An agent that reads the plan cold and asks "what's missing that nobody thought of?" — edge cases, user experience gaps, operational concerns, things the plan author's tunnel vision missed |
| Research & fact-checking | Plans involving external APIs, libs, protocols | Are the APIs/libraries/versions referenced real and current? Do they work the way the plan assumes? |
| Design & UX | Plans with UI/visual components | Visual consistency, accessibility, responsive behavior, interaction patterns, loading/error states |
| Data & schema | Plans touching databases or data models | Migration safety, backwards compatibility, indexing, data integrity, query performance |
| Security | Plans touching auth, user input, APIs | Authentication/authorization flows, input validation, secrets handling, attack surface |
| Operations | Plans touching infra, deployment, monitoring | Rollback strategy, monitoring gaps, configuration management, failure modes |
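The perspective-selection logic in the table can be sketched as a small rule table. This is an illustrative Python sketch only: the trait names and the trait-to-perspective mapping are assumptions made for the example, not part of the skill.

```python
# Illustrative sketch: map detected plan traits to review perspectives.
# The trait names ("touches_code", "has_ui", ...) are hypothetical labels
# for what reading the plan would reveal.

ALWAYS = ("Feasibility & risks", "Fresh perspective")

CONDITIONAL = {
    "touches_code": ("Codebase alignment", "Best practices & architecture"),
    "has_conventions": ("Project standards",),
    "uses_external_apis": ("Research & fact-checking",),
    "has_ui": ("Design & UX",),
    "touches_data": ("Data & schema",),
    "touches_auth": ("Security",),
    "touches_infra": ("Operations",),
}

def select_perspectives(plan_traits: set[str]) -> list[str]:
    """Return the review perspectives a plan needs, given its traits."""
    perspectives = list(ALWAYS)
    for trait, names in CONDITIONAL.items():
        if trait in plan_traits:
            perspectives.extend(names)
    return perspectives
```

A code-heavy plan with a UI component would get the two always-on perspectives plus codebase alignment, best practices, and Design & UX.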
Match the agent count to the plan: one agent per needed perspective, enough for full coverage of the plan's scope without redundancy. Launch all review agents simultaneously using the Agent tool.
Each agent should receive a prompt structured like this:
You are reviewing an implementation plan from the perspective of [PERSPECTIVE].
## The Plan
[FULL PLAN TEXT]
## Your Review Mandate
[SPECIFIC INSTRUCTIONS FOR THIS PERSPECTIVE]
## Project Context
[RELEVANT CONTEXT: CLAUDE.md contents, file paths, tech stack, etc.]
## Output Format
Return your review as:
### [PERSPECTIVE] Review
**Verdict: PASS | ISSUES FOUND | CONCERNS**
**Critical Issues** (must fix before execution):
- [issue]: [why it matters] → [suggested fix]
**Recommendations** (should fix, but not blocking):
- [recommendation]: [why it matters] → [suggested approach]
**Observations** (informational, no action needed):
- [observation]
If everything looks good, say PASS and briefly explain why the plan is solid from your perspective.
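One way to fill the template above mechanically, sketched in Python; the function name and parameter names are placeholders for this example, not a defined API.

```python
# Illustrative sketch: assemble a per-agent review prompt from the
# template above. Field names are placeholders, not a fixed interface.

PROMPT_TEMPLATE = """\
You are reviewing an implementation plan from the perspective of {perspective}.

## The Plan
{plan_text}

## Your Review Mandate
{mandate}

## Project Context
{context}

## Output Format
Return your review with a verdict (PASS | ISSUES FOUND | CONCERNS),
then Critical Issues, Recommendations, and Observations sections.
"""

def build_prompt(perspective: str, plan_text: str,
                 mandate: str, context: str) -> str:
    """Fill the shared template for one review agent."""
    return PROMPT_TEMPLATE.format(
        perspective=perspective,
        plan_text=plan_text,
        mandate=mandate,
        context=context,
    )
```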
For the fresh perspective agent specifically, use this framing:
You are reviewing this plan with completely fresh eyes. You haven't been part of any discussion about it. Read it cold and ask yourself:
- What would go wrong that nobody anticipated?
- What's missing that seems obvious once you think about it?
- Are there simpler alternatives to any of the proposed approaches?
- What will the person executing this plan wish they'd known upfront?
- Are there edge cases, error states, or user scenarios the plan ignores?
Be constructively critical. The goal is to find the blind spots.
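The simultaneous launch of all agents can be sketched with asyncio. Here `run_agent` is a stand-in for the actual Agent tool call, which this sketch does not model; only the fan-out/fan-in shape is the point.

```python
# Illustrative sketch: dispatch every review agent at once and collect
# their reports. run_agent is a hypothetical stand-in for the Agent tool.

import asyncio

async def run_agent(perspective: str, prompt: str) -> str:
    # Stand-in: a real implementation would invoke the Agent tool here.
    await asyncio.sleep(0)
    return f"### {perspective} Review\n**Verdict: PASS**"

async def review_in_parallel(prompts: dict[str, str]) -> dict[str, str]:
    """Launch all review agents concurrently, keyed by perspective."""
    tasks = {p: asyncio.create_task(run_agent(p, t))
             for p, t in prompts.items()}
    return {p: await t for p, t in tasks.items()}

results = asyncio.run(review_in_parallel({
    "Security": "…prompt…",
    "Operations": "…prompt…",
}))
```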
Once all agents return, consolidate their findings: deduplicate overlapping issues, surface critical issues first, and update the plan to resolve them. The plan should be noticeably better after this process. If agents found nothing critical, that's a good sign; say so.
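The consolidation step can be sketched as follows; the `Review` structure is an assumption made for this example, mirroring the output format defined earlier.

```python
# Illustrative sketch: merge structured agent reviews, deduplicating
# repeated findings. The Review shape is an assumption for the example.

from dataclasses import dataclass, field

@dataclass
class Review:
    perspective: str
    verdict: str  # "PASS" | "ISSUES FOUND" | "CONCERNS"
    critical: list[str] = field(default_factory=list)
    recommendations: list[str] = field(default_factory=list)

def consolidate(reviews: list[Review]) -> dict[str, list[str]]:
    """Merge findings across agents, critical issues first."""
    merged: dict[str, list[str]] = {"critical": [], "recommendations": []}
    for r in reviews:
        for issue in r.critical:
            if issue not in merged["critical"]:
                merged["critical"].append(issue)
        for rec in r.recommendations:
            if rec not in merged["recommendations"]:
                merged["recommendations"].append(rec)
    return merged

def all_pass(reviews: list[Review]) -> bool:
    """True when no agent raised issues or concerns."""
    return all(r.verdict == "PASS" for r in reviews)
```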
The beauty of this approach is that it adapts to whatever the plan contains. Read the plan, think about what could go wrong, and design agents to catch those things.