Orchestrates parallel architecture and experience reviews of implementation plans, scores findings across dimensions like data flow and UX, consolidates ranked fixes for user approval and auto-application. Use after planning, before non-trivial coding.
Install: `npx claudepluginhub duthaho/claudekit --plugin claudekit`

This skill uses the workspace's default tool permissions.
The plan-review orchestrator. Dispatches `plan-review-architecture` and
`plan-review-experience` in parallel, collects scored findings from each (0-10
on five sub-dimensions), consolidates them into a single ranked fix list, asks
the user to approve fixes, and applies the approved ones to the plan file. The
skill exists because plans fail in two distinct directions: architectural
soundness (data flow, failure modes, edge cases) and human factors (UX hierarchy,
DX touchpoints, error states). A single reviewer rarely covers both well;
splitting the review into two specialist passes catches more, faster. Used
between `write-plan` and implementation.
## Step 1: Verify the plan is review-ready

Preconditions: the plan file exists at `docs/claudekit/plans/<basename>-plan.md` (or equivalent) and implementation hasn't started (no plan yet? run `write-plan` first).

Goal: Confirm the plan file exists and meets the minimum bar to be reviewed.

Inputs: A path or filename for the plan.

Actions:

1. Resolve the path; a bare filename is looked up under `docs/claudekit/plans/`.
2. Acceptance: lines present, `## Risks` section present.
3. If the plan fails acceptance, return it to `write-plan`. Do not run review on an underdeveloped plan: the reviewers will flag the same things in two different voices and waste cycles.

Output: Confirmation that the plan is review-ready, or a list of return-to-plan items.
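The acceptance gate above can be sketched as a small check. The path convention and the `## Risks` requirement come from this step; the function name and the `min_lines` threshold are illustrative assumptions, not part of the skill:

```python
from pathlib import Path

PLANS_DIR = Path("docs/claudekit/plans")

def is_review_ready(plan: str, min_lines: int = 30) -> tuple[bool, list[str]]:
    """Check the minimum bar before dispatching reviewers.

    min_lines is an illustrative threshold; the skill itself only
    requires that the plan is developed enough to review.
    """
    path = Path(plan)
    if not path.exists():  # bare filename? resolve it under the convention
        path = PLANS_DIR / plan
    if not path.exists():
        return False, [f"plan file not found: {plan} (run write-plan first)"]
    problems: list[str] = []
    text = path.read_text(encoding="utf-8")
    if len(text.splitlines()) < min_lines:
        problems.append("plan is underdeveloped: too few lines")
    if "## Risks" not in text:
        problems.append("missing ## Risks section")
    return not problems, problems
```

A failing result maps directly to this step's "list of return-to-plan items" output.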
## Step 2: Run both reviews in parallel

Goal: Get two independent reviews, each scored on 5 sub-dimensions.

Inputs: The plan file.

Actions:

1. Dispatch the `claudekit:architect` agent with the plan file. Sub-dimensions to score: data flow, failure modes, edge cases, test matrix, rollback safety.
2. Dispatch the `claudekit:experience-reviewer` agent with the plan file. Sub-dimensions to score: information hierarchy, state coverage (loading/empty/error), accessibility, DX (error copy, API/CLI ergonomics), AI-slop avoidance.

Output: Two reviewer reports.
## Step 3: Consolidate findings

Goal: Merge the two reports into one ranked fix list.

Inputs: Both reviewer reports.

Actions:

1. Merge the findings, tagging each with its source (`[arch]` or `[exp]`).
2. Deduplicate: a finding raised by both reviewers gets a `[both]` tag and higher priority; two independent passes flagging the same thing is signal.
3. Write the review artifact to `docs/claudekit/reviews/<plan-basename>-review-<YYYY-MM-DD>.md` with sections: `## Architecture` (with sub-dim scores), `## Experience` (with sub-dim scores), `## Consolidated Fixes` (the ranked list).

Output: A single review artifact with a ranked fix list.
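The merge-and-rank logic of this step can be sketched as below. The `[arch]`/`[exp]`/`[both]` tags and the priority bump for duplicates come from the step above; the priority encoding (lower number = more urgent, 0 = blocker) and the matching-by-description approach are illustrative assumptions:

```python
def consolidate(arch_findings, exp_findings):
    """Merge two reviewers' findings into one ranked fix list.

    Each finding is a (priority, description) pair, lower = more
    urgent. A finding raised by both reviewers gets the [both] tag
    and a priority bump, since independent agreement is signal.
    """
    merged = {}
    for tag, findings in (("[arch]", arch_findings), ("[exp]", exp_findings)):
        for priority, desc in findings:
            key = desc.strip().lower()  # naive dedup by normalized text
            if key in merged:
                prev_prio, _, prev_desc = merged[key]
                # duplicate across reviewers: tag [both], bump priority
                merged[key] = (max(0, min(prev_prio, priority) - 1),
                               "[both]", prev_desc)
            else:
                merged[key] = (priority, tag, desc)
    ranked = sorted(merged.values())
    return [f"{tag} {desc}" for _, tag, desc in ranked]
```

Real reviewer findings rarely match verbatim, so a production version would need semantic matching; the sketch only shows the tag-dedupe-rank shape.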
## Step 4: Get the user's decisions

Goal: Get the user's call on which fixes to apply.

Inputs: The consolidated fix list.

Actions:

1. Present the ranked fix list to the user.
2. For blockers, the options are Apply or Acknowledge and skip with rationale.
3. For important and nice-to-have fixes, the options are Apply or Skip.

Output: A list of fixes to apply, with skip rationales for any skipped blockers.
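The decision rule in this step (a blocker can only be skipped with a written rationale) can be sketched as a small gate; the function and argument names are illustrative:

```python
def decide(fix, severity, choice, rationale=None):
    """Record one Apply/Skip decision under the step's rules.

    severity: "blocker", "important", or "nice-to-have".
    A blocker may be skipped only with a one-line rationale;
    important and nice-to-have fixes may be skipped freely.
    """
    if choice == "apply":
        return {"fix": fix, "action": "apply"}
    if severity == "blocker" and not rationale:
        raise ValueError("a skipped blocker requires a one-line rationale")
    return {"fix": fix, "action": "skip", "rationale": rationale}
```

The raised error is the mechanical version of the rule in the excuses table below: skipping is fine, but the rationale is the receipt.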
## Step 5: Apply the approved fixes

Goal: Edit the plan file to reflect the approved fixes.

Inputs: The apply-list.

Actions:

1. Apply each approved fix to the plan file.
2. Append a line per fix to the review artifact: `Applied: <fix description> → <plan section affected>`.
3. Stamp the plan with a footer: `Reviewed and updated YYYY-MM-DD via /claudekit:plan-review.`

Output: Updated plan file plus updated review artifact. Plan ready to execute.
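The bookkeeping in this step can be sketched as below. The `Applied:` line format and the footer text come from the step above; the function name and argument shapes are illustrative assumptions (in the skill itself, the edits go through the Edit tool, not a script):

```python
import datetime
from pathlib import Path

def record_applied(review_path, plan_path, applied):
    """Append Applied: lines to the review artifact and stamp the plan.

    applied: iterable of (fix_description, plan_section) pairs.
    """
    today = datetime.date.today().isoformat()
    lines = [f"Applied: {fix} → {section}" for fix, section in applied]
    with Path(review_path).open("a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    with Path(plan_path).open("a", encoding="utf-8") as f:
        f.write(f"\nReviewed and updated {today} via /claudekit:plan-review.\n")
```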
## Excuses and counters

| Excuse | Why it sounds reasonable | Why it's wrong | What to do instead |
|---|---|---|---|
| "Plan-review is overhead — let's just start coding." | Some plans really are simple. Adding ceremony for trivial work is bad. | "Just start coding" is fine for one-file changes; plan-review exists for the cases that aren't. The cost of a 20-minute review against a 4-day implementation is the cheapest insurance you'll buy that week. The cases that feel trivial enough to skip review are also the cases where the buried gotcha hits hardest in the third PR. | If your plan has more than 5 tasks or touches more than one module, run plan-review. The 20 minutes saves a round trip later. |
| "I only need one reviewer — architect is enough." | Architectural review is the one most engineers think of when they think "review." | One reviewer covers half the failure modes. The architecture reviewer won't notice that your error copy says "Internal error" instead of telling the user what to do; the experience reviewer won't notice that your DB migration has no rollback. Two independent passes catch ~2x the issues. | Run both reviewers. They're parallel; the wall-clock cost is the slower of the two, not the sum. |
| "I'll skip the blockers I disagree with — they don't apply here." | Sometimes reviewers really are wrong, and an author's domain knowledge can override review. | Skipping a blocker silently is how plan reviews become advisory. The discipline is: skip is fine, but the rationale gets written down in the review artifact. If you can't write a one-line rationale, you don't disagree, you're rationalizing. | Apply Step 4's rule: every skipped blocker gets a one-line rationale. The rationale is the receipt for your choice. Reviewers reading the plan downstream will see the skip and the reason, not just the absence. |
| "I'll fix the plan in my head and not bother editing the file." | Mental updates feel faster than file edits. | The plan you implement against is the plan in the file, not the one in your head. The mental version drifts during the days between review and implementation. The teammate who picks up a task sees the unfixed version and implements the unfixed plan. | Edit the file. Use the Edit tool, not "I'll rewrite it cleanly." Each change is small; the cumulative edit takes minutes. |
| "I'll re-read the plan after applying fixes — but I'm sure it's consistent." | After 5 surgical edits, "I'm sure it's still consistent" is a comfortable belief. | Surgical edits drift. A fix that retitles task 4 may leave a Blocked by: Task 4 reference dangling somewhere. A fix that splits a task into two may leave the numbering inconsistent. The drift is invisible to the author but obvious on a fresh read. | After Step 5's edits are applied, re-read the plan top to bottom. Catch the dangling references before the implementer does. |
## Checkpoints

| Checkpoint | Required artifact | What "no evidence" looks like |
|---|---|---|
| End of Step 1 | Confirmation note or list of return-to-plan items | "Plan looks fine to me." |
| End of Step 2 | Two reviewer reports, each with 0-10 scores per sub-dim | "Reviewers said it's mostly OK." |
| End of Step 3 | Review artifact at `docs/claudekit/reviews/<plan>-<date>.md` with consolidated ranked fixes | "I'll keep the findings in my head." |
| End of Step 4 | A list of Apply / Skip decisions; skipped blockers each have a rationale | "I picked the ones I felt good about." |
| End of Step 5 | Plan file updated with each approved fix; review artifact appended with `Applied:` lines | "I made the changes; should be good." |