Use when a plan, architecture doc, or execution plan exists and needs expert review before implementation. Triggers on /plan-review, review my plan, document review.
From the shield plugin (install: `npx claudepluginhub infraspecdev/tesseract --plugin shield`). This skill uses the workspace's default tool permissions.
Supporting files: personas.md, scoring.md, templates.md.
Dispatch parallel expert reviewer agents against a plan document to produce a scored analysis with prioritized recommendations and an enhanced plan.
All review output goes into the feature's plan-review directory:
{output_dir}/{feature}/plan-review/{N}-{slug}/
├── summary.md ← scored analysis (main output)
├── enhanced-plan.md ← enhanced plan with feedback applied
└── detailed/
└── <agent-name>.md ← one file per dispatched agent
Where {output_dir} comes from .shield.json output_dir field (default docs/shield), {feature} is the feature folder name ({feature-name}-YYYYMMDD), {N} is a sequential number, and {slug} is a short kebab-case descriptor. Do NOT use any other path or directory structure. The Write tool creates directories automatically.
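The path composition above can be sketched as follows. This is an illustrative Python sketch, not part of the skill itself; the helper names and the slugification rule are assumptions.

```python
import json
import re
from pathlib import Path


def output_dir(config_path: str = ".shield.json") -> Path:
    """Read output_dir from .shield.json, falling back to docs/shield."""
    try:
        cfg = json.loads(Path(config_path).read_text())
        return Path(cfg.get("output_dir", "docs/shield"))
    except FileNotFoundError:
        return Path("docs/shield")


def review_dir(feature: str, n: int, descriptor: str) -> Path:
    """Build {output_dir}/{feature}/plan-review/{N}-{slug}/."""
    # Kebab-case slug (assumed convention): lowercase, non-alphanumerics -> "-"
    slug = re.sub(r"[^a-z0-9]+", "-", descriptor.lower()).strip("-")
    return output_dir() / feature / "plan-review" / f"{n}-{slug}"


print(review_dir("auth-redesign-20250101", 1, "Security Review"))
# e.g. docs/shield/auth-redesign-20250101/plan-review/1-security-review
```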
Use /plan-review/review instead to dispatch agents in infra-code/app-code mode.

At startup, call execute-steps to register these steps. Execute them in order, updating status after each.
| Step | Action | Condition | Mandatory |
|---|---|---|---|
| 1 | Load plan document | always | Yes |
| 2 | Select reviewer personas | always | Yes |
| 3 | Dispatch selected agents in parallel | always | Yes |
| 4 | Parse grades + calculate scores | always | Yes |
| 5 | Generate enhanced plan | always | Yes |
| 6 | Write summary + detailed findings | always | Yes |
| 7 | Update manifest | always | Yes |
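The step loop above can be sketched as below. This is illustrative only: the real execute-steps registration belongs to the skill runtime, and `run_step`/`set_status` are hypothetical stand-ins.

```python
STEPS = [
    "Load plan document",
    "Select reviewer personas",
    "Dispatch selected agents in parallel",
    "Parse grades + calculate scores",
    "Generate enhanced plan",
    "Write summary + detailed findings",
    "Update manifest",
]


def run_review(run_step, set_status):
    """Execute registered steps in order, updating status after each."""
    for number, action in enumerate(STEPS, start=1):
        set_status(number, "in_progress")
        run_step(number, action)  # every step is mandatory, no conditions
        set_status(number, "done")
```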
The skill reads plan data from (in priority order):
1. Plan sidecar ({output_dir}/{feature}/plan.json) — if a name is provided or only one feature exists. If multiple features exist and no name is given, list them and ask.
2. Feature docs ({output_dir}/{feature}/) — architecture docs and research findings (glob for {output_dir}/{feature}/plan/ and {output_dir}/{feature}/research/).

Always start by checking for a plan sidecar in {output_dir}/*/plan.json and docs in {output_dir}/{feature}/. If no plans exist, ask the user for the plan location or check the project root.
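The lookup order can be sketched as follows. This is a simplified illustration under assumptions: the real skill prompts the user interactively when the feature is ambiguous, rather than raising an error.

```python
from pathlib import Path


def find_plan_sources(output_dir: Path, feature=None):
    """Resolve plan inputs in priority order: sidecar first, then feature docs."""
    sidecars = sorted(output_dir.glob("*/plan.json"))
    if feature:
        sidecars = [p for p in sidecars if p.parent.name == feature]
    if len(sidecars) > 1 and feature is None:
        # The skill would list the features and ask the user here.
        raise LookupError(f"Multiple features: {[p.parent.name for p in sidecars]}")
    if not sidecars:
        return None  # ask the user for the plan location or check the project root
    feature_dir = sidecars[0].parent
    docs = sorted(feature_dir.glob("plan/*")) + sorted(feature_dir.glob("research/*"))
    return sidecars[0], docs
```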
See personas.md for the full catalog, weights, and dynamic selection flowchart.
Read each selected agent's markdown file from agents/ and scoring.md, then launch all agents in parallel using the Agent tool. See templates.md for the dispatch prompt structure.
Use subagent_type matching the agent name (e.g., shield:architecture-reviewer) when available, otherwise general-purpose.
After all agents return, write each agent's full raw output to plan-review/{N}-{slug}/detailed/<agent-name>.md with a header and back-link:
# <Agent Name> — Detailed Findings
> Back to [summary](../summary.md)
<full agent output>
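Writing one detailed file per agent can be sketched as below. The header and back-link follow the template above; the `write_detailed` helper and title-casing rule are illustrative assumptions, not part of the skill.

```python
from pathlib import Path


def write_detailed(review_dir: Path, agent_name: str, raw_output: str) -> Path:
    """Write an agent's full raw output with the header and back-link."""
    detailed = review_dir / "detailed"
    detailed.mkdir(parents=True, exist_ok=True)
    path = detailed / f"{agent_name}.md"
    # e.g. "architecture-reviewer" -> "Architecture Reviewer" (assumed convention)
    title = agent_name.replace("-", " ").title()
    path.write_text(
        f"# {title} — Detailed Findings\n\n"
        "> Back to [summary](../summary.md)\n\n"
        f"{raw_output}\n"
    )
    return path
```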
After all agents return:
- Parse each agent's grades per scoring.md
- Calculate the weighted score against scoring.md thresholds
- Determine the overall verdict per scoring.md

Write to {output_dir}/{feature}/plan-review/{N}-{slug}/:
- summary.md — scored evaluation with consolidated recommendations
- enhanced-plan.md — enhanced version of the original plan with feedback applied
- detailed/<agent-name>.md — full output from each dispatched agent

The summary should include a "Detailed Agent Findings" section linking to each detailed file.
After writing, update {output_dir}/manifest.json and regenerate {output_dir}/index.html.
See templates.md for output formats and enhanced plan rules.
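The manifest update can be sketched as follows. The manifest shape here (a JSON object with a `reviews` list) is an assumption; the actual schema and the index.html regeneration live in the skill's own tooling.

```python
import json
from pathlib import Path


def record_review(output_dir: Path, feature: str, review_slug: str) -> None:
    """Append a review entry to manifest.json (schema assumed, not authoritative)."""
    manifest_path = output_dir / "manifest.json"
    manifest = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    manifest.setdefault("reviews", []).append(
        {"feature": feature, "path": f"{feature}/plan-review/{review_slug}/summary.md"}
    )
    manifest_path.write_text(json.dumps(manifest, indent=2))
```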
Do NOT proceed until the user explicitly confirms.
After writing output files, present the user with three options:
1. Apply — replace the plan with plan-review/{N}-{slug}/enhanced-plan.md
2. Edit first — modify plan-review/{N}-{slug}/enhanced-plan.md, then re-read it before applying
3. Reject — keep the original plan unchanged

The user may also edit plan-review/{N}-{slug}/summary.md, ask for changes to specific recommendations, or reject recommendations. Wait for explicit confirmation before overwriting anything.
| Mistake | Fix |
|---|---|
| Dispatching all 7 agents for a simple app plan with no infra | Follow trigger keyword matching — skip Cloud Architect and Cost/FinOps if no infra keywords |
| Grading infra points F on a non-infrastructure plan | Only activated personas grade — don't penalize for out-of-scope concerns |
| Applying enhanced plan without user review | Always wait for Step 5 confirmation — never auto-apply |
| Repeating scoring logic instead of referencing scoring.md | All grade math lives in scoring.md — reference it, don't inline it |
| Generating plan.md in a different format than the original | HTML in → HTML out, markdown in → markdown out |
| Softening grades because the user is under time pressure | Grade what the plan SAYS — missing info is F regardless of deadline |
| Giving partial credit for implied or assumed information | Grade only what is explicitly documented — "they probably meant X" is not in the plan |