Reviews PRDs, tech plans, design docs, and specs for issues using reviewer personas with HIGH/MEDIUM/LOW priority findings from parallel sub-agents.
Install: `npx claudepluginhub tmchow/tmc-marketplace --plugin iterative-engineering`

This skill uses the workspace's default tool permissions.
Reviews PRDs, brainstorms, and technical plans using dynamically selected reviewer personas. Spawns parallel sub-agents that return structured JSON, then merges and deduplicates findings into a single report.
This skill can run standalone or be invoked from the iterative:brainstorming and iterative:tech-planning skills. All reviewers use HIGH/MEDIUM/LOW priority levels:
| Level | Meaning | Action |
|---|---|---|
| HIGH | Blocks execution; cannot start the next step without resolving | Must fix before proceeding |
| MEDIUM | Creates risk; work can start but likely leads to rework or confusion | Should fix |
| LOW | Improvement opportunity; plan works but could be clearer or tighter | Author's discretion |
6 personas in three tiers. See references/persona-catalog.md for the full catalog.
Document-type (exactly one, selected by document type):
| Agent | Selected when | Identity |
|---|---|---|
| prd-reviewer | Document is a PRD or brainstorm | Senior product leader evaluating product document quality |
| tech-plan-reviewer | Document is a tech plan | Implementer evaluating whether they can code from this plan |
Always-on (every review):
| Agent | Focus |
|---|---|
| coherence-reviewer | Internal consistency, contradictions, terminology drift, structural issues |
Conditional (selected per document):
| Agent | Select when document... |
|---|---|
| skeptic-reviewer | Proposes abstractions, multi-layer architecture, plugin systems, or infrastructure ahead of need |
| feasibility-reviewer | Is a tech plan that proposes architecture, external integrations, or performance constraints |
| scope-guardian-reviewer | Is a PRD with multiple priority levels and potential conflicts, unclear scope boundaries, many requirements where goal alignment isn't obvious, or goals that don't connect to requirements |
The document type naturally regulates the review. A simple PRD gets 2 reviewers (prd-reviewer + coherence-reviewer). A complex tech plan with architecture decisions gets 4 (tech-plan-reviewer + coherence-reviewer + skeptic-reviewer + feasibility-reviewer). No separate "mode" is needed.
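To make the tier logic concrete, here is a minimal sketch. The function name and trait labels are hypothetical, not part of the skill, and the real selection is agent judgment rather than flags:

```python
# Hypothetical sketch of reviewer selection. Trait labels and the
# function name are illustrative; the real skill uses agent judgment,
# not keyword matching.
def select_reviewers(doc_type, traits):
    # Document-type tier: exactly one, chosen by document type.
    team = ["prd-reviewer" if doc_type in ("prd", "brainstorm")
            else "tech-plan-reviewer"]
    # Always-on tier: runs on every review.
    team.append("coherence-reviewer")
    # Conditional tier: added only when the document warrants it.
    if "speculative-abstraction" in traits:
        team.append("skeptic-reviewer")
    if doc_type == "tech-plan" and "architecture-or-integrations" in traits:
        team.append("feasibility-reviewer")
    if doc_type in ("prd", "brainstorm") and "scope-conflicts" in traits:
        team.append("scope-guardian-reviewer")
    return team
```

A simple PRD with no flagged traits yields the two-reviewer team; a complex tech plan with architecture decisions picks up skeptic-reviewer and feasibility-reviewer on top.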
Identify the document to review from the argument, the conversation context, or by asking the user. Determine the document type (PRD, brainstorm, or tech plan) from filename, content, and context. Treat brainstorms and PRDs as the same type. Record the document path and document type.
Read the full document content. This is needed for both reviewer selection and spawning.
Read the document content from Stage 1. The document-type reviewer and coherence-reviewer are automatic. For each conditional persona in the catalog (references/persona-catalog.md), decide whether the document warrants it. This is agent judgment, not keyword matching.
Announce the team before spawning:
Review team:
- prd-reviewer (document-type)
- coherence-reviewer (always)
- scope-guardian-reviewer — PRD has 12 requirements across 3 priority levels with dependency conflicts
This is progress reporting, not a blocking confirmation.
Spawn each selected reviewer as a parallel sub-agent using the template in references/subagent-template.md. Each sub-agent receives:
- the findings schema, references/findings-schema.json

Sub-agents are read-only: they review and return structured JSON. They do not edit files, run commands, or propose fixes.
Each sub-agent returns JSON matching references/findings-schema.json:
```json
{
  "reviewer": "prd-reviewer",
  "findings": [...],
  "residual_concerns": [...]
}
```
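The authoritative field list lives in references/findings-schema.json. As a purely hypothetical illustration, a payload whose findings carry the fields the merge stage relies on (section, line, title, priority, confidence, evidence) might look like:

```python
# Hypothetical payload. Only "reviewer", "findings", and
# "residual_concerns" appear in the skill's example; the per-finding
# fields are inferred, and the schema file is authoritative.
payload = {
    "reviewer": "prd-reviewer",
    "findings": [
        {
            "section": "Goals",
            "line": 42,
            "title": "Goal 3 has no matching requirement",
            "priority": "HIGH",
            "confidence": 0.9,
            "evidence": ["Goal 3 is never referenced in Requirements"],
        }
    ],
    "residual_concerns": ["Rollout plan was out of scope for this review"],
}
```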
Convert multiple reviewer JSON payloads into one deduplicated, confidence-gated finding set.
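A hedged sketch of this step, assuming the fingerprint rule normalize(section) + line_bucket(line, ±5) + normalize(title). Here `normalize` and `line_bucket` are simplified stand-ins, and the real evidence-strength tiebreak is richer than a plain confidence comparison:

```python
# Simplified merge pipeline: fingerprint each finding, then merge
# collisions by keeping the highest priority and confidence, taking
# the union of evidence, and recording every reviewer that flagged it.
PRIORITY = {"HIGH": 2, "MEDIUM": 1, "LOW": 0}

def normalize(s):
    return " ".join(s.lower().split())

def line_bucket(line, width=5):
    # Findings within roughly +/-5 lines land in the same bucket.
    return line // (2 * width)

def fingerprint(f):
    return (normalize(f["section"]), line_bucket(f["line"]), normalize(f["title"]))

def merge(payloads):
    merged = {}
    for p in payloads:
        for f in p["findings"]:
            key = fingerprint(f)
            if key not in merged:
                merged[key] = {**f, "flagged_by": [p["reviewer"]]}
                continue
            kept = merged[key]
            kept["flagged_by"].append(p["reviewer"])
            # Keep the highest priority and highest confidence seen.
            if PRIORITY[f["priority"]] > PRIORITY[kept["priority"]]:
                kept["priority"] = f["priority"]
            if f["confidence"] > kept["confidence"]:
                kept["confidence"] = f["confidence"]
            # Union the evidence from all reviewers.
            kept["evidence"] = sorted(set(kept["evidence"]) | set(f["evidence"]))
    return list(merged.values())
```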
The dedup fingerprint is normalize(section) + line_bucket(line, ±5) + normalize(title). When fingerprints match, merge: keep the highest priority, keep the highest-confidence version with the strongest evidence, union the evidence, and note which reviewers flagged it.

Assemble the final report using the template in references/review-output-template.md:
Do not include time estimates.
When invoked from iterative:brainstorming or iterative:tech-planning: return findings directly — the calling skill owns the fix loop and workflow transitions. Do not enter the standalone fix loop below.
When invoked standalone: run the standalone fix loop.
After presenting the synthesized findings (Stage 5), offer to fix issues when running standalone. Plan fixes are text edits with low cascading risk — keep the loop simple.
If zero findings, the review is done — no further prompts needed.
Use the platform's interactive question tool — AskUserQuestion (Claude Code) or request_user_input (Codex). Both platforms provide an automatic "Other" free-form option — do not add one manually.
Present a single prompt listing all priority levels with findings. No intermediate "choose which..." step.
Claude Code — use AskUserQuestion with multiSelect: true:
When HIGH issues exist, pre-check the HIGH option:
When only MEDIUM/LOW issues exist, nothing pre-checked:
Only include priority levels that have findings.
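As a sketch only, the prompt when HIGH and MEDIUM findings exist might look like the following. The field names mirror the question/options shape AskUserQuestion is commonly described with, so verify them against the tool's actual schema:

```python
# Hypothetical AskUserQuestion payload; field names are assumptions.
# Only priority levels with findings are listed, and the platform
# adds the free-form "Other" option automatically.
question = {
    "question": "Which findings should I fix?",
    "header": "Fix findings",
    "multiSelect": True,
    "options": [
        {"label": "HIGH (3)", "description": "Blocks execution; must fix before proceeding"},
        {"label": "MEDIUM (2)", "description": "Creates risk; should fix"},
    ],
}
```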
Codex — use request_user_input (single-select, build combined options):
When HIGH issues exist:
When only MEDIUM/LOW issues exist:
Only include options where findings exist at those levels. Omit options that would duplicate another (e.g., if no LOW, omit "Fix all" since it equals the line above).
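One way to build those combined single-select options, with `build_options` as an illustrative helper rather than anything either platform provides. Cumulative options never duplicate: the last "Fix ..." entry already covers every level present, so no separate "Fix all" is needed when a level is absent:

```python
# Illustrative builder for single-select platforms.
def build_options(counts):
    # counts: levels that have findings, e.g. {"HIGH": 3, "MEDIUM": 2}
    levels = [lvl for lvl in ("HIGH", "MEDIUM", "LOW") if counts.get(lvl)]
    options, acc = [], []
    for lvl in levels:
        acc.append(lvl)
        options.append("Fix " + " + ".join(acc))
    options.append("Skip fixes")
    return options
```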
Fix only the selected priorities. Spawn a single sub-agent with the filtered findings, the document path, and the document type. The sub-agent applies targeted fixes, preserves the document's voice and decisions, and commits. Wait for the sub-agent to complete.
After fixes land (or user skipped), present an interactive choice:
If another round: run the full flow again from Stage 1. No workflow transition options; the user knows what to do next.
If the platform doesn't support parallel sub-agents, run reviewers sequentially. Everything else (stages, output format, merge pipeline) stays the same.