From atv-starter-kit
Reviews requirements and plan documents using parallel persona agents to surface role-specific issues, auto-fix quality problems, and pose strategic questions.
npx claudepluginhub all-the-vibes/atv-starterkit --plugin atv-starter-kit

This skill uses the workspace's default tool permissions.
Review requirements or plan documents through multi-persona analysis. Dispatches specialized reviewer agents in parallel, auto-fixes quality issues, and presents strategic questions for user decision.
Check the skill arguments for mode:headless. Arguments may contain a document path, mode:headless, or both. Tokens starting with mode: are flags, not file paths -- strip them from the arguments and use the remaining token (if any) as the document path for Phase 1.
If mode:headless is present, set headless mode for the rest of the workflow.
Headless mode changes the interaction model, not the classification boundaries. Document-review still applies the same judgment about what has one clear correct fix vs. what needs user judgment. The only difference is how non-auto findings are delivered:
- auto fixes are applied silently (same as interactive)
- present findings are returned as structured text for the caller to handle -- no AskUserQuestion prompts, no interactive approval

The caller receives findings with their original classifications intact and decides what to do with them.
Callers invoke headless mode by including mode:headless in the skill arguments, e.g.:
Skill("compound-engineering:document-review", "mode:headless docs/plans/my-plan.md")
If mode:headless is not present, the skill runs in its default interactive mode with no behavior change.
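The argument-parsing rule above can be sketched as a small helper. This is an illustrative sketch only; the function and variable names are hypothetical, not part of the skill's actual implementation:

```python
def parse_skill_arguments(raw: str):
    """Split skill arguments into mode flags and an optional document path.

    Tokens starting with "mode:" are flags, not file paths; at most one
    remaining token is treated as the document path for Phase 1.
    """
    tokens = raw.split()
    flags = {t for t in tokens if t.startswith("mode:")}
    paths = [t for t in tokens if not t.startswith("mode:")]
    headless = "mode:headless" in flags
    document_path = paths[0] if paths else None
    return headless, document_path
```

For example, parse_skill_arguments("mode:headless docs/plans/my-plan.md") yields headless mode with docs/plans/my-plan.md as the document path, while a bare "mode:headless" yields headless mode with no path (the headless error case above).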
If a document path is provided: Read it, then proceed.
If no document is specified (interactive mode): Ask which document to review, or find the most recent in docs/brainstorms/ or docs/plans/ using a file-search/glob tool.
If no document is specified (headless mode): Output "Review failed: headless mode requires a document path. Re-invoke with: Skill("compound-engineering:document-review", "mode:headless <document-path>")" without dispatching agents.
After reading, classify the document:
- Requirements: lives in docs/brainstorms/, focuses on what to build and why
- Plan: lives in docs/plans/, focuses on how to build it with implementation details

Analyze the document content to determine which conditional personas to activate. Check for these signals:
product-lens -- activate when the document makes challengeable claims about what to build and why, or when the proposed work carries strategic weight beyond the immediate problem. The system's users may be end users, developers, operators, maintainers, or any other audience -- the criteria are domain-agnostic. Check for either leg:
Leg 1 — Premise claims: The document stakes a position on what to build or why that a knowledgeable stakeholder could reasonably challenge -- not merely describing a task or restating known requirements:
Leg 2 — Strategic weight: The proposed work could affect system trajectory, user perception, or competitive positioning, even if the premise is sound:
design-lens -- activate when the document contains:
security-lens -- activate when the document contains:
scope-guardian -- activate when the document contains:
adversarial -- activate when the document contains:
Tell the user which personas will review and why. For conditional personas, include the justification:
Reviewing with:
- coherence-reviewer (always-on)
- feasibility-reviewer (always-on)
- scope-guardian-reviewer -- plan has 12 requirements across 3 priority levels
- security-lens-reviewer -- plan adds API endpoints with auth flow
Always include:
- compound-engineering:document-review:coherence-reviewer
- compound-engineering:document-review:feasibility-reviewer

Add activated conditional personas:
- compound-engineering:document-review:product-lens-reviewer
- compound-engineering:document-review:design-lens-reviewer
- compound-engineering:document-review:security-lens-reviewer
- compound-engineering:document-review:scope-guardian-reviewer
- compound-engineering:document-review:adversarial-document-reviewer

Dispatch all agents in parallel using the platform's task/agent tool (e.g., Agent tool in Copilot CLI, spawn in Codex). Each agent receives the prompt built from the subagent template included below with these variables filled:
| Variable | Value |
|---|---|
| {persona_file} | Full content of the agent's markdown file |
| {schema} | Content of the findings schema included below |
| {document_type} | "requirements" or "plan" from Phase 1 classification |
| {document_path} | Path to the document |
| {document_content} | Full text of the document |
Pass each agent the full document -- do not split into sections.
Error handling: If an agent fails or times out, proceed with findings from agents that completed. Note the failed agent in the Coverage section. Do not block the entire review on a single agent failure.
Dispatch limit: Even at maximum (7 agents), use parallel dispatch. These are document reviewers with bounded scope reading a single document -- parallel is safe and fast.
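The dispatch-and-tolerate-failure behavior above can be pictured with standard parallel primitives. A minimal sketch, assuming a run_reviewer callable as a stand-in for the platform's agent tool (the names here are illustrative, not the skill's real API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def dispatch_reviewers(personas, run_reviewer):
    """Run every reviewer persona in parallel; keep findings from agents
    that complete and record failures rather than blocking the review."""
    findings, failed = [], []
    with ThreadPoolExecutor(max_workers=max(len(personas), 1)) as pool:
        futures = {pool.submit(run_reviewer, p): p for p in personas}
        for future in as_completed(futures):
            persona = futures[future]
            try:
                findings.extend(future.result())
            except Exception:
                failed.append(persona)  # noted in the Coverage section
    return findings, failed
```

A single failing persona lands in the failed list while the other agents' findings still flow into synthesis, matching the error-handling rule above.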
Process findings from all agents through this pipeline. Order matters -- each step depends on the previous.
Check each agent's returned JSON against the findings schema included below:
Suppress findings below 0.50 confidence. Store them as residual concerns for potential promotion in step 3.4.
Fingerprint each finding using normalize(section) + normalize(title). Normalization: lowercase, strip punctuation, collapse whitespace.
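The fingerprinting rule can be sketched directly; normalize and fingerprint are illustrative names, and the finding dict fields are assumed from the schema described in this document:

```python
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^\w\s]", "", text)       # strip punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse whitespace

def fingerprint(finding: dict) -> str:
    """Fingerprint = normalize(section) + normalize(title)."""
    return normalize(finding["section"]) + normalize(finding["title"])
```

Two personas reporting "Auth Flow / Missing token TTL." and "auth flow / missing token TTL" would produce identical fingerprints and be treated as a cross-persona match.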
When fingerprints match across personas:
Findings = Auto + Present stays exact.

Scan the residual concerns (findings suppressed in 3.2) for:
- finding_type from the corroborating above-threshold finding.
- finding_type: omission (blocking risks surfaced as residual concerns are inherently about something the document failed to address).

When personas disagree on the same section:
- autofix_class: present
- finding_type: error (contradictions are by definition about conflicting things the document says, not things it omits)

Specific conflict patterns:
Severity and autofix_class are independent. A P1 finding can be auto if the correct fix is obvious. The test is not "how important?" but "is there one clear correct fix, or does this require judgment?"
| Autofix Class | Route |
|---|---|
| auto | Apply automatically -- one clear correct fix. Includes both internal reconciliation (one part authoritative over another) and additions mechanically implied by the document's own content. |
| present | Present individually for user judgment |
Demote any auto finding that lacks a suggested_fix to present.
Auto-eligible patterns: summary/detail mismatch (body is authoritative over overview), wrong counts, missing list entries derivable from elsewhere in the document, stale internal cross-references, terminology drift, prose/diagram contradictions where prose is more detailed, missing steps mechanically implied by other content, unstated thresholds implied by surrounding context, completeness gaps where the correct addition is obvious. If the fix requires judgment about what to do (not just what to write), it belongs in present.
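The routing and demotion rules above reduce to a small predicate. Field names follow this document's text; the exact findings-schema shape is an assumption here:

```python
def route(finding: dict) -> str:
    """Route a finding to 'auto' or 'present'.

    Severity is deliberately ignored: the test is whether there is one
    clear correct fix, not how important the finding is. An auto finding
    without a suggested_fix is demoted to present.
    """
    if finding.get("autofix_class") == "auto" and finding.get("suggested_fix"):
        return "auto"
    return "present"
```

Note that a P1 finding with an obvious fix still routes to auto, while an auto-classified finding missing its suggested_fix falls back to present, matching the demotion rule above.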
Sort findings for presentation: P0 -> P1 -> P2 -> P3, then by finding type (errors before omissions), then by confidence (descending), then by document order (section position).
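That four-level ordering can be expressed as a single sort key. The field names and severity values ("P0" through "P3") are assumed from this document's conventions, not quoted from the schema:

```python
def presentation_order(findings: list[dict]) -> list[dict]:
    """Sort: P0 -> P3, then errors before omissions, then confidence
    descending, then original section position in the document."""
    type_rank = {"error": 0, "omission": 1}
    return sorted(
        findings,
        key=lambda f: (
            f["severity"],                        # "P0" < "P1" < ... lexically
            type_rank.get(f["finding_type"], 2),
            -f["confidence"],                     # higher confidence first
            f["section_position"],
        ),
    )
```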
Apply all auto findings to the document in a single pass:
List every auto-fix in the output summary so the user can see what changed. Use enough detail to convey the substance of each fix (section, what was changed, reviewer attribution). This is especially important for fixes that add content or touch document meaning -- the user should not have to diff the document to understand what the review did.
Headless mode: Do not use interactive question tools. Output all non-auto findings as a structured text summary the caller can parse and act on:
Document review complete (headless mode).
Applied N auto-fixes:
- <section>: <what was changed> (<reviewer>)
- <section>: <what was changed> (<reviewer>)
Findings (requires judgment):
[P0] Section: <section> — <title> (<reviewer>, confidence <N>)
Why: <why_it_matters>
Suggested fix: <suggested_fix or "none">
[P1] Section: <section> — <title> (<reviewer>, confidence <N>)
Why: <why_it_matters>
Suggested fix: <suggested_fix or "none">
Residual concerns:
- <concern> (<source>)
Deferred questions:
- <question> (<source>)
Omit any section with zero items. Then proceed directly to Phase 5 (which returns immediately in headless mode).
Interactive mode:
Present the present-classified findings using the review output template included below. Within each severity level, separate findings by type:
Brief summary at the top: "Applied N auto-fixes. K findings to consider (X errors, Y omissions)."
Include the Coverage table, auto-fixes applied, residual concerns, and deferred questions.
During synthesis, discard any finding that recommends deleting or removing files in:
- docs/brainstorms/
- docs/plans/
- docs/solutions/

These are pipeline artifacts and must not be flagged for removal.
Headless mode: Return "Review complete" immediately. Do not ask questions. The caller receives the text summary from Phase 4 and handles any remaining findings.
Interactive mode:
Ask using the platform's interactive question tool -- do not print the question as plain text output:
- AskUserQuestion
- request_user_input
- ask_user

Offer these two options. Use the document type from Phase 1 to set the "Review complete" description:
After 2 refinement passes, recommend completion -- diminishing returns are likely. But if the user wants to continue, allow it.
Return "Review complete" as the terminal signal for callers.
On subsequent passes, re-dispatch personas and re-synthesize. The auto-fix mechanism and confidence gating prevent the same findings from recurring once fixed. If findings are repetitive across passes, recommend completion.
@./references/subagent-template.md
@./references/findings-schema.json
@./references/review-output-template.md