From agent-teams
Runs multi-agent stress test on PRD with market-fit, feasibility, and scope reviewers; outputs readiness scores, issues, conflicts, verdict, and revision checklist.
npx claudepluginhub slgoodrich/agents --plugin agent-teams
Run a multi-agent PRD review to answer: **"Is this PRD ready to build?"** Three reviewers analyze different dimensions in parallel, cross-reference findings, and deliver a consolidated review report.
/agent-teams:prd-stress-test path/to/prd.md
This command spawns three specialist reviewers to stress-test a PRD from different angles:
After parallel review, reviewers cross-reference findings to catch conflicts (e.g., scope-reviewer wants to cut a feature that market-fit-reviewer considers critical). The lead compiles everything into a consolidated review report with per-dimension scores and an actionable revision checklist.
What you get:
- Per-dimension readiness scores (market fit, feasibility, scope)
- Blocking issues and reviewer conflicts
- An overall verdict with an actionable revision checklist
Read the PRD at: $ARGUMENTS
Verify Agent Teams is available in your Claude Code version. If teammates cannot be spawned, display:
This command requires Claude Code's Agent Teams feature.
Check https://docs.anthropic.com/en/docs/claude-code for setup instructions.
If not available, stop.
Read the PRD file from the path provided above ($ARGUMENTS). If the file does not exist, display:
Error: PRD file not found at [path].
Please provide a valid path to a PRD markdown file.
Parse the PRD content and identify key sections (features, requirements, target users, etc.).
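Assuming the PRD is plain markdown, this section-identification step could be sketched as a heading-based split — the function name and return shape here are illustrative, not part of the command itself:

```python
import re

def parse_prd_sections(markdown: str) -> dict[str, str]:
    """Split PRD markdown into {heading: body} pairs by scanning '#' headings."""
    sections: dict[str, str] = {}
    current = "preamble"   # text before the first heading
    buf: list[str] = []
    for line in markdown.splitlines():
        m = re.match(r"^#{1,6}\s+(.*)", line)
        if m:
            # Close out the previous section and start a new one.
            sections[current] = "\n".join(buf).strip()
            current = m.group(1).strip()
            buf = []
        else:
            buf.append(line)
    sections[current] = "\n".join(buf).strip()
    return sections
```

With sections keyed by heading, spotting whether the PRD has features, requirements, and target-user sections reduces to dictionary lookups.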
Check for existing product context:
- .claude/product-context/product-info.md, if it exists
- .claude/product-context/competitive-landscape.md, if it exists
Display the briefing:
── PRD Stress Test ────────────────────────────────────────
PRD: [file path]
Title: [extracted title or first heading]
Assembling your review team:
1. market-fit-reviewer → Market fit and differentiation
2. feasibility-reviewer → Technical feasibility and requirements clarity
3. scope-reviewer → Scope appropriateness and MVP sizing
Phase 1: Parallel Review (3 reviewers working simultaneously)
Phase 2: Cross-Reference (reviewers check each other's findings)
Phase 3: Consolidated Report (verdict with revision checklist)
Starting review...
───────────────────────────────────────────────────────────
Spawn 3 teammates simultaneously using Agent Teams:
Teammate 1: market-fit-reviewer
Prompt: "Review this PRD for market fit. Score 1-5.
PRD CONTENT:
[full PRD content]
[Include any product context found in Phase 1]
Your job: Evaluate target user clarity, problem validation, value proposition,
differentiation, and market context. Use your market-fit-reviewer expertise.
Deliver your review in the standard market-fit-reviewer output format.
Score 1-5 using the team-deliverables rubric."
Teammate 2: feasibility-reviewer
Prompt: "Review this PRD for technical feasibility and requirements clarity. Score 1-5.
PRD CONTENT:
[full PRD content]
Your job: Evaluate requirements clarity, acceptance criteria, technical
feasibility, edge cases, and integration points. Flag every ambiguity.
Use your feasibility-reviewer expertise.
Deliver your review in the standard feasibility-reviewer output format.
Score 1-5 using the team-deliverables rubric."
Teammate 3: scope-reviewer
Prompt: "Review this PRD for scope appropriateness. Score 1-5.
PRD CONTENT:
[full PRD content]
Your job: Assess total scope, classify every feature as MUST-HAVE / CUT FROM V1 /
DEFER TO V2, identify scope creep, and estimate effort reduction from cuts.
Use your scope-reviewer expertise. Apply the 3-Feature MVP Rule.
Deliver your review in the standard scope-reviewer output format.
Score 1-5 using the team-deliverables rubric."
Wait for all three reviewers to complete their reviews.
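The fan-out-and-wait pattern above can be sketched in ordinary Python. Agent Teams does the real spawning; `run_review` below is a hypothetical stand-in for dispatching a teammate and collecting its output:

```python
from concurrent.futures import ThreadPoolExecutor

REVIEWERS = ["market-fit-reviewer", "feasibility-reviewer", "scope-reviewer"]

def run_review(reviewer: str, prd: str) -> dict:
    """Placeholder: spawn a teammate with its prompt, return its review."""
    return {"reviewer": reviewer, "score": None, "findings": []}

def parallel_review(prd: str) -> list[dict]:
    # Fan out all three reviews at once, then block until every one returns.
    with ThreadPoolExecutor(max_workers=len(REVIEWERS)) as pool:
        futures = [pool.submit(run_review, r, prd) for r in REVIEWERS]
        return [f.result() for f in futures]
```

The key property being modeled: no reviewer waits on another during Phase 1, and the lead blocks until all three results are in before moving to cross-reference.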
Send each reviewer the other two reviewers' findings to flag conflicts.
To market-fit-reviewer:
"Here are your fellow reviewers' findings. Flag any conflicts with your review.
FEASIBILITY REVIEW:
[feasibility-reviewer output]
SCOPE REVIEW:
[scope-reviewer output]
Specifically check:
- Are features you consider critical for differentiation marked 'CUT' by scope-reviewer?
- Do feasibility concerns affect market-critical features?
Flag conflicts and explain your position."
To feasibility-reviewer:
"Here are your fellow reviewers' findings. Flag any conflicts with your review.
MARKET FIT REVIEW:
[market-fit-reviewer output]
SCOPE REVIEW:
[scope-reviewer output]
Specifically check:
- Do features market-fit-reviewer considers critical have clear requirements?
- Do scope cuts remove technically risky components (positive) or create gaps?
Flag conflicts and explain your position."
To scope-reviewer:
"Here are your fellow reviewers' findings. Flag any conflicts with your review.
MARKET FIT REVIEW:
[market-fit-reviewer output]
FEASIBILITY REVIEW:
[feasibility-reviewer output]
Specifically check:
- Are features you marked 'CUT' considered critical by market-fit-reviewer?
- Do your cuts align with feasibility concerns?
Flag conflicts and explain your position. Be willing to reconsider cuts if
market-fit evidence is strong."
Wait for all three cross-reference responses.
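The cross-reference routing — each reviewer sees the other two reviewers' outputs, never its own — can be sketched as (names and data shapes illustrative):

```python
def cross_reference_payloads(reviews: dict[str, str]) -> dict[str, dict[str, str]]:
    """For each reviewer, collect the OTHER reviewers' outputs to send back."""
    return {
        name: {other: text for other, text in reviews.items() if other != name}
        for name in reviews
    }
```

Each reviewer then responds to its payload with flagged conflicts, which the lead collects alongside the original three reviews.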
As the lead agent, compile all findings into the PRD Review Report:
1. Read all review reports and cross-reference responses.
2. Invoke the team-deliverables skill for the PRD review report template.
3. Score each dimension using the rubrics from team-deliverables.
4. Compile blocking issues from all three reviewers.
5. Document reviewer conflicts and each reviewer's position.
6. Determine the verdict: READY TO BUILD, NEEDS REVISION, or MAJOR REWORK.
7. Generate the revision checklist.
8. Present the completed report to the user.
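One way the verdict step might combine the three 1-5 scores — the thresholds below are assumptions for illustration, not the team-deliverables rubric:

```python
def verdict(scores: dict[str, int]) -> str:
    """Map per-dimension 1-5 scores to a verdict. Thresholds are illustrative."""
    low = min(scores.values())   # the weakest dimension drives the verdict
    if low >= 4:
        return "READY TO BUILD"
    if low >= 2:
        return "NEEDS REVISION"
    return "MAJOR REWORK"
```

Keying on the minimum score reflects the gate's intent: one failing dimension is enough to block a build, regardless of how strong the others are.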
Shut down all three teammates.
Display completion:
PRD stress test complete.
Verdict: [READY TO BUILD / NEEDS REVISION / MAJOR REWORK]
Scores: Market Fit [X]/5 | Feasibility [X]/5 | Scope [X]/5
[If NEEDS REVISION or MAJOR REWORK]:
Use the revision checklist above to address the findings,
then run the stress test again to verify.
/agent-teams:validation-sprint - Validate the idea before writing a PRD
/agent-teams:competitive-war-room - Research competitors referenced in the PRD