Provides templates and scoring rubrics for multi-agent team workflows including validation verdicts, PRD review reports, and competitive synthesis. Use for final deliverables from agent debates.
npx claudepluginhub slgoodrich/agents --plugin agent-teams

This skill uses the workspace's default tool permissions.
Templates and scoring rubrics for the final outputs of multi-agent team workflows.
Auto-loaded by all six agents:
- idea-researcher, market-researcher, idea-skeptic - for the validation verdict and competitive synthesis scoring rubrics
- market-fit-reviewer, feasibility-reviewer, scope-reviewer - for the PRD review report scoring rubrics

Use when you need:
| Command | Template | When |
|---|---|---|
| /agent-teams:validation-sprint | Validation Verdict | After cross-examination of idea investigation |
| /agent-teams:prd-stress-test | PRD Review Report | After cross-referencing PRD review dimensions |
| /agent-teams:competitive-war-room | Competitive Synthesis | After parallel competitor deep-dives |
User Problem Score
| Score | Meaning |
|---|---|
| 1-2 | No evidence of real user pain. Problem is theoretical. |
| 3-4 | Some users mention this, but it's a mild annoyance. Existing solutions work "well enough." |
| 5-6 | Real problem, but unclear severity or frequency. Some workarounds exist. |
| 7-8 | Clear, validated pain point. Users actively seeking solutions. Workarounds are inadequate. |
| 9-10 | Hair-on-fire problem. Users spending significant time/money on bad workarounds. |
Market Opportunity Score
| Score | Meaning |
|---|---|
| 1-2 | Tiny niche. No evidence of willingness to pay. Market too small to sustain a business. |
| 3-4 | Small market or crowded space with no clear differentiation angle. |
| 5-6 | Viable market but competitive. Differentiation possible but unproven. |
| 7-8 | Attractive market with clear gaps. Evidence of willingness to pay. Timing is right. |
| 9-10 | Large, growing market with underserved segments. Strong demand signals. Clear entry point. |
Defensibility Score
| Score | Meaning |
|---|---|
| 1-2 | No moat. Any competitor could copy this in weeks. Pure feature play. |
| 3-4 | Weak differentiation. First-mover advantage only, which isn't a moat. |
| 5-6 | Some defensibility through domain expertise, data, or network effects. Not bulletproof. |
| 7-8 | Strong differentiation with compounding advantages. Switching costs for users. |
| 9-10 | Deep moat. Proprietary data, strong network effects, or structural advantage. |
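The three validation rubrics above each produce a 1-10 score. A minimal Python sketch of how they might be rolled into a verdict, assuming an unweighted average and illustrative cutoffs (the skill itself does not prescribe an aggregation formula or thresholds):

```python
def validation_verdict(user_problem: int, market_opportunity: int,
                       defensibility: int) -> str:
    """Aggregate the three 1-10 rubric scores into a verdict.

    The unweighted average and the 7 / 5 cutoffs below are
    illustrative assumptions, not part of the skill's rubric.
    """
    scores = {"user_problem": user_problem,
              "market_opportunity": market_opportunity,
              "defensibility": defensibility}
    for name, score in scores.items():
        if not 1 <= score <= 10:
            raise ValueError(f"{name} must be 1-10, got {score}")

    avg = sum(scores.values()) / 3
    if avg >= 7:
        return "build"
    if avg >= 5:
        return "investigate further"
    return "no-build"
```

For example, scores of 8 / 8 / 7 would land in the "build" band, while 2 / 3 / 2 would not; in practice the agents' cross-examination, not a formula, determines the final verdict.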
Market Fit Score
| Score | Meaning |
|---|---|
| 1 | No clear target user or problem. Fundamental market questions unanswered. |
| 2 | Target user defined but problem validation missing. "If you build it, will they come?" is unaddressed. |
| 3 | Problem and user are clear, but differentiation is weak. Could be any competitor's PRD. |
| 4 | Strong problem-solution fit. Clear differentiation. Minor positioning gaps. |
| 5 | Excellent. Clear user, validated problem, sharp differentiation, compelling value prop. |
Feasibility Score
| Score | Meaning |
|---|---|
| 1 | Major technical unknowns. Requirements are vague or contradictory. Can't estimate effort. |
| 2 | Core approach is clear but many requirements are ambiguous. Multiple "TBD" sections. |
| 3 | Mostly clear. Some edge cases missing, some acceptance criteria need tightening. Buildable with clarification. |
| 4 | Clear requirements, well-defined acceptance criteria. Minor gaps. Ready for engineering review. |
| 5 | Precise, testable requirements. Edge cases covered. Acceptance criteria are specific and measurable. |
Scope Score
| Score | Meaning |
|---|---|
| 1 | Massive scope. Years of work presented as an MVP. No prioritization visible. |
| 2 | Too much for V1. Some nice-to-haves mixed in with must-haves. Needs significant cutting. |
| 3 | Reasonable but could be tighter. A few features could be deferred without losing core value. |
| 4 | Well-scoped. Clear must-haves, reasonable timeline. Only minor fat to trim. |
| 5 | Ruthlessly scoped. 3-5 core features. Clear what's in V1 vs. later. Ships fast. |
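The three PRD review dimensions use a 1-5 scale. A hedged sketch of a review summary that totals the scores and flags weak dimensions, assuming a hypothetical "blocker" threshold of 2 or below (the skill does not define such a threshold):

```python
def prd_review_summary(market_fit: int, feasibility: int, scope: int) -> dict:
    """Summarize the three 1-5 PRD review scores.

    The blocking threshold (score <= 2) is an illustrative
    assumption for this sketch, not part of the rubric.
    """
    scores = {"market_fit": market_fit,
              "feasibility": feasibility,
              "scope": scope}
    for name, score in scores.items():
        if not 1 <= score <= 5:
            raise ValueError(f"{name} must be 1-5, got {score}")

    blockers = [name for name, score in scores.items() if score <= 2]
    return {
        "total": sum(scores.values()),  # out of 15
        "blockers": blockers,           # dimensions needing rework
        "ready": not blockers,
    }
```

A PRD scoring 4 / 5 / 3, say, would total 12 with no blockers, while a 2 on any dimension would flag that dimension for rework before the report is finalized.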
Remember: Templates create consistency, not rigidity. Adapt sections when the findings demand it. A template that forces you to fill in blanks with nothing useful is worse than no template.