From agent-teams
Spawns three AI agents to run validation sprint on product ideas: researches user problems, market opportunities, risks; delivers scored build/no-build verdict.
Install: `npx claudepluginhub slgoodrich/agents --plugin agent-teams`
Run a multi-agent validation sprint to answer: "Should I build this?" Three agents investigate in parallel, cross-examine each other's findings, and deliver a go/no-go verdict.
`/agent-teams:validation-sprint "AI-powered code review tool for solo developers"`
This command spawns three teammates with competing perspectives to stress-test a product idea:
After parallel investigation, agents cross-examine each other's findings. The lead synthesizes everything into a Validation Verdict with scores and a BUILD / DON'T BUILD / NEEDS MORE EVIDENCE recommendation.
What you get:
$ARGUMENTS
Before starting, verify Agent Teams is available in your Claude Code version. If teammates cannot be spawned, display:
This command requires Claude Code's Agent Teams feature.
Check https://docs.anthropic.com/en/docs/claude-code for setup instructions.
If not available, stop.
Check for existing product context:
Read these files if they exist:
- `.claude/product-context/product-info.md`
- `.claude/product-context/competitive-landscape.md`
- `.claude/product-context/customer-segments.md`

Display briefing:
── Validation Sprint ──────────────────────────────────────
Idea: [parsed idea description]
Assembling your validation team:
1. idea-researcher → Investigating the user problem
2. market-researcher → Researching the market opportunity
3. idea-skeptic → Trying to kill this idea
Phase 1: Parallel Investigation (3 agents working simultaneously)
Phase 2: Cross-Examination (agents challenge each other)
Phase 3: Synthesis (final verdict with scores)
Starting investigation...
───────────────────────────────────────────────────────────
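The optional product-context check above can be sketched in ordinary code. This is an illustrative sketch only (the command itself reads these files directly inside Claude Code); the function name and return shape are assumptions:

```python
from pathlib import Path

# The three optional context files named in the command description.
CONTEXT_FILES = [
    ".claude/product-context/product-info.md",
    ".claude/product-context/competitive-landscape.md",
    ".claude/product-context/customer-segments.md",
]

def load_product_context(root: str = ".") -> dict[str, str]:
    """Return {relative path: contents} for each context file that exists.

    Missing files are simply skipped -- the sprint proceeds without them.
    """
    found = {}
    for rel in CONTEXT_FILES:
        path = Path(root) / rel
        if path.is_file():
            found[rel] = path.read_text(encoding="utf-8")
    return found
```

Whatever is found gets pasted into each teammate prompt in place of "[Include any product context found in Phase 1]".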
Spawn 3 teammates simultaneously using Agent Teams:
Teammate 1: idea-researcher
Prompt: "Investigate the user problem behind this idea: [idea description].
[Include any product context found in Phase 1]
Your job: Determine if this is a real, painful problem for real people.
Assess pain severity, frequency, existing workarounds, and who feels this most.
Use your research-ops expertise. Search for evidence in forums, review sites,
and community discussions.
Deliver your findings in the standard idea-researcher output format.
Include confidence levels and evidence gaps."
Teammate 2: market-researcher
Prompt: "Research the market opportunity for this idea: [idea description].
[Include any product context found in Phase 1]
Your job: Assess market size, competitive landscape, and timing.
Use WebSearch to find real data on competitors, pricing, and market indicators.
Estimate TAM/SAM/SOM with transparent methodology.
Deliver your findings in the standard market-researcher validation sprint format.
Include source URLs and data gaps."
Teammate 3: idea-skeptic
Prompt: "Try to kill this idea: [idea description].
[Include any product context found in Phase 1]
Your job: Find every structural reason this idea should NOT be built.
Use your attack patterns: vitamin vs painkiller, feature vs product,
temporary differentiation, vague target market, incumbent response,
TAM inflation, switching costs.
Deliver at least 3 specific attacks in the standard idea-skeptic output format.
Rate each attack's severity. Be honest about which attacks are strong vs weak."
Wait for all three teammates to complete their investigations.
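The spawn-and-wait pattern above is a plain fan-out/join. A minimal sketch, with stub functions standing in for the real Agent Teams spawns (the actual teammates run inside Claude Code, not as Python callables):

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-ins for the three teammate prompts; each would normally be a
# spawned agent doing real research, not a local function.
def idea_researcher(idea: str) -> str:
    return f"problem findings for: {idea}"

def market_researcher(idea: str) -> str:
    return f"market findings for: {idea}"

def idea_skeptic(idea: str) -> str:
    return f"attacks on: {idea}"

def run_parallel_investigation(idea: str) -> dict[str, str]:
    """Launch all three investigations simultaneously, then block until
    every one has completed -- the join before cross-examination."""
    agents = {
        "idea-researcher": idea_researcher,
        "market-researcher": market_researcher,
        "idea-skeptic": idea_skeptic,
    }
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(fn, idea) for name, fn in agents.items()}
        return {name: f.result() for name, f in futures.items()}
```

The join matters: cross-examination only makes sense once all three reports exist, since each agent is handed the other two agents' full findings.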
Send each agent the other two agents' findings for one round of structured challenges.
To idea-researcher:
"Here are your teammates' findings. Review them and provide your challenges.
MARKET RESEARCHER'S FINDINGS:
[market-researcher output]
SKEPTIC'S CASE:
[idea-skeptic output]
Challenge their findings from the user problem perspective. Cite specific
claims from their reports. Use the cross-examination patterns from the
team-coordination skill. Focus on your 2-3 strongest challenges."
To market-researcher:
"Here are your teammates' findings. Review them and provide your challenges.
IDEA RESEARCHER'S FINDINGS:
[idea-researcher output]
SKEPTIC'S CASE:
[idea-skeptic output]
Challenge their findings from the market perspective. Cite specific claims
from their reports. Use the cross-examination patterns from the team-coordination
skill. Focus on your 2-3 strongest challenges."
To idea-skeptic:
"Here are your teammates' findings. Review them and provide your challenges.
IDEA RESEARCHER'S FINDINGS:
[idea-researcher output]
MARKET RESEARCHER'S FINDINGS:
[market-researcher output]
Update your case based on their evidence. Which of your attacks still stand?
Which are weakened? Do their findings reveal new vulnerabilities? Cite specific
evidence from their reports."
Wait for all three cross-examination responses.
As the lead agent, compile all findings into the Validation Verdict.
Read all investigation reports and cross-examination responses.
Invoke the team-deliverables skill for the validation verdict template.
Score each dimension (1-10) using the rubrics from team-deliverables:
Determine verdict:
Fill in the validation verdict template with:
Present the completed verdict to the user.
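One way to picture the score-to-verdict mapping is below. The dimension names, equal weighting, and thresholds here are illustrative assumptions, not the actual team-deliverables rubric:

```python
def verdict(scores: dict[str, int],
            build_floor: float = 7.0,
            kill_ceiling: float = 4.0) -> str:
    """Map 1-10 dimension scores to a recommendation.

    Assumption: an unweighted average with two cut-offs. The real rubric
    in team-deliverables may weight dimensions differently.
    """
    avg = sum(scores.values()) / len(scores)
    if avg >= build_floor:
        return "BUILD"
    if avg <= kill_ceiling:
        return "DON'T BUILD"
    return "NEEDS MORE EVIDENCE"
```

The middle band is deliberate: a lukewarm average should surface as NEEDS MORE EVIDENCE rather than be forced into a binary call.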
Shut down all three teammates.
Offer to save the verdict:
Validation sprint complete.
Would you like me to save this verdict to:
.claude/product-context/validation-[idea-slug].md
This will be available to other PM agents for future reference.
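Deriving `[idea-slug]` from the idea description could look like the following sketch (the exact slug rules are an assumption; the command does not specify them):

```python
import re

def idea_slug(idea: str, max_len: int = 40) -> str:
    """Turn an idea description into a filesystem-safe slug:
    lowercase, runs of non-alphanumerics collapsed to single hyphens,
    truncated to max_len with no trailing hyphen."""
    slug = re.sub(r"[^a-z0-9]+", "-", idea.lower()).strip("-")
    return slug[:max_len].rstrip("-")
```

For example, `idea_slug("AI Code Review!")` yields `ai-code-review`, producing a save path like `.claude/product-context/validation-ai-code-review.md`.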
/agent-teams:prd-stress-test - After validation, stress-test the PRD
/agent-teams:competitive-war-room - Deep-dive specific competitors