Validates startup ideas end-to-end: KB/project/web search, manifest alignment check, S.E.E.D. niche analysis, devil's advocate inversion, STREAM 6-layer evaluation, stack selection, PRD generation.
Install: `npx claudepluginhub fortunto2/solo-factory --plugin solo`
Validate a startup idea end-to-end: search KB, run Manifest alignment, S.E.E.D. niche check, Devil's Advocate inversion, STREAM 6-layer analysis, pick stack, generate PRD.
Philosophy: Validation should be honest, not optimistic. Better to kill a bad idea in 5 minutes than waste 3 months building it. The goal is truth, not encouragement.
If MCP tools are available, prefer them over CLI:
- `kb_search(query, n_results)` — search the knowledge base for related docs
- `project_info()` — list active projects with their stacks
- `web_search(query)` — search for dead startups and competitor failures

If MCP tools are not available, fall back to Grep/Glob/WebSearch.
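The MCP-first fallback can be sketched as follows. This is a sketch, not the skill's actual implementation: `mcp` is a hypothetical handle exposing the MCP tools, and the local branch stands in for Grep/Glob over `.md` files.

```python
import glob

def search_kb(query: str, n_results: int = 5, mcp=None):
    """Prefer the MCP kb_search tool; otherwise scan *.md files locally."""
    if mcp is not None:
        # MCP available: delegate to the knowledge-base tool
        return mcp.kb_search(query=query, n_results=n_results)
    # Fallback: naive case-insensitive scan over Markdown files
    hits = []
    for path in sorted(glob.glob("**/*.md", recursive=True)):
        with open(path, encoding="utf-8", errors="ignore") as f:
            if query.lower() in f.read().lower():
                hits.append(path)
        if len(hits) >= n_results:
            break
    return hits
```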
Parse the idea from $ARGUMENTS. If empty, ask the user what idea they want to validate.
Search for related knowledge:
If MCP kb_search tool is available, use it directly:
kb_search(query="<idea keywords>", n_results=5)
Otherwise search locally: Grep/Glob over `.md` files across the project and knowledge base.
Summarize any related documents found (existing ideas, frameworks, opportunities).

Deep research (optional): Check if `research.md` exists for this idea (look in `docs/` or the current working directory). If it is missing, ask whether to run `/research <idea>` and come back. If no, continue without it.

Manifest Alignment Check (with teeth):
Consult references/manifest-checklist.md (bundled with this skill) for the full checklist of 9 principles and 6 red flags. Check the idea against EACH one. This is not a formality — a manifest violation is a soft kill flag.
For each principle, assess: comply or violate? If violating — cite the specific principle.
Key principles (see checklist for details):
Scoring: 0 violations = perfect, 1-2 = caution, 3+ = strong KILL signal.
Be honest. If the idea conflicts with principles, SAY SO. Don't rationalize alignment.
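The violation scoring above can be sketched as a small helper (the function name is illustrative, not part of the skill):

```python
def manifest_verdict(violations: list[str]) -> str:
    """Map manifest violations to a verdict: 0 = perfect, 1-2 = caution, 3+ = KILL."""
    n = len(violations)
    if n == 0:
        return "perfect"
    if n <= 2:
        return "caution"
    return "KILL"
```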
S.E.E.D. niche check (quick, before deep analysis):
Score the idea on four dimensions:
Kill flags (stop immediately if any):
If any kill flag triggers → recommend KILL with explanation. Don't proceed to STREAM.
Devil's Advocate (Inversion):
"Flip the question: how would you guarantee failure?" — STREAM Layer 3 (Inversion)
This step is mandatory — before scoring positively, actively try to kill the idea. The goal is to find reasons NOT to build it.
6a. Inversion — 5 ways this fails: List 5 specific, concrete ways this idea could fail. Not generic risks ("competition") but specific scenarios with evidence:
6b. Dead startup search: Search for startups that tried something similar and failed or pivoted:
- `"<idea category>" startup failed OR pivoted OR shut down`
- `"<competitor>" pivot OR layoffs OR shutdown`

6c. Unit economics stress test (if `research.md` exists): Recalculate unit economics with PESSIMISTIC assumptions:
| Metric | Optimistic | Realistic | Pessimistic |
|---|---|---|---|
| Monthly churn | 10% | 30-40% (industry data) | 50%+ (first year) |
| Average lifetime | 10 months | 2.5-3 months | 1.5 months |
| LTV | (price × 10) | (price × 2.5) | (price × 1.5) |
| CAC | <$20 | $30-50 | $50-80 |
| LTV:CAC | >3:1 | ~1:1 | <1:1 (UNPROFITABLE) |
If pessimistic LTV:CAC < 1 → flag as critical risk.
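One way to run the three scenarios from the table, using its LTV = price × average lifetime rule (the $20 price and $60 CAC below are illustrative, not from the table):

```python
def stress_test(price: float, cac: float, lifetime_months: float) -> dict:
    """Recompute LTV and LTV:CAC for one churn scenario."""
    ltv = price * lifetime_months  # LTV = price x average lifetime
    ratio = ltv / cac
    return {"ltv": ltv, "ltv_cac": round(ratio, 2), "critical": ratio < 1}

# Pessimistic scenario for a hypothetical $20/mo product:
pessimistic = stress_test(price=20, cac=60, lifetime_months=1.5)
# LTV $30, LTV:CAC 0.5 -> flag as critical risk
```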
6d. "Empty market" test: If the analysis found an "empty" market segment or pricing gap, ask:
6e. Manifest conflict honesty: Re-check findings from step 4. For each manifest violation found, state the conflict clearly: "This requires X, which violates principle Y because Z." Do NOT rationalize conflicts away. The user decides whether to proceed — not the skill.
STREAM analysis: Walk the idea through all 6 layers.
Consult references/stream-layers.md (bundled with this skill) for the complete 6-layer framework with questions per layer.
For EACH layer, provide BOTH positive and negative assessment. Use the actual framework questions:
Scoring rules:
Beachhead Segment (first target market):
From research personas (if available) or idea description, define the beachhead:
This feeds into PRD ICP section and /launch GTM strategy.
Pricing Model (2-3 options with reasoning):
Based on competitor pricing (from research.md) and manifesto principles (subscription fatigue is real — prefer one-time purchase or honest transparent pricing when possible):
| Model | Price | Reasoning | Manifest Alignment |
|---|---|---|---|
| One-time | $X | No running costs → no recurring charge | Best fit (no lock-in) |
| Freemium | Free / $X/mo | Server costs justify subscription | OK if honest |
| Usage-based | $X per Y | Pay for what you use | Good (transparent) |
Rules from manifesto (templates/principles/manifest.md):
Pick the recommended model and explain why. Include in PRD.
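The decision rule behind the pricing table can be sketched as follows (the flag names are illustrative assumptions, not skill parameters):

```python
def recommend_pricing(has_running_costs: bool, usage_metered: bool) -> str:
    """Manifesto-leaning default: prefer one-time when nothing recurs."""
    if not has_running_costs:
        return "one-time"     # no running costs -> no recurring charge
    if usage_metered:
        return "usage-based"  # transparent: pay for what you use
    return "freemium"         # server costs justify a subscription
```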
Stack selection: Auto-detect from research data, then confirm or ask.
Auto-detection rules (from research.md product_type field or idea keywords):
- `product_type: ios` → ios-swift
- `product_type: android` → kotlin-android
- `product_type: web` + mentions AI/ML → nextjs-supabase (or nextjs-ai-agents)
- `product_type: web` + landing/static → astro-static
- `product_type: web` + content site + needs SSR for some pages (CDN data, transcripts, dynamic) → astro-hybrid
- `product_type: web` (default) → nextjs-supabase
- `product_type: api` → python-api
- `product_type: cli` + Python keywords → python-ml
- `product_type: cli` + JS/TS keywords → nextjs-supabase (monorepo)
- cloudflare-workers

If auto-detected with high confidence, state the choice and proceed.
If ambiguous (e.g., could be web or mobile), ask via AskUserQuestion with the top 2-3 options.
If MCP project_info is available, show user's existing stacks as reference.
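The auto-detection rules can be sketched like this. The keyword matching is deliberately naive (plain substring checks); treat it as an illustration of the mapping, not the skill's actual implementation.

```python
def pick_stack(product_type: str, idea: str) -> str:
    """Map research product_type + idea keywords to a stack name."""
    text = idea.lower()
    fixed = {"ios": "ios-swift", "android": "kotlin-android", "api": "python-api"}
    if product_type in fixed:
        return fixed[product_type]
    if product_type == "cli":
        # Python keywords -> python-ml; otherwise JS/TS monorepo
        return "python-ml" if "python" in text else "nextjs-supabase"
    if product_type == "web":
        if any(k in text for k in ("ai", "ml", "agent")):
            return "nextjs-supabase"  # or nextjs-ai-agents
        if any(k in text for k in ("landing", "static")):
            return "astro-static"
    return "nextjs-supabase"          # web default
```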
Write the PRD to `docs/prd.md` in the current project directory. Use a kebab-case project name derived from the idea.

The PRD must pass Definition of Done:
Next steps:
- `/research <idea>` — if evidence is weak, get data first
- `/scaffold <name> <stack>` — if realistic score ≥ 7, build it

See `references/manifest-checklist.md` and `references/stream-layers.md` (bundled with this skill). They contain the actual checklists.

Run `/research` or `/swarm` to score and generate the PRD.

Cause: Idea fails basic niche viability (SERP dominated, no evidence, MVP too complex).
Fix: This is by design — kill flags save time. Consider pivoting the idea or running /research for deeper evidence.
Cause: Skipped /research step.
Fix: Skill asks if you want to research first. For stronger PRDs, run /research <idea> before /validate.
Cause: Ambiguous product type (could be web or mobile).
Fix: Skill asks via AskUserQuestion when ambiguous. Specify product type explicitly in the idea description.
Cause: Confirmation bias — you found evidence FOR and stopped looking.
Fix: Devil's Advocate step is now mandatory. If you skipped it, the score is invalid. Re-run with full inversion.
Cause: The idea is exciting but conflicts with principles.
Fix: State conflicts explicitly. "This violates X because Y" is more useful than silence. The user decides whether to proceed — not the skill.