This skill should be used when the user asks to "assess viability", "analyze the business case", "estimate project costs", "model revenue", "evaluate market opportunity", "do a competitive analysis", "create a go/no-go recommendation", or needs to transform a problem brief and requirements into a business viability analysis with market sizing, competitive landscape, cost estimates, revenue models, and a go/no-go recommendation. This is an optional pipeline step that can be skipped with the --skip-viability flag.
From the pm-architect-greenfield plugin. This skill uses the workspace's default tool permissions.
Transform PROBLEM_BRIEF.md and REQUIREMENTS.md artifacts into a comprehensive VIABILITY.md artifact containing market analysis (TAM/SAM/SOM), competitive landscape assessment, cost estimation, revenue modeling, and a data-driven go/no-go recommendation. This is an optional step in the greenfield specification pipeline — it provides business context for investment decisions.
```
[Raw Idea] → [PROBLEM_BRIEF.md] → [REQUIREMENTS.md] → ... → [TASKS.md] → [IMPLEMENTATION_PLAN.md]
                    ↓                    ↓
        **BUSINESS VIABILITY** (optional, parallel path)
                    ↓
              [VIABILITY.md]
```
This skill runs as an optional side-branch. It can be executed with PROBLEM_BRIEF.md alone, after REQUIREMENTS.md is available, or after TASKS.md is complete. More upstream artifacts produce more accurate cost estimates.
Input: artifacts/greenfield/<project_name>/PROBLEM_BRIEF.md, optionally artifacts/greenfield/<project_name>/REQUIREMENTS.md
Output: artifacts/greenfield/<project_name>/VIABILITY.md
Skip condition: If the user passes --skip-viability, this skill is not executed and VIABILITY.md is not generated.
Load PROBLEM_BRIEF.md and extract:
If REQUIREMENTS.md is available, also extract:
If TASKS.md is available, also extract:
Estimate the market opportunity using the three-tier model:
- TAM (Total Addressable Market): the total annual revenue opportunity if the product captured the entire market.
- SAM (Serviceable Addressable Market): the portion of TAM reachable given the product's segment, geography, and channel constraints.
- SOM (Serviceable Obtainable Market): the portion of SAM the product can realistically capture within the projection window.
Market sizing rules:
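The TAM → SAM → SOM narrowing is simple arithmetic and can be sketched as below. Every figure is a hypothetical placeholder, not real market data:

```python
# Hypothetical market-sizing sketch; all numbers are placeholder assumptions.
# Filters narrow TAM to SAM; a capture rate narrows SAM to SOM.

tam = 2_000_000_000                    # total addressable market, $/yr (assumed)
filters = {
    "geography (US only)": 0.40,       # fraction of market retained by each filter
    "segment (SMB only)": 0.30,
    "reachable sales channels": 0.50,
}

sam = tam
for _name, retained in filters.items():
    sam *= retained                    # TAM -> SAM

som_y1 = sam * 0.02                    # 2% capture in Year 1 (assumed)

print(f"TAM ${tam:,.0f} -> SAM ${sam:,.0f} -> SOM Y1 ${som_y1:,.0f}")
assert tam > sam > som_y1              # invariant from the quality checklist
```

Making each filter explicit keeps the calculation auditable and satisfies the TAM > SAM > SOM invariant checked later in this document.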
Identify and analyze competitors and alternatives:
Competitor format:

```
Competitor: [Name]
Type: [Direct | Indirect | Substitute]
Description: [What they do]
Strengths: [What they do well]
Weaknesses: [Where they fall short]
Pricing: [Their pricing model and range]
Market Position: [Leader | Challenger | Niche | Emerging]
Differentiation: [How our solution differs]
```
Competitor types:
| Type | Definition | Example |
|---|---|---|
| Direct | Solves the same problem for the same users | Competitor CRM for real estate |
| Indirect | Solves a related problem or serves adjacent users | General CRM adapted for real estate |
| Substitute | Solves the problem with a fundamentally different approach | Spreadsheets, manual processes, hiring assistants |
Analysis deliverables:
Competitive moat categories:
Create a cost estimate covering development, infrastructure, and operations:
Cost categories:
| Category | Estimation Method | Formula |
|---|---|---|
| Engineering labor | Effort points * cost per point | [Total points] * [$/point] |
| Design/UX | % of engineering cost | Engineering * 15-25% |
| QA/Testing | % of engineering cost | Engineering * 10-20% |
| Project management | % of engineering cost | Engineering * 10-15% |
| Third-party licenses | Per-license or per-seat | [Count] * [$/license] |
Engineering cost per point estimation:
| Team Composition | Cost per Point |
|---|---|
| Solo developer | $50-100 |
| Small team (2-3 devs) | $100-200 |
| Agency/contract team | $200-400 |
| Enterprise team | $300-600 |
If TASKS.md is not available, estimate development cost from requirement count instead of effort points.
Infrastructure cost components:
| Component | Estimation Basis |
|---|---|
| Hosting/compute | Based on architecture complexity and expected traffic |
| Database | Based on data model size and query volume |
| Storage | Based on data retention requirements |
| CDN/bandwidth | Based on geographic distribution and traffic |
| Third-party APIs | Based on integration count and usage volume |
| Monitoring/logging | Based on component count |
| Security/compliance | Based on regulatory requirements |
Monthly infrastructure tiers:
| Scale | Monthly Cost Range | Characteristics |
|---|---|---|
| Prototype | $0-50 | Free tiers, single server |
| Early Stage | $50-500 | Small dedicated resources |
| Growth | $500-5,000 | Scaled infrastructure, CDN |
| Scale | $5,000-50,000 | Multi-region, high availability |
Operational cost categories:
| Category | Estimation Basis |
|---|---|
| Customer support | Based on expected user count and support ratio |
| Marketing/growth | Based on CAC targets and SOM goals |
| Maintenance/updates | 15-20% of development cost annually |
| Legal/compliance | Based on regulatory requirements |
Create revenue projections based on pricing strategy and market capture:
Pricing models to evaluate:
| Model | Description | Best For |
|---|---|---|
| Subscription (SaaS) | Recurring monthly/annual fee | B2B tools, platforms |
| Usage-based | Pay per use (API calls, transactions) | APIs, infrastructure |
| Freemium | Free tier + paid upgrades | Consumer apps, developer tools |
| One-time license | Single purchase | Desktop software, plugins |
| Marketplace commission | % of transactions | Marketplaces, platforms |
| Tiered pricing | Multiple plans at different price points | Most SaaS products |
Revenue projection format:
| Metric | Month 3 | Month 6 | Year 1 | Year 2 | Year 3 |
|---|---|---|---|---|---|
| Users (Free) | [N] | [N] | [N] | [N] | [N] |
| Users (Paid) | [N] | [N] | [N] | [N] | [N] |
| Conversion Rate | [%] | [%] | [%] | [%] | [%] |
| ARPU | [$] | [$] | [$] | [$] | [$] |
| MRR | [$] | [$] | [$] | [$] | [$] |
| ARR | [$] | [$] | [$] | [$] | [$] |
| Churn Rate | [%] | [%] | [%] | [%] | [%] |
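A minimal projection consistent with the table format above can be sketched as follows; the acquisition rate, conversion, ARPU, and churn values are placeholder assumptions to replace per project:

```python
# Illustrative month-by-month projection. Acquisition, conversion, ARPU,
# and churn are placeholder assumptions.

new_free_per_month = 500     # new free signups per month (assumed)
conversion = 0.03            # free -> paid conversion (assumed)
arpu = 29.0                  # $ per paid user per month (assumed)
monthly_churn = 0.05

paid = 0.0
for month in range(1, 13):
    # churn the existing paid base, then add newly converted users
    paid = paid * (1 - monthly_churn) + new_free_per_month * conversion
    mrr = paid * arpu
    if month in (3, 6, 12):
        print(f"Month {month:2d}: paid users ~{paid:.0f}, MRR ~${mrr:,.0f}")

arr = mrr * 12               # run-rate ARR at end of Year 1
print(f"Year 1 ARR (run rate): ~${arr:,.0f}")
```

Note how churn caps the paid base near (new paid users per month) / churn; this is why hockey-stick projections that ignore churn overstate later-year revenue.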
Key metrics to calculate: CAC, LTV, LTV:CAC ratio, payback period, gross margin, and break-even point.
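The standard unit-economics formulas can be sketched as below; every input is an assumption to replace with project-specific data:

```python
import math

# Sketch of the standard unit-economics formulas; every input is an assumption.

arpu = 29.0             # $ per paid user per month
gross_margin = 0.80
monthly_churn = 0.05
cac = 120.0             # blended customer acquisition cost (assumed)
fixed_costs = 8_000.0   # monthly infrastructure + operations (assumed)

# LTV: margin-adjusted revenue over the expected lifetime (1/churn months)
ltv = arpu * gross_margin / monthly_churn
ltv_cac = ltv / cac
payback_months = cac / (arpu * gross_margin)
breakeven_users = math.ceil(fixed_costs / (arpu * gross_margin))

print(f"LTV ${ltv:,.0f}, LTV:CAC {ltv_cac:.1f}:1, "
      f"payback {payback_months:.1f} mo, break-even at {breakeven_users} paid users")
```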
Synthesize all analysis into a clear recommendation:
Recommendation framework:
| Signal | GO | CAUTION | NO-GO |
|---|---|---|---|
| Market Size (SOM) | > $1M ARR potential | $100K-$1M | < $100K |
| Competition | Clear differentiation | Crowded but opportunity exists | Dominated by incumbents |
| Development Cost | < 6 months to MVP | 6-12 months | > 12 months |
| LTV:CAC | > 3:1 | 1:1 to 3:1 | < 1:1 |
| Break-even | < 12 months | 12-24 months | > 24 months |
| Technical Risk | Low/manageable | Medium with mitigations | High/unmitigable |
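The signal table can be applied mechanically. This sketch mirrors two of the thresholds; the aggregation rule at the end (any NO-GO blocks, a CAUTION majority demotes to CONDITIONAL GO) is an assumption, not something the framework prescribes:

```python
# Sketch of the go/no-go framework. Two signal functions mirror the table;
# the aggregation rule in recommend() is an assumed policy.

def market_signal(som_arr):
    if som_arr > 1_000_000:
        return "GO"
    return "CAUTION" if som_arr >= 100_000 else "NO-GO"

def ltv_cac_signal(ratio):
    if ratio > 3:
        return "GO"
    return "CAUTION" if ratio >= 1 else "NO-GO"

def recommend(signals):
    if "NO-GO" in signals:
        return "NO-GO"            # any hard failure blocks the project
    if signals.count("CAUTION") > len(signals) / 2:
        return "CONDITIONAL GO"   # mostly yellow -> proceed with conditions
    return "GO"

print(recommend([market_signal(1_500_000), ltv_cac_signal(3.9)]))  # GO
```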
Recommendation format:

```
Recommendation: [GO | CONDITIONAL GO | NO-GO]
Confidence Level: [High | Medium | Low]
[Based on quality and completeness of available data]

Key Factors:
1. [Most important positive or negative factor]
2. [Second most important factor]
3. [Third factor]

Conditions (if CONDITIONAL GO):
- [Condition 1 that must be met before proceeding]
- [Condition 2]

Next Steps:
- [Immediate action 1]
- [Immediate action 2]
- [Immediate action 3]
```
Write the artifact following the template at ${CLAUDE_PLUGIN_ROOT}/reference/templates/VIABILITY.template.md.
The artifact must follow the standard artifact template structure:
# VIABILITY: [Project Name]
## Summary
[2-3 sentence summary: market opportunity, estimated cost, revenue potential, recommendation]
## Inputs
- **Problem Brief**: `PROBLEM_BRIEF.md` — [problem statement, target users, success metrics referenced]
- **Requirements**: `REQUIREMENTS.md` (if available) — [REQ count, complexity assessment]
- **Tasks**: `TASKS.md` (if available) — [effort points for cost estimation]
## Outputs
- This document (VIABILITY.md)
- Informs: Investment decisions, resource allocation, go/no-go gate
## Assumptions
- [ASM-0001]: [Business assumption — e.g., "Market data based on 2025 industry reports"]
- [ASM-0002]: [e.g., "Conversion rate of 3% from free to paid tier"]
- ...
## Open Questions
- [OQ-0001]: [Question affecting viability — e.g., "Actual CAC for target market segment?"]
- ...
## Main Content
### Market Analysis
#### Total Addressable Market (TAM)
- **Definition**: [Market boundary]
- **Size**: $[N]
- **Calculation**: [How TAM was derived]
- **Data Sources**: [Where numbers come from]
- **Growth Rate**: [Annual growth %]
#### Serviceable Addressable Market (SAM)
- **Definition**: [Segment boundary]
- **Size**: $[N]
- **Filters Applied**: [Geography, industry, company size, etc.]
- **Calculation**: [How SAM was derived]
#### Serviceable Obtainable Market (SOM)
- **Definition**: [Capture boundary]
- **Size**: $[N] (Year 1), $[N] (Year 3)
- **Capture Rate**: [% of SAM]
- **Rationale**: [Why this capture rate is realistic]
#### Market Opportunity Summary
```mermaid
pie title Market Sizing
"TAM" : [N]
"SAM" : [N]
"SOM Year 1" : [N]
| Competitor | Type | Pricing | Market Position | Key Strength | Key Weakness |
|---|---|---|---|---|---|
| [Name 1] | Direct | $[N]/mo | Leader | [Strength] | [Weakness] |
| [Name 2] | Direct | $[N]/mo | Challenger | [Strength] | [Weakness] |
| [Name 3] | Indirect | $[N]/mo | Niche | [Strength] | [Weakness] |
| [Name 4] | Substitute | Free/DIY | - | [Strength] | [Weakness] |
```mermaid
quadrantChart
title Feature Completeness vs Ease of Use
x-axis Low Feature Completeness --> High Feature Completeness
y-axis Hard to Use --> Easy to Use
quadrant-1 Feature-Rich & Easy
quadrant-2 Simple & Easy
quadrant-3 Simple & Hard
quadrant-4 Feature-Rich & Hard
Competitor A: [0.8, 0.3]
Competitor B: [0.4, 0.7]
Our Product: [0.6, 0.8]
```

### Cost Analysis

#### Development Costs

| Category | Estimate | Basis |
|---|---|---|
| Engineering | $[N] | [Effort points * cost per point, or REQ-based estimate] |
| Design/UX | $[N] | [% of engineering] |
| QA/Testing | $[N] | [% of engineering] |
| Project Management | $[N] | [% of engineering] |
| Licenses/Tools | $[N] | [Itemized] |
| Total Development | $[N] | |
#### Infrastructure Costs

| Component | Monthly Cost | Annual Cost | Notes |
|---|---|---|---|
| Hosting/Compute | $[N] | $[N] | [Basis] |
| Database | $[N] | $[N] | [Basis] |
| Storage | $[N] | $[N] | [Basis] |
| Third-party APIs | $[N] | $[N] | [Basis] |
| Monitoring | $[N] | $[N] | [Basis] |
| Total Infrastructure | $[N]/mo | $[N]/yr | |
#### Operational Costs

| Category | Monthly Cost | Annual Cost | Notes |
|---|---|---|---|
| Support | $[N] | $[N] | [Basis] |
| Marketing | $[N] | $[N] | [Basis] |
| Maintenance | $[N] | $[N] | [15-20% of dev cost annually] |
| Legal/Compliance | $[N] | $[N] | [Basis] |
| Total Operations | $[N]/mo | $[N]/yr | |
#### Total Cost Summary

| Period | Cost |
|---|---|
| Development (one-time) | $[N] |
| Year 1 (dev + infra + ops) | $[N] |
| Year 2 (infra + ops) | $[N] |
| Year 3 (infra + ops) | $[N] |
| 3-Year Total | $[N] |
### Revenue Model

#### Pricing Tiers

| Tier | Price | Features | Target Segment |
|---|---|---|---|
| Free | $0 | [Feature list] | [Who this is for] |
| Basic | $[N]/mo | [Feature list] | [Who this is for] |
| Pro | $[N]/mo | [Feature list] | [Who this is for] |
| Enterprise | $[N]/mo | [Feature list] | [Who this is for] |
#### Revenue Projections

| Metric | Month 3 | Month 6 | Year 1 | Year 2 | Year 3 |
|---|---|---|---|---|---|
| Total Users | [N] | [N] | [N] | [N] | [N] |
| Paid Users | [N] | [N] | [N] | [N] | [N] |
| Conversion Rate | [%] | [%] | [%] | [%] | [%] |
| ARPU | $[N] | $[N] | $[N] | $[N] | $[N] |
| MRR | $[N] | $[N] | $[N] | $[N] | $[N] |
| ARR | - | - | $[N] | $[N] | $[N] |
| Monthly Churn | [%] | [%] | [%] | [%] | [%] |
#### Unit Economics

| Metric | Value | Benchmark | Assessment |
|---|---|---|---|
| CAC | $[N] | < $[N] | [Good/Needs Work] |
| LTV | $[N] | > 3x CAC | [Good/Needs Work] |
| LTV:CAC Ratio | [N]:1 | > 3:1 | [Good/Needs Work] |
| Payback Period | [N] months | < 12 months | [Good/Needs Work] |
| Gross Margin | [N]% | > 70% for SaaS | [Good/Needs Work] |
#### Break-Even Analysis

| Metric | Value |
|---|---|
| Monthly Break-Even Revenue | $[N] |
| Users Needed for Break-Even | [N] paid users |
| Estimated Break-Even Date | Month [N] |
### Recommendation

Recommendation: [GO | CONDITIONAL GO | NO-GO]
Confidence Level: [High | Medium | Low]
| Factor | Assessment | Signal |
|---|---|---|
| Market Size (SOM) | $[N] | [GO/CAUTION/NO-GO] |
| Competition | [Assessment] | [GO/CAUTION/NO-GO] |
| Development Cost | $[N], [N] months | [GO/CAUTION/NO-GO] |
| LTV:CAC | [N]:1 | [GO/CAUTION/NO-GO] |
| Break-Even | Month [N] | [GO/CAUTION/NO-GO] |
| Technical Risk | [Assessment] | [GO/CAUTION/NO-GO] |
**Output path**: `artifacts/greenfield/<project_name>/VIABILITY.md`
## Determinism Rules
1. ASM-NNNN IDs: Sort alphabetically by assumption text, assign sequential 4-digit numbers starting from 0001
2. OQ-NNNN IDs: Sort alphabetically by question text, assign sequential 4-digit numbers starting from 0001
3. Competitors listed by type order (Direct first, then Indirect, then Substitute), then alphabetically by name within each type
4. Cost categories listed in order: Development, Infrastructure, Operational
5. Revenue projection columns in chronological order
6. No timestamps in artifact body
7. Sections must appear in template order
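Rules 1-3 amount to "sort first, then number". A sketch with hypothetical assumption texts and competitor names:

```python
# Sketch of determinism rules 1-3: sort, then assign sequential IDs.
# The assumption texts and competitor names are hypothetical examples.

assumptions = [
    "Market data based on 2025 industry reports",
    "Conversion rate of 3% from free to paid tier",
]
# Rule 1: alphabetical by text, then ASM-0001, ASM-0002, ...
asm_ids = {text: f"ASM-{i:04d}" for i, text in enumerate(sorted(assumptions), 1)}

competitors = [
    {"name": "Zeta CRM", "type": "Indirect"},
    {"name": "Acme CRM", "type": "Direct"},
    {"name": "Spreadsheets", "type": "Substitute"},
]
# Rule 3: type order first, then alphabetical by name within each type
type_rank = {"Direct": 0, "Indirect": 1, "Substitute": 2}
ordered = sorted(competitors, key=lambda c: (type_rank[c["type"]], c["name"]))

print(asm_ids)
print([c["name"] for c in ordered])
```

Sorting before numbering means two runs over the same inputs always produce identical IDs and ordering, which is the point of the rules.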
## Data Quality and Honesty
This skill produces estimates, not facts. Maintain intellectual honesty:
**Rules for estimates:**
1. Always state confidence level (High/Medium/Low) with rationale
2. Use ranges instead of point estimates where possible ($50K-$80K, not $65K)
3. Clearly label data sources: "Industry report" vs. "Author estimate"
4. Flag when data is unavailable: "[No data — estimate based on comparable products]"
5. Prefer conservative estimates (underestimate revenue, overestimate cost)
6. Document the sensitivity of the recommendation to key assumptions
**Sensitivity analysis:**
For each key assumption, state what happens if it is wrong:
If [assumption] is wrong by [magnitude]: [impact on cost, revenue, and whether the go/no-go recommendation changes]
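A one-variable sensitivity check can be sketched by perturbing a single assumption by ±20% and watching for a signal flip. All inputs below are assumptions:

```python
# One-variable sensitivity sketch: perturb churn by +/-20% and check whether
# the LTV:CAC signal flips. All inputs are assumed.

def ltv_cac(arpu, margin, churn, cac):
    return (arpu * margin / churn) / cac

base = {"arpu": 29.0, "margin": 0.80, "churn": 0.05, "cac": 140.0}

for delta in (-0.20, 0.0, 0.20):
    churn = base["churn"] * (1 + delta)
    ratio = ltv_cac(base["arpu"], base["margin"], churn, base["cac"])
    verdict = "GO" if ratio > 3 else ("CAUTION" if ratio >= 1 else "NO-GO")
    print(f"churn {delta:+.0%}: LTV:CAC {ratio:.1f}:1 -> {verdict}")
```

With these inputs, the +20% churn case drops LTV:CAC below 3:1 and flips the signal from GO to CAUTION — exactly the fragility this rule asks you to disclose.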
## Quality Checklist
Before completing, verify:
- [ ] TAM > SAM > SOM (always)
- [ ] SOM is 1-10% of SAM for new entrants
- [ ] At least 3 competitors analyzed
- [ ] At least one substitute (non-product alternative) analyzed
- [ ] Development cost methodology is clear and repeatable
- [ ] Infrastructure costs are based on architecture components
- [ ] Revenue projections include conservative and optimistic scenarios
- [ ] LTV:CAC ratio is calculated and assessed
- [ ] Break-even date is realistic
- [ ] Go/No-Go recommendation is supported by the data presented
- [ ] All assumptions are documented with IDs
- [ ] Sensitivity of key assumptions is discussed
- [ ] Confidence level reflects data quality honestly
- [ ] No made-up statistics or false precision
- [ ] Artifact follows the standard template structure
## Common Pitfalls
1. **Inflated TAM**: Using the broadest possible market definition to make the opportunity look large. Be honest about the realistic addressable market.
2. **Ignoring substitutes**: The biggest competitor is often "doing nothing" or "using a spreadsheet." Always analyze the substitute option.
3. **Underestimating operational costs**: Support, marketing, and maintenance costs often exceed infrastructure costs. Include them.
4. **Hockey stick revenue projections**: Revenue rarely grows exponentially in Year 1. Use realistic adoption curves.
5. **Ignoring churn**: SaaS products have churn. Even a 5% monthly churn rate means replacing half your customers annually. Model it.
6. **False precision**: "$127,493 development cost" implies precision that does not exist. Use "$125K-$135K" instead.
7. **No sensitivity analysis**: If the recommendation changes when one assumption moves by 20%, it is a fragile recommendation. Disclose this.
8. **Competitor analysis bias**: Listing only competitor weaknesses without acknowledging strengths is not analysis, it is advocacy.
9. **Missing the "why now" question**: Why will this product succeed now when previous attempts failed? Market timing matters.
10. **Confusing revenue with profit**: Revenue projections without cost projections are meaningless. Always pair revenue with cost to show profitability timeline.
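The arithmetic behind pitfalls 5 and 6, using illustrative numbers:

```python
# Pitfall 5: churn compounds monthly, so small monthly rates are large annually.
monthly_churn = 0.05
annual_retention = (1 - monthly_churn) ** 12          # ~0.54
print(f"Customers lost per year: {1 - annual_retention:.0%}")

# Pitfall 6: convert a falsely precise point estimate into an honest range.
raw_estimate = 127_493
lo = round(raw_estimate * 0.95, -3)                   # -3 rounds to nearest $1,000
hi = round(raw_estimate * 1.05, -3)
print(f"Report as ${lo/1000:.0f}K-${hi/1000:.0f}K, not ${raw_estimate:,}")
```

The ±5% band and $1,000 rounding granularity are assumptions; pick whatever width honestly reflects the uncertainty of the underlying estimate.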