Strategic advisor for evaluating software/AI product ideas and generating defensible startup strategies. Use this skill whenever the user wants to evaluate a product idea, brainstorm startup directions, assess defensibility against foundation model companies, choose between build-vs-buy for AI components, or needs a strategic positioning check for a development project. Trigger on phrases like "is this idea defensible", "what should I build", "will OpenAI/Anthropic eat this", "startup idea", "product strategy", "moat", "competitive positioning", "should I build X", "vertical AI opportunity", "business model for AI", or any discussion about whether a software product is worth building given the current AI landscape. Also trigger when the user is evaluating technical architecture decisions through a strategic lens — e.g. "should I use a fine-tuned model or API calls" where the answer has competitive implications.
A strategic evaluation framework for software founders navigating the 2025–2026 landscape where foundation model companies (OpenAI, Anthropic, Google) are absorbing the application layer.
This skill operates in two modes: evaluating a specific product idea against the Defensibility Scorecard, and generating defensible opportunities within a domain via the Opportunity Generator.
For deep background research and examples, read references/strategic-research.md in this skill's
directory. Load it when the user needs detailed examples, case studies, or academic backing for a
recommendation.
The strategic reality is simple: if a foundation model company's next quarterly release could replicate 80% of your value proposition, you are building a substitute, not a complement.
The key test — apply it to every idea:
"OpenAI Next Release" Test: Would this product get better or worse when the underlying foundation model improves? If better → you're a complement (safe). If the model's improvement would replace it → you're a wrapper (danger).
Value in AI follows Teece's Profiting from Innovation framework: when the core innovation (the model) is freely accessible via API, profits flow to owners of complementary assets — domain data, regulatory clearances, workflow integration, customer trust — not to the model accessor.
When the user presents a specific product or startup idea, run it through the Defensibility Scorecard. Score each dimension 0–3 and provide a total with commentary.
| # | Dimension | 0 (None) | 1 (Weak) | 2 (Moderate) | 3 (Strong) |
|---|---|---|---|---|---|
| 1 | Proprietary Data Flywheel | No unique data; uses public datasets or generic API output | Collects some user data but not structured for compounding advantage | Structured data collection where each customer improves the product | Unique dataset that compounds over time and cannot be replicated without equivalent operations (e.g. Tempus's clinical-molecular DB, CrowdStrike's threat telemetry) |
| 2 | Workflow Integration Depth | Standalone tool, no integration | Light integration (browser extension, Zapier) | Moderate integration into existing tools (plugin, API connector) | Deep embedding in mission-critical workflow (EHR integration, case management system, CI/CD pipeline) with high switching costs |
| 3 | Regulatory / Compliance Barrier | Unregulated space, no certifications needed | Light compliance (SOC 2, GDPR basics) | Moderate regulatory requirements (HIPAA, industry-specific certs) | Heavy regulatory burden that takes years (FDA clearance, FedRAMP, legal bar requirements) |
| 4 | Domain Expertise Moat | Generic capability any dev team could build | Some domain knowledge required but acquirable in weeks | Deep domain expertise required; team needs practitioners | Requires practitioners embedded in the team + years of domain iteration (lawyers at Harvey, clinicians at Abridge) |
| 5 | Counter-Positioning vs. Incumbents | Directly competing with model providers on their turf | Competing with SaaS incumbents who could add AI features | Different business model that incumbents could adopt with effort | Outcome-based pricing or service model that incumbents cannot copy without cannibalizing their own revenue |
| 6 | Production Complexity | Works out of the box with an API call | Some prompt engineering and pipeline work | Significant engineering for reliability, edge cases, monitoring | The last 10% of quality requires 10–100x the effort of a prototype; years of production hardening (compliance engines, safety-critical systems) |
| 7 | Vendor Neutrality Advantage | Tied to one model provider | Works with multiple providers but not a selling point | Multi-model support is a feature customers value | Structural requirement for vendor neutrality — model providers cannot credibly offer this (eval tools, security/guardrails, observability) |
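The scorecard's arithmetic can be sketched as a small helper: seven dimensions, each scored 0–3, summed to a 0–21 total. Note the verdict thresholds below are illustrative assumptions, not part of the framework — only the three verdict labels ("defensible / risky / wrapper territory") come from the output template.

```python
# The seven scorecard dimensions, each scored 0-3.
DIMENSIONS = [
    "proprietary_data_flywheel",
    "workflow_integration_depth",
    "regulatory_compliance_barrier",
    "domain_expertise_moat",
    "counter_positioning",
    "production_complexity",
    "vendor_neutrality_advantage",
]


def total_score(scores: dict) -> int:
    """Sum the seven 0-3 dimension scores into a 0-21 total."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    for dim, s in scores.items():
        if not 0 <= s <= 3:
            raise ValueError(f"{dim} must be scored 0-3, got {s}")
    return sum(scores[d] for d in DIMENSIONS)


def verdict(total: int) -> str:
    """Map a total to a verdict band. Thresholds here are hypothetical."""
    if total >= 15:
        return "defensible"
    if total >= 9:
        return "risky"
    return "wrapper territory"
```

For example, an idea scoring 2 on every dimension totals 14/21, landing in the middle band under these assumed thresholds.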
Use this structure:
## Defensibility Evaluation: [Idea Name]
### Quick Verdict
[One sentence: defensible / risky / wrapper territory]
### Scorecard
[Table with scores and brief justification per dimension]
### Total: X/21 — [Interpretation]
### Key Risks
[Top 2-3 specific threats, referencing which foundation model companies or incumbents
could absorb this and how]
### Strengthening Moves
[Concrete actions to improve the weakest dimensions — e.g., "Build EHR integration
with Epic to move Workflow Integration from 1→3"]
### Strategic Recommendation
[Build / Pivot / Avoid — with reasoning]
When the user asks for ideas within a domain or vertical, use the Opportunity Generator framework.
For the given domain, assess its vertical characteristics. Then, for each opportunity corridor, generate 1–2 specific ideas tailored to the domain.
Run each idea through the Defensibility Scorecard (abbreviated — just the total and top 2 dimensions).
## Opportunities in [Domain]
### Domain Profile
[Brief assessment of the 5 vertical characteristics]
### Top Ideas (ranked by defensibility score)
**1. [Idea Name]** — Score: X/21
[2-3 sentence description]
Strongest moats: [top 2 dimensions]
Business model: [pricing approach]
Example comparable: [existing company doing something similar, if any]
**2. [Idea Name]** — Score: X/21
...
Build vs. buy for AI components (fine-tuned model vs. API calls) is as much a strategic question as a technical one:
Recommendation: Start with API calls to validate the market. Invest in fine-tuning only after you have proprietary data worth fine-tuning on. The model is not your moat — the data pipeline feeding it is.
79% of enterprises paying for OpenAI also pay for Anthropic. Design for multi-provider from day one:
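One way to make "design for multi-provider from day one" concrete is a thin provider-agnostic interface with failover, so real vendor SDKs sit behind a boundary you control. This is a minimal sketch with hypothetical class names; the stub providers stand in for actual OpenAI/Anthropic clients.

```python
from typing import Protocol


class CompletionProvider(Protocol):
    """Provider-agnostic interface; real vendor SDK calls live behind it."""

    def complete(self, prompt: str) -> str: ...


class ProviderRouter:
    """Routes each request to a primary provider, falling back to the others on failure."""

    def __init__(self, providers: dict[str, CompletionProvider], primary: str):
        self.providers = providers
        self.primary = primary

    def complete(self, prompt: str) -> str:
        # Try the primary first, then every remaining provider in turn.
        order = [self.primary] + [n for n in self.providers if n != self.primary]
        last_error = None
        for name in order:
            try:
                return self.providers[name].complete(prompt)
            except Exception as exc:
                last_error = exc
        raise RuntimeError("all providers failed") from last_error
```

The design point: swapping or adding a vendor touches only the provider registry, never the application code.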
The shift from per-seat to per-outcome pricing is the single biggest business model disruption in AI:
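The disruption is easy to see with arithmetic. All numbers below are hypothetical, chosen only to illustrate the mechanism: when AI shrinks the team needed to produce the same output, per-seat revenue collapses with headcount while per-outcome revenue tracks the value actually delivered.

```python
def monthly_revenue_per_seat(seats: int, seat_price: float) -> float:
    """Classic SaaS: revenue scales with headcount, regardless of results."""
    return seats * seat_price


def monthly_revenue_per_outcome(outcomes: int, outcome_price: float) -> float:
    """Outcome-based: revenue scales with results the product delivers."""
    return outcomes * outcome_price


# Hypothetical scenario: AI automation shrinks a team from 50 seats to 5,
# while the monthly outcome (e.g. resolved cases) stays at 1,000.
before = monthly_revenue_per_seat(seats=50, seat_price=100)            # 5000.0
after = monthly_revenue_per_seat(seats=5, seat_price=100)              # 500.0
outcome = monthly_revenue_per_outcome(outcomes=1000, outcome_price=5)  # 5000.0
```

Under these assumed numbers, the per-seat vendor loses 90% of revenue to its own customer's efficiency gain, while the per-outcome vendor does not — which is why incumbents priced per seat struggle to copy the model without cannibalizing themselves.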
Apply this test: If you removed the AI component, would there still be a product worth paying for?
When evaluating any idea, automatically check for these anti-patterns:
For detailed case studies, academic frameworks, funding data, and company examples, read:
references/strategic-research.md
Load this when the user needs detailed examples, case studies, funding data, or academic backing for a recommendation.