Guide for creating Opportunity Solution Trees (OST) for pre-PMF startups. Use when discussing product discovery, problem validation, customer research, or when helping startups identify problems worth solving. Use for queries about OST framework, problem-solution mapping, or validating product ideas.
I'll guide you through creating Opportunity Solution Trees to map problems worth solving before building. Use me when you need to structure customer research, validate product ideas, or decide what to build next—I'll help you identify real opportunities, generate diverse solutions, and design experiments that test assumptions quickly.
/plugin marketplace add kasperjunge/30-minute-vibe-coding-challenge
/plugin install opportunity-solution-tree@30-minute-vibe-coding-challenge

This skill inherits all available tools. When active, it can use any tool Claude has access to.
The Opportunity Solution Tree is a framework for exploring and mapping the problem space before committing to solutions. For pre-PMF startups, it's less about optimizing existing metrics and more about discovering which problems are worth solving and for whom.
Think of it as a systematic way to avoid building something nobody wants. Instead of jumping from "I have an idea" to "let's build it," the OST forces you to map out: what you're trying to learn or achieve, what problems exist in your target market, what you might build, and how you'll test if you're right.
The tree is a living document—not a one-time planning exercise. As you learn from experiments and customer conversations, opportunities shift in priority, new ones emerge, and solutions evolve or get discarded. This continuous discovery process is what helps pre-PMF startups navigate from uncertainty to product-market fit.
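Concretely, the tree has four levels: an outcome at the root, opportunities under it, solutions under each opportunity, and experiments under each solution. As a minimal sketch (the field names below are illustrative, not a prescribed schema), the structure might be represented as nested records:

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str          # "We believe [solution] will [result] for [opportunity]"
    success_criteria: str    # defined BEFORE running the experiment
    kill_criteria: str       # a result that would make you abandon this branch

@dataclass
class Solution:
    idea: str                # one of several per opportunity
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    problem: str             # framed in the customer's own words
    evidence: list[str] = field(default_factory=list)  # who you heard it from
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class Tree:
    outcome: str             # a learning goal or early traction signal
    opportunities: list[Opportunity] = field(default_factory=list)
```

The evidence field on opportunities is deliberate: as discussed below, every opportunity should trace back to named customer conversations, not assumptions.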
What it should be: Your desired outcome at the pre-PMF stage isn't typically a polished KPI. It's more like a learning goal or an early traction signal that indicates you're onto something real.
Good pre-PMF outcomes: learning goals and early traction signals, e.g., "Help retailers reduce lost sales from inventory issues," or "Confirm that our ICP feels this problem urgently enough to pay for a solution."
Not good outcomes: solutions smuggled into the outcome position ("Build a mobile app for small retailers") or polished optimization KPIs you haven't yet earned the right to measure.
The key principle: Your outcome should be specific enough to guide decisions but humble enough to acknowledge uncertainty. Pre-PMF, you're often trying to learn something fundamental about your market, not optimize something you've already proven.
What they should be: Real problems, pain points, needs, or "jobs to be done" that your ICP (ideal customer profile) experiences. These come from customer conversations, observations, and research—not from your assumptions.
Critical distinction: Opportunities are problems in the customer's world, not gaps in the market or ideas you have. They should be framed from the customer's perspective.
Good opportunities (for a startup targeting small e-commerce brands): "I struggle to understand which channels drive profitable customers," "I waste money on ads that don't work and miss my best opportunities."
Not good opportunities: "Need an AI chatbot," "Want automated workflows," "There's a gap in the market for real-time dashboards." These are solutions or market observations, not problems in the customer's world.
The "so what?" test: For each opportunity, you should be able to ask "so what?" and get to real consequences. "They don't have good analytics" → So what? → "They waste money on ads that don't work and miss their best opportunities" → That's the real opportunity.
Opportunity altitude—getting it right: an opportunity framed too narrowly is a feature request in disguise; framed too broadly, it's a vague theme that can't guide decisions.
The test: Can you design multiple different solutions for this opportunity? If not, it might be too specific. Does it describe a real situation with real consequences? If not, it might be too generic.
What they should be: Specific ideas for how you might address an opportunity. At pre-PMF, these should range from very lightweight to more built-out, and you should have multiple solutions per opportunity.
Good solutions (for the opportunity "struggle to understand which channels drive profitable customers"): a weekly email digest showing revenue by source, a concierge service where you assemble the analysis by hand each week, a template customers fill in themselves.
Not good solutions: a single months-to-build "full analytics platform" (too heavy to test quickly), or a list of ideas that all look the same, such as all software dashboards with no manual or service-based alternatives.
The diversity principle: If all your solutions look similar (all software, all DIY tools, all services), you're probably not exploring widely enough. Pre-PMF, you should be willing to consider solutions that don't scale, manual services, templates, or even concierge approaches.
What they should be: Specific, time-bound tests designed to validate whether a solution actually addresses the opportunity. Each experiment should have a clear hypothesis and defined success criteria you establish before running it.
The structure: "We believe [solution] will [result] for [opportunity]. We'll know we're right when [specific measurable outcome]."
Good experiments (for a solution like "Weekly email digest showing revenue by source"): show a mockup of the digest to 10 target customers with no prior relationship to you and count how many express genuine commitment, or hand-assemble the digest for a few brands for two weeks and measure whether they open it and act on it, with the threshold (e.g., "at least 7 of 10") defined before you start.
Not good experiments: "Build MVP," "Launch beta," "Get feedback," or "We'll talk to customers and see what they think."
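As a minimal sketch, assuming nothing beyond the template above, an experiment plan can be captured as a structured record so the hypothesis and success criteria exist in writing before the test runs (all names and example values are illustrative):

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    solution: str
    opportunity: str
    expected_result: str
    success_criteria: str  # committed before running, per the principles below

    def hypothesis(self) -> str:
        # Mirrors the template: "We believe [solution] will [result] for
        # [opportunity]. We'll know we're right when [measurable outcome]."
        return (
            f"We believe {self.solution} will {self.expected_result} "
            f"for {self.opportunity}. We'll know we're right when "
            f"{self.success_criteria}."
        )

plan = ExperimentPlan(
    solution="a weekly email digest showing revenue by source",
    opportunity="small e-commerce brands that can't see which channels drive profitable customers",
    expected_result="reveal which ads are wasting money",
    success_criteria="at least 7 of 10 pilot brands open it weekly and say they would pay",
)
print(plan.hypothesis())
```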
Key experiment principles:
Small and fast: Pre-PMF experiments should be completable in days or weeks, not months. If an experiment takes a long time, break it into smaller tests.
Test assumptions, not build products: You're testing whether your thinking is correct—about the problem's urgency, the solution's fit, customer willingness to pay, etc.
Failure is valuable: A "failed" experiment that clearly invalidates an assumption saves you months of building the wrong thing. Design experiments where negative results are genuinely informative.
Cheapest test first: Before building anything, can you test with a conversation, a mockup, a template, or a manual concierge version of the service?
Different types of pre-PMF experiments: problem interviews that test urgency, mockup tests that gauge genuine interest or commitment, willingness-to-pay tests, and manual or concierge pilots where you deliver the value by hand.
The mistake: "Our outcome is to build a mobile app for small retailers."
Why it's wrong: You've smuggled your solution (mobile app) into the outcome position. This blinds you to whether a mobile app is even the right approach.
How to fix it: Ask "why?" repeatedly. Why a mobile app? "To help retailers manage inventory." Why? "So they don't lose sales from stockouts." Now you have a real outcome: "Help retailers reduce lost sales from inventory issues."
The mistake: Listing opportunities like "Need an AI chatbot," "Want automated workflows," "Require real-time dashboards."
Why it's wrong: These are solutions you're excited about, dressed up as customer needs. Real opportunities are solution-agnostic problems.
How to fix it: Go back to actual customer conversations. What were they trying to accomplish? What was frustrating them? Frame it in their language: "I'm constantly interrupted by the same basic questions" is an opportunity. "Need a chatbot" is not.
The mistake: Having only 1-2 opportunities under your outcome, often the ones that match your preconceived solution.
Why it's wrong: You're likely confirming your biases rather than genuinely exploring the problem space. If you've only found one or two problems in your entire ICP, you haven't talked to enough people or you've filtered what you heard through your solution lens.
How to fix it: Aim for 5-10+ opportunities initially. Some will be more important than others, but having multiple forces you to really listen and consider different angles on the problem space.
The mistake: Filling out your tree based on what you think customers experience, on competitive research, or on browsing online forums.
Why it's wrong: You'll generate hypothetical opportunities that sound plausible but don't reflect real urgency, real budget, or real problem-solving behavior.
The reality check: For each opportunity, you should be able to say: "I heard this from [Name] at [Company], and also [Name] at [Company], and I observed [Name] struggling with exactly this."
The mistake: Treating all opportunities as equally important and trying to generate solutions for everything simultaneously.
Why it's wrong: You have limited resources. Part of the OST's value is helping you choose where to focus.
How to prioritize: Assess opportunities by how often and how urgently you heard about the problem, whether customers already spend money or effort trying to solve it, and how directly solving it would move your desired outcome.
Start experiments on the opportunities that score highest. Keep the others visible but dormant.
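One lightweight way to turn these criteria into a ranking, sketched here with assumed equal weights and hypothetical 1-to-5 ratings (neither is prescribed by the framework):

```python
# Hypothetical ratings of each opportunity on the criteria above.
opportunities = {
    "can't tell which channels drive profitable customers": {"urgency": 5, "evidence": 4, "budget": 4},
    "constantly interrupted by the same basic questions": {"urgency": 3, "evidence": 2, "budget": 2},
}

def score(ratings: dict[str, int]) -> int:
    # Equal weights; adjust if one criterion matters more to your outcome.
    return sum(ratings.values())

for problem, ratings in sorted(opportunities.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{score(ratings):>2}  {problem}")
```

The point isn't the arithmetic; it's forcing an explicit, comparable judgment per opportunity so the highest-scoring ones get experiments first while the rest stay visible but dormant.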
The mistake: Only considering solutions that would take months to build, making it impossible to run fast experiments.
Why it's wrong: You can't learn quickly if every solution idea requires a major engineering lift.
The pre-PMF principle: For every opportunity, at least one solution should be testable in 2 weeks or less. This might mean a mockup instead of working software, a template customers use themselves, or a manual concierge version of the service.
The mistake: "Experiments" like "Build MVP," "Launch beta," "Get feedback."
Why it's wrong: These aren't experiments—they're just work. Real experiments have a specific hypothesis and clear success criteria you define upfront.
Additional experiment pitfalls:
Building before testing desirability: Don't start with "Build feature X and see if people use it." Start with "Show mockup of feature X and see if people express genuine interest or commitment."
Only testing with friendlies: Your friend who's "in your target market" is not a good experiment subject. They'll be too nice. Test with people who have no relationship with you and no reason to spare your feelings.
Ambiguous success criteria: "We'll talk to customers and see what they think" leaves too much room for interpretation. Instead: "At least 7 of 10 will say they currently spend money trying to solve this problem."
Not defining success criteria upfront: If you wait until after the experiment to decide what "good" looks like, you'll rationalize whatever results you got. Commit to the threshold beforehand.
No kill criteria: Before running an experiment, decide: "What result would cause us to abandon this solution or opportunity entirely?" If you can't think of any result that would change your plans, you're not really experimenting. (See the sketch after this list for what pre-committing these thresholds looks like.)
Treating experiments as commitments: Just because you're experimenting with a solution doesn't mean you have to build it. Most experiments should fail or provide learning that changes your direction. That's success.
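A minimal sketch of that pre-commitment in practice: the success and kill thresholds are fixed before the experiment runs, and the evaluation only compares results against them (names and numbers here are illustrative):

```python
def decide(positive_responses: int, total: int,
           success_threshold: float = 0.7, kill_threshold: float = 0.3) -> str:
    """Thresholds are committed BEFORE the experiment runs, not after."""
    rate = positive_responses / total
    if rate >= success_threshold:
        return "persevere: double down on this solution"
    if rate <= kill_threshold:
        return "kill: abandon this solution or opportunity"
    return "inconclusive: refine the experiment, not the story"

# e.g., 4 of 10 interviewees said they currently spend money on this problem
print(decide(positive_responses=4, total=10))
```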
The mistake: Building your tree once during a planning session and never revisiting it.
Why it's wrong: The entire point is continuous discovery. As you learn, opportunities should shift in priority, new ones should emerge, and solutions should evolve or get discarded.
How to use it: The tree is a living document. Weekly or bi-weekly, you should be adding learnings, pruning dead ends, and adjusting based on what experiments taught you.
The mistake: Thinking the solutions on your tree are your roadmap, in the order you'll build them.
Why it's wrong: Most solutions on your tree will never get built. Many opportunities won't pan out. The tree is an exploration tool, not a commitment.
The right mindset: You're mapping possibilities and systematically invalidating most of them. The tree helps you avoid building the wrong things, not ensure you build everything on it.
The OST is particularly valuable for pre-PMF startups because it resists the urge to build. Your instinct is probably to start coding or designing immediately. The tree forces you to articulate what you're trying to learn, ground opportunities in real customer evidence, generate multiple solutions instead of fixating on one, and design cheap experiments before committing engineering time.
The tree isn't about being "complete" or "correct"—it's about being honest about what you know, what you're guessing, and what you need to learn next.
For each opportunity on your tree, you should be able to point to specific customer conversations where you heard about this problem. For each solution, you should be able to articulate why you think it might work and what assumption you're making. For each experiment, you should know beforehand what result would cause you to pivot or persevere.
This level of explicitness feels uncomfortable at first. It's much easier to say "let's just build it and see." But that comfort comes at the cost of months or years building the wrong thing. The OST trades short-term comfort for long-term clarity—and for pre-PMF startups, that clarity is the difference between finding product-market fit and running out of runway.
I'll reference this skill when you ask about product discovery, problem validation, or customer research; when you want to structure an Opportunity Solution Tree; or when you're deciding what to build next and need to validate product ideas first.