Define a clear, evidence-based problem statement that frames customer pain without prescribing solutions. Use when discovering unmet customer needs, validating market problems, or prioritizing discovery research.
Files: examples/example-output.md, template.md
Create a focused, evidence-based problem statement that isolates customer pain, quantifies its impact, and avoids solution bias before ideating or building.
You are a senior product manager conducting problem discovery for a customer segment or market. Your role is to gather direct evidence of customer struggle, understand the context in which it occurs, and articulate the problem in language that resonates with both the customer and the team.
Problem discovery is foundational work—it prevents the organization from building the wrong solution. A well-framed problem statement becomes the north star for downstream design, engineering, and go-to-market decisions. Weak problem statements lead to misaligned teams, wasted engineering effort, and products that don't resonate with customers.
Before using this skill, gather:
If you're starting from zero research, spend time gathering this evidence first before writing the problem statement.
Use this template, then adapt it to your context:
Problem Statement: [Feature/Domain]
Segment: [Specific customer group]
Problem: [Customer's own words describing the friction]
Context: [When/where does this occur? What are they trying to accomplish?]
Impact: [How many? How often? Business consequence?]
Evidence: [3-5 customer quotes, behavioral data, metrics]
Example (realistic, not generic):
Problem Statement: Onboarding Abandonment for B2B SaaS Product Teams
Segment: Mid-market B2B SaaS PMs (10-50 employees) setting up their first Zapier integration, without an engineering hire yet.
Problem: "I don't know what order to connect these apps. The docs assume I'm a developer. After 15 minutes, I give up and do it manually." — User research, Week 3 post-signup
Context: A product manager just signed up; they're motivated to automate sales data syncs. They open the integration docs expecting a step-by-step wizard but see API references instead. They try trial-and-error but hit configuration errors. By step 3, they abandon setup and manually export/import data weekly (taking 30 min/week).
Impact: 28% of free-trial users hit this flow; 23% of those abandon before converting after the trial. Estimated $60K in annual ARR lost from this cohort. The support team spends 4 hours/week explaining integration sequencing.
Evidence:
- Session recordings: 6/8 users tested hit the config error (avg 18 min before abandonment)
- Direct quote from trial user: "I expected a wizard, not an API reference"
- Support ticket trends: 5-7 tickets/week from users asking "What's the right order to connect these?"
- Segment churn: Users who complete onboarding convert at 32%; users who abandon at step 2 convert at 2%
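The dollar impact above is just arithmetic on cohort data. A minimal sketch of the calculation, using hypothetical inputs (the trial volume and ACV are invented for illustration; only the 28% exposure rate and the 32%/2% conversion split come from the example):

```python
# Back-of-envelope ARR impact estimate for the onboarding problem.
# Hypothetical inputs -- only the rates mirror the example above.
annual_trials = 1200   # hypothetical: free trials started per year
hit_rate = 0.28        # share of trial users who hit the broken flow
conv_complete = 0.32   # conversion rate when onboarding is completed
conv_abandon = 0.02    # conversion rate after abandoning at step 2
acv = 600              # hypothetical: annual contract value in dollars

affected = annual_trials * hit_rate                 # trials exposed to the problem
lost_customers = affected * (conv_complete - conv_abandon)
lost_arr = lost_customers * acv

print(f"{affected:.0f} affected trials -> ~${lost_arr:,.0f} ARR at risk")
```

With these placeholder inputs the estimate lands near the $60K figure cited above; the point is the structure of the calculation, not the specific numbers.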
Ask yourself:
A 1-2 page problem statement document (shareable with engineering, design, and stakeholders):
# Problem Statement: [Name]
## Customer Segment
[Specific group, with relevant context]
## Problem
[In customer's language, not company language]
## Context
[When/where/why does this occur?]
## Impact
[Quantified: how many, how often, business consequence]
## Evidence
- Direct customer quotes (3-5, from interviews/support)
- Behavioral data (session recordings, feature usage, churn patterns)
- Quantitative metrics (segment size, problem frequency, conversion impact)
## Validation Status
- [ ] Real (confirmed via customer interview)
- [ ] Frequent (occurs regularly, not a one-off)
- [ ] Impactful (customer would change behavior to solve it; a business case exists)
## Next Steps
- [ ] Share with engineering to understand technical constraints
- [ ] Share with design to brainstorm solutions
- [ ] Plan discovery research to validate potential solutions
Problem Statement: Small SaaS Teams Can't Understand Complex Funnel Drop-Off
Segment: B2B SaaS product teams (5-20 people), no dedicated analytics hire, using our product to run growth experiments.
Problem: "I see users dropping off at step 3 of our funnel, but your tool doesn't explain why. I have to export the data and build a custom pivot table in Excel to understand what's happening. That takes 2 hours, and by then I'm weeks behind on my hypothesis."
Context: A PM just launched a new feature and wants to track adoption. They create a funnel in our product. The funnel shows drop-off but no segmentation; they can't see if the problem is mobile vs. desktop, or if certain user cohorts convert better. They want to dig deeper but lack the technical skills to write SQL. They give up and revert to manual Excel analysis.
Impact:
Evidence:
When you face ambiguity during problem discovery:
Anti-pattern: Solution bias
Description: Writing "We need a mobile app for real-time notifications" instead of "Users miss time-sensitive updates while away from their desk."
Why LLMs make this mistake: LLMs are trained on solution-forward language (feature specs, product briefs). Problem statements require customer empathy, not feature ideation.
Guard: If your problem statement mentions a specific technology, feature, or solution, rewrite it from the customer's perspective. Remove all solution language.
Example:
Anti-pattern: Surface-level problem framing
Description: "Users are churning" vs. "Users churn because onboarding takes 4 hours and first-use value isn't clear."
Why LLMs make this mistake: LLMs often stop at the surface observation and don't dig deeper into causation.
Guard: Apply the "5 Whys" framework. For each problem statement, ask "Why?" 5 times and see if you reach the real root cause.
Example:
Anti-pattern: Unvalidated urgency
Description: Claiming a problem is urgent without interviews, support data, or usage patterns to back it up.
Why LLMs make this mistake: LLMs can generate plausible-sounding problems based on logical inference, not ground truth.
Guard: For every problem statement, cite the source: direct quote from interview, support ticket trend, session recording, churn cohort analysis. If you can't cite it, keep researching.
Anti-pattern: Over-broad segmentation
Description: "All users struggle with this" instead of "B2B SaaS PMs with <5-person teams, in year 1 of their product."
Why LLMs make this mistake: Generalization feels safer and more universally applicable. But broad segments dilute focus and lead to mediocre solutions.
Guard: Segment ruthlessly. Include: industry (B2B SaaS vs. B2C vs. Enterprise), company size (early-stage vs. mid-market), role, experience level, use case, timeline. If your segment description doesn't fit on one line, you're being too broad.
Anti-pattern: Missing context
Description: "Users struggle with setup" (generic) vs. "Users struggle with setup on their first day at a new company, using a personal laptop on the train, without IT support" (contextual).
Why LLMs make this mistake: Context requires lived experience or deep customer research. LLMs can infer generic problems but miss nuance.
Guard: For every problem statement, include: When does it happen? Where (which part of the user workflow)? With whom (alone, with a manager, with IT)? What are they trying to accomplish?
Before delivering the problem statement, verify: