From faos-analyst
<!-- AUTO-GENERATED by export-plugins.py — DO NOT EDIT -->
`npx claudepluginhub frank-luongt/faos-skills-marketplace --plugin faos-analyst`

This skill uses the workspace's default tool permissions.
Analyzes BMad project state from the catalog CSV, configs, artifacts, and the user's query to recommend next skills or answer questions. Useful for help requests, "what next?" prompts, or starting BMad.
A visual framework from Teresa Torres' Continuous Discovery Habits that connects a desired outcome to customer opportunities, potential solutions, and assumption tests — ensuring you build what matters most.
It prevents the two most common PM failure modes: leaping from a goal straight to a favorite feature, and betting everything on a single untested solution.
The OST forces explicit links: Outcome → Opportunities → Solutions → Experiments.
```
            ┌─────────────────┐
            │ Desired Outcome │  ← Single measurable metric
            └────────┬────────┘
                     │
      ┌──────────────┼──────────────┐
      ▼              ▼              ▼
┌───────────┐  ┌───────────┐  ┌───────────┐
│Opportunity│  │Opportunity│  │Opportunity│  ← Customer needs / pain points
│     1     │  │     2     │  │     3     │
└─────┬─────┘  └─────┬─────┘  └─────┬─────┘
      │              │              │
 ┌────┼────┐    ┌────┼────┐    ┌────┼────┐
 ▼    ▼    ▼    ▼    ▼    ▼    ▼    ▼    ▼
Sol. Sol. Sol. Sol. Sol. Sol. Sol. Sol. Sol.  ← Multiple solutions per opportunity
      │              │              │
      ▼              ▼              ▼
     Exp.           Exp.           Exp.       ← Assumption tests
```
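The tree above maps cleanly onto a small data model, which makes it easy to keep the tree as living data rather than a slide. A minimal Python sketch (the class and field names are my own illustration, not part of Torres' framework):

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    kind: str        # e.g. "fake door", "prototype test"
    assumption: str  # the riskiest assumption this test probes

@dataclass
class Solution:
    description: str
    experiments: list[Experiment] = field(default_factory=list)

@dataclass
class Opportunity:
    need: str          # a customer need or pain point, never a feature
    importance: int    # rated 1-10
    satisfaction: int  # rated 1-10
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class Tree:
    outcome: str  # single measurable metric, baseline -> target
    opportunities: list[Opportunity] = field(default_factory=list)
```

Keeping one `Tree` per desired outcome enforces the first rule: a single measurable metric at the root.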
Identify a single, measurable outcome that the team owns.
Good outcomes: measurable, stated with a baseline and a target, and within the team's influence (e.g. increase 7-day retention).
Bad outcomes (avoid): metrics you can't measure or can't move, and outputs ("ship feature X") dressed up as outcomes.
Rules: one tree, one outcome; if you can't state a baseline and a target, the outcome isn't ready.
Opportunities are customer needs, pain points, or desires — NOT features.
Sources for opportunities: typically customer interviews, support tickets, usage analytics, and feedback from sales or customer success.
How to write opportunities: phrase each one as a customer need or pain from the customer's point of view ("Users need visibility into X"), never as a feature.
Example (for "increase 7-day retention"):
- New users don't understand the product
- Users can't find the core feature
- Few users invite teammates
- Users have no trigger to return
Use the Opportunity Score formula:
Opportunity Score = Importance × (1 − Satisfaction / 10)
Rate each opportunity's Importance and Satisfaction on a 1–10 scale:
| Opportunity | Importance | Satisfaction | Score |
|---|---|---|---|
| Don't understand product | 9 | 3 | 9 × 0.7 = 6.3 |
| Can't find core feature | 8 | 4 | 8 × 0.6 = 4.8 |
| Few users invite teammates | 7 | 2 | 7 × 0.8 = 5.6 |
| No trigger to return | 8 | 2 | 8 × 0.8 = 6.4 |
Select top 2–3 opportunities to focus on.
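The scoring and ranking step is mechanical enough to script. A short Python sketch using the ratings from the table above (the satisfaction term is divided by 10 because both factors are rated 1–10):

```python
def opportunity_score(importance: int, satisfaction: int) -> float:
    """Opportunity Score = Importance x (1 - Satisfaction / 10)."""
    return importance * (1 - satisfaction / 10)

# Ratings from the example table: name -> (importance, satisfaction)
ratings = {
    "Don't understand product": (9, 3),
    "Can't find core feature": (8, 4),
    "Few users invite teammates": (7, 2),
    "No trigger to return": (8, 2),
}

# Highest score first; the top 2-3 become the focus of the tree
ranked = sorted(ratings.items(), key=lambda kv: opportunity_score(*kv[1]), reverse=True)
for name, (imp, sat) in ranked:
    print(f"{opportunity_score(imp, sat):.1f}  {name}")  # 6.4, 6.3, 5.6, 4.8
```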
For each prioritized opportunity, generate at least 3 distinct solutions.
Rules: diverge before converging; the solutions must be genuinely different approaches, not variants of one idea, and each must clearly address its opportunity.
Example (for "No trigger to return"): a weekly activity digest email, push notifications on teammate activity, or a streak/reminder mechanic.
For each promising solution, identify the riskiest assumption and design a fast test.
Four assumption categories (UVFV): Usability (can customers figure it out?), Value (do they want it?), Feasibility (can we build it?), Viability (does it work for the business?).
Experiment types (cheapest first):
| Type | Duration | Example |
|---|---|---|
| Customer interviews | 1–2 days | "Would you use X?" (with mockup) |
| Fake door / smoke test | 2–3 days | Button that measures click-through |
| Prototype test | 3–5 days | Figma prototype with 5 users |
| Concierge MVP | 1–2 weeks | Manual version of the feature |
| A/B test | 2–4 weeks | Live feature with control group |
For each experiment, define: the assumption under test, the metric you will measure, a success threshold committed before the test runs, and a timebox.
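One way to keep those definitions honest is to record each experiment as a structured plan with the success bar committed up front. A Python sketch; the fields and the digest-email example are illustrative assumptions, not prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    solution: str     # which solution this tests
    assumption: str   # riskiest assumption, tagged with its UVFV category
    kind: str         # cheapest experiment type that can falsify the assumption
    metric: str       # what gets measured
    threshold: float  # success bar, fixed before the test runs
    days: int         # timebox

    def succeeded(self, observed: float) -> bool:
        # Compare against the pre-committed threshold, never a post-hoc bar
        return observed >= self.threshold

plan = ExperimentPlan(
    solution="Weekly activity digest email",
    assumption="Value: users want a reason to return",
    kind="fake door",
    metric="click-through on an 'Enable digest' button",
    threshold=0.10,
    days=3,
)
print(plan.succeeded(0.14))  # a 14% click-through clears the 10% bar -> True
```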
Produce a complete OST summary:
## Opportunity Solution Tree
### Desired Outcome
[Outcome with baseline → target]
### Prioritized Opportunities
1. **[Opportunity name]** — Score: X.X
- Solution A: [description]
- Experiment: [type] testing [assumption]
- Solution B: [description]
- Experiment: [type] testing [assumption]
- Solution C: [description]
2. **[Opportunity name]** — Score: X.X
- Solution A: [description]
- Solution B: [description]
- Solution C: [description]
### Parked Opportunities (lower priority)
- [Opportunity] — Score: X.X (revisit next quarter)
### Next Steps
1. [First experiment to run]
2. [Second experiment to run]
3. [Decision point / review date]
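If the tree lives as data, the summary above can be generated rather than hand-written. A hypothetical Python sketch that renders the skeleton for the prioritized section (structure only; parked opportunities and next steps would follow the same pattern):

```python
def render_summary(outcome: str,
                   prioritized: list[tuple[str, float, list[str]]]) -> str:
    """Render the OST summary skeleton as markdown.

    Each prioritized entry is (opportunity name, score, list of solutions)."""
    lines = ["## Opportunity Solution Tree", "", "### Desired Outcome", outcome, "",
             "### Prioritized Opportunities"]
    for i, (name, score, solutions) in enumerate(prioritized, 1):
        lines.append(f"{i}. **{name}** — Score: {score:.1f}")
        for letter, sol in zip("ABC", solutions):  # at most three solutions listed
            lines.append(f"   - Solution {letter}: {sol}")
    return "\n".join(lines)

print(render_summary(
    "Increase 7-day retention (baseline → target)",
    [("No trigger to return", 6.4,
      ["Digest email", "Push notifications", "Streak mechanic"])],
))
```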
| Avoid | Why | Instead |
|---|---|---|
| One solution per opportunity | No comparison = confirmation bias | Always generate 3+ solutions |
| Jumping to A/B tests | Expensive and slow for early validation | Start with interviews or prototypes |
| Outcomes you can't measure | Can't tell if you succeeded | Define baseline + target metric |
| Opportunities as features | "Build a dashboard" is a solution, not a need | Reframe: "Users need visibility into X" |
| Stale tree | Discovery is continuous, not quarterly | Review and update weekly |
| Skipping scoring | Everything feels equally important | Use Importance × (1 − Satisfaction / 10) |