Guides product discovery through 10 phases to produce a structured PRD with Mermaid wireframes, journeys, and architecture diagrams. Use for idea validation, PRD creation, or product shaping.
npx claudepluginhub incubyte/ai-plugins --plugin discovery
This skill uses the workspace's default tool permissions.
Run a structured product discovery flow that takes a raw product idea and produces a PM-grade PRD. The skill runs as a guided interview across ten phases. Each phase asks targeted questions, applies the relevant framework, and ends with a checkpoint where the user confirms before moving on.
Bundled framework references:
- references/frameworks/competitive-analysis.md
- references/frameworks/experiment-design.md
- references/frameworks/gtm-strategy.md
- references/frameworks/hypothesis.md
- references/frameworks/interview-synthesis.md
- references/frameworks/jtbd-canvas.md
- references/frameworks/monetizing-innovation.md
- references/frameworks/opportunity-tree.md
- references/frameworks/persona-canvas.md
- references/frameworks/positioning-canvas.md
- references/frameworks/prioritization.md
- references/frameworks/problem-statement.md
- references/frameworks/solution-brief.md
- references/frameworks/stakeholder-summary.md
- references/frameworks/working-backwards.md
The discovery is resumable via discovery-state.md. The deliverable is a single markdown PRD at PRD.md with wireframes, mockups, journeys, and architecture inline as Mermaid/ASCII diagrams. After delivery, revision mode lets the user update specific sections.
Trigger on any of:
These override the default flow when they conflict with it.
Use the AskQuestion tool (askuserquestions) for discovery prompts whenever options can be enumerated. Do not default to adding "something else" and "chat about this" to every question; only add an open option when strictly necessary, after trying specific, decision-ready options first. Poor question UX kills completion. Keep interaction crisp so users do not abandon the Q&A.
Use AskQuestion for selectable prompts. If the user is choosing role, context, trigger, prioritization, or path selection, use AskQuestion instead of open chat.
Phase 0: Context → Who's asking, why, who reads the output
Phase 1: Scope & Problem → Goals, segmentation, problem statement, assumptions
Phase 2: Competition → Competitive landscape + kill gate
Phase 3: User Journeys → Happy + edge + error + abandonment paths
Phase 4: Wireframes → Structural screen sketches
Phase 5: Low-Fi Mockups → Annotated mockups with interaction notes
Phase 6: Epics + Functional Requirements → Hierarchy mapped to journey steps with testable requirements
Phase 7: Technical Overview → One paragraph + arch diagram
Phase 8: Metrics Framework → North-star, primary, secondary with rationale
Phase 9: GTM → ICP, buyer persona, positioning, pricing, channels, marketing+sales
Phase 10: PRD Assembly → Compile + optional paths-not-taken + optional TAM/SAM/SOM
Each phase ends with a checkpoint — a summary plus an explicit "ready to move on?" question. The user can say "go back to phase N" anytime.
At every invocation, first check for discovery-state.md in the project root.
If it exists: read it, show the user a short resume summary, then ask: resume, jump back to revise something, or restart? If restart, archive the old state as discovery-state.archived-<date>.md.
If it does not exist: create it (format below) and start at Phase 0.
If PRD.md exists and Phase 10 is marked complete: this is revision mode — see "Revision Mode" near the end of this skill.
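The invocation logic above can be summarized as a flowchart (a sketch restating the rules, not adding to them):
flowchart TD
A[Skill invoked] --> B{discovery-state.md exists?}
B -->|No| C[Create state file, start Phase 0]
B -->|Yes| D{PRD.md exists and Phase 10 complete?}
D -->|Yes| E[Revision mode]
D -->|No| F{Resume, revise, or restart?}
F -->|Resume| G[Continue from current phase]
F -->|Revise| H[Jump back to the chosen phase]
F -->|Restart| I[Archive old state with a dated filename, start fresh]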
# Discovery State
**Product:** <one-line name/description>
**Started:** <date>
**Last updated:** <date>
## Current phase
<phase number and name>
## Completed phases
- [ ] Phase 0: Context
- [ ] Phase 1: Scope & Problem
- [ ] Phase 2: Competition
- [ ] Phase 3: User Journeys
- [ ] Phase 4: Wireframes
- [ ] Phase 5: Low-Fi Mockups
- [ ] Phase 6: Epics + Functional Requirements
- [ ] Phase 7: Technical Overview
- [ ] Phase 8: Metrics Framework
- [ ] Phase 9: GTM
- [ ] Phase 10: PRD Assembly
## Phase 0 — Context (answers)
## Phase 1 — Scope & Problem (answers, locked problem statement)
## Assumptions inventory
## Phase 2 — Competition (answers, kill-gate decision)
## Phase 3 — Journeys (paths captured)
## Phase 4 — Wireframes
## Phase 5 — Mockups
## Phase 6 — Epics + Functional Requirements (with journey mapping)
## Phase 7 — Technical Overview
## Phase 8 — Metrics Framework
## Phase 9 — GTM
## Phase 10 — PRD Assembly
## Decisions log
- <date>: <decision> — <reason>
## Revision log (post-delivery)
- <date>: revised <section> — <reason>
Update at the end of every phase.
Goal: understand who's asking, why now, who reads the output.
Ask in one batch:
Ask: "When this PRD is done, who reads it and acts on it?" Capture stakeholders. Common readers: Engineering, Design, Marketing, Sales, Leadership, Legal, Finance. This shapes section emphasis in Phase 10.
Ask: "What questions are you NOT asking us to answer?" Capture explicitly out-of-scope research areas (international, legacy migration, long-term roadmap, etc.).
Summarize role + context + trigger, key stakeholders, research boundaries. Update state. Ask to proceed.
Goal: lock the problem statement, separate business goals from user goals, segment users explicitly.
Ask the user to describe the product in one or two sentences. Capture verbatim.
Read references/frameworks/jtbd-canvas.md. Apply as a default step:
Ask: "Do you have any user research — interviews, support tickets, sales calls, surveys, NPS?"
If yes, read references/frameworks/interview-synthesis.md. Synthesize 3-5 themes. Cite explicitly in the problem statement.
Ask:
The "other potential users" answer is critical — it forces the user to articulate why they picked their primary segment.
Ask: "What does the business want from shipping this?" Distinct from user goals. Examples:
Force at least one specific business goal. "Make money" is not a goal.
Ask: "What does the user want from this product?" Distinct from the JTBD outcome (the job they're hiring it for) — user goals are concrete, observable wants from interacting with the product.
Required format for every user goal:
"As a [specific user role], I want to [observable action / state] so that [concrete outcome] — measured by [observable signal]."
Each user goal MUST have:
Reject goals that fail any of:
Examples (rewritten to spec):
Capture 2-4 user goals total. More than 4 means they're not prioritized; 2 is fine if the product is narrowly scoped. Tag each as MVP (must achieve at launch) or post-MVP (target for later).
Walk through, in batches of 2-3:
Ask: "What's explicitly OUT of scope?" This section is the contract that prevents scope creep. Force precision — vague non-goals get violated; precise non-goals hold.
Required format for every non-goal:
"[Specific capability or scope] — out of scope because [rationale]. [Time horizon]. Reconsider if [evidence threshold]."
Each non-goal MUST have:
Categorize non-goals into three buckets:
Reject non-goals that fail any of:
Examples (rewritten to spec):
Capture 3-7 non-goals. Fewer than 3 usually means the user hasn't thought hard about boundaries; more than 7 usually means some are too granular and should be combined.
Read references/frameworks/problem-statement.md. Use it to draft the locked problem statement that integrates JTBD, research, segmentation, and goals. Show the draft. Iterate with user. Do not move on until the user confirms the problem statement is correct. This is the foundation for everything downstream.
For genuinely new products, offer the Amazon PR/FAQ exercise. Read references/frameworks/working-backwards.md. Optional but high-leverage.
Force 5-10 assumptions, tagged:
- validated — backed by research/data
- assumed — believed, low-risk if wrong
- risky — believed, kills the product if wrong
If the user can't name any risky assumptions, push back: "What are we betting on that, if wrong, kills this?"
Summarize:
Update state. Ask to proceed.
Goal: understand the field; explicitly evaluate whether to continue.
Ask: "Who do you see as your competitors?"
Read references/frameworks/competitive-analysis.md. For each competitor: target, core value prop, where they win, where they lose, pricing.
Synthesize: where's the positioning gap this product fills? Not "missing features" — "user/job/context poorly served."
If no clear gap surfaces, capture explicitly. Feeds the kill gate.
After analysis, explicitly evaluate:
## Kill Gate Evaluation
### Reasons to PROCEED
- <reason>
- <reason>
### Reasons to RECONSIDER
- <weak signal on user pain>
- <saturated market>
- <unclear gap>
- <untested risky assumptions>
- <regulatory/technical blocker>
### Recommendation
<PROCEED | PROCEED WITH CAVEATS | RECONSIDER | KILL>
The recommendation is yours. Be willing to recommend KILL.
If KILL: ask the user — (a) produce a kill memo and stop, (b) continue but elevate risks in PRD, (c) loop back to Phase 1 to re-scope. If (a), produce kill memo (1 page: original idea, what was explored, why not to pursue, what would change the answer) and end.
If PROCEED WITH CAVEATS: the caveats become required PRD inclusions.
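A sketch of the gate outcomes described above (RECONSIDER is left to judgment and the conversation; this only restates the branches already defined):
flowchart TD
A[Kill gate recommendation] -->|PROCEED| B[Continue to Phase 3]
A -->|PROCEED WITH CAVEATS| C[Continue to Phase 3, caveats become required PRD inclusions]
A -->|KILL| D{Ask the user}
D -->|a| E[Produce kill memo, stop]
D -->|b| F[Continue, elevate risks in PRD]
D -->|c| G[Loop back to Phase 1 to re-scope]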
Summarize competitors + gap + kill-gate result. Update state. Ask to proceed.
Goal: map how users move through the product including edge cases, error states, and abandonment paths.
Walk the primary user from "doesn't know about product" to "recurring value." Use Mermaid:
flowchart TD
A[User discovers product] --> B[Lands on homepage]
B --> C{Decides to sign up?}
C -->|Yes| D[Signup]
C -->|No| Z[Lost]
D --> E[Onboarding]
E --> F[First core action]
F --> G[Aha moment]
G --> H[Recurring use]
For each step: one-line description. Identify the aha moment — flag it.
Map each of these as a separate Mermaid diagram:
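For example, an abandonment path might look like the sketch below (illustrative only; the reminder step is a hypothetical product detail, and the real edge paths come from the conversation):
flowchart TD
A[Signup complete] --> B[Onboarding step 1]
B --> C{Finishes onboarding?}
C -->|Yes| D[First core action]
C -->|No| E[Abandons mid-onboarding]
E --> F{Returns via reminder?}
F -->|Yes| B
F -->|No| G[Lost before aha moment]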
For each core action in the happy path, list the failure modes:
For each: what does the user see? What can they do? Is the error recoverable? This becomes a section in the PRD.
If the product involves multiple users (collaborative, marketplace, social), map the cross-user flows:
If single-user, skip this step.
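When cross-user flows do apply, a sequence diagram is usually the clearest form. A hypothetical sketch borrowing the essay-grading example used later in this skill (the student-facing side is assumed, not part of that example):
sequenceDiagram
Teacher->>GraderAI: finalizes feedback on a batch
GraderAI->>Student: notifies that feedback is ready
Student->>GraderAI: opens the feedback view
Student->>GraderAI: flags one comment for clarification
GraderAI->>Teacher: surfaces the clarification request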
List 3-5 highest-friction points across all paths. For each: what's the friction, MVP handling (accept or address), post-MVP plan.
From all journeys (happy + edge + error), list every distinct user-facing screen. Bridges to Phase 4. Typical: 5-10 screens for a useful MVP.
Show happy path + 4 edge paths + error inventory + friction inventory + screen list. Ask: "Are the journeys complete? Edge cases I missed? Are these the right screens?" Update state.
Goal: structural sketches of each screen at the resolution that lets the team build.
Take the screen list from Phase 3.6. For each, produce ASCII or Mermaid.
ASCII example:
┌──────────────────────────────────────┐
│ Logo [Search] [Profile] │
├──────────────────────────────────────┤
│ │
│ Hero: "One sentence value prop" │
│ [Primary CTA] │
│ │
├──────────────────────────────────────┤
│ Three feature cards │
└──────────────────────────────────────┘
For each screen:
Every screen serves a journey step; every step that's user-facing has a screen. Reconcile mismatches.
Show all wireframes. Ask alignment. Update state.
Goal: take the wireframes one fidelity level deeper. Wireframes show structure; mockups show behavior, content, and interaction.
Mockups remain text-based (ASCII / Mermaid) but with rich annotation the wireframes don't have.
Pick the most important screens from Phase 4 (typically: landing, signup/onboarding, core feature, key edge state). Skip ancillary screens.
For each, produce a richer ASCII mockup with:
Realistic content (not "Hero text here" — actual hero text the product would use)
Interaction annotations below the mockup:
Content notes:
State variations:
Example mockup with annotations:
┌─────────────────────────────────────────────────┐
│ ✱ GraderAI [Help] Mr. Singh ▼ │
├─────────────────────────────────────────────────┤
│ │
│ Welcome back, Mr. Singh. │
│ You have 3 essay batches in progress. │
│ │
│ ┌──────────────────────────────┐ │
│ │ + Upload new batch │ ← primary CTA │
│ └──────────────────────────────┘ │
│ │
│ Recent batches │
│ ┌─────────────────────────────────────┐ │
│ │ Grade 8 — Persuasive essays │ │
│ │ 24 essays · 18 graded · ▶ continue │ │
│ ├─────────────────────────────────────┤ │
│ │ Grade 7 — Narrative essays │ │
│ │ 30 essays · all graded · view │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘
Interactions:
Content notes:
States:
For the 1-2 most important interactions across screens, produce a Mermaid sequence diagram:
sequenceDiagram
User->>Dashboard: clicks "Upload new batch"
Dashboard->>FilePicker: opens
User->>FilePicker: selects 5 PDFs
FilePicker->>UploadModal: shows files
User->>UploadModal: enters batch title, clicks "Start"
UploadModal->>BackendAPI: POST /batches with files
BackendAPI-->>UploadModal: 201 Created (batch ID)
UploadModal->>BatchDetail: navigates to batch
Show all mockups with annotations + cross-screen flows. Confirm these are PRD-ready textual mockups (ASCII/Mermaid) that can be pasted directly into the final PRD without redesign. Ask: "Do these mockups capture the interactions correctly? Anything missing or wrong?" Update state.
Goal: produce a build-ready breakdown — epics → features → functional requirements — with each requirement mapped to specific journey steps.
An epic is a major area of functionality, typically delivered across multiple sprints. Group functionality into 3-7 epics. Examples for an essay-grading product:
If the user names 10+ epics, push back — likely some are features, not epics.
For each epic, list 3-7 features. A feature is a discrete capability that can be built and tested. Example for E2:
For each feature, write 1-3 functional requirements that are specific and testable. Keep them concise and implementation-agnostic.
Required format:
FR-<epic>.<feature>.<n>: The system shall <observable behavior> when <condition>.
Example:
FR-2.1.1: The system shall accept drag-drop uploads of up to 10 files when each file is PDF, DOCX, or JPG and under 10 MB.
FR-2.1.2: The system shall show inline validation errors when any file exceeds limits or unsupported formats are included.
Every functional requirement must map to at least one journey step. Format as a table:
| Feature | Functional requirement | Maps to journey step |
|---|---|---|
| F2.1: Drag-drop batch upload | FR-2.1.1 upload allowed formats/limits | Happy path step 5 ("Upload first batch") |
| F2.1: Drag-drop batch upload | FR-2.1.2 inline validation feedback | Error path "unsupported format" |
| F2.3: Batch metadata | FR-2.3.1 save title/grade level/prompt | Happy path step 5; Power-user path "find old batch" |
If a requirement doesn't map to any journey step → either (a) add a journey step you missed, or (b) cut/defer the requirement. Requirements without journey grounding are scope creep.
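Drawn as a tree, the decomposition from the examples above looks like this (the epic name is assumed; the feature and requirement IDs are the ones used in this phase):
flowchart TD
E2["E2: Batch upload and management (assumed name)"] --> F21["F2.1: Drag-drop batch upload"]
E2 --> F23["F2.3: Batch metadata"]
F21 --> FR211["FR-2.1.1: upload allowed formats and limits"]
F21 --> FR212["FR-2.1.2: inline validation feedback"]
F23 --> FR231["FR-2.3.1: save title, grade level, prompt"]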
Tag each feature and requirement as MVP or post-MVP. The MVP set should:
As part of this phase, log alternatives considered and rejected (for example: epic boundaries, feature cuts, requirement variants). These entries are compiled later into the optional PRD "Paths Not Taken" section.
For each candidate, capture:
Show epic list, features per epic, requirement set, mapping table, MVP vs. post-MVP split, and paths-not-taken candidates captured so far. Ask: "Are the epics and requirements correct? Do all MVP requirements trace to journeys? Is the MVP cut realistic?" Update state.
Goal: a single paragraph and a Mermaid architecture diagram. Not a full spec. Just enough to ground the engineering conversation.
Pull from:
Write a single paragraph (max 150 words) covering:
Example:
"Web-first product, React/Next.js frontend with a Python backend (FastAPI). Core grading uses an LLM API (likely Anthropic or OpenAI — to be evaluated for accuracy on student essays). User-uploaded essays stored in object storage with signed URLs; redacted of PII before sending to LLM. Single-tenant architecture for MVP. Key risks: LLM cost per essay at scale, latency for batch grading (likely needs background processing), and grading consistency across runs (may require fine-tuning or careful prompting)."
A simple Mermaid diagram showing the major components and data flow:
flowchart LR
Browser[Web App<br/>React] -->|HTTPS| API[Backend API<br/>FastAPI]
API --> DB[(Postgres)]
API --> Storage[Object Storage<br/>S3]
API -->|essay text| LLM[LLM API]
LLM -->|grading + feedback| API
API -->|webhook| Queue[Background Queue<br/>for batch jobs]
Queue --> API
Keep it high-level. No internal services, no class diagrams, no schema. The point is to communicate shape, not implementation.
Show the paragraph + the diagram. Ask: "Does this match how you imagine the system? Any major components missing?" Update state.
Goal: a structured metrics hierarchy with rationale per metric.
The single metric that, if it goes up, means the product is genuinely working. Not a vanity metric. Examples:
Ask: "If this product is succeeding 6 months from now, what single number tells you that?"
For the chosen metric, capture:
Metrics that drive the north-star. If these move, the north-star moves. Examples for a product with a "weekly active completing core action" north-star:
For each primary metric:
Operational and health metrics. Don't drive the north-star directly but reveal problems.
For each: definition, target/threshold, rationale.
What values would tell us this isn't working? Be concrete. "D30 retention < 15% → re-evaluate" beats "low retention is bad."
For each metric across the framework, validate:
Reject metrics that fail. Rewrite with the user.
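When presenting the framework at the checkpoint, a small hierarchy diagram makes the relationships easy to scan. An illustrative sketch assuming the weekly-active north-star example above (the specific primaries and secondaries are placeholders):
flowchart TD
P1["Primary: activation rate, signup to first core action"] --> NS["North-star: weekly actives completing the core action"]
P2["Primary: week-4 retention of activated users"] --> NS
S1["Secondary: median time to first core action"] -.-> P1
S2["Secondary: error rate on the core action"] -.-> P2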
Show the framework: north-star + primaries + secondaries + failure thresholds. Ask: "Do these metrics give us the right signal? Anything missing or weak?" Update state.
Goal: ICP, buyer persona, positioning, pricing, channels, marketing+sales plan.
Read references/frameworks/persona-canvas.md.
ICP is at the company / segment level. Capture:
If B2C, "ICP" becomes "primary segment definition" — same logic, individual-level.
Distinct from ICP. The buyer persona is the individual who makes or influences the purchase decision within an ICP company.
For each persona (typically 1-3):
In B2B, the buyer persona is often not the user. Capture both if they differ. Map decision-influencer relationships.
Read references/frameworks/positioning-canvas.md.
Product positioning statement (one sentence):
"For [user] who [need], [product] is a [category] that [value prop]. Unlike [alternative], we [differentiator]."
Three things this is "the X for" — sharpens category definition.
What this is NOT — clarifies more than what it is.
Brand positioning (distinct from product positioning):
Brand positioning shapes marketing assets, copy, design language. One paragraph.
Read references/frameworks/monetizing-innovation.md.
List 2-3 primary channels for acquiring the ICP. For each:
Read references/frameworks/gtm-strategy.md. Pick: PLG / sales-led / PLG+sales-assist / community-led / partner-led. One paragraph rationale based on ACV (from 9.4), customer profile (from 9.1-9.2), product complexity.
One paragraph for each:
If sales-led or sales-assisted:
If purely PLG with no sales: skip this. Note it explicitly.
Show ICP, buyer persona, positioning (product + brand), pricing, channels, motion, marketing/sales plan, launch. Ask: "Does this GTM hold together? Are there assumptions to challenge?" Update state.
Goal: produce the deliverable. Compile everything; optionally include paths-not-taken from prior phases; offer optional TAM/SAM/SOM.
Read discovery-state.md. Read templates/PRD.md.template. Pull all answers and decisions.
Ask the user: "Do you want to include a 'Paths Not Taken' section in the PRD? We already captured alternatives during discovery; this step compiles them into a concise section. Include or skip?"
If skip: don't include the section. Move on to 10.3.
If include: compile captured alternatives first, then fill any gaps with prompts:
For each path not taken, capture:
This becomes a dedicated section in the PRD. Keep it concise: 3-7 strongest alternatives max.
Ask: "Do you want to include market sizing in the PRD? This is TAM/SAM/SOM analysis. It's useful for fundraising, board updates, or strategic alignment, but requires either (a) credible bottom-up data, or (b) accepting wide error bars on top-down estimates. Skip if neither applies."
If user says yes:
Be explicit about methodology. Tag every number as "data-backed" or "estimated based on assumptions." Better to say "TAM: $5B (rough estimate based on industry reports)" than to fabricate precision.
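A worked funnel with placeholder numbers shows the expected shape and tagging (every figure here is invented for illustration, not data):
flowchart TD
TAM["TAM: 5M target users x 300 USD/yr = 1.5B USD (top-down, estimated)"] --> SAM["SAM: 1M reachable with current product and channels = 300M USD (estimated)"]
SAM --> SOM["SOM: 2 percent capture in 3 years = 6M USD ARR (assumed)"]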
If user says skip: don't include the section. Don't penalize the user for skipping — TAM/SAM/SOM is theatrical when the data isn't there.
Fill in templates/PRD.md.template. Section emphasis follows Phase 0.2 stakeholders:
Structure the PRD so feature modules and user flows are explicit before detailed requirements:
- Feature Modules and User Flows section (2-5 modules, each with flow steps and key behavior notes).
- Epics, Features, and Functional Requirements.
Use the template section order as canonical for final output.
Mockups are required in the final PRD. Include a Low-Fidelity Mockups section with at least 3 textual mockups from Phase 5 (ASCII and/or Mermaid), each with:
Don't omit sections. If light, label explicitly (e.g., "Technical overview: TBD with engineering review.").
Show the PRD. Ask:
Iterate.
Write to PRD.md in the project root. Update discovery-state.md:
Tell the user:
"PRD is at
PRD.md. Discovery state is atdiscovery-state.md.To revise: just say "revise the [section name]" — I'll edit both files. Common triggers: stakeholder feedback, scope cuts after engineering review, new evidence, priority changes."
Activates when:
- PRD.md exists
- discovery-state.md shows Phase 10 complete
On each revision, edit both PRD.md and the corresponding phase in discovery-state.md, and append to the revision log:
- <date>: revised <section> — <reason>
| Phase | Step | File |
|---|---|---|
| 1 | 1.2 | references/frameworks/jtbd-canvas.md |
| 1 | 1.3 (if research) | references/frameworks/interview-synthesis.md |
| 1 | 1.9 | references/frameworks/problem-statement.md |
| 1 | 1.10 (optional) | references/frameworks/working-backwards.md |
| 2 | 2.2 | references/frameworks/competitive-analysis.md |
| 9 | 9.1 | references/frameworks/persona-canvas.md |
| 9 | 9.3 | references/frameworks/positioning-canvas.md |
| 9 | 9.4 | references/frameworks/monetizing-innovation.md |
| 9 | 9.6 | references/frameworks/gtm-strategy.md |
| 10 | 10.1 | templates/PRD.md.template |
Other frameworks (opportunity-tree, hypothesis, solution-brief, experiment-design, prioritization, stakeholder-summary) are available. Reach for them when a phase surfaces a need: