From product-owner
Write a Product Requirements Document from rough notes or a feature idea. Produces a structured PRD with problem validation, user definition, RICE prioritisation, acceptance criteria, and scope.
npx claudepluginhub hpsgd/turtlestack --plugin product-owner
This skill is limited to using the following tools:
Write a PRD for $ARGUMENTS.
Based on current best practices from Marty Cagan (SVPG), Lenny Rachitsky, Shreyas Doshi, and Teresa Torres. The PRD is an alignment document, not a contract — it forces clarity of thought and communicates intent.
Follow every step below. Do not skip sections. Mark unknowns with [NEEDS CLARIFICATION: question] rather than guessing.
Match PRD weight to decision weight:
| Effort | Format | Sections required |
|---|---|---|
| < 1 week | Ticket with acceptance criteria | Problem + criteria + scope only |
| 1-4 weeks | One-pager | Steps 1-6 below |
| 1+ month | Full PRD | All steps below |
| Quarter+ | Full PRD + strategy context | All steps + link to OKRs and roadmap |
Before the problem statement, establish WHY this work matters NOW:
Before writing anything, answer these four questions. If you cannot answer them from the input, mark them [NEEDS CLARIFICATION] and proceed with assumptions stated explicitly.
Anti-patterns to avoid:
Define the primary user precisely. Include all of:
If there is a secondary user (e.g., an admin who configures something that an end-user consumes), define them too, but mark the primary user clearly.
Score the initiative before investing in detailed requirements. This forces honest assessment of value vs. effort.
| Factor | Score | Reasoning |
|---|---|---|
| Reach | N users/quarter | How many users/accounts will this affect in the next quarter? Use real numbers from analytics, not guesses. If unknown, state the assumption. |
| Impact | 3/2/1/0.5/0.25 | 3 = massive (transforms workflow), 2 = high (significant improvement), 1 = medium (noticeable), 0.5 = low (minor), 0.25 = minimal |
| Confidence | 100%/80%/50% | 100% = we have data. 80% = strong signals but some assumptions. 50% = educated guess. |
| Effort | person-weeks | Total effort across all disciplines (design, eng, QA). Round up, never down. |
RICE Score = (Reach x Impact x Confidence) / Effort
State the score. Compare it to other recent initiatives if that context is available. A score below 1.0 should trigger a conversation about whether to proceed at all.
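The formula above can be sketched as a small helper. This is an illustrative sketch, not part of the skill itself, and the example numbers are hypothetical:

```python
def rice_score(reach: float, impact: float, confidence: float, effort_weeks: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach:        users/accounts affected in the next quarter
    impact:       3, 2, 1, 0.5, or 0.25 (see the table above)
    confidence:   1.0, 0.8, or 0.5
    effort_weeks: total person-weeks across design, eng, and QA (rounded up)
    """
    if effort_weeks <= 0:
        raise ValueError("effort must be a positive number of person-weeks")
    return (reach * impact * confidence) / effort_weeks

# Hypothetical initiative: 400 users/quarter, high impact (2),
# strong signals (0.8 confidence), 8 person-weeks of effort.
score = rice_score(reach=400, impact=2, confidence=0.8, effort_weeks=8)
print(score)  # 80.0 — well above the 1.0 threshold
```

Because Effort is the divisor, halving the estimate doubles the score; this is why the table insists on rounding effort up, never down.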
Define exactly how we will know this succeeded. Every PRD must have at least one leading and one lagging indicator.
Leading indicators (measurable within the first week):
Lagging indicators (measurable after 4-8 weeks):
Guardrail metrics (must NOT get worse):
Guardrails protect against second-order effects. A feature that improves adoption but increases support volume by 3x is not a success.
Failure definition: State explicitly what failure looks like. "Less than 10% adoption after 4 weeks" or "no measurable change in time-to-complete" — be specific enough that you can make a kill decision.
Write user stories for every behaviour in scope. Each story follows this format:
### US-[N]: [Short title]
As a [specific user type], I want [concrete action] so that [measurable outcome].
**Acceptance Criteria:**
- [ ] [I] — Independent: testable without other criteria
- [ ] [S] — Small: one verifiable behaviour
- [ ] [C] — Complete: includes the boundary condition
**Edge cases:**
- What happens when [empty state]?
- What happens when [error condition]?
- What happens when [concurrent access]?
ISC Splitting Test: Every acceptance criterion must pass all three:
If a criterion fails the ISC test, split it until each part passes.
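As a hypothetical illustration of the splitting rule, a criterion that bundles several behaviours fails the test and should be broken apart:

```markdown
<!-- Fails ISC: three behaviours in one criterion, no boundary condition -->
- [ ] User can filter, sort, and export results

<!-- Split until each part passes -->
- [ ] [I] [S] User can filter results by status
- [ ] [I] [S] User can sort results by created date
- [ ] [I] [S] [C] User can export results as CSV, including when the result set is empty
```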
In scope: List every capability included in this release. Be specific — "user can filter" is vague; "user can filter by date range, status, and assignee" is specific.
Out of scope: List what is deliberately excluded AND state why for each item. Common reasons:
Anti-requirements: Things we are explicitly NOT doing, even though someone might expect them. These prevent scope creep by making exclusions visible.
Imagine the feature shipped and failed. What went wrong? This is a pre-mortem (Shreyas Doshi's emphasis — anticipate failure before it happens).
Four risk categories (Marty Cagan's framework):
| Risk type | Question | Assessment |
|---|---|---|
| Value risk | Will users actually want this? | [evidence for/against] |
| Usability risk | Can they figure out how to use it? | [complexity assessment] |
| Feasibility risk | Can we build it with current tech/team/timeline? | [technical assessment] |
| Business viability risk | Does this work for the business? (legal, financial, strategic) | [business assessment] |
Open questions:
| Question | Impact if wrong | Owner | Due date |
|---|---|---|---|
| Specific question | What happens if our assumption is incorrect | Who will answer this | When we need the answer by |
Reversibility: Is this decision easily reversible (feature flag, config change) or hard to reverse (data migration, public API, pricing change)? Reversible decisions can be made faster with less certainty. Irreversible decisions need more evidence.
How will this reach users?
This section is optional but encouraged. Include only hard constraints, not implementation suggestions.
Mark any technical suggestions clearly as [SUGGESTION — not a requirement] to avoid constraining engineering prematurely.
Before finalising the PRD, verify:
Write the PRD to docs/prd-[feature-name].md. Use the structure above as the document structure. Include the RICE score in the document header as metadata:
# PRD: [Feature Name]
| Field | Value |
|-------|-------|
| Author | [name] |
| Status | Draft |
| RICE Score | [N] |
| Target release | [quarter or date] |
| Last updated | [date] |
Every [NEEDS CLARIFICATION] marker is a follow-up item. List them all in a summary at the end of the document under "## Open Items Requiring Clarification."
/product-owner:write-user-story — break the PRD into implementable user stories with acceptance criteria.
/product-owner:groom-backlog — prioritise the resulting stories into the backlog using RICE scoring.