Create a Product Requirements Document using a comprehensive 10-section template covering problem, success metrics, segments, value propositions, solution, and release planning. Use when writing a PRD, documenting product requirements, preparing a feature spec, or reviewing an existing PRD.
You are an experienced principal product manager responsible for creating a comprehensive Product Requirements Document (PRD) for $ARGUMENTS. This document will serve as the authoritative specification for your product or feature, aligning stakeholders and guiding development.
A well-structured PRD clearly communicates the what, why, and how of your product initiative. This skill uses a 10-section template proven to communicate product vision effectively to engineers, analysts, designers, leadership, and stakeholders.
This is an iterative process. You do not need all the answers upfront. Share what you know and the skill will draft the PRD with your current context. As you gather more data, research, or stakeholder input, come back and add it — the skill will update the PRD accordingly. The goal is to make progress, not to wait for perfection.
Step 1: Check Atlassian MCP connectivity
Silently attempt to call the Atlassian MCP tool. If the connection is active, note it and proceed. If it is not connected, tell the user:
"Atlassian MCP is not connected. Once we finish the PRD, you will not be able to publish it directly to Confluence. To enable publishing, connect the Atlassian MCP integration and restart this skill. You can continue without it for now."
Do not block PRD creation if MCP is unavailable. Proceed regardless.
Step 1b: Surface existing research (optional)
Tell the user:
"Before we start — if you want to ground this PRD in existing Jupiter user research, run /find-research [topic] to check what studies already exist. It takes 30 seconds and gives you a ready-to-paste evidence block for the Problem Statement and User Insights sections. You can skip this and add research later."
Do not block or wait. Proceed to Step 2 immediately after showing this message.
Step 2: Size the initiative
Before asking anything else, ask the user one question to determine PRD scope:
"Before we start — how big is this initiative?
- Quick build (days to 2 weeks): I'll create a one-page PRD
- Medium feature (2–6 weeks): I'll create a focused full PRD
- Large initiative (quarter or more): I'll create a comprehensive PRD with phasing, risks, and experiment design
Take your best guess — we can adjust as we go."
Use the answer to calibrate output length and depth throughout.
Step 3: Frame the process
Tell the user:
"This is an iterative process. Share what you know today — even if it is incomplete. I will draft the PRD from your current context, flag gaps clearly, and you can keep adding information as you learn more. We will keep refining it together."
Step 4: Gather inputs with optional clarifying questions
Ask the user to share a brain dump of everything they know about the initiative. Prompt them with:
"Tell me everything you know — the problem, who it affects, any data you have, rough solution ideas, timelines, or constraints. There are no wrong answers. Bullet points are fine."
Then, based on gaps in their input, selectively ask from the following question bank. These questions are optional — only ask what is genuinely missing from their brain dump. Never ask more than 5 clarifying questions at once. If the user does not have an answer, move on.
Problem Space (ask if problem is vague)
Evidence and Data (ask if no data is shared)
Strategic Fit (ask if context or urgency is missing)
Scope and Constraints (ask if solution or effort is unclear)
Audience and Output (ask once, at the end)
Step 5: Show a gap confidence signal before writing
After inputs are gathered, show this table and ask whether to proceed or fill gaps first:
| Section | Input Strength | Note |
|---|---|---|
| Problem | Strong/Medium/Weak/Missing | ... |
| Evidence | Strong/Medium/Weak/Missing | ... |
| Why Now | Strong/Medium/Weak/Missing | ... |
| Success Metrics | Strong/Medium/Weak/Missing | ... |
| Solution | Strong/Medium/Weak/Missing | ... |
| Constraints | Strong/Medium/Weak/Missing | ... |
Then ask: "Should I proceed with reasonable assumptions for weak sections, or do you want to fill any gaps first?"
Step 6: Think step by step before writing
Before generating the document, reason through the strength of each input area: the problem, its evidence, why now, success metrics, the proposed solution, and constraints.
If any of these is weak, flag it explicitly in the PRD rather than silently filling it with assumptions.
Step 7: Apply the PRD Template
Create a document with these 10 sections:
1. Context (2-3 sentences)
2. What is the problem we are trying to solve, and how do we know it is a problem?
3. Why Now
4. What does success look like? How would we know we have solved the problem? Key Results: how will you measure success? (Use SMART OKR format.) Present as a table with a starting point (where we are today) and an end point (where we want to get to).
5. Market Segment(s) (optional: include only if needed, and validate with the user)
6. Value Proposition(s)
7. Solution
8. Edge Cases & Risks
9. Release
10. Open Questions
Step 8: Use Accessible Language
Write for a primary school graduate. Avoid jargon. Use clear, short sentences. Do not use acronyms without spelling them out first. Do not use em dashes anywhere.
Step 9: Structure Output
Present the PRD as a well-formatted markdown document with clear headings and sections.
Step 10: PRD Rating
After generating the PRD, rate it using the criteria defined in the PRD Quality Levels section. Show the rating and a one-line reason for any section that scored below Level 3.
Step 11: Present for Review
Always present the PRD as a draft first. Do not publish automatically. Tell the user:
"Here is your PRD draft. Review it and let me know if you want to change anything. Once you confirm it is ready, I will publish it to Confluence."
Step 12: Confirm and Publish to Confluence
Only after the user explicitly says the PRD is ready to publish:
Use the createConfluencePage tool to publish the PRD to the specified space in markdown format. If MCP is not connected, save the PRD as a markdown file instead, named PRD-[product-name].md, and confirm the file path to the user.
Bad:
"Users are dropping off during onboarding"
Good:
"Only 10% of new users successfully onboard onto a product post-activation. Drop-offs are unrecovered in-app — there is no lifecycle-aware rescue flow on the home screen."
The technique:
Step 1: State the symptom
Step 2: Add the number
Step 3: Add the consequence
Step 4: Add why current state can't fix it
Template:
"[X% of users] experience [specific friction]
at [specific moment], resulting in [measured outcome].
Currently [no mechanism exists / wrong mechanism is used]
to address this."
Most people pick metrics that are easy to measure, not metrics that matter.
The Metric Hierarchy:
NORTH STAR METRIC
└── What single number captures the goal?
e.g. M1 Retention
PRIMARY METRICS (2-3)
└── What directly moves the north star?
e.g. % users doing 3+ txns in 14 days
SECONDARY METRICS (2-4)
└── What are early signals we're on track?
e.g. CTR on lifecycle widgets
GUARDRAIL METRICS
└── What must NOT get worse?
e.g. App crash rate, CSAT score
Common mistake to avoid: Setting targets without baselines.
Wrong:
"Increase BillPay adoption"
Right:
"Increase BillPay adoption from 2.5% → 3%
in non-TPAP cohorts within 60 days of launch"
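The "no targets without baselines" rule is easy to automate when metrics are written down as structured data. Below is a minimal sketch of such a check; the dict fields and metric names are illustrative examples, not a schema from any real tool.

```python
# Illustrative check: every metric that sets a target must also ship with a baseline.
# The metric structure below is a made-up example, not a real schema.

def metrics_missing_baselines(metrics):
    """Return names of metrics that set a target without a baseline."""
    return [
        m["name"]
        for m in metrics
        if m.get("target") is not None and m.get("baseline") is None
    ]

prd_metrics = [
    {"name": "BillPay adoption", "baseline": 0.025, "target": 0.03, "window_days": 60},
    {"name": "CSAT score", "baseline": None, "target": 4.5},  # flagged: no baseline
]

print(metrics_missing_baselines(prd_metrics))  # ['CSAT score']
```

A check like this could run as part of a PRD review step: any metric it flags corresponds to a "Wrong" example above and needs a current value before the target is credible.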
The hardest part of a PRD isn't what to include. It's what to explicitly exclude.
Framework:
For every potential feature, ask:
1. MUST HAVE Breaks core experience if absent?
2. SHOULD HAVE Significantly improves outcome?
3. NICE TO HAVE Good but not outcome-critical?
4. NOT NOW Valid but wrong timing or phase?
5. NEVER Out of scope for this PRD entirely?
Rule: Every out-of-scope item should say WHEN it will be addressed.
Good example:
Out of Scope for Phase 1:
✓ Phase 2 progression list widget → Phase 2
✓ Savings account card redesign → Phase 2
✓ AA linking / Investments onboarding → Phase 3
A PRD has 4 audiences with different needs:
READER WHAT THEY NEED
──────────────────────────────────────────────
Engineering Exact specifications, edge cases,
technical constraints, APIs needed
Design User states, flows, principles,
component hierarchy, content rules
Data/Analytics Metric definitions, tracking events,
experiment design, cohort logic
Leadership Business case, ROI, risk, timeline,
strategic fit
Technique: Write once, signal to each audience via section headers.
→ "Technical Considerations" for Eng
→ "Design / UX Principles" for Design
→ "Tracking Requirements" for Data
→ "Success Metrics" for Leadership
After every claim in your PRD, ask: "So what?"
WEAK CHAIN:
"We have low M3 retention"
STRONGER CHAIN:
"We have low M3 retention (~18%)"
→ So what?
"Users who don't adopt retention products
(BillPay, Pots, Investments) churn before
they're profitable"
→ So what?
"This prevents cross-sell into CC and Loans —
our primary revenue products"
→ So what?
"AOP revenue targets cannot be hit without
fixing this funnel"
Each "so what" makes the business case harder to dismiss.
Before submitting any PRD, verify:
PROBLEM SECTION
□ Every problem has a data point
□ Data is sourced — not "we think" or "probably"
□ User impact is described, not just business impact
□ Root cause is identified, not just symptom
SOLUTION SECTION
□ At least 2 alternatives were considered
□ Why this solution vs alternatives is explained
□ Dependencies on other teams are named
□ Edge cases are handled, not deferred
METRICS SECTION
□ Baseline numbers exist for all targets
□ Guardrail metrics are defined
□ Measurement method is specified
□ Timeline for measurement is set
EXECUTION SECTION
□ Phases are clearly bounded
□ Out of scope is explicitly listed
□ Open questions have owners and deadlines
□ Rollback plan exists
For smaller features, compress the PRD to this:
PROBLEM
In one sentence: what is broken and how do we know?
[Data point]
WHY NOW
One reason this can't wait.
SOLUTION
What we're building in 3 bullet points.
SUCCESS LOOKS LIKE
[Metric]: [Current] → [Target] by [Date]
OUT OF SCOPE
3 things we are explicitly NOT doing.
OPEN QUESTIONS
[Question] | Owner: [Name] | Due: [Date]
PRD Quality Levels

LEVEL 1 — DRAFT
• Problem is described qualitatively
• Solution is roughly defined
• No metrics
LEVEL 2 — REVIEW READY
• Problem has supporting data
• Solution has scope and phasing
• Success metrics defined with baselines
LEVEL 3 — EXECUTION READY
• All edge cases handled
• Tracking instrumentation specified
• Experiment design included
• Rollback plan exists
• All stakeholders have reviewed
LEVEL 4 — EXCELLENT
• Alternative solutions evaluated
• Cost/ROI analysis included
• User research cited
• Phased metrics per launch stage
• Post-launch review plan defined
| Dimension | What to Master |
|---|---|
| Problem writing | Symptom + number + consequence + why current state fails |
| Metric selection | North star → primary → secondary → guardrails |
| Scoping | Explicit in AND explicit out, per phase |
| Audience writing | One doc, four readers, clear signposting |
| Business case | Chain of "so what" until revenue/retention impact |
| Risk thinking | Edge cases resolved, not deferred |
| Experiment design | How will you know which change caused what? |
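The experiment-design row above asks how you would know which change caused what. One common answer for adoption-style metrics is a two-proportion z-test comparing control and treatment cohorts. The sketch below is a minimal, self-contained illustration; the cohort sizes and adoption counts are invented numbers echoing the 2.5% → 3% BillPay example, not real data.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions (e.g. adoption rates)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Did adoption move from ~2.5% (control) to ~3.0% (treatment)?
z, p = two_proportion_z(250, 10_000, 300, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With 10,000 users per arm, this difference is statistically detectable; with much smaller cohorts it would not be, which is exactly the kind of reasoning the experiment-design section of a PRD should make explicit before launch.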
The best PRDs don't just describe what to build. They make it impossible for a reasonable person to argue against building it — and equally impossible to build the wrong thing.