Quantify and rank opportunities using the RICE framework (Reach, Impact, Confidence, Effort) to enable data-driven prioritization and trade-off discussions. Use when comparing diverse features, deciding what to build next, or allocating engineering time across initiatives.
From prioritization. Install:

```
npx claudepluginhub sethdford/claude-skills --plugin pm-prioritization
```

This skill is limited to using the following tools:

- Files: `examples/example-output.md`, `template.md`
- Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, and charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, and Flutter. Aids planning, building, and reviewing interfaces.
- Fetches up-to-date documentation from Context7 for libraries and frameworks like React, Next.js, and Prisma. Use for setup questions, API references, and code examples.
- Calculates TAM/SAM/SOM using top-down, bottom-up, and value-theory methodologies for market sizing, revenue estimation, and startup validation.
Rank a backlog of opportunities using RICE (Reach, Impact, Confidence, Effort) to remove bias and enable transparent trade-off conversations across product, design, and engineering.
You are helping a product team prioritize what to build next. A backlog typically contains 30-100+ ideas; without a systematic framework, prioritization devolves into politics (whoever speaks loudest wins) or guesswork. RICE provides a transparent, quantifiable scoring system that makes trade-offs visible and defensible.
RICE is not magic; it's a tool for organizing thinking. The value comes from the estimation process—asking "who will this affect?" and "how confident are we?"—not from the final score. Use RICE to start conversations, not to end them.
Before starting RICE scoring, gather: usage analytics (for Reach), customer feedback and research (for Impact and Confidence), and rough engineering estimates (for Effort).
If you're missing data, estimate based on best judgment and note the confidence. Don't wait for perfect data.
Write down every candidate opportunity: features, improvements, bug fixes, infrastructure work, experiments. Don't filter yet. Aim for 20-50 items.
Example: "Add dark mode," "Fix profile load bug," "Build Slack integration," "Improve onboarding flow."
Before scoring, align the team on what each dimension means:
Reach: How many users the change will affect within a set period (e.g., customers per quarter). Pull from product analytics where possible.
Impact (value per affected user): A multiplier scale — 3 = massive, 2 = high, 1 = medium, 0.5 = low, 0.25 = minimal.
Confidence (% estimate accuracy): How sure you are about your Reach and Impact estimates — 100% = measured or shipped, 75% = strong evidence, 50% = educated guess, 25% = speculative.
Effort (engineering labor): Total person-weeks across engineering, design, and QA.
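A common convention encodes the Impact and Confidence dimensions as fixed lookup scales so scoring stays consistent across the team. A minimal sketch in Python — the labels here are the widely used defaults matching the multipliers in the scoring tables below (0.25x-3x impact, 25-100% confidence), not values prescribed by this guide:

```python
# Conventional RICE scales. The labels are assumptions matching the
# multipliers used in this guide's scoring examples, not mandated names.
IMPACT = {
    "massive": 3.0,   # transforms the user's workflow
    "high": 2.0,
    "medium": 1.0,
    "low": 0.5,
    "minimal": 0.25,
}

CONFIDENCE = {
    "measured": 1.00,     # already shipped or backed by hard data
    "strong": 0.75,       # solid user evidence
    "guess": 0.50,        # educated guess
    "speculative": 0.25,  # hunch; needs validation
}
```

Pinning these scales up front keeps later debates about a score focused on the estimate ("is this really 'high' impact?") rather than the numbers themselves.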
For each candidate, estimate Reach, Impact, Confidence, and Effort, then compute RICE = (Reach × Impact × Confidence) / Effort.
| Opportunity | Reach | Impact | Confidence | Effort (weeks) | RICE Score |
|---|---|---|---|---|---|
| Add dark mode | 8,000 | 0.5 | 75% | 2 | (8000 × 0.5 × 0.75) / 2 = 1500 |
| Fix profile load bug | 2,000 | 2 | 100% | 1 | (2000 × 2 × 1.0) / 1 = 4000 |
| Slack integration | 1,500 | 3 | 50% | 6 | (1500 × 3 × 0.5) / 6 = 375 |
| Improve onboarding | 3,000 | 2 | 75% | 4 | (3000 × 2 × 0.75) / 4 = 1125 |
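The table's arithmetic can be sketched in a few lines of Python; the formula and all numbers are taken directly from the rows above:

```python
def rice(reach, impact, confidence, effort_weeks):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort_weeks

# (name, reach, impact, confidence, effort in weeks) — from the table above.
backlog = [
    ("Add dark mode",        8000, 0.5, 0.75, 2),
    ("Fix profile load bug", 2000, 2,   1.0,  1),
    ("Slack integration",    1500, 3,   0.5,  6),
    ("Improve onboarding",   3000, 2,   0.75, 4),
]

# Score every item and sort highest-first for review.
scored = sorted(
    ((name, rice(r, i, c, e)) for name, r, i, c, e in backlog),
    key=lambda t: t[1],
    reverse=True,
)

for name, score in scored:
    print(f"{name}: {score:g}")
# Fix profile load bug: 4000
# Add dark mode: 1500
# Improve onboarding: 1125
# Slack integration: 375
```

Keeping the computation in a script (or spreadsheet formula) rather than doing it by hand avoids arithmetic slips and makes re-scoring cheap when estimates change.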
Look for: surprises (items whose rank contradicts the team's gut ordering), quick wins (high score, low effort), and near-ties that need a judgment call.
Sort by RICE score (highest first). Then review with the team: does the ranking match intuition? If not, debate the estimates rather than the formula — a surprising rank usually means an input is wrong or an assumption is hidden.
Select top opportunities to fill your next roadmap period (next quarter or 6 weeks). Typically: work down the ranked list until cumulative Effort matches your engineering capacity, holding back a buffer for bugs and unplanned work.
A RICE scoring spreadsheet (shared with team) + a 1-page summary:
# Q2 2024 Opportunity Prioritization (RICE Scoring)
## Top Opportunities (Next Quarter)
| Rank | Opportunity | Reach | Impact | Confidence | Effort | RICE | Notes |
|------|-------------|-------|--------|------------|--------|------|-------|
| 1 | Fix profile load bug | 2,000 | 2x | 100% | 1 week | 4,000 | Critical path blocker; high ROI |
| 2 | Improve onboarding flow | 3,000 | 2x | 75% | 4 weeks | 1,125 | Affects trial-to-paid conversion; ranked above dark mode despite lower score |
| 3 | Add dark mode | 8,000 | 0.5x | 75% | 2 weeks | 1,500 | User-requested; modest impact |
| 4 | Build Slack integration | 1,500 | 3x | 50% | 6 weeks | 375 | High impact IF we ship, but uncertain |
## Summary
**Plan**: Commit to #1-3 for Q2 (7 weeks engineering). #4 is lower priority unless a customer commits or we validate demand.
**Rationale**: #1 has by far the highest RICE score. #2 ranks above #3 despite a lower score because it moves trial-to-paid conversion, a core metric. #3 is user-requested and has good reach. #4 is speculative; recommend a research spike before committing engineering.
**Confidence**: High on #1-2 (backed by data). Medium on #3 (user requests, but impact is modest). Low on #4 (speculative; needs customer validation).
Q2 2024 Opportunity Prioritization: Analytics Platform
| Rank | Opportunity | Reach | Impact | Confidence | Effort (weeks) | RICE Score | Notes |
|---|---|---|---|---|---|---|---|
| 1 | Fix funnel abandonment bug (P1) | 5,000 | 2x | 100% | 1 | 10,000 | ~100 customers affected; blocks critical workflow |
| 2 | Improve page load (3s → 1s) | 12,000 | 1x | 100% | 4 | 3,000 | Affects all users; good ROI |
| 3 | Segment users by cohort | 4,000 | 2x | 75% | 3 | 2,000 | Top customer request; impacts key use case |
| 4 | Dark mode | 8,000 | 0.5x | 75% | 2 | 1,500 | User-requested; low impact |
| 5 | Add real-time alerts | 2,000 | 3x | 50% | 8 | 375 | High impact but effort-heavy; lower confidence |
| 6 | Build API | 500 | 3x | 25% | 6 | 62 | Could unlock integrations; very uncertain |
Decision: Fix the P1 funnel bug immediately, then take the page-load and cohort-segmentation work; validate real-time alerts and the API with research spikes before committing engineering.
When estimating under uncertainty, watch for these common failure modes:
Description: Claiming "all 100,000 users" will be affected by a feature, when really only power users (10% = 10,000) will use it.
Why LLMs make this mistake: Maximizing Reach raises the RICE score, so estimates drift upward.
Guard: Always ask "Will this specific user segment use this feature?" Don't count users who won't notice the change.
Example: scoring a keyboard-shortcuts feature (hypothetical) at Reach = 100,000 when only the ~10,000 power users would ever adopt it.
Description: Confusing the importance of a problem with the impact of a solution. A high-reach, low-impact item can still post a high RICE score while being the wrong thing to build.
Why LLMs make this mistake: LLMs see "important customer problem" and score impact as high, without thinking about the solution's magnitude.
Guard: Ask "If we ship this, how much better will users' lives be?" A dark mode is nice, but it doesn't change productivity (low impact). Reducing onboarding time from 2 hours to 30 minutes does change productivity (high impact).
Example: dark mode reaches 8,000 users but at only 0.5x impact; improving onboarding reaches 3,000 users at 2x impact and delivers more value per user.
Description: Using 100% confidence on speculative features ("We built something similar once, so we know how to do this").
Why LLMs make this mistake: LLMs assume past success = future success; they don't account for unique project risks.
Guard: Reserve 100% confidence for items you've already measured or shipped. For new opportunities, start at 50% or 75% unless you have strong validation.
Description: Underestimating engineering effort ("It's just a UI tweak"), leading to inflated RICE scores and missed timelines.
Why LLMs make this mistake: LLMs don't account for testing, edge cases, or context-switching overhead.
Guard: Always involve engineering in effort estimation. Include design, QA, and deployment. If uncertain, add a 50% buffer.
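To see why the buffer matters: an optimistic effort estimate directly inflates the RICE score. A hypothetical sketch (all numbers illustrative) scoring the same feature at 2 weeks versus a 50%-buffered 3 weeks:

```python
def rice(reach, impact, confidence, effort_weeks):
    # RICE = (Reach x Impact x Confidence) / Effort
    return reach * impact * confidence / effort_weeks

# Hypothetical feature pitched as "just a UI tweak" at 2 weeks...
optimistic = rice(3000, 2, 0.75, 2)        # 2250.0
# ...versus the same feature with a 50% buffer for QA and edge cases.
buffered = rice(3000, 2, 0.75, 2 * 1.5)    # 1500.0
# The optimistic estimate inflates the score by 50%, which can wrongly
# push the item above better-understood work in the ranking.
```

Because Effort sits in the denominator, small items are especially sensitive: halving a 2-week estimate doubles the score, while halving a 12-week estimate barely reorders the list.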
Description: Scoring opportunities once and never revisiting, even though priorities shift (a new customer, a market change, fresh data).
Why this fails: the scores are static; the business is dynamic.
Guard: Re-score backlog quarterly or when major context shifts (big customer, strategic pivot).
Before sharing RICE scores: double-check the arithmetic, confirm Effort estimates with engineering, sanity-check Reach against actual usage data, and write down the assumptions behind each Confidence figure.