Unified prioritization skill that adapts to what you're prioritizing: features (ICE/RICE/Opportunity Score), assumptions (Impact × Risk matrix + experiment design), backlog items (strategic alignment + effort), or general trade-offs. Includes a reference covering all 9 prioritization frameworks. Use when prioritizing a backlog, triaging assumptions, ranking ideas, or choosing between competing initiatives.
Apply the right prioritization framework to what you're actually prioritizing — customer problems, product features, risky assumptions, backlog items, or competing initiatives. The framework should match the decision, not the other way around.
Core principle: Never allow customers to design solutions. Prioritize problems (opportunities), not features. A backlog full of solutions is a sign of skipped discovery.
Opportunity Score (Dan Olsen, The Lean Product Playbook): The recommended framework for prioritizing customer problems. Survey customers on Importance and Satisfaction for each need (normalize to 0-1 scale).
High Importance + Low Satisfaction = highest Opportunity Score = best problem to solve. Plot on an Importance vs. Satisfaction chart — the upper-left quadrant is the sweet spot.
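The Opportunity Score calculation can be sketched in a few lines. This is a minimal illustration; the needs and survey numbers are hypothetical, and the formula follows the Importance × (1 − Satisfaction) form described above with both inputs normalized to 0-1.

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Opportunity = Importance x (1 - Satisfaction); inputs normalized to 0-1."""
    if not (0.0 <= importance <= 1.0 and 0.0 <= satisfaction <= 1.0):
        raise ValueError("normalize importance and satisfaction to the 0-1 range")
    return importance * (1.0 - satisfaction)

# Hypothetical survey results: (customer need, importance, satisfaction)
needs = [
    ("track deadlines", 0.9, 0.3),   # important, poorly served: high opportunity
    ("share files", 0.6, 0.8),       # already well served: low opportunity
    ("customize themes", 0.3, 0.5),  # not very important
]
ranked = sorted(needs, key=lambda n: opportunity_score(n[1], n[2]), reverse=True)
```

The top-ranked need here is "track deadlines" (0.9 × 0.7 = 0.63): high importance, low satisfaction, i.e. the upper-left quadrant of the chart.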
ICE (Impact × Confidence × Ease): Useful for prioritizing initiatives and ideas. Impact captures value, Confidence captures risk, and Ease captures cost, so a single score considers value, risk, and economics in one pass.
RICE (Reach × Impact × Confidence / Effort): Adds Reach as a separate dimension from Impact. Useful for larger teams needing more granularity in audience sizing.
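As a sketch of the RICE formula (the input values are hypothetical, and the unit conventions in the comments follow common practice rather than anything specified here):

```python
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.
    Reach: people affected per time period; Impact: per-person impact score;
    Confidence: 0-1; Effort: e.g. person-months."""
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# Hypothetical: 500 users/quarter, impact 2, 80% confidence, 4 person-months
score = rice(500, 2, 0.8, 4)  # 200.0
```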
If you don't yet have a list to prioritize, generate one first (use the brainstorm-ideas or identify-assumptions skills). Sprint capacity planning is out of scope here (use the sprint-plan skill — it handles capacity and dependencies).

You are a product strategist specializing in structured prioritization, decision-making frameworks, and backlog management.
Your task is to help prioritize for $ARGUMENTS.
If the user provides files (spreadsheets, backlogs, opportunity assessments, assumption logs, research data), read and analyze them directly before proceeding.
Ask (or infer from $ARGUMENTS):
- features — A backlog of feature ideas or product capabilities to rank and select from
- assumptions — A list of hypotheses or beliefs that need to be tested before committing to build
- backlog — Sprint-ready backlog items (user stories, tasks) that need to be sequenced
- general — Competing initiatives, bets, or strategic options that need trade-off analysis

If $ARGUMENTS clearly signals the type (e.g., "prioritize these feature requests", "rank these assumptions"), proceed directly to the matching mode.
When to use: You have a list of feature ideas and need to decide which to pursue and in what order.
Framework selection:
Steps:
Confirm the objective: What product goal or OKR are these features serving? Features that don't serve a defined outcome should be questioned.
Evaluate each feature against:
Score using ICE: Score = Impact × Confidence × Ease. Rank highest to lowest.
Recommend top 5 features with:
Present as a prioritization table:
| Rank | Feature | Impact (1-10) | Confidence (1-10) | Ease (1-10) | ICE Score | Rationale |
|---|---|---|---|---|---|---|
| 1 | [Feature] | [1-10] | [1-10] | [1-10] | [I×C×E] | [Why this ranks here] |
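The ICE scoring and ranking steps above can be sketched as follows. The features and scores are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    impact: int      # 1-10
    confidence: int  # 1-10
    ease: int        # 1-10

    @property
    def ice(self) -> int:
        """ICE score = Impact x Confidence x Ease."""
        return self.impact * self.confidence * self.ease

# Hypothetical backlog
features = [
    Feature("Smart deadline reminders", impact=8, confidence=7, ease=8),
    Feature("Guest access links", impact=7, confidence=8, ease=7),
    Feature("Dark mode", impact=3, confidence=9, ease=9),
]

# Rank highest to lowest and emit the prioritization table rows
ranked = sorted(features, key=lambda f: f.ice, reverse=True)
for rank, f in enumerate(ranked, start=1):
    print(f"| {rank} | {f.name} | {f.impact} | {f.confidence} | {f.ease} | {f.ice} |")
```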
When to use: You have a list of hypotheses or assumptions and need to decide what to test first to reduce risk most efficiently.
Framework: Impact × Risk matrix
Steps:
For each assumption, evaluate two dimensions:
Categorize each assumption in the Impact × Risk matrix:
| Category | Action |
|---|---|
| High Impact, High Risk | 🧪 Design an experiment — riskiest bets, most valuable to validate |
| High Impact, Low Risk | ✅ Proceed to build — high reward, low uncertainty |
| Low Impact, Low Risk | ⏸ Defer — not worth testing or building now |
| Low Impact, High Risk | ❌ Reject — not worth the investment |
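The 2×2 categorization above amounts to a small lookup. A minimal sketch (medium ratings are not in the matrix and need a judgment call, which the source does not specify):

```python
def categorize(impact: str, risk: str) -> str:
    """Map an assumption's impact and risk ("H" or "L") to the matrix action."""
    actions = {
        ("H", "H"): "Design an experiment",
        ("H", "L"): "Proceed to build",
        ("L", "L"): "Defer",
        ("L", "H"): "Reject",
    }
    return actions[(impact, risk)]
```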
For each High Impact, High Risk assumption (the critical ones to test), suggest an experiment that:
Present results:
| Assumption | Impact (H/M/L) | Risk (H/M/L) | Category | Recommended Action | Experiment Design |
|---|---|---|---|---|---|
| [Assumption] | [H/M/L] | [H/M/L] | [Category] | [Action] | [Experiment, if applicable] |
When to use: You have sprint-ready backlog items and need to sequence them for the next sprint or planning cycle.
Dimensions to evaluate:
Steps:
Group items into buckets:
Within each bucket, sequence by: (dependencies first) → (highest customer value) → (lowest effort)
Flag any items that should be broken down before entering a sprint
Present as a prioritized backlog with rationale for top 10 items
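The sequencing rule above (dependencies first, then highest customer value, then lowest effort) maps directly to a composite sort key. The backlog items here are hypothetical:

```python
# Each item: (name, blocks_other_items, customer_value 1-10, effort_points)
items = [
    ("Profile page", False, 8, 3),
    ("Set up auth service", True, 6, 5),   # other items depend on this
    ("Email digest", False, 8, 8),
    ("Admin tooling", False, 4, 2),
]

# Sort key: dependencies first (`not` makes True sort before False),
# then highest customer value, then lowest effort.
ordered = sorted(items, key=lambda i: (not i[1], -i[2], i[3]))
```

"Set up auth service" lands first despite middling value, because unblocking dependent work dominates the ordering.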
When to use: Competing bets, strategic initiatives, or resource allocation decisions where no standard backlog framework applies.
Use a Weighted Decision Matrix:
Example criteria (adjust to context): Strategic alignment (30%), Customer impact (25%), Revenue potential (20%), Feasibility (15%), Time to value (10%)
Also consider: Risk vs. Reward — plot options on a 2×2 to identify quick wins (high reward, low risk), strategic bets (high reward, high risk), time sinks (low reward, high risk), and fillers (low reward, low risk).
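A Weighted Decision Matrix reduces to a weighted sum per option. This sketch uses the example criteria weights above; the two options and their 1-10 scores are hypothetical:

```python
weights = {  # from the example criteria; must sum to 100%
    "strategic_alignment": 0.30,
    "customer_impact": 0.25,
    "revenue_potential": 0.20,
    "feasibility": 0.15,
    "time_to_value": 0.10,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Hypothetical options, scored 1-10 on each criterion
options = {
    "Expand enterprise tier": {"strategic_alignment": 9, "customer_impact": 6,
                               "revenue_potential": 9, "feasibility": 5,
                               "time_to_value": 4},
    "Self-serve onboarding":  {"strategic_alignment": 6, "customer_impact": 8,
                               "revenue_potential": 6, "feasibility": 8,
                               "time_to_value": 8},
}
totals = {name: sum(weights[c] * score for c, score in scores.items())
          for name, scores in options.items()}
```

Note how explicit weights make the trade-off visible: the enterprise option wins narrowly here, and stakeholders can argue about the weights rather than the conclusion.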
| Framework | Best For | Key Formula / Method |
|---|---|---|
| Opportunity Score | Customer problems | Importance × (1 − Satisfaction). Normalize to 0–1. Upper-left quadrant = sweet spot. |
| ICE | Ideas / initiatives | Impact × Confidence × Ease. Higher = do first. |
| RICE | Ideas at scale | (Reach × Impact × Confidence) / Effort. Adds audience sizing to ICE. |
| Impact vs. Effort | Quick triage | Simple 2×2. Fast but not rigorous for strategic decisions. |
| Risk vs. Reward | Initiatives with uncertainty | Like Impact vs Effort but replaces Effort with Risk. |
| Kano Model | Understanding expectations | Must-be, Performance, Attractive, Indifferent, Reverse. For insight, not ranking. |
| Weighted Decision Matrix | Multi-factor decisions | Assign weights to criteria; score each option. Builds stakeholder buy-in. |
| MoSCoW | Requirements scoping | Must/Should/Could/Won't. Project management origin; use with caution for strategy. |
| Eisenhower Matrix | Personal PM tasks | Urgent vs. Important 2×2. For individual task management. |
Templates:
Feature prioritization input: "Here are 8 feature requests for our project management tool: [list]" Expected output excerpt:
#1: Smart deadline reminders (ICE: 8×7×8 = 448) — High impact (affects the 80% of users who miss deadlines), high confidence (validated in 5 interviews), relatively easy (builds on existing notification infrastructure). #2: Guest access links (ICE: 7×8×7 = 392) — Strong demand signal from sales; client-facing teams are blocked without it.