Activate for: prioritise, prioritization, backlog prioritisation, backlog order, what to build first, RICE, ICE, MoSCoW, Kano, value vs effort, impact vs effort, backlog ranking, roadmap prioritisation, compare features, what is most important, which feature, product decision, build order, quarterly priorities, should we build this, feature value, rank backlog. NOT for: roadmap planning (use official /roadmap-update), sprint planning (use official /sprint-planning), metrics review (use official /metrics-review).
`npx claudepluginhub panaversity/agentfactory-business-plugins --plugin product-strategy`
This skill uses the workspace's default tool permissions.
Applies RICE, ICE, MoSCoW, Kano, Value vs. Effort, and Opportunity Scoring frameworks to rank features, backlogs, and roadmap items by impact, effort, risk, and strategic alignment. Use for prioritising what to build next, backlog grooming, scope decisions, and trade-off evaluation.
Before prioritising, load `product.local.md` for product context,
current themes, and engineering capacity. If not configured, ask the user
for product context and current strategic priorities.
Ask: what is the primary prioritisation challenge?
"We have too many features and need to rank them" -> Use RICE (reach, impact, confidence, effort)
"We need a quick 2x2 without complex scoring" -> Use Value vs. Effort matrix
"We need to sort a large backlog by customer demand vs. complexity" -> Use Kano model (basic needs / performance needs / delighters)
"We need to communicate to stakeholders what is MUST vs. SHOULD vs. COULD" -> Use MoSCoW
"We need to evaluate a single feature request -- yes or no" -> Use the single-feature evaluation framework
RICE Score = (Reach x Impact x Confidence) / Effort
Reach: % of active users/accounts that will use this in the first 3 months after launch [0-100%]
Impact: How much will this improve the experience for those who use it? 1 = minimal | 2 = moderate | 3 = significant (be conservative -- overestimating is common)
Confidence: How confident are you in the Reach and Impact estimates? High = 80-100% | Medium = 50-80% | Low = 25-50% [use the lower number if you have less data]
Effort: How much engineering time does this require? Express in person-sprints (1 sprint = ~2 weeks x 1 engineer)
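To make the arithmetic concrete, here is a minimal Python sketch; the `BacklogItem` name and field layout are assumptions for illustration, not part of this skill.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    reach: float       # 0.0-1.0: share of active users in the first 3 months
    impact: int        # 1 = minimal, 2 = moderate, 3 = significant
    confidence: float  # 0.25-1.0: confidence in the reach/impact estimates
    effort: float      # person-sprints (~2 weeks x 1 engineer), must be > 0

    def rice(self) -> float:
        # RICE Score = (Reach x Impact x Confidence) / Effort
        return (self.reach * self.impact * self.confidence) / self.effort
```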
RICE OUTPUT FORMAT:
| Item | Reach | Impact | Confidence | Effort | RICE | Rank |
|---|---|---|---|---|---|---|
| [Item] | [X]% | [1/2/3] | [X]% | [N sprints] | [Score] | [N] |
IMPORTANT: Show all assumptions. RICE scores are only as good as the estimates behind them. A score from bad data is worse than no score -- it creates false precision.
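Continuing the hypothetical `BacklogItem` sketch above, ranking items and rendering the table might look like this:

```python
def rice_table(items: list[BacklogItem]) -> str:
    # Sort by RICE score, highest first, then emit the markdown table.
    ranked = sorted(items, key=lambda i: i.rice(), reverse=True)
    rows = [
        "| Item | Reach | Impact | Confidence | Effort | RICE | Rank |",
        "|---|---|---|---|---|---|---|",
    ]
    for rank, item in enumerate(ranked, start=1):
        rows.append(
            f"| {item.name} | {item.reach:.0%} | {item.impact} "
            f"| {item.confidence:.0%} | {item.effort:g} sprints "
            f"| {item.rice():.2f} | {rank} |"
        )
    return "\n".join(rows)

# Example call with made-up numbers:
# print(rice_table([BacklogItem("SSO", 0.30, 2, 0.80, 3.0)]))
```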
CHALLENGE 1: STRATEGIC OVERRIDE TEST. Is there any item that scored low that you would build anyway? If yes: write the override reason explicitly. Hidden strategic overrides are where backlogs go wrong. Common override reasons: CEO commitment / competitive necessity / technical prerequisite / enterprise deal dependency. None of these are wrong -- but they must be documented as overrides, not buried in a score.
CHALLENGE 2: DATA GAP TEST. For each item with confidence below 50%: name the data that is missing, and recommend a discovery spike that would answer the open question before the item is committed (these feed DISCOVERY SPIKES RECOMMENDED in the output below).
CHALLENGE 3: "WHAT WOULD WE REGRET?" TEST. Ignore all scores. Ask: which item, if we shipped nothing else this quarter, would our best customers be most grateful for? Does it match the top scorer? If not: why the gap? (The gap is usually a data quality issue in the scoring -- fix the data, not just override the score.)
After running the framework and three challenges, produce:
QUARTERLY PRIORITY DECISION
================================================================
PRIORITY 1 (must ship):
[Item] -- [One-sentence rationale]
PRIORITY 2 (ships if P1 goes well):
[Item] -- [One-sentence rationale]
STRETCH GOAL (ships if capacity allows):
[Item] -- [One-sentence rationale]
EXPLICITLY NOT BUILDING THIS QUARTER:
[Item] -- [One-sentence rationale -- this is the most important entry]
STRATEGIC OVERRIDES DOCUMENTED:
[Item] -- [Override reason -- why scoring was overridden]
DISCOVERY SPIKES RECOMMENDED:
[Item] -- [What question the spike needs to answer]
================================================================
For a yes/no decision on one feature request:
FEATURE EVALUATION: [Feature name]
---------------------------------------------------------
Request source: [Customer / internal / competitor pressure]
Frequency of request: [How often; from how many sources]
User problem: [What problem does it solve?]
Alternative solutions: [Can this be solved another way?]
Effort: [Rough estimate]
Opportunity cost: [What does NOT get built if we build this?]
Confidence in value: [Low / Medium / High -- with basis]
RECOMMENDATION: [BUILD / DEFER / DECLINE / DISCOVERY SPIKE]
Rationale: [2-3 sentences]
---------------------------------------------------------
This skill handles backlog prioritisation and framework scoring. For related PM workflows:
/roadmap-update, /sprint-planning, /stories (from this plugin), /metrics-review.
ALL OUTPUTS REQUIRE REVIEW BY THE PM AND STAKEHOLDERS BEFORE COMMITMENT.