prioritization (Skill)

Prioritization techniques including MoSCoW, Kano model, weighted scoring, and value-effort matrices. Ranks requirements, features, backlog items, and investment decisions.

From business-analysis.

Install (run in your terminal):

```shell
npx claudepluginhub melodic-software/claude-code-plugins --plugin business-analysis
```

Tool Access

This skill is limited to using the following tools: Read, Write, Glob, Grep, Task, Skill, AskUserQuestion

Skill Content

Prioritization

Systematically rank and prioritize requirements, features, backlog items, and initiatives using proven prioritization frameworks. Supports MoSCoW, Kano model, weighted scoring, and value-effort analysis.

What is Prioritization?

Prioritization is the process of determining relative importance and ordering of items to focus resources on what matters most. Effective prioritization balances:

  • Value: Benefit to customers or business
  • Effort: Cost, time, and resources required
  • Risk: Uncertainty and potential downsides
  • Dependencies: Constraints and sequencing

Prioritization Techniques

MoSCoW Method

Categorical prioritization for timeboxed delivery:

| Category | Definition | Guidance |
|----------|------------|----------|
| Must | Non-negotiable, required for success | Without these, delivery is a failure |
| Should | Important but not critical | Significant value, workarounds exist |
| Could | Desirable if resources permit | Nice to have, enhances experience |
| Won't | Explicitly excluded this time | Not now, maybe later |

When to Use: Sprint planning, release scoping, MVP definition, timeboxed projects

Rules:

  • Musts should be ~60% of capacity (leave room for unknowns)
  • Won'ts are explicitly stated (not silently dropped)
  • Categories are relative to the timebox, not absolute
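The ~60% capacity rule lends itself to a quick mechanical check. A minimal sketch in Python (item names and story-point estimates are hypothetical):

```python
# Check the ~60% rule: Must items should consume at most ~60% of capacity,
# leaving headroom for unknowns. Each item maps to a (category, estimate) pair.
def must_load(items: dict, capacity: float) -> float:
    """Return the fraction of capacity consumed by Must items."""
    must_total = sum(est for cat, est in items.values() if cat == "Must")
    return must_total / capacity

backlog = {  # hypothetical backlog with story-point estimates
    "Login": ("Must", 8),
    "Audit log": ("Must", 5),
    "CSV export": ("Should", 5),
    "Dark mode": ("Could", 3),
}

load = must_load(backlog, capacity=20)
print(f"Must load: {load:.0%}")  # 13/20 -> "Must load: 65%"
```

A result above ~60% signals the Must bucket is overloaded for the timebox.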

Kano Model

Customer satisfaction-based classification:

| Category | If Present | If Absent | Detection |
|----------|------------|-----------|-----------|
| Basic (Must-Be) | No increase in satisfaction | Major dissatisfaction | Customers assume these exist |
| Performance (Linear) | Proportional satisfaction | Proportional dissatisfaction | Customers explicitly request |
| Delighter (Excitement) | High satisfaction | No dissatisfaction | Customers don't expect |
| Indifferent | No impact | No impact | No reaction either way |
| Reverse | Dissatisfaction | Satisfaction | Segment prefers absence |

When to Use: Product feature prioritization, understanding customer needs, differentiating from competitors

Kano Questionnaire:

  • Functional: "How would you feel if this feature was present?"
  • Dysfunctional: "How would you feel if this feature was absent?"

Responses: Like it, Expect it, Neutral, Can tolerate, Dislike it
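The functional/dysfunctional answer pair maps to a category via the commonly published Kano evaluation table; a sketch of that lookup (response labels are shortened from the list above, and "Questionable" flags contradictory answers):

```python
# Classify a feature from its Kano questionnaire answers using the standard
# evaluation table: rows = functional answer, columns = dysfunctional answer.
RESPONSES = ["Like", "Expect", "Neutral", "Tolerate", "Dislike"]

# A=Delighter, O=Performance, M=Basic, I=Indifferent, R=Reverse, Q=Questionable
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: Like
    ["R", "I", "I", "I", "M"],  # functional: Expect
    ["R", "I", "I", "I", "M"],  # functional: Neutral
    ["R", "I", "I", "I", "M"],  # functional: Tolerate
    ["R", "R", "R", "R", "Q"],  # functional: Dislike
]

NAMES = {"A": "Delighter", "O": "Performance", "M": "Basic",
         "I": "Indifferent", "R": "Reverse", "Q": "Questionable"}

def kano_category(functional: str, dysfunctional: str) -> str:
    """Look up the Kano category for one functional/dysfunctional answer pair."""
    row = RESPONSES.index(functional)
    col = RESPONSES.index(dysfunctional)
    return NAMES[TABLE[row][col]]

print(kano_category("Like", "Dislike"))    # Performance
print(kano_category("Expect", "Dislike"))  # Basic
print(kano_category("Like", "Neutral"))    # Delighter
```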

Weighted Scoring Matrix

Multi-criteria quantitative comparison:

Step 1: Define Criteria

| Criterion | Weight | Description |
|-----------|--------|-------------|
| Customer Value | 40% | Impact on customer satisfaction |
| Strategic Fit | 25% | Alignment with goals |
| Effort | 20% | Development cost (inverse) |
| Risk | 15% | Uncertainty/failure potential (inverse) |

Step 2: Score Items

| Item | Customer Value (1-5) | Strategic Fit (1-5) | Effort (1-5) | Risk (1-5) | Weighted Score |
|------|----------------------|---------------------|--------------|--------------|----------------|
| A | 5 | 4 | 3 | 4 | 4.20 |
| B | 3 | 5 | 4 | 3 | 3.70 |

Step 3: Calculate Weighted Score

Score = Σ (Weight × Score)
Item A = (0.40×5) + (0.25×4) + (0.20×3) + (0.15×4) = 4.20

When to Use: Complex trade-offs, multiple stakeholders, defensible decisions
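The weighted-score formula is easy to script; a minimal sketch using the example weights and the scores for items A and B above (scores are used as given, matching the worked example):

```python
# Weighted score = sum over criteria of (weight x score), as in the
# calculation shown above. Weights must sum to 1.0.
def weighted_score(scores: dict, weights: dict) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

weights = {"customer_value": 0.40, "strategic_fit": 0.25,
           "effort": 0.20, "risk": 0.15}

item_a = {"customer_value": 5, "strategic_fit": 4, "effort": 3, "risk": 4}
item_b = {"customer_value": 3, "strategic_fit": 5, "effort": 4, "risk": 3}

print(round(weighted_score(item_a, weights), 2))  # 4.2
print(round(weighted_score(item_b, weights), 2))  # 3.7
```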

Value vs Effort Matrix

2×2 prioritization for quick decisions:

```mermaid
quadrantChart
    title Value vs Effort
    x-axis Low Effort --> High Effort
    y-axis Low Value --> High Value
    quadrant-1 Big Bets (Plan carefully)
    quadrant-2 Quick Wins (Do first)
    quadrant-3 Fill-ins (Do if time permits)
    quadrant-4 Money Pits (Avoid)
```

| Quadrant | Value | Effort | Action |
|----------|-------|--------|--------|
| Quick Wins | High | Low | Do first |
| Big Bets | High | High | Plan carefully |
| Fill-ins | Low | Low | Do if time permits |
| Money Pits | Low | High | Avoid or deprioritize |

When to Use: Fast initial triage, backlog grooming, stakeholder alignment
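If items are placed on a normalized 0-1 scale, as in the chart coordinates above, quadrant assignment reduces to two comparisons; a sketch assuming a midpoint threshold:

```python
# Assign an item to a 2x2 value/effort quadrant. The 0.5 threshold is a
# hypothetical midpoint; teams often calibrate it to their own scale.
def quadrant(value: float, effort: float, threshold: float = 0.5) -> str:
    if value >= threshold:
        return "Quick Win" if effort < threshold else "Big Bet"
    return "Fill-in" if effort < threshold else "Money Pit"

print(quadrant(value=0.9, effort=0.2))  # Quick Win
print(quadrant(value=0.3, effort=0.8))  # Money Pit
```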

RICE Scoring

Product management prioritization:

| Factor | Definition | Calculation |
|--------|------------|-------------|
| Reach | Users/customers affected | Number per time period |
| Impact | Effect on each user | 0.25 (minimal) to 3 (massive) |
| Confidence | Certainty of estimates | 0.5 (low) to 1 (high) |
| Effort | Person-months required | Number |

RICE Score = (Reach × Impact × Confidence) / Effort

When to Use: Product roadmap prioritization, feature comparison
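The formula translates directly to code; a sketch with hypothetical inputs:

```python
# RICE = (Reach x Impact x Confidence) / Effort, per the formula above.
def rice_score(reach: float, impact: float, confidence: float,
               effort: float) -> float:
    return (reach * impact * confidence) / effort

# Hypothetical: 2000 users/quarter, high impact (2), 80% confidence,
# 4 person-months of effort.
print(rice_score(reach=2000, impact=2, confidence=0.8, effort=4))  # 800.0
```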

WSJF (Weighted Shortest Job First)

SAFe/Lean prioritization for flow:

WSJF = Cost of Delay / Job Duration

Cost of Delay = User/Business Value + Time Criticality + Risk Reduction

| Factor | Score | Description |
|--------|-------|-------------|
| User/Business Value | 1-20 | Benefit to users or business |
| Time Criticality | 1-20 | Urgency, deadlines, decay |
| Risk Reduction | 1-20 | Risk/opportunity addressed |
| Job Duration | 1-20 | Relative size (inverted) |

When to Use: Continuous flow environments, maximizing value delivery
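Ranking by WSJF can be sketched as follows (feature names and scores are hypothetical relative values):

```python
# WSJF = Cost of Delay / Job Duration, where Cost of Delay is the sum of
# the three value components scored above.
def wsjf(business_value: int, time_criticality: int,
         risk_reduction: int, job_duration: int) -> float:
    cost_of_delay = business_value + time_criticality + risk_reduction
    return cost_of_delay / job_duration

jobs = {
    "Feature A": wsjf(13, 8, 5, 2),  # CoD 26, small job -> 13.0
    "Feature B": wsjf(20, 3, 1, 8),  # CoD 24, large job -> 3.0
}
ranked = sorted(jobs, key=jobs.get, reverse=True)
print(ranked)  # highest WSJF first: ['Feature A', 'Feature B']
```

Note how the smaller job wins despite a slightly lower Cost of Delay; that is the "shortest job first" effect.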

Workflow

Phase 1: Prepare

Step 1: Gather Items to Prioritize

```markdown
## Prioritization Session

**Date:** [ISO date]
**Scope:** [What's being prioritized]
**Stakeholders:** [Who's involved]
**Constraint:** [Timebox, budget, capacity]

### Items

| ID | Description | Owner |
|----|-------------|-------|
| 1 | [Item 1] | [Name] |
| 2 | [Item 2] | [Name] |
```

Step 2: Select Prioritization Technique

| Situation | Recommended Technique |
|-----------|-----------------------|
| Sprint/release planning | MoSCoW |
| Product feature decisions | Kano + RICE |
| Trade-off decisions | Weighted Scoring |
| Quick triage | Value vs Effort |
| Continuous flow | WSJF |
| Multiple criteria | Weighted Scoring |

Phase 2: Execute

Step 1: Apply Selected Technique

Follow the specific technique workflow (see above).

Step 2: Validate Results

  • Do top priorities align with strategy?
  • Are dependencies respected?
  • Does the team have capacity?
  • Are stakeholders aligned?

Step 3: Document Rationale

```markdown
## Prioritization Rationale

### Top Priorities

1. **[Item A]** - Score: X
   - Rationale: [Why this is top priority]
   - Dependencies: [What it depends on]

2. **[Item B]** - Score: Y
   - Rationale: [Why this is second]
   - Dependencies: [What it depends on]

### Deferred Items

- **[Item C]** - Reason: [Why deferred]
```

Phase 3: Communicate

Step 1: Create Prioritized Backlog

```markdown
## Prioritized Backlog

| Rank | Item | Priority/Score | Owner | Target |
|------|------|----------------|-------|--------|
| 1 | [Item A] | Must / 4.5 | [Name] | Sprint 1 |
| 2 | [Item B] | Must / 4.2 | [Name] | Sprint 1 |
| 3 | [Item C] | Should / 3.8 | [Name] | Sprint 2 |
```

Step 2: Communicate Decisions

  • Share prioritization results with stakeholders
  • Explain rationale for key decisions
  • Address concerns about deprioritized items
  • Set expectations for what's not included

Output Formats

Narrative Summary

```markdown
## Prioritization Summary

**Session:** [Scope/context]
**Date:** [ISO date]
**Technique:** [MoSCoW/Kano/Weighted Scoring/etc.]
**Facilitator:** prioritization-analyst

### Results Overview

- **Total Items:** N
- **Top Priority:** [Count]
- **Deferred:** [Count]

### Priority Distribution

| Category | Count | % |
|----------|-------|---|
| Must/Quick Wins | X | Y% |
| Should/Big Bets | X | Y% |
| Could/Fill-ins | X | Y% |
| Won't/Money Pits | X | Y% |

### Key Decisions

1. **[Top Item]**: Prioritized because [reason]
2. **[Deferred Item]**: Deferred because [reason]

### Next Steps

1. Begin work on top priority items
2. Re-prioritize at [next review point]
```

Structured Data (YAML)

```yaml
prioritization:
  version: "1.0"
  date: "2025-01-15"
  scope: "Q1 Feature Backlog"
  technique: "weighted_scoring"
  facilitator: "prioritization-analyst"

  criteria:
    - name: "Customer Value"
      weight: 0.40
    - name: "Strategic Fit"
      weight: 0.25
    - name: "Effort"
      weight: 0.20
      inverse: true
    - name: "Risk"
      weight: 0.15
      inverse: true

  items:
    - id: "FEAT-001"
      name: "User Dashboard"
      scores:
        customer_value: 5
        strategic_fit: 4
        effort: 3
        risk: 4
      weighted_score: 4.20
      priority: 1
      rationale: "Highest customer value, manageable effort"

    - id: "FEAT-002"
      name: "API Integration"
      scores:
        customer_value: 3
        strategic_fit: 5
        effort: 4
        risk: 3
      weighted_score: 3.70
      priority: 2
      rationale: "Strong strategic alignment"

  moscow_summary:
    must: ["FEAT-001"]
    should: ["FEAT-002", "FEAT-003"]
    could: ["FEAT-004"]
    wont: ["FEAT-005"]
```

Mermaid Visualizations

Value-Effort Matrix:

```mermaid
quadrantChart
    title Prioritization Matrix
    x-axis Low Effort --> High Effort
    y-axis Low Value --> High Value
    quadrant-1 Big Bets
    quadrant-2 Quick Wins
    quadrant-3 Fill-ins
    quadrant-4 Money Pits
    "Feature A": [0.2, 0.9]
    "Feature B": [0.3, 0.7]
    "Feature C": [0.7, 0.8]
    "Feature D": [0.8, 0.3]
    "Feature E": [0.2, 0.2]
```

MoSCoW Distribution:

```mermaid
pie title MoSCoW Distribution
    "Must" : 3
    "Should" : 4
    "Could" : 5
    "Won't" : 2
```

When to Use Each Technique

| Technique | Best For | Team Size | Time Required |
|-----------|----------|-----------|---------------|
| MoSCoW | Sprint/release planning | Any | 30-60 min |
| Kano | Product features | Product team | 2-4 hours |
| Weighted Scoring | Complex trade-offs | Cross-functional | 1-2 hours |
| Value vs Effort | Quick triage | Any | 15-30 min |
| RICE | Product roadmap | Product team | 1-2 hours |
| WSJF | Continuous flow | SAFe teams | 30-60 min |

Common Pitfalls

| Pitfall | Prevention |
|---------|------------|
| Everything is "Must" | Enforce category limits (~60% of capacity) |
| HiPPO (highest paid person's opinion) | Use objective scoring criteria |
| Ignoring effort | Always consider cost/effort dimension |
| Static prioritization | Re-prioritize regularly as context changes |
| Overcomplicating | Start simple, add complexity only if needed |
| Ignoring dependencies | Map dependencies before finalizing order |

Integration

Upstream

  • Requirements - Items to prioritize
  • stakeholder-analysis - Stakeholder input on value
  • swot-pestle-analysis - Strategic context

Downstream

  • Sprint planning - Ordered backlog
  • Roadmaps - Prioritized initiatives
  • decision-analysis - Detailed option evaluation

Related Skills

  • decision-analysis - For complex option evaluation
  • stakeholder-analysis - Stakeholder input on priorities
  • risk-analysis - Risk dimension of prioritization
  • capability-mapping - Capability investment prioritization

User-Facing Interface

When invoked directly by the user, this skill operates as follows.

Arguments

  • <items-or-context>: Items to prioritize (inline list, file reference, or context description)
  • --mode: Prioritization method (default: moscow)
    • moscow: Must/Should/Could/Won't categorization (~4K tokens)
    • kano: Customer satisfaction categorization (~5K tokens)
    • weighted: Multi-criteria weighted scoring (~6K tokens)
    • all: All three methods for comparison (~12K tokens)
  • --output: Output format (default: both)
    • yaml: Structured YAML for downstream processing
    • markdown: Formatted markdown tables
    • both: Both formats
  • --dir: Output directory (default: docs/analysis/)

Execution Workflow

  1. Parse Arguments - Extract items, mode, and output format. If no items provided, ask the user what to prioritize.
  2. Gather Items - Collect from inline list, file reference, or context-based exploration.
  3. Execute Based on Mode:
    • MoSCoW: Categorize into Must/Should/Could/Won't with stakeholder input on business criticality, dependencies, compliance, and user impact.
    • Kano: Classify by satisfaction impact (Basic, Performance, Delighter, Indifferent, Reverse) considering customer expectations and competitive baseline.
    • Weighted: Define criteria with weights, score each item 1-5, calculate weighted scores, and rank.
    • All: Run all three methods, compare for consistency, highlight conflicts, and synthesize final priority.
  4. Generate Output - Produce YAML structure, markdown tables (MoSCoW summary, weighted scoring matrix), Mermaid visualizations (quadrantChart, pie chart), and summary report.
  5. Save Results - Save to docs/analysis/prioritization.yaml and/or docs/analysis/prioritization.md (or custom --dir).
  6. Suggest Follow-Ups - Recommend effort estimation for high-priority items, risk analysis for high-risk items, and capability-mapping for alignment.

Version History

  • v1.0.0 (2025-12-26): Initial release
Stats

- Parent repo stars: 40
- Parent repo forks: 6
- Last commit: Feb 16, 2026