Use when prioritizing opportunities or features using ICE or RICE scoring with documented rationale and trade-offs. Also triggers on 'prioritize', 'ICE score', 'RICE score', 'what comes first', 'priorização', 'ranking', or 'P0/P1/P2'.
From the pm plugin (etusdigital/etus-plugins). This skill uses the workspace's default tool permissions.
knowledge/template.md
Transform the OST into a decision: what enters now, what waits, and why. This document makes trade-offs explicit (impact vs effort vs confidence) and prevents "everything is a priority." It converts opportunities into P0/P1/P2 classification with documented rationale, enabling the team to open PRDs for the right items.
Principle: Sequencing is choosing.
Reference: .claude/skills/orchestrator/dependency-graph.yaml
BLOCKS (must exist — auto-invoke if missing):
- docs/ets/projects/{project-slug}/planning/ost.md — Opportunities to prioritize come from the OST (O-# references).

ENRICHES (improves output — warn if missing):
- docs/ets/projects/{project-slug}/discovery/baseline.md — Baseline metrics inform impact assessment (baseline → target).
- docs/ets/projects/{project-slug}/discovery/product-vision.md — BO-# objectives inform priority alignment with strategic direction.

Resolution protocol:
1. dependency-graph.yaml → prioritization.requires: [ost]
2. Does docs/ets/projects/{project-slug}/planning/ost.md exist, is it non-empty, and is it not <!-- STATUS: DRAFT -->?
3. If not → invoke the ost skill → wait → continue.

Position in workflow: After OST, before PRD. P0 items from this document become features in the PRD.
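The resolution check above can be sketched as a small script. This is illustrative only (the real check is performed by the skill at runtime); the function name is an assumption, while the paths and DRAFT marker follow this document's conventions:

```python
from pathlib import Path

DRAFT_MARKER = "<!-- STATUS: DRAFT -->"

def ost_is_ready(project_slug: str, root: str = "docs/ets/projects") -> bool:
    """Return True if the BLOCKS dependency (ost.md) exists, is non-empty,
    and is not marked as a draft."""
    ost = Path(root) / project_slug / "planning" / "ost.md"
    if not ost.is_file():
        return False  # missing -> auto-invoke the ost skill
    text = ost.read_text(encoding="utf-8").strip()
    if not text:
        return False  # an empty file counts as missing
    return DRAFT_MARKER not in text  # a DRAFT ost.md does not satisfy BLOCKS
```

If this returns False, the skill pauses, invokes the ost skill, and resumes only once the file passes all three checks.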
MANDATORY: This skill MUST write its artifact to disk before declaring complete.
Create the target directory (mkdir -p) if needed.

If the Write fails: report the error to the user. Do NOT proceed to the next skill.
This skill follows the ETUS interaction standard. Your role is a thinking partner, not an interviewer — challenge inflated impact scores, push for evidence behind confidence ratings, probe whether effort estimates have been validated with the Tech Lead, and flag when everything is being marked P0. Prioritization work is about honest trade-offs — these patterns ensure the user makes deliberate choices, not wishful rankings.
One question per message — Ask one question, wait for the answer, then ask the next. Prioritization questions require the user to make judgment calls, so give them space. Use the AskUserQuestion tool when available for structured choices.
3-4 suggestions for choices — When the user needs to choose a direction (e.g., ICE vs RICE, how to handle ties, whether to include dependencies as a factor), present 3-4 concrete options with a brief description of each. Highlight your recommendation. Let the user pick before proceeding.
Propose approaches before generating — Before generating any content section, propose 2-3 approaches with tradeoffs. Example: "I see two methods for this prioritization: (A) ICE — simpler, faster, works well when reach is roughly equal across items; (B) RICE — adds Reach dimension, better when items have very different audience sizes. I recommend A because the OST opportunities target the same user segment."
Present output section-by-section — Don't generate the full document at once. Present each major section (Method selection, then each item's scoring one at a time, then ranking, then trade-offs), ask "Does this capture it well? Anything to adjust?" and only proceed after approval.
Track outstanding questions — If something can't be answered now, classify it:
Multiple handoff options — At completion, present 3-4 next steps as options instead of a single fixed path.
Resume existing work — Before starting, check if the target artifact already exists at the expected path. If it does, ask the user: "I found an existing prioritization.md at [path]. Should I continue from where it left off, or start fresh?" If resuming, read the document, summarize the current state, and continue from outstanding gaps.
Assess if full process is needed (right-size check) — If the user's input already has clear scores with rationale and a proposed ranking, don't force the full interview. Confirm understanding briefly and offer to skip directly to document generation. Only run the full interactive process when there's genuine ambiguity to resolve.
This skill reads and writes persistent memory to maintain context across sessions.
On start (before any interaction):
- docs/ets/.memory/project-state.md — know where the project is
- docs/ets/.memory/decisions.md — don't re-question closed decisions
- docs/ets/.memory/preferences.md — apply user/team preferences silently
- docs/ets/.memory/patterns.md — apply discovered patterns

On finish (after saving artifact, before CLOSING SUMMARY):
- project-state.md is updated automatically by the PostToolUse hook — do NOT edit it manually.
- python3 .claude/hooks/memory-write.py decision "<decision>" "<rationale>" "<this-skill-name>" "<phase>" "<tag1,tag2>"
- python3 .claude/hooks/memory-write.py preference "<preference>" "<this-skill-name>" "<category>"
- python3 .claude/hooks/memory-write.py pattern "<pattern>" "<this-skill-name>" "<applies_to>"

The .memory/*.md files are read-only views generated automatically from memory.db. Never edit them directly.
Before generating content, challenge the prioritization quality with these questions (ask the most relevant 1-2, not all):
The goal is to sharpen the ranking's quality, not to block progress. If the ranking is well-justified, acknowledge it and move on quickly.
These anti-patterns come from the team's real experience (KB). Flag them actively during the interview and generation:
Everything is P0 — If all items are P0, the prioritization has failed. Push back: "We have [N] P0 items. That means all compete for the same capacity. Which 1-2 are truly non-negotiable?"
Score as automatic decision — The score is a tool, not the truth. Scores inform the ranking but don't dictate it. Dependencies, risks, and strategic alignment also matter. Flag if the user treats score as gospel: "The score suggests [X], but does that account for [dependency/risk]?"
Prioritizing without dependencies — An item that depends on another cannot come first. Always check: "Does this require something else to be built first?"
Hiding constraints — Capacity, legal requirements, integration dependencies, and team availability all affect what can actually be done. Push for disclosure: "Are there constraints (capacity, legal, integrations) that affect this ranking?"
Table without justification — A spreadsheet of numbers without written rationale is not a prioritization document. Every score needs 2-5 lines of justification.
Effort estimated by PM alone — Effort estimates must be validated with the Tech Lead. Flag if missing: "Who validated this effort estimate?"
Load context in this order of priority:
1. If given a [context-path], read that file directly.
2. Check docs/ets/projects/{project-slug}/state/reports/ for upstream planning artifacts.
3. Check docs/ets/projects/{project-slug}/planning/ for ost.md. Load O-# opportunities. Also scan docs/ets/projects/{project-slug}/discovery/ for baseline.md and product-vision.md.

This interview follows a one-question-at-a-time rhythm. Ask each question alone in one message, wait for the user's answer, then decide whether to ask a follow-up or move forward.
Question 1 (ask alone, one message):
"O que estamos priorizando? Oportunidades (O-#) da OST ou solucoes/hipoteses? Recomendacao: primeiro priorize oportunidades; depois detalhe hipoteses dentro do top P0."
Wait for the answer. Extract: unit of prioritization (opportunities, solutions, or hypotheses).
Question 2 (ask alone, one message):
"Qual metodo preferem? Apresento as opcoes: (A) ICE = (Impacto x Confianca) / Esforco — mais simples e rapido, funciona bem quando o alcance e similar entre itens; (B) RICE = (Reach x Impact x Confianca) / Esforco — melhor quando os itens tem audiencias muito diferentes. Recomendo [A or B] porque [rationale based on loaded context]. Qual preferem?"
Wait for the answer.
For each O-# from the OST:
Question 3 (ask alone, one message, for each item):
"Para [O-# — titulo]: qual o impacto esperado? (1-5). Justifique com base em baseline — qual metrica muda, de quanto para quanto?"
Wait for the answer. If impact is 5 without strong evidence, challenge it.
Question 4 (ask alone, one message):
"Qual sua confianca nesse impacto? (1-5). Quais evidencias sustentam? (dados, discovery, historico)"
Wait for the answer. If confidence is 5, push for specific evidence.
Question 5 (ask alone, one message):
"Qual o esforco estimado? (1-5). Essa estimativa foi validada com o Tech Lead? Quais as principais complexidades?"
Wait for the answer. If not validated with Tech Lead, flag it.
After scoring, compute and present:
"Score para O-[#]: ICE = ([I] x [C]) / [E] = [score]. Dependencias: [list]. Riscos: [list]. Prioridade sugerida: [P0/P1/P2] porque [rationale]."
Repeat Block 2 for each item.
Question 6 (ask alone, one message):
"Baseado nos scores, proponho este ranking: [P0/P1/P2 list]. Concorda? Quer ajustar algum item?"
Wait for the answer. If user adjusts, update and re-present.
Question 7 (ask alone, one message):
"O que ficou de fora e por que? Algum trade-off que precisa ser documentado? Quais riscos estamos aceitando?"
Wait for the answer.
The generated docs/ets/projects/{project-slug}/planning/prioritization.md contains:
SST Rule: ICE/RICE scores, P0/P1/P2 ranking, and trade-off decisions ONLY in this document. No other document should redefine scores or the priority ranking.
No new IDs — this document references O-# from OST and generates P0/P1/P2 classification.
Downstream traceability: O-# (from ost.md) → P0 items → PRD-F-# features in prd.md
See knowledge/template.md for the prioritization document template and structure.

ost.md (BLOCKS):
Before marking this document as COMPLETE:
If any check fails → mark document as DRAFT with <!-- STATUS: DRAFT --> at top.
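The DRAFT fallback can be sketched as below. This is a hedged illustration: the function name and the example check names are assumptions, while the marker and behavior follow the rule above:

```python
from pathlib import Path

DRAFT_MARKER = "<!-- STATUS: DRAFT -->"

def finalize(doc_path: Path, checks: dict) -> str:
    """Return COMPLETE if every validation check passed; otherwise
    prepend the DRAFT marker to the document and return DRAFT."""
    text = doc_path.read_text(encoding="utf-8")
    if all(checks.values()):
        return "COMPLETE"
    if not text.startswith(DRAFT_MARKER):
        doc_path.write_text(DRAFT_MARKER + "\n" + text, encoding="utf-8")
    return "DRAFT"
```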
After saving and validating, display:
prioritization.md saved to `docs/ets/projects/{project-slug}/planning/prioritization.md`
Status: [COMPLETE | DRAFT]
Method: [ICE | RICE]
Items scored: [count]
P0: [count] | P1: [count] | P2: [count]
Trade-offs documented: [count]
Then present these options using AskUserQuestion (or as a numbered list if AskUserQuestion is unavailable):
Wait for the user to choose before taking any action. Do not auto-proceed to the next skill.
ost.md (BLOCKS), optionally baseline.md (ENRICHES), product-vision.md (ENRICHES)

Input: Interview notes from Step 3
Action: Generate the document one major section at a time, using the template from knowledge/template.md. For each section:
Section order:
Output: Approved sections assembled into complete prioritization.md
1. Ensure docs/ets/projects/{project-slug}/planning/ exists — create if missing.
2. Save docs/ets/projects/{project-slug}/planning/prioritization.md using the Write tool.
3. Pass the artifact path (docs/ets/projects/{project-slug}/planning/prioritization.md) plus paths to upstream documents (BLOCKS: docs/ets/projects/{project-slug}/planning/ost.md) to the spec reviewer.
4. Tell the user: "Document saved to
docs/ets/projects/{project-slug}/planning/prioritization.md. The spec reviewer approved it. Please review and let me know if you want any changes before we proceed." Wait for the user's response. If they request changes, make them and re-run the spec review. Only proceed to validation after user approval.
| Error | Severity | Recovery | Fallback |
|---|---|---|---|
| BLOCKS dep missing (ost.md) | Critical | Auto-invoke ost skill — opportunities are needed to prioritize | Pause until ost is available |
| BLOCKS dep is DRAFT | Warning | Proceed with available context, noting opportunity quality may be lower | Add <!-- ENRICHMENT_MISSING: ost is DRAFT --> |
| ENRICHES dep missing (baseline.md) | Low | Proceed — impact can be estimated without precise baseline | Note that impact scores may need revision after baseline is defined |
| ENRICHES dep missing (product-vision.md) | Low | Proceed — strategic alignment can be assessed without formal vision | Note that priority alignment may need revision |
| User rates all items as P0 | High | Push back: "If everything is P0, nothing is P0. Force-rank the top 1-2." | Accept max 3 P0 items, flag the rest |
| Effort not validated with Tech Lead | Medium | Flag: "Effort estimate not validated — cap confidence at 3 for this item" | Add <!-- EFFORT: PM estimate only --> |
| Only 1 item to prioritize | Low | Offer a short version — still document rationale and risks | Generate abbreviated document |
| Output validation fails | High | Address gaps, re-present sections to user | Mark as DRAFT |
This skill supports iterative quality improvement when invoked by the orchestrator or user.
| Condition | Action | Document Status |
|---|---|---|
| Completeness >= 90% | Exit loop | COMPLETE |
| Improvement < 5% between iterations | Exit loop (diminishing returns) | DRAFT + notes |
| Max 3 iterations reached | Exit loop | DRAFT + iteration log |
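The exit conditions in the table above can be sketched as follows (an illustration of the loop logic only; the function name and return shape are assumptions):

```python
def should_exit(iteration: int, completeness: float, improvement: float):
    """Decide whether the quality loop stops, mirroring the table above.

    completeness and improvement are percentages (0-100).
    Returns (exit?, resulting document status).
    """
    if completeness >= 90:
        return True, "COMPLETE"
    if improvement < 5:
        return True, "DRAFT"  # diminishing returns -> DRAFT + notes
    if iteration >= 3:
        return True, "DRAFT"  # max iterations reached -> DRAFT + iteration log
    return False, "IN_PROGRESS"
```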
- --quality-loop: enable on any skill invocation
- --no-quality-loop: disable (generates once, validates once)