From product-owner
Review and groom a backlog — classify items by readiness, break large items into deliverables, add acceptance criteria, RICE-score, and recommend actions.
```
npx claudepluginhub hpsgd/turtlestack --plugin product-owner
```

This skill is limited to using the following tools:
Groom the backlog at $ARGUMENTS.
Follow every step below in order. Do not skip steps.
If the argument is a file path, read it. If the argument is "current" or a directory:

- Search the codebase for inline markers: `grep -r "TODO\|FIXME\|HACK\|XXX"`.
- Find `*.md` files containing "backlog", "roadmap", "issues", or "tasks".
- If a git remote is configured, parse `git remote -v`.

Collect every backlog item into a working list. Each item needs: title, description (if available), current status, last activity date, and any existing priority or labels.
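The marker-scanning step above can be sketched in Python. This is an illustration only; the function name and the item dictionary shape are assumptions, not part of the skill:

```python
import re
from pathlib import Path

MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

def collect_marker_items(root: str) -> list[dict]:
    """Scan a source tree for TODO-style markers and return them
    as rough, unclassified backlog items."""
    items = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for lineno, line in enumerate(text.splitlines(), start=1):
            if MARKERS.search(line):
                items.append({
                    "title": line.strip()[:80],
                    "description": f"{path}:{lineno}",
                    "status": "unclassified",
                })
    return items
```

Each collected item then enters the working list alongside items found in markdown files and the remote tracker.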
Classify each item into exactly one of four categories. Apply these criteria strictly — do not give items the benefit of the doubt.
1. **Ready**. All of the following must be true:
2. **Needs Refinement**. Any of the following is true:
3. **Stale**. Any of the following is true:
4. **Blocked**. All of the following are true:
Anti-pattern: Do not classify vague items as "Blocked." An item that says "Build analytics dashboard" with no further detail is "Needs Refinement," not "Blocked." Blocked means there is a specific external impediment.
For each item classified as "Needs Refinement," perform all applicable refinement actions:
Apply the INVEST criteria:
If an item fails the "Small" or "Independent" test, split it. Each child item must independently deliver value — do not create items like "Build backend for X" and "Build frontend for X." Instead, split by user behaviour: "User can create X" and "User can filter X by date."
Write acceptance criteria that pass the ISC Splitting Test:
For each item, list:
Use t-shirt sizes based on complexity AND uncertainty:
| Size | Complexity | Uncertainty | Typical duration |
|---|---|---|---|
| S | Well-understood, similar to past work | Low — we have done this before | 1-3 days |
| M | Moderate complexity, some new territory | Medium — some unknowns but manageable | 3-5 days |
| L | High complexity or significant new territory | High — meaningful unknowns to resolve | 1-2 weeks |
| XL | Too large — must be broken down | N/A | N/A — split this item |
Any item sized as XL must be broken down before it can be scheduled. This is not optional.
Calculate a RICE score for every item classified as "Ready" (including items that were refined into "Ready" in Step 3).
Reach — How many users/accounts will this affect per quarter?
Impact — How much does this improve the affected users' experience?
| Score | Meaning | Example |
|---|---|---|
| 3 | Massive | Eliminates a multi-step manual process entirely |
| 2 | High | Significantly reduces time/effort for a common task |
| 1 | Medium | Noticeable improvement, users would appreciate it |
| 0.5 | Low | Minor convenience, nice-to-have |
| 0.25 | Minimal | Cosmetic or very edge-case improvement |
Confidence — How sure are we about the Reach and Impact estimates?
| Score | Meaning | Basis |
|---|---|---|
| 100% | High | Direct user data, support ticket volume, analytics |
| 80% | Medium | User interviews, strong signals, analogous data |
| 50% | Low | Gut feel, untested hypothesis, no data |
Effort — Person-weeks across all disciplines (design + eng + QA)
Formula: RICE = (Reach × Impact × Confidence) / Effort, where Confidence is expressed as a decimal (e.g. 0.8 for 80%).
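As a minimal sketch, the formula can be expressed in Python (the function name and the example numbers are illustrative):

```python
def rice_score(reach: int, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users/accounts affected per quarter
    impact: 0.25, 0.5, 1, 2, or 3 (per the Impact table)
    confidence: 0.5, 0.8, or 1.0 (50% / 80% / 100%)
    effort: person-weeks across design + eng + QA
    """
    if effort <= 0:
        raise ValueError("effort must be positive")
    return (reach * impact * confidence) / effort

# 500 users/quarter, high impact (2), medium confidence (80%), 4 person-weeks
print(rice_score(500, 2, 0.8, 4))  # 200.0
```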
Create a dependency map showing which items block or are blocked by other items.
Format:
[Item A] --depends on--> [Item B]
[Item C] --depends on--> [Item A]
[Item D] (no dependencies — can start immediately)
Flag any dependency cycles — these indicate a scoping problem that needs to be resolved before scheduling.
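Cycle flagging can be done with a topological sort; Python's standard library handles this directly. The item names below are the hypothetical ones from the format example:

```python
from graphlib import TopologicalSorter, CycleError

# Dependency map: item -> set of items it depends on
deps = {
    "Item A": {"Item B"},
    "Item C": {"Item A"},
    "Item D": set(),  # no dependencies, can start immediately
}

try:
    order = list(TopologicalSorter(deps).static_order())
    print("Valid schedule order:", order)
except CycleError as e:
    # e.args[1] holds the cycle itself, e.g. ["Item A", "Item B", "Item A"]
    print("Dependency cycle found:", e.args[1])
```

If `static_order()` raises `CycleError`, the cycle it reports is the scoping problem to resolve before scheduling.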
Produce four lists:

1. **Schedule Next**: items that are Ready, ordered by RICE score descending. Include the RICE score and size estimate for each.
2. **Needs Refinement**: items that need more work before they can be scheduled. For each, list the specific questions that need answers or the refinement that is needed.
3. **Recommended for Closure**: stale items, with a one-sentence rationale for closing each. Be direct: "No activity in 90 days and the problem was addressed by [other item]" is sufficient.
4. **Blocked**: items that cannot proceed without external action. For each, state who needs to act, what they need to do, and what the consequence of delay is.
Present the groomed backlog as a structured document:
# Backlog Grooming Summary — [Date]
## Overview
- Total items reviewed: N
- Ready to schedule: N
- Needs refinement: N
- Recommended for closure: N
- Blocked: N
## 1. Schedule Next (by RICE priority)
| Priority | Item | RICE | Size | Dependencies | Notes |
|----------|------|------|------|-------------|-------|
| 1 | ... | 15.2 | S | None | ... |
| 2 | ... | 12.0 | M | Item 1 | ... |
## 2. Needs Refinement
| Item | Issue | Action needed |
|------|-------|--------------|
| ... | Missing AC | Write acceptance criteria; clarify scope with [owner] |
## 3. Recommended for Closure
| Item | Reason |
|------|--------|
| ... | Superseded by [other item], no activity since [date] |
## 4. Blocked — Escalation Needed
| Item | Blocker | Who | Impact of delay |
|------|---------|-----|----------------|
| ... | API contract not finalised | Platform team | Delays 3 downstream items |
Write the output to a file if the backlog is file-based. Otherwise, present it directly.
/product-owner:write-user-story — expand high-priority backlog items into detailed user stories with acceptance criteria.