Product and roadmap decision hygiene. Triggers when the user: is deciding what to build or prioritize ("on doit prioriser le backlog", French for "we need to prioritize the backlog"; "help me decide what to build next"; "which features do we tackle first"; "should we build or buy this"); is reviewing a product roadmap; asks about sprint planning, feature scoring, or backlog grooming; or uses phrases like "product roadmap", "feature prioritization", "sprint planning", "backlog grooming", "build vs buy", "RICE", "MoSCoW". Also activates for product noise audits, roadmap pre-mortems, sprint calibration, and feature scorecard generation.
npx claudepluginhub jamon8888/cc-suite --plugin Sentinel

This skill uses the workspace's default tool permissions.
Product management combines ALL ingredients of systematic poor judgment (Kahneman, Sibony & Sunstein):
Load: skills/product-management/biases.yaml
Motivated reasoning: Padding RICE/WSJF inputs to justify a pre-made decision. Question: "What would the RICE score be if someone who DISAGREES with building this feature filled in the inputs?"
HiPPO effect (authority bias): The highest-paid person's opinion determines the roadmap. Question: "If the CPO/CEO had no opinion on this, what would the data say the priority is?"
Planning fallacy: Systematic underestimation of time, cost, and complexity. Question: "What's the median delivery time for features of similar scope in the last 12 months? How does this estimate compare?"
Confirmation bias: User research is done to validate a pre-decided solution, not to discover real problems. Question: "Did your research sessions start with the solution or with the problem? What would have happened if users hated the idea?"
Sunk cost fallacy: Continuing a feature because it's already partially built. Question: "If this feature didn't exist at all today, would you start building it?"
Scope creep: The initial scope is always an "MVP", but it always grows before launch. Question: "What's the actual scope after all stakeholder feedback is incorporated? What's the minimum that delivers value if you cut 50%?"
Opportunity cost neglect: Features are evaluated in isolation, not against what else could be built with the same resources. Question: "What are the 3 best things the engineering team could build instead? How does this feature compare?"
Availability bias: Loud recent feedback (Slack messages, support tickets, sales blockers) dominates prioritization over systematic user research. Question: "Is this priority driven by data or by whoever complained loudest this week? What does the usage data say?"
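The score-gaming bias above is easy to demonstrate numerically: the same feature yields wildly different RICE scores depending on who fills in the inputs. A minimal sketch using the standard RICE formula; the function and all numbers are illustrative, not part of Sentinel:

```python
def rice(reach, impact, confidence, effort):
    """RICE score: (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# The same feature, scored by an advocate with padded inputs
# and by a skeptic with conservative ones (numbers invented):
advocate = rice(reach=5000, impact=2.0, confidence=0.8, effort=3)  # ~2667
skeptic = rice(reach=2000, impact=1.0, confidence=0.5, effort=5)   # 200
```

A 13x gap from the same feature is why Sentinel asks what the score would be if a disagreeing party supplied the inputs.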
Load: skills/product-management/frameworks.yaml
Feature scorecard: 6 independent dimensions.
"6 months post-launch. The feature flopped. What happened?" 7 product-specific failure modes.
Sprint calibration (noise audit): Each PM/engineer estimates independently (no anchoring); Sentinel then measures the coefficient of variation (CV) across the estimates. CV > 40% means nobody actually knows.
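The calibration check above can be sketched in a few lines. The estimates are hypothetical, and the 40% threshold is the one stated in the skill description:

```python
from statistics import mean, pstdev

def coefficient_of_variation(estimates):
    """CV = population standard deviation / mean, as a percentage."""
    return pstdev(estimates) / mean(estimates) * 100

# Independent effort estimates (in days) from four engineers
estimates = [3, 5, 8, 13]
cv = coefficient_of_variation(estimates)
if cv > 40:
    # High dispersion: the team does not share an understanding of the scope
    print(f"CV = {cv:.0f}% -- nobody actually knows")
```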
6 techniques: churned user, competitor shipping first, cut 50%, user who never asked for this, rebuild from first principles, kill the roadmap.
Map all competing features by effort vs impact. Force explicit tradeoffs before committing. No free additions.
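The effort-vs-impact mapping above forces tradeoffs by ranking every candidate on the same scale. A minimal sketch with a hypothetical backlog (feature names and numbers are invented for illustration):

```python
# Hypothetical backlog: (name, effort in weeks, impact on a 1-10 scale)
features = [
    ("SSO login", 4, 8),
    ("Dark mode", 2, 3),
    ("CSV export", 1, 5),
    ("AI assistant", 12, 7),
]

# Rank by impact per unit of effort, so every "free addition"
# must displace something with a visibly better ratio
ranked = sorted(features, key=lambda f: f[2] / f[1], reverse=True)
for name, effort, impact in ranked:
    print(f"{name}: {impact / effort:.2f} impact/week")
```

Making the ratio explicit is the point: a feature only joins the roadmap by beating what it would displace.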
When /sentinel detects product management context:
STANDARD adds:
FULL adds: