Post-decision and retrospective hygiene. Trigger when the user: wants to understand why something succeeded or failed ("pourquoi notre lancement a-t-il raté" (French: "why did our launch fail"), "why did the project fail", "what went wrong last quarter", "let's review what happened"); is running a post-mortem, retrospective, or after-action review; asks about lessons learned or debrief; uses phrases like "post-mortem", "postmortem", "retrospective", "retro", "what went wrong", "lessons learned", "debrief", "after-action", "review what happened", "why did it fail", or "why did it succeed". Also activates for: postmortem scorecards and retrospective bias audits (hindsight, outcome bias, self-serving bias).
npx claudepluginhub jamon8888/cc-suite --plugin Sentinel
This skill uses the workspace's default tool permissions.
Post-decision analysis is NOT the same as pre-decision analysis. The biases are different, they operate in different directions, and they require different structural interventions.
The core asymmetry:
An organization that does good retrospective analysis is more resilient than one that does perfect pre-decision analysis, because learning compounds.
"The project failed → it was a bad decision." "The project succeeded → it was a good decision."
Both inferences are logically invalid. Good decisions produce bad outcomes (bad luck is real). Bad decisions produce good outcomes (dumb luck is real). Conflating decision quality with outcome quality produces the wrong learning: teams abandon good processes because of bad luck, and repeat bad processes because of good luck.
Structural fix: The 2x2 framework is mandatory in all retrospectives. Decision quality and outcome quality must be evaluated on separate axes.
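A minimal sketch of how the 2x2 could be encoded, assuming Python; the quadrant labels are illustrative assumptions, not the skill's wording. The point is that the quadrant is only readable once the two axes have been scored separately:

```python
from enum import Enum

class Quality(Enum):
    GOOD = "good"
    BAD = "bad"

# Illustrative quadrant readings; these labels are assumptions, not the skill's.
QUADRANTS = {
    (Quality.GOOD, Quality.GOOD): "deserved success: keep the process",
    (Quality.GOOD, Quality.BAD):  "bad luck: keep the process, don't punish it",
    (Quality.BAD,  Quality.GOOD): "dumb luck: fix the process despite the win",
    (Quality.BAD,  Quality.BAD):  "deserved failure: fix the process",
}

def classify(decision: Quality, outcome: Quality) -> str:
    """Evaluate the two axes independently, then combine them."""
    return QUADRANTS[(decision, outcome)]

# A bad outcome does not imply a bad decision:
print(classify(Quality.GOOD, Quality.BAD))  # bad luck: keep the process...
```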
Successes: "We made great decisions. The team executed brilliantly." Failures: "Market conditions were unfavorable. We couldn't have known."
Both retrospectives are written by the same people. The attribution pattern is not based on the evidence — it is based on self-protection.
Structural fix: Apply the attribution audit in both directions. For successes: "What external factors contributed that had nothing to do with us?" For failures: "What internal decisions, made at the time, contributed to this?"
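As a sketch (the names and structure here are assumptions, nothing the skill defines), the audit reduces to always asking the question the team is least inclined to ask for that outcome type:

```python
# Hypothetical encoding of the attribution audit: for each outcome type,
# force the question that cuts against the self-serving default.
COUNTER_ATTRIBUTION = {
    "success": "What external factors contributed that had nothing to do with us?",
    "failure": "What internal decisions, made at the time, contributed to this?",
}

def attribution_audit(outcome: str) -> str:
    """Return the uncomfortable question for this outcome type."""
    return COUNTER_ATTRIBUTION[outcome]
```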
"We always believed in this approach." "The concern I had was minor — I didn't really worry about it."
Memory is not a recording. It is reconstructed to be consistent with present beliefs and outcomes. Without written records from decision time, retrospective analysis is partially fiction.
Structural fix: Original decision records must be read BEFORE retrospective questions are asked. If no records exist, acknowledge this limitation explicitly and discount retrospective conclusions accordingly.
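A decision record can be as small as the hypothetical schema below (all field names are assumptions; the skill does not specify a format). What matters is that it is written at decision time, so the retrospective reads it rather than reconstructed memory:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    """Written at decision time; read before any retrospective question."""
    decided_on: date
    decision: str
    options_considered: list[str]
    assigned_probability: float      # P(success) as estimated at the time
    known_risks: list[str] = field(default_factory=list)
    dissent: list[str] = field(default_factory=list)  # concerns raised, and by whom

def records_exist(records: list[DecisionRecord]) -> bool:
    # If this is False, say so explicitly and discount retrospective conclusions.
    return len(records) > 0
```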
"It was obvious this would fail." "Anyone could have seen that the market wasn't ready."
Hindsight bias inflates the perceived predictability of past events. The signal-to-noise ratio at decision time was much lower than it appears now. Learning "be better at spotting the obvious" from a hindsight-contaminated retrospective is learning nothing — because the signal wasn't obvious.
Structural fix: Reconstruct the information environment at decision time. "Given only the information available on [date], what probability would you have assigned to this outcome?" This breaks the inevitability illusion.
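One way this could be operationalized, as a sketch (the metric and threshold are assumptions, not part of the skill): collect the probability a participant would have assigned using only decision-time information, compare it with the probability they now claim was obvious, and treat a large gap as hindsight contamination:

```python
def hindsight_gap(claimed_now: float, reconstructed_then: float) -> float:
    """Gap between 'it was obvious' and the decision-time estimate."""
    return claimed_now - reconstructed_then

# Example: "anyone could have seen it" (0.95) vs. a reconstructed 0.55
gap = hindsight_gap(0.95, 0.55)
if gap > 0.25:  # threshold is an illustrative assumption
    print(f"Hindsight contamination likely (gap = {gap:.2f})")
```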
Failure retrospectives: primary bias risks are Self-Serving Bias, Hindsight Bias, and scapegoating. Key questions to add:
Do NOT ask: "Who made the wrong call?" (produces scapegoating, not learning)
DO ask: "What information, process, or structure was missing that would have changed the decision?"
Success retrospectives: primary bias risks are Self-Serving Bias, Choice-Supportive Bias, and misattributing luck to skill. Key questions to add:
Do NOT conclude: "We have a repeatable playbook." (unless you can specify what's in it)
DO conclude: "The process was sound [or wasn't]. The external conditions were [favorable/unfavorable]."
Ongoing-project reviews: primary bias risks are Plan Continuation Bias (ID 67), Escalation of Commitment (ID 63), and Sunk Cost (ID 7).
When running the MAP protocol on a past decision for retrospective review, each dimension is evaluated INDEPENDENTLY, especially decision quality and outcome quality.
Every retrospective must end with a learning statement that is specific, names the mechanism, and prescribes a concrete change:
Bad: "We should have done more due diligence."
Good: "Our Reality Checker relied on survivorship-biased benchmarks for SaaS growth rates. Next time, explicitly ask: 'What types of companies are absent from this benchmark?' before using it."
Templates:
skills/retrospective/templates/postmortem-scorecard.md
skills/retrospective/templates/attribution-audit.md
skills/retrospective/templates/plan-continuation-review.md