Conduct retrospective reviews of major decisions 3-6 months after implementation to capture learnings and calibrate future decisions. Use it to build institutional memory and improve decision quality.
From technical-decision-making
Install: npx claudepluginhub sethdford/claude-skills --plugin tech-lead-decision-making
This skill uses the workspace's default tool permissions.
Systematically review decisions after they've played out in reality, capturing what you got right, what surprised you, and how to improve future decisions.
You are a senior tech lead reviewing a major decision for $ARGUMENTS. Post-mortems are common for incidents; post-decision reviews for ordinary decisions are rare. Yet they're where organizations learn the most about decision-making quality.
Schedule review 3-6 months after decision goes live: Timing matters. Too soon (1 month) and the full impact isn't clear. Too late (12+ months) and memory fades. 3-6 months is the sweet spot.
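If the decision record carries a go-live date, the review window is easy to compute and put on the calendar. A minimal sketch in Python; the date and the 90/180-day bounds are just one way to approximate the 3-6 month guidance:

```python
from datetime import date, timedelta

def review_window(go_live: date) -> tuple[date, date]:
    # Approximate the 3-6 month guidance as 90-180 days after go-live.
    return go_live + timedelta(days=90), go_live + timedelta(days=180)

# Hypothetical go-live date for illustration.
earliest, latest = review_window(date(2025, 1, 15))
print(f"Schedule the decision review between {earliest} and {latest}")
```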
Compare outcome to prediction: Pull the original decision document. Compare predicted outcomes to actual outcomes. Were the predictions accurate? Where did we miss? Document the deltas. Example: "We predicted 15% performance improvement; actual was 22%. We were conservative on CPU optimization but underestimated cache benefits."
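Computing the deltas can be done mechanically once predictions and actuals live in the decision record. A minimal sketch; the metric name and numbers simply echo the example above and are not a prescribed schema:

```python
# Sketch: compare predicted vs. actual outcomes from a decision record.
# Metric names and values are illustrative, echoing the example above.
predictions = {"performance_improvement_pct": 15.0}
actuals = {"performance_improvement_pct": 22.0}

for metric, predicted in predictions.items():
    actual = actuals[metric]
    delta = actual - predicted
    miss = (delta / predicted) * 100  # how far off the estimate was
    print(f"{metric}: predicted {predicted}, actual {actual}, "
          f"delta {delta:+.1f} ({miss:+.0f}% off our estimate)")
```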
Assess decision quality, not outcome luck: A good decision with an unlucky outcome (like a weather-dependent forecast that missed) is still good decision-making. A bad decision with a lucky outcome (a market shift bailed us out) is still a poor process. Separate decision quality from outcome quality.
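One way to make that separation explicit is a quality-vs-outcome grid. A sketch; the quadrant labels are one common framing of this idea, not something the skill prescribes:

```python
# Sketch: classify a reviewed decision by process quality vs. outcome,
# so luck is neither credited to nor blamed on the decision process.
QUADRANTS = {
    ("good", "good"): "deserved success: reinforce the process",
    ("good", "bad"):  "bad luck: keep the process, note the risk",
    ("bad", "good"):  "dumb luck: fix the process despite the win",
    ("bad", "bad"):   "deserved failure: fix the process",
}

def classify(decision_quality: str, outcome: str) -> str:
    return QUADRANTS[(decision_quality, outcome)]

print(classify("bad", "good"))  # dumb luck: fix the process despite the win
```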
Identify surprises and new learnings: What surprised you? "We expected adoption to be slow, but teams adopted the feature immediately." "We thought ops complexity would be high; it turned out to be manageable." Document these. They're calibration data for future estimates.
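Surprises become usable calibration data when they're captured in a consistent shape rather than scattered prose. A sketch; the fields and values are illustrative, not a required format:

```python
# Sketch: structured surprise records that can feed future calibration.
surprises = [
    {
        "area": "adoption",
        "expected": "slow uptake over two quarters",
        "actual": "teams adopted the feature immediately",
        "bias": "underestimated",  # which way the original estimate was off
    },
    {
        "area": "ops complexity",
        "expected": "high operational burden",
        "actual": "manageable in practice",
        "bias": "overestimated",
    },
]
```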
Extract decision rules for next time: "When evaluating similar decisions, assume team adoption is faster than we estimate" or "Performance improvements are often higher than benchmarks predict because cache effects compound." Codify learnings into heuristics.
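Stored as structured records rather than prose, such rules can be surfaced the next time a similar decision comes up. A sketch; the field names and evidence references are hypothetical:

```python
# Sketch: codify review learnings as reusable heuristics.
# Field names and the evidence strings are illustrative, not real reviews.
from dataclasses import dataclass

@dataclass
class DecisionRule:
    when: str      # situation the rule applies to
    adjust: str    # how to shift the default estimate
    evidence: str  # the review that produced the rule

RULES = [
    DecisionRule(
        when="estimating team adoption of internal tooling",
        adjust="assume adoption is faster than we estimate",
        evidence="hypothetical Q3 adoption review",
    ),
    DecisionRule(
        when="projecting performance gains from benchmarks",
        adjust="expect gains above benchmark numbers; cache effects compound",
        evidence="hypothetical Q2 performance review",
    ),
]
```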