From pm-copilot
Use this skill when the user asks for a "pre-mortem", "failure analysis", "what could go wrong", "risks for this initiative", "stress test this plan", "anticipate failure", "what are we missing", or wants to proactively identify the ways a plan or initiative could fail before investing in it. Also use this skill before major launches or roadmap decisions. Do NOT use this skill for post-launch retrospectives — use lessons-learned capture for that.
npx claudepluginhub productfculty-aipm/pm-copilot-by-product-faculty

This skill uses the workspace's default tool permissions.
Other skills in this plugin:
- Dispatches parallel agents to independently tackle two or more tasks, such as separate test failures or subsystems, that have no shared state or dependencies.
- Executes pre-written implementation plans: critically reviews them, follows bite-sized steps exactly, runs verifications, tracks progress with checkpoints, uses git worktrees, and stops on blockers.
- Guides idea refinement into designs: explores context, asks questions one by one, proposes approaches, presents sections for approval, and writes/reviews specs before coding.
You are running a pre-mortem — a structured failure analysis conducted before an initiative launches, not after. The goal is to surface the failure modes that are invisible when optimism bias is running high, and convert them into mitigations now.
Framework: Shreyas Doshi (pre-mortem methodology), Gary Klein (prospective hindsight research), adapted for product development.
Key principle from Doshi: "Most execution problems are really strategy problems. The pre-mortem is how you find them before they find you." — Lenny's Podcast (2024)
Read memory/user-profile.md for the initiative being analyzed (from roadmap state or open questions). Read context/product/roadmap.md for timelines and dependencies.
Confirm: what is the initiative or decision being pre-mortemed?
Run the core pre-mortem exercise:
"It is [target date]. The initiative has failed. It did not achieve its success criteria. Everyone knows it. The post-mortem is tomorrow."
For each of the following failure categories, generate the most realistic failure scenario:
Failure Category 1 — Wrong problem: "We built the right thing but for the wrong user." OR "We solved a problem users had, but it wasn't painful enough to change behavior."
Failure Category 2 — Wrong solution: "Users had the problem, but our solution didn't actually solve it well enough."
Failure Category 3 — Execution failure: "The solution was right but we shipped it too slow, with too many bugs, or missed a critical detail."
Failure Category 4 — Market / timing failure: "A competitor shipped first, or the market wasn't ready, or we mis-timed the launch."
Failure Category 5 — Internal failure: "We didn't get the resources, alignment, or support needed to execute the plan."
Failure Category 6 — Black swan: "Something completely unexpected happened." (Consider: team changes, competitive pivots, regulatory changes, macro shifts, technical failures.)
For each identified failure scenario, rate Likelihood, Impact, and Detectability on a 1-5 scale (5 = high), then compute:
Priority = Likelihood × Impact × (6 - Detectability)
Subtracting Detectability from 6 inverts that factor, so failures you are unlikely to see coming score higher.
Sort by priority. Top 3 are the critical path failures.
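The scoring and ranking steps above can be sketched in Python. This is a minimal illustration, assuming 1-5 scales for each factor; the scenario names and ratings are hypothetical examples, not part of the skill.

```python
# Hypothetical failure scenarios with 1-5 ratings (5 = most likely,
# highest impact, easiest to detect). Values are illustrative only.
scenarios = [
    {"name": "Wrong problem",     "likelihood": 3, "impact": 5, "detectability": 2},
    {"name": "Execution failure", "likelihood": 4, "impact": 3, "detectability": 4},
    {"name": "Market timing",     "likelihood": 2, "impact": 4, "detectability": 1},
]

for s in scenarios:
    # (6 - detectability) inverts the scale: a failure that is hard
    # to detect early contributes a larger multiplier.
    s["priority"] = s["likelihood"] * s["impact"] * (6 - s["detectability"])

# Sort descending; the top entries are the critical path failures.
ranked = sorted(scenarios, key=lambda s: s["priority"], reverse=True)
for s in ranked:
    print(f'{s["name"]}: {s["priority"]}')
```

With these sample ratings, "Wrong problem" ranks first (3 × 5 × 4 = 60) even though "Execution failure" is rated more likely, because low detectability inflates its score.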
For the top 3 failure scenarios, produce mitigations that address them before launch.
Offer to save top risks to memory/user-profile.md as tracked risks.
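One way the save step could work is appending a markdown checklist to memory/user-profile.md. This is a hedged sketch: the "## Tracked Risks" heading, the entry format, and the field names are assumptions for illustration, not defined by the skill.

```python
from pathlib import Path

def save_tracked_risks(risks, path="memory/user-profile.md"):
    """Append top risks to the user profile as a markdown checklist.

    `risks` is assumed to be a list of dicts with "name", "priority",
    and "mitigation" keys; the section heading is an assumption.
    """
    lines = ["\n## Tracked Risks\n"]
    for r in risks:
        lines.append(f'- [ ] {r["name"]} (priority {r["priority"]}): {r["mitigation"]}\n')
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # create memory/ if missing
    with p.open("a", encoding="utf-8") as f:     # append, never overwrite
        f.writelines(lines)
```

Appending rather than overwriting preserves whatever else the profile already tracks; each pre-mortem run simply adds its top risks beneath a fresh heading.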