# Compound — The Most Important Step

From flywheel-pm: capture learnings from a product cycle and update your PM profile — the most important step.

**Argument:** feature name or measurement doc path.

Install:

```shell
npx claudepluginhub abhitsian/compound-pm-marketplace --plugin flywheel-pm
```
## Purpose

After a product cycle (launch, measurement review, or post-mortem), capture what worked, what didn't, compare predictions to actuals, and update your PM profile so the next cycle is faster and better. **This is where the gains accumulate. Skip it, and you've done traditional PM with AI assistance.**

## Input

<compound_input> #$ARGUMENTS </compound_input>

- **If a file path:** Read the measurement or spec document.
- **If a feature name:** Search `docs/` for all related documents.
- **If empty:** Ask what to compound.
## Step 0: Load Context

Read the existing PM profile:

```shell
cat pm-profile.yaml 2>/dev/null
```
Gather all related documents:

```shell
ls docs/opportunities/*[feature]* docs/solutions/*[feature]* \
   docs/specs/*[feature]* docs/measurements/*[feature]* 2>/dev/null
```
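The same search can be sketched in Python, assuming the `docs/` layout implied by the glob above (`find_feature_docs` is a hypothetical helper, not part of the command):

```python
from pathlib import Path

def find_feature_docs(feature_slug: str, root: str = "docs") -> list[str]:
    """Glob each PM doc category for files whose name mentions the feature slug."""
    categories = ["opportunities", "solutions", "specs", "measurements"]
    hits: list[str] = []
    for cat in categories:
        # Path.glob on a missing category directory simply yields nothing.
        hits.extend(str(p) for p in Path(root, cat).glob(f"*{feature_slug}*"))
    return sorted(hits)
```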
Ask the PM about the cycle's outcomes (using the AskUserQuestion tool where appropriate).

If a measurement doc exists with predictions, go through each prediction:
| Prediction | Predicted | Actual | Delta | Why |
|---|---|---|---|---|
| Adoption rate | [X]% | [actual]% | [+/-] | [explanation] |
| Activation | [X]% | [actual]% | [+/-] | [explanation] |
| Retention | [X]% | [actual]% | [+/-] | [explanation] |
| Revenue impact | $[X] | $[actual] | [+/-] | [explanation] |
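The delta column can be computed mechanically; a minimal sketch in Python (the `delta` helper and sample rows are illustrative, not part of the command):

```python
def delta(predicted: float, actual: float) -> str:
    """Return a signed percentage-point delta, e.g. '+3.0' or '-1.5'."""
    return f"{actual - predicted:+.1f}"

# Illustrative rows: (prediction name, predicted %, actual %).
rows = [
    ("Adoption rate", 12.0, 15.0),
    ("Activation", 40.0, 38.5),
]
for name, pred, act in rows:
    print(f"| {name} | {pred}% | {act}% | {delta(pred, act)} |")
```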
Classify failures by type.

This is the core compounding mechanism. Update `pm-profile.yaml` as follows.

Add to `project_patterns`:

```yaml
- project: "[Feature Name]"
  date: "YYYY-MM-DD"
  domain: "[domain]"
  opportunity_type: "[signal type]"
  solution_type: "[differentiator | mmr | neutralizer]"
  outcome_metric: "[metric name]"
  predicted: "[prediction]"
  actual: "[actual]"
  delta: "[+/- %]"
  key_learning: "[one sentence]"
  activation:
    setup: "[actual rate]"
    aha: "[actual rate]"
    habit: "[actual rate]"
  engagement:
    casual_pct: "[actual %]"
    core_pct: "[actual %]"
    power_pct: "[actual %]"
  business_impact: "[summary]"
```
Recalculate averages across all project patterns:

```yaml
thresholds:
  avg_activation_setup: "[recalculated]"
  avg_activation_aha: "[recalculated]"
  avg_activation_habit: "[recalculated]"
  avg_estimation_accuracy: "[recalculated]"
  typical_time_to_habit: "[recalculated]"
```
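The recalculation can be sketched as a small Python helper, assuming each `project_patterns` entry carries numeric activation rates shaped like the pattern block earlier (storing the rates as floats rather than quoted strings is an assumption):

```python
def recalc_thresholds(project_patterns: list[dict]) -> dict:
    """Average each activation stage across all recorded projects."""
    def avg(values: list[float]):
        return round(sum(values) / len(values), 1) if values else None

    return {
        "avg_activation_setup": avg([p["activation"]["setup"] for p in project_patterns]),
        "avg_activation_aha":   avg([p["activation"]["aha"] for p in project_patterns]),
        "avg_activation_habit": avg([p["activation"]["habit"] for p in project_patterns]),
    }

# Two illustrative patterns; the averages land midway between them.
patterns = [
    {"activation": {"setup": 80.0, "aha": 55.0, "habit": 30.0}},
    {"activation": {"setup": 70.0, "aha": 45.0, "habit": 20.0}},
]
# recalc_thresholds(patterns)["avg_activation_setup"] → 75.0
```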
Add to `decision_log`:

```yaml
- date: "YYYY-MM-DD"
  feature: "[name]"
  decision: "[what was decided]"
  rationale: "[why]"
  outcome: "[what actually happened]"
  would_decide_differently: "[yes/no and why]"
```
If a framework consistently fails or succeeds, note it:

```yaml
framework_notes:
  - "Activation funnel works well for B2B employee-facing products"
  - "Revenue predictions tend to be 20% optimistic — discount accordingly"
  - "User segmentation (casual/core/power) maps well to enterprise engagement"
```
Write to `docs/compounds/YYYY-MM-DD-<feature>-compound.md`:

```markdown
---
date: YYYY-MM-DD
feature: "[feature name]"
outcome: "[success | partial | failure | pivot]"
key_learning: "[one sentence]"
profile_updated: true
---

# Compound: [Feature Name]

## Outcome Summary
[What happened in 2-3 sentences]

## Predictions vs Actuals
[Table of predicted vs actual metrics]

## Estimation Calibration
[Where predictions were off and why]

## What Worked
[Frameworks, decisions, approaches that succeeded]

## What Didn't Work
[Failures classified by type]

## What Was Surprising
[Unexpected outcomes]

## PM Profile Updates
- Added project pattern: [summary]
- Updated thresholds: [what changed]
- Added to decision log: [key decision]
- Framework note: [any framework adjustment]

## Implications for Future Work
[What the next PM cycle should do differently]
```
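Assembling the file path and frontmatter can be sketched as follows; the `compound_path` and `frontmatter` helpers are hypothetical, shaped by the template above:

```python
from datetime import date
from pathlib import Path

def compound_path(feature: str, day: date) -> Path:
    """Build docs/compounds/YYYY-MM-DD-<feature>-compound.md from a feature name."""
    slug = feature.lower().replace(" ", "-")  # assumed slug rule, not specified by the command
    return Path("docs/compounds") / f"{day.isoformat()}-{slug}-compound.md"

def frontmatter(feature: str, outcome: str, key_learning: str, day: date) -> str:
    """Render the YAML frontmatter block from the template above."""
    return (
        "---\n"
        f"date: {day.isoformat()}\n"
        f'feature: "{feature}"\n'
        f'outcome: "{outcome}"\n'
        f'key_learning: "{key_learning}"\n'
        "profile_updated: true\n"
        "---\n"
    )
```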
Show the PM what changed in their profile:

```text
Profile updated:

  New project pattern added: [Feature Name]
  Outcome metric: [predicted] → [actual] ([delta])
  Activation: Setup [X]%, Aha [Y]%, Habit [Z]%

  Thresholds recalculated:
    avg_activation_setup: [old] → [new]
    avg_estimation_accuracy: [old] → [new]

  Decision logged: [summary]

Your PM operating system is now smarter. The next /pm:opportunity
will reference this learning, and /pm:measure will use updated
thresholds for predictions.
```
After capturing individual learnings, ask whether any should be promoted to the team.

Use the AskUserQuestion tool: "Is any of this worth sharing with the PM team?"

Options: promote the selected learning to `team-profile.yaml` via `/pm:share`.

If sharing: Run `/pm:share` with the selected learning.

Then suggest next steps:
- `/pm:share` to promote more learnings
- `/pm:opportunity` for the next initiative

Each product cycle makes the next one easier, not harder.