# retrospective (from mycelium)

Runs a structured retrospective after completing delivery diamonds or milestones, recording cycle data, ICE/effort calibration, DORA metrics, and learnings in canvas YAML and the decision log.

```
npx claudepluginhub haabe/mycelium --plugin mycelium
```

This skill uses the workspace's default tool permissions.
Run after every completed delivery diamond or significant milestone. Source: Forsgren (learning culture).
Run these steps IN ORDER. Do not skip any step. Step 1 (cycle recording) MUST be completed FIRST — before any reflective analysis.
## Record in .claude/canvas/cycle-history.yml AND the Decision Log (MANDATORY — DO THIS FIRST)

This step is critical. Without it, the learning metabolism has no data. You MUST do BOTH parts (1a and 1b).
### Part 1a: .claude/canvas/cycle-history.yml

Find the leaf_id and opportunity_id for the delivered solution (from .claude/canvas/opportunities.yml or .claude/canvas/gist.yml). Then write a cycle record:
```yaml
- cycle_id: cycle-NNN
  leaf_id: "opp-XXX-sol-X"            # From opportunities.yml
  opportunity_id: "opp-XXX"           # Parent opportunity
  diamond_id: "d-XXX"                 # From .claude/diamonds/active.yml
  completed_at: "YYYY-MM-DDTHH:MM:SSZ"
  outcome: shipped | partial | failed | discarded
  predicted:
    ice_score: {i: X, c: X, e: X, total: XXX}   # ICE at time of scoring
    feasibility_risk: low | medium | high       # From four_risks
    effort_estimate: "X days/weeks"             # Original estimate
  actual:
    effort: "X days/weeks"            # How long it actually took
    dora:                             # From /mycelium:dora-check or known metrics
      deploy_frequency: "..."
      lead_time: "..."
      change_failure_rate: "..."
      mttr: "..."
  calibration:
    ice_accuracy: "predicted XXX vs actual [outcome description]"
    effort_accuracy: "predicted X days vs actual X days (delta: +/-X)"
    risk_accuracy: "feasibility was [predicted] — actual was [description]"
  learnings: "Key learning from this cycle"
```
Update calibration_summary.total_cycles count. If total_cycles reaches a multiple of 5, prompt: "5 cycles since last review. Run /mycelium:framework-health to check calibration?"
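For orientation, a minimal sketch of the calibration_summary block (total_cycles is the field this skill requires; any other field shown is an assumption):

```yaml
# Sketch only: total_cycles is required by this skill; the review marker is a
# hypothetical convenience field, not part of the defined schema.
calibration_summary:
  total_cycles: 5                      # increment on every new cycle record
  cycles_at_last_health_review: 0      # hypothetical: set when /mycelium:framework-health runs
```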
### Part 1b: Decision Log

Write a decision log entry titled "Cycle calibration record" that includes ALL of the following (use these exact words):
This decision log entry ensures the calibration data is auditable alongside other decisions, not just buried in cycle-history.yml.
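As a shape-only illustration (the title is quoted by this skill; every other field and all example values here are assumptions):

```yaml
# Hypothetical decision-log entry shape; adapt to the log's actual format
- title: "Cycle calibration record"
  date: "2026-05-04"
  cycle_id: cycle-012
  summary: >
    Predicted ICE 320 vs shipped-with-partial-adoption; effort predicted
    3 days vs actual 5 days (delta +2); feasibility scored low, actual
    closer to medium.
```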
ADR review (if docs/adr/ exists): did the implementation follow the decided approach? Did any consequences turn out differently than expected? Mark superseded ADRs.

If this retrospective is for a cycle completed more than 14 days ago, check:
- rework.post_delivery_corrections
- rework.post_delivery_regressions
- rework.days_to_first_regression

Update the cycle record in .claude/canvas/cycle-history.yml with the rework fields (sketched below). This is the denominator — the hidden cost of delivery that velocity metrics miss.
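A minimal sketch of those fields appended to the cycle record; the field names come from this skill, while the values and comments are illustrative:

```yaml
rework:
  post_delivery_corrections: 3    # fixes shipped after the cycle was marked done
  post_delivery_regressions: 1    # regressions traced back to this cycle
  days_to_first_regression: 6     # days from delivery to the first regression
```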
If this retrospective is for a just-completed cycle, prompt: "Set a reminder to check rework in 14 days. Run /mycelium:retrospective rework-check [cycle-id] after that."
Source: Paddo (the denominator problem — 43% of AI-assisted code requires post-delivery debugging). Forsgren (change failure rate as a trailing indicator).
Use these two complementary techniques. Fishbone gives breadth (all possible causes). 5 Whys gives depth (one cause traced to its root).
Map all potential causes before investigating any. Structure:

```
                         ┌─ People     (skills, handoffs, communication)
                         ├─ Process    (gates, cadence, workflow)
Problem  ◄───────────────├─ Product    (canvas, evidence, assumptions)
(effect)                 ├─ Platform   (tools, infra, dependencies)
                         ├─ Principles (which theory/guardrail failed?)
                         └─ Pressures  (deadlines, scope, external)
```
Ishikawa's original 6M manufacturing categories: Man (Manpower), Machine, Method, Material, Measurement, Mother Nature (Environment). Adapted for product development as: Man→People, Machine→Platform, Method→Process, Material→Product (inputs to the work), Measurement→Principles (what we measure against), Mother Nature→Pressures (external forces).
For the top-ranked cause from the fishbone, ask "why?" five times:
Stop rule: Stop when ANY of these conditions are met:
Anti-pattern: Stopping at "human error" — that's never the root cause. Ask why the system allowed the error.
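A compact worked chain, using an entirely invented scenario (the cycle ID, answers, and resulting root cause are illustrative only):

```yaml
# Hypothetical 5 Whys chain; note it ends at a system cause, not "human error"
five_whys:
  problem: "cycle-012 shipped 8 days late"
  why_1: "The billing-API integration took twice the estimate"
  why_2: "The API's rate limits were only discovered mid-implementation"
  why_3: "feasibility_risk was scored low without a technical spike"
  why_4: "Scoring has no 'external dependency probed?' check"
  why_5: "No guardrail requires a spike before scoring third-party integrations"
  root_cause: "Process gap in risk scoring: add a spike gate for external dependencies"
```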
Source: Ishikawa (cause-and-effect diagrams), Toyoda/Ohno (5 Whys), adapted for agentic product development.
"Eliminating waste is the foundation of lean." (Ohno)
Before root cause analysis, identify which waste category the problem falls into:
| Waste | Product Development Form | Detection |
|---|---|---|
| Transportation | Handoffs between people/teams, between discovery and delivery | Count handoffs in the value stream |
| Inventory | WIP, unshipped code, unfinished features, unmerged branches, open PRs | Check WIP limits, branch age |
| Motion | Context switching between tasks, tools, codebases | Track focus time vs fragmented time |
| Waiting | Blocked tasks, review queues, approval bottlenecks, blocked dependencies | Measure wait-to-work ratio |
| Overproduction | Building features nobody uses, YAGNI violations | Compare shipped features to validated needs |
| Overprocessing | Gold-plating, unnecessary abstraction, premature optimization | "Would removing this step reduce value?" |
| Defects | Bugs, rework, corrections, failed deployments | Track defect escape rate |
Also watch for: Muri (overburden → BVSSH Happier / sustainable pace) and Mura (unevenness → delivery cadence variation).
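If you want to record the scan alongside the cycle, one possible shape (this skill does not prescribe a waste-scan file; the fields and values below are assumptions):

```yaml
# Hypothetical waste scan; the dominant category feeds the fishbone above
waste_scan:
  cycle_id: cycle-012
  waiting: "PR review queue averaged 2.1 days; wait-to-work ratio ~1.4"
  inventory: "4 branches older than 14 days; 3 PRs open for over a week"
  dominant: waiting
```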
Source: Taiichi Ohno, Sakichi Toyoda (Toyota Production System). Mapped to product development via Poppendieck (Lean Software Development).
For incidents or significant failures, use the SRE blameless post-mortem:
Rule: No blame. Focus on the system, not the person. Source: Beyer et al. (SRE)
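A minimal template sketch, loosely following the SRE-book structure (the field names and file shape are assumptions, not defined by this skill):

```yaml
# Hypothetical blameless post-mortem skeleton; keep root causes system-focused
post_mortem:
  incident: "what happened, in one line"
  impact: "who/what was affected, and for how long"
  timeline:
    - "HH:MM detected"
    - "HH:MM mitigated"
  root_causes: []          # feed from the fishbone + 5 Whys above
  what_went_well: []
  action_items:
    - item: "SMART action targeting the system, not a person"
      owner: "..."
      due: "YYYY-MM-DD"
```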
After the delivery retrospective, always ask whether each of these has been updated:

- .claude/memory/corrections.md with new corrections
- .claude/memory/patterns.md with new patterns
- .claude/memory/delivery-journal.md with the retrospective entry
- .claude/canvas/cycle-history.yml (see Cycle History Recording above)

Before finalizing the retrospective, draft a one-line counter-argument for each major claim: "What's the strongest case that this 'went well' was actually luck? That this 'went wrong' was actually unavoidable? That this 'pattern' is actually noise?" If you can't articulate counter-cases, run /mycelium:devils-advocate before locking in the corrections/patterns.
This addresses the bias cluster documented in corrections.md (L5 sycophancy 2026-04-20, eval overfitting 2026-04-30, sharper-framing-isn't-righter 2026-05-03). Retrospectives are particularly bias-prone — narrative coherence is rewarded, the agent is incentivized to find tidy patterns, and post-hoc rationalization is the natural mode. Counter-arguments break that gravity.
Especially important when proposing graduation candidates (recurring corrections → guardrails) — make sure the recurrence is real, not 3 instances of pattern-matching by the agent itself.
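One way to make the counter-argument pass concrete (the structure and both examples are invented for illustration):

```yaml
# Hypothetical counter-argument pass over retrospective claims
counter_arguments:
  - claim: "Pair review cut lead time (went well)"
    counter: "Both fast cycles also had smaller scope; could be luck"
  - claim: "Recurring correction: premature abstraction (3 instances)"
    counter: "Instances 2 and 3 were flagged by the same heuristic; may be pattern-matching, not real recurrence"
```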