Prioritizes OST solution leaves using ICE scoring derived from the Four Risks (Value, Usability, Feasibility, Viability). Computes Impact x Confidence x Ease, ranks solutions, flags the riskiest assumptions, and checks for biases.
`npx claudepluginhub haabe/mycelium --plugin mycelium`

This skill uses the workspace's default tool permissions.
ICE scoring with integrated confidence meter.
ICE scoring is applied to OST solution leaves. Each leaf must have a Four Risks assessment (Torres Product Trio) before it can be scored — the risks are the inputs, ICE is the output.
For each solution leaf in .claude/canvas/opportunities.yml, check that four_risks has been assessed:

- value: will customers choose this solution?
- usability: can users figure out how to use it?
- feasibility: can we build it?
- viability: does it work for our business?

If any risk dimension is missing, assess it first. Each dimension must have its own evidence; a single combined statement fails (Torres Product Trio rule).
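A minimal completeness check, sketched in Python under an assumed leaf schema (the field names inside `four_risks` are illustrative; the actual opportunities.yml layout may differ):

```python
# Hypothetical leaf structure; the real opportunities.yml schema may differ.
REQUIRED_RISKS = ("value", "usability", "feasibility", "viability")

def missing_risks(leaf: dict) -> list[str]:
    """Return the risk dimensions that lack their own evidence statement."""
    risks = leaf.get("four_risks", {})
    return [
        dim for dim in REQUIRED_RISKS
        if not risks.get(dim, {}).get("evidence")  # each dimension needs its own evidence
    ]

leaf = {
    "solution": "One-click CSV export",
    "four_risks": {
        "value": {"risk": "Users may not need exports", "evidence": "3 customer interviews"},
        "usability": {"risk": "Export button may be overlooked", "evidence": ""},
        # feasibility and viability not yet assessed
    },
}
print(missing_risks(leaf))  # ['usability', 'feasibility', 'viability']
```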
For each solution, score three dimensions (Impact and Ease on 1-10; Confidence on 0.0-1.0, per the scale note below):
Impact — derived from the value, usability, and viability risks.
Confidence — how well-tested are the risk assessments?
Scale note: Mycelium scores Confidence on 0.0-1.0. Itamar Gilad's original Confidence Meter uses a non-linear 0-10 scale (0.01 = opinion, 1 = anecdotal, 5 = market data, 8 = A/B test, 10 = launch data). The non-linear penalty is preserved through evidence-class weighting.
Ease — derived from the feasibility risk.
ICE = Impact x Confidence x Ease. Rank solution leaves by ICE score.
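A sketch of the derivation, assuming each risk dimension carries a 1-10 severity (10 = riskiest) and an evidence class; the weights and mappings below are illustrative, not Mycelium's actual tables:

```python
# Illustrative evidence-class weights preserving the non-linear Confidence
# Meter penalty (weak evidence is punished hard); actual weights may differ.
EVIDENCE_WEIGHT = {
    "opinion": 0.01,
    "anecdotal": 0.1,
    "market_data": 0.5,
    "ab_test": 0.8,
    "launch_data": 1.0,
}

def impact(risks: dict) -> float:
    """Impact (1-10) from value + usability + viability: lower risk, higher impact."""
    dims = ("value", "usability", "viability")
    avg_severity = sum(risks[d]["severity"] for d in dims) / len(dims)
    return 11 - avg_severity  # invert: severity 1 -> impact 10

def confidence(risks: dict) -> float:
    """Confidence (0.0-1.0) capped by the weakest evidence class across dimensions."""
    return min(EVIDENCE_WEIGHT[risks[d]["evidence_class"]] for d in risks)

def ease(risks: dict) -> float:
    """Ease (1-10) from feasibility: lower risk, higher ease."""
    return 11 - risks["feasibility"]["severity"]

def ice(risks: dict) -> float:
    return impact(risks) * confidence(risks) * ease(risks)

def rank(leaves: list[dict]) -> list[dict]:
    """Rank solution leaves by ICE score, highest first."""
    return sorted(leaves, key=lambda leaf: ice(leaf["four_risks"]), reverse=True)
```

Capping Confidence at the weakest evidence class is one plausible design choice; averaging across dimensions would be gentler but lets one untested risk hide behind three well-tested ones.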
For the top-ranked solutions, extract the highest-risk assumptions from the Four Risks assessment — these are what /mycelium:assumption-test should target first. Prioritize assumptions where importance is high but evidence is low.
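Continuing the sketch, one hypothetical way to surface that assumption, assuming each dimension also carries a 1-10 importance field:

```python
def riskiest_assumption(risks: dict) -> str:
    """Pick the dimension where importance is high but evidence is weak."""
    def gap(dim: str) -> float:
        r = risks[dim]
        # Scale evidence weight to 0-10 so it is comparable with importance.
        return r["importance"] - 10 * EVIDENCE_WEIGHT[r["evidence_class"]]
    worst = max(risks, key=gap)
    return f"{worst}: {risks[worst]['risk']}"
```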
After scoring and before acting, review for bias: are high-scoring items benefiting from availability bias, the IKEA effect, or anchoring?
| Solution | Value | Usability | Feasibility | Viability | I | C | E | ICE | Riskiest Assumption |
|----------|-------|-----------|-------------|-----------|---|---|---|-----|---------------------|
| ... | risk | risk | risk | risk | X | X | X | XXX | [what to test] |
Update .claude/canvas/opportunities.yml — write four_risks and ice_score per solution leaf.
Update .claude/canvas/gist.yml with idea ICE scores and confidence levels.
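A minimal write-back sketch using PyYAML, assuming a top-level `solutions` list keyed by solution name (gist.yml would follow the same pattern with idea-level scores):

```python
import yaml  # PyYAML

def write_scores(path: str, scored: dict[str, dict]) -> None:
    """Merge four_risks and ice_score into each solution leaf in place."""
    with open(path) as f:
        doc = yaml.safe_load(f)
    for leaf in doc.get("solutions", []):  # assumed top-level key
        update = scored.get(leaf["solution"])
        if update:
            leaf["four_risks"] = update["four_risks"]
            leaf["ice_score"] = update["ice_score"]
    with open(path, "w") as f:
        yaml.safe_dump(doc, f, sort_keys=False)

# write_scores(".claude/canvas/opportunities.yml", scored)
```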
ICE scores are susceptible to noise — unwanted variability where different sessions or assessors produce different scores for the same evidence. Unlike bias (systematic skew in one direction), noise is random scatter.
Detection: re-score the same evidence independently (a different session or a different assessor) and compare. If scores diverge by more than 1 point on any dimension, the gap is noise; investigate the scoring criteria before proceeding. Solo developers can re-score after a 24-hour break to catch temporal noise.
Mitigation: use structured assessment criteria (the Four Risks inputs above), apply scores independently before discussion, and anchor to evidence types rather than gut feel.
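A small sketch of the audit comparison, using the dimension names and shared 1-point threshold from the procedure above:

```python
def noise_audit(a: dict, b: dict, threshold: float = 1.0) -> dict[str, float]:
    """Compare two independent scoring sessions for the same evidence.
    Returns the dimensions whose scores diverge by more than `threshold`."""
    gaps = {d: abs(a[d] - b[d]) for d in ("impact", "confidence", "ease")}
    return {d: g for d, g in gaps.items() if g > threshold}

monday = {"impact": 7, "confidence": 0.5, "ease": 6}
tuesday = {"impact": 4, "confidence": 0.5, "ease": 6}
print(noise_audit(monday, tuesday))  # {'impact': 3} -> investigate scoring criteria
```

Note that on the 0.0-1.0 Confidence scale a gap over 1 point can never occur, so a proportionally smaller per-dimension threshold (say 0.1, an assumption worth tuning) would be needed there.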