Assesses delivery health using DORA + APEX metrics for software, and adapted metrics for content, AI tools, and services, based on `product_type` from `.claude/diamonds/active.yml`.
```
npx claudepluginhub haabe/mycelium --plugin mycelium
```

This skill uses the workspace's default tool permissions.
Assess delivery health using product-type-appropriate metrics. Check `product_type` from `.claude/diamonds/active.yml` to determine which assessment to run.
Product type routing (v0.11.0):
- Software products -> DORA + APEX assessment
- `content_course`, `content_publication`, `content_media` -> content metrics assessment
- `ai_tool` -> AI tool metrics assessment
- `service_offering` -> service metrics assessment
For software products, assess delivery health using Forsgren's five DORA metrics and LinearB's APEX AI-era metrics. Gather current metrics from CI/CD pipelines, deployment logs, and incident records.
Note: DORA expanded from 4 to 5 metrics. "MTTR" was renamed to "Failed Deployment Recovery Time" (FDRT) for precision — the original name was ambiguous with other mean-time-to-X metrics. "Reliability" was added as the 5th metric in the 2024 State of DevOps report.
- Deployment Frequency: How often does code reach production?
- Lead Time for Changes: How long from commit to production?
- Change Failure Rate: What percentage of deployments cause a failure?
- Failed Deployment Recovery Time (FDRT): How long to restore service after a failed deployment?
  - Formerly "Mean Time to Recovery (MTTR)." Renamed for precision: FDRT measures recovery from failed deployments specifically, not all incidents.
- Reliability: Does the software meet or exceed its reliability targets?
  - Added in DORA 2024. Measures operational reliability via SLOs/SLIs. Connects to the SRE metrics in Part 3.
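To make the four flow metrics concrete, here is a minimal sketch of how they could be computed from deployment records. The `Deploy` record shape and its field names are illustrative assumptions, not part of the skill; Reliability is measured separately via SLO attainment (see the SRE section below).

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Deploy:
    commit_at: datetime                    # first commit in the change
    deployed_at: datetime                  # when it reached production
    failed: bool                           # deployment caused a failure
    recovered_at: datetime | None = None   # service restored, if failed

def dora_flow_metrics(deploys: list[Deploy], window_days: int = 30) -> dict:
    """Compute the four DORA flow metrics over a window of deploy records."""
    failures = [d for d in deploys if d.failed]
    recoveries = [d.recovered_at - d.deployed_at
                  for d in failures if d.recovered_at]
    return {
        "deploy_frequency_per_day": len(deploys) / window_days,
        "lead_time_hours": median(
            (d.deployed_at - d.commit_at).total_seconds() / 3600
            for d in deploys),
        "change_failure_rate": len(failures) / len(deploys),
        # FDRT: median time to restore after a failed deployment
        "fdrt_hours": (median(r.total_seconds() / 3600 for r in recoveries)
                       if recoveries else None),
    }
```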
"Faster coding doesn't mean faster delivery."
Assess the four APEX pillars to detect AI-era delivery problems:
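As with the DORA sketch above, the pillar signals reduce to simple ratios once raw counts are available; the count names below are assumptions about what code-review and planning tooling can export, not a defined interface.

```python
def apex_signals(counts: dict) -> dict:
    """Illustrative APEX pillar signals from raw tooling counts."""
    return {
        # AI Leverage: how much AI-assisted output is actually kept
        "ai_acceptance_rate": counts["ai_accepted"] / counts["ai_suggested"],
        # Predictability: delivered-as-planned, and how much gets redone
        "planning_accuracy": counts["delivered_planned"] / counts["planned"],
        "rework_rate": counts["reworked_prs"] / counts["merged_prs"],
        # Flow Efficiency: average review wait per merged PR
        "review_wait_hours": counts["review_wait_hours"] / counts["merged_prs"],
    }
```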
## DORA + APEX Assessment
### DORA Metrics
| Metric | Current | Level | Target | Gap |
|--------|---------|-------|--------|-----|
| Deploy freq | ... | ... | ... | ... |
| Lead time | ... | ... | ... | ... |
| Change fail rate | ... | ... | ... | ... |
| FDRT | ... | ... | ... | ... |
| Reliability | ... | ... | ... | ... |
### APEX Metrics (AI-Era)
| Pillar | Status | Key Signal |
|--------|--------|-----------|
| AI Leverage | ... | AI acceptance rate: ...% |
| Predictability | ... | Planning accuracy: ...%, Rework rate: ...% |
| Flow Efficiency | ... | Cycle time: ..., Review wait: ... |
| Developer Experience | ... | Satisfaction: ..., Burnout: ... |
### Shifting Bottleneck Check
[Is coding faster but review/deployment slower? Yes/No]
[If yes: where is the new bottleneck?]
### DORA Bottleneck
[The metric most constraining overall performance]
### Value Stream Diagnosis (if bottleneck detected)
If DORA shows a bottleneck, map the value stream to identify WHERE in the flow the constraint lives:
- Run `/mycelium:canvas-update` to update `.claude/canvas/value-stream.yml` with current stage timings
- Apply Theory of Constraints Five Focusing Steps (Goldratt): Identify -> Exploit -> Subordinate -> Elevate -> Repeat
- Look for wait times >> process times (a sign of queuing, not capacity, problems)
- Look for high handoff counts (each handoff adds delay and information loss)
- Calculate flow efficiency: process_time / lead_time -- target >25% (a calculation sketch follows this list)
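A minimal sketch of that calculation, assuming each stage in `.claude/canvas/value-stream.yml` records `name`, `process_time_hours`, and `wait_time_hours` (the real canvas schema may differ):

```python
import yaml  # pip install pyyaml

def diagnose_value_stream(path: str = ".claude/canvas/value-stream.yml") -> None:
    """Compute flow efficiency and flag queue-dominated stages."""
    with open(path) as f:
        stages = yaml.safe_load(f)["stages"]

    process = sum(s["process_time_hours"] for s in stages)
    lead = sum(s["process_time_hours"] + s["wait_time_hours"] for s in stages)
    print(f"Flow efficiency: {process / lead:.0%} (target > 25%)")

    for s in stages:
        # Wait >> process signals a queuing problem, not a capacity problem.
        if s["wait_time_hours"] > 3 * s["process_time_hours"]:
            print(f"  Queue-dominated stage: {s['name']} "
                  f"(wait {s['wait_time_hours']}h, process {s['process_time_hours']}h)")
```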
### Top 3 Improvements
1. [specific action]
2. [specific action]
3. [specific action]
If SLIs/SLOs are defined in the `sre` section of `.claude/canvas/dora-metrics.yml`: error budgets are the social contract -- reliability earns the right to ship faster. Connects to BVSSH Safer.
If NOT defined: "Consider defining SLIs/SLOs to balance velocity with reliability."
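For reference, the error-budget arithmetic is small enough to sketch inline; the function and its parameters are illustrative, assuming an availability-style SLI with good/total event counts:

```python
def error_budget_remaining(slo_target: float, good: int, total: int) -> float:
    """Fraction of the error budget left for an availability-style SLI.

    slo_target: e.g. 0.999 means at most 0.1% of events may fail
    good/total: event counts over the current SLO window
    """
    allowed_bad = (1.0 - slo_target) * total   # the budget, in events
    bad = total - good
    return 1.0 - bad / allowed_bad             # negative means budget blown

# e.g. a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# with 400 observed failures, 60% of the budget remains.
```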
Always APPEND a `### DORA Assessment` or `### Delivery Metrics Assessment` entry to `.claude/harness/decision-log.md` recording the assessment findings.
Always update `.claude/canvas/dora-metrics.yml` with current measurements and a `last_measured` timestamp.
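The same update pattern applies to every canvas file in the sections below. A minimal sketch, assuming the canvas is a flat YAML mapping (the actual key layout inside `dora-metrics.yml` is not specified here):

```python
from datetime import datetime, timezone
import yaml  # pip install pyyaml

def update_canvas(path: str, measurements: dict) -> None:
    """Merge new measurements into a canvas file and stamp last_measured."""
    with open(path) as f:
        canvas = yaml.safe_load(f) or {}
    canvas.update(measurements)
    canvas["last_measured"] = datetime.now(timezone.utc).isoformat()
    with open(path, "w") as f:
        yaml.safe_dump(canvas, f, sort_keys=False)
```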
For `content_course`, `content_publication`, and `content_media` products. Read `.claude/canvas/content-metrics.yml`.
- Publication Cadence: How often does content reach the audience?
- Production Lead Time: Idea to published -- how long?
- Revision Rate: % of published content requiring significant revision?
- Completion Rate: % of planned content actually completed on schedule?
- Time to First Value (TTFV): How quickly does a buyer access and get value after purchase?
- Engagement & Drop-off: Course completion rate, satisfaction, return rate?
- Acquisition: Conversion rate, cost per acquisition, cart abandonment?
- Revenue Health: Refund rate, CLV, churn (subscriptions), NRR? (NRR is sketched after this section)
Update `.claude/canvas/content-metrics.yml` with current measurements and a `last_measured` timestamp.
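NRR recurs in the revenue-health check of every product type. As a reference, a minimal calculation; the function name and MRR figures are illustrative examples, not data from any canvas:

```python
def net_revenue_retention(start_mrr: float, expansion: float,
                          contraction: float, churned: float) -> float:
    """NRR over a period, counting only the starting customer cohort.

    A value above 1.0 means the existing base grew before any new sales.
    """
    return (start_mrr + expansion - contraction - churned) / start_mrr

# e.g. $10,000 starting MRR, +$1,500 expansion, -$500 contraction,
# -$800 churned: NRR = 10,200 / 10,000 = 1.02 (102%)
```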
For `ai_tool` products. Read `.claude/canvas/ai-tool-metrics.yml`.
- Eval Frequency: How often are prompts/models evaluated?
- Accuracy & Consistency: Are eval scores stable or improving?
- Safety Score: Red-team results -- are adversarial inputs handled?
- Bias Assessment: Last assessed when? Any demographic gaps found?
- Version Cadence: How often are prompt/model versions shipped?
- Regulatory Status: EU AI Act risk classification current?
- Time to First Value (TTFV): How quickly does a user get useful output after first access?
- Usage & Retention: DAU, task success rate, retention (7-day, 30-day)?
- Revenue Health: Refund rate, CLV, churn, NRR?
Update `.claude/canvas/ai-tool-metrics.yml` with current measurements and a `last_measured` timestamp.
For `service_offering` products. Read `.claude/canvas/service-metrics.yml`.
- Client Throughput: How many clients/engagements per period?
- Delivery Lead Time: Engagement start to delivery -- how long?
- Client Satisfaction: NPS, CSAT, retention rate, referral rate?
- Repeatability: Is the delivery workflow documented and templated? Score 1-5.
- Time to First Value (TTFV): How quickly does a client receive meaningful value after engaging?
- Acquisition: Conversion rate, cost per acquisition, proposal win rate?
- Revenue Health: Refund/dispute rate, CLV, churn (retainers), NRR?
Update `.claude/canvas/service-metrics.yml` with current measurements and a `last_measured` timestamp.