From design-system-ops
Generates design system adoption reports separating coverage from active usage across design (Figma) and engineering (npm), with team trends, risk flags, and auto-pulled metrics.
`npx claudepluginhub murphytrueman/design-system-ops`

This skill uses the workspace's default tool permissions.
A skill for producing a design system adoption report that distinguishes coverage (who has access and can use the system) from adoption (who is actively using it), with trend direction and risk flags for teams where adoption is low or declining.
Coverage and adoption are not the same thing, and treating them as equivalent is one of the most common ways design system reports mislead. A system available to twenty product teams has 100% coverage. If only eight of those teams are actively using it, adoption is 40%. Both numbers are true. Only one of them tells you how the system is actually performing.
This skill produces a report that holds both numbers separately and distinguishes between them throughout. It also separates adoption across two dimensions that are frequently conflated: design adoption (are designers using the Figma library?) and engineering adoption (is the code being consumed from the system?). High design adoption with low engineering adoption is a specific kind of problem — the design side is working but the handoff is broken. The reverse is also a specific kind of problem.
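To make the distinction concrete, here is a small worked sketch in Python. The team records are invented for illustration; the point is that coverage, design adoption, and engineering adoption come from the same team list but answer different questions and are reported separately.

```python
# Illustrative only: hypothetical team records, not real data.
# (team, has_access, active_in_figma_library, consuming_code_in_shipped_work)
teams = [
    ("checkout", True,  True,  True),
    ("search",   True,  True,  False),
    ("billing",  True,  False, False),
    ("admin",    False, False, False),
]

def pct(numerator, denominator):
    return round(100 * numerator / denominator) if denominator else 0

coverage        = pct(sum(access for _, access, _, _ in teams), len(teams))
design_adoption = pct(sum(figma for _, _, figma, _ in teams), len(teams))
eng_adoption    = pct(sum(code for _, _, _, code in teams), len(teams))

print(coverage, design_adoption, eng_adoption)  # 75 50 25: three different numbers
```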
Before producing output, check for a .ds-os-config.yml file in the project root. If present, load:
- `system.component_count` — informs small-system behaviour
- `system.maturity_level` — informs adoption expectations calibration (see Step 1b)
- `integrations.*` — enables auto-pull for adoption data (see below)
- `recurring.*` — enables trend comparison against previous reports

If integrations are configured in .ds-os-config.yml, pull data automatically:
npm registry (`integrations.npm.enabled: true`):
- Download counts for `integrations.npm.package_name` over the reporting period
- `integrations.npm.scoped_packages` — note these are directional signals (see monorepo caveat in component-audit)

Figma MCP (`integrations.figma.enabled: true`):
- Usage data for `integrations.figma.file_key`, if available via the Figma REST API

GitHub (`integrations.github.enabled: true`):
- Code search via `gh api search/code`

Documentation platform (`integrations.documentation.enabled: true`):
If an integration fails, log it and proceed with manual data gathering. Do not block the adoption report on integration availability.
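As a rough sketch of how the auto-pull and fallback could work (not part of the skill itself): the config keys follow the names above, the package name and directory values are hypothetical, PyYAML is assumed for parsing, and the public npm downloads endpoint is assumed to be reachable.

```python
# Minimal auto-pull sketch. Example .ds-os-config.yml (hypothetical values):
#   system: {component_count: 24, maturity_level: 3}
#   integrations:
#     npm: {enabled: true, package_name: "@acme/design-system"}
#   recurring: {output_directory: "reports/adoption", retain_count: 4}
import json
import urllib.request

import yaml  # PyYAML


def load_config(path=".ds-os-config.yml"):
    try:
        with open(path) as f:
            return yaml.safe_load(f) or {}
    except FileNotFoundError:
        return {}  # no config file: gather all data manually


def npm_downloads(package, period="last-month"):
    """Download counts are a directional signal, not proof of shipped usage."""
    url = f"https://api.npmjs.org/downloads/point/{period}/{package}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["downloads"]


config = load_config()
npm_cfg = config.get("integrations", {}).get("npm", {})
downloads = None
if npm_cfg.get("enabled"):
    try:
        downloads = npm_downloads(npm_cfg["package_name"])
    except Exception as exc:
        # Log and continue: never block the report on integration availability.
        print(f"npm auto-pull failed ({exc}); falling back to manual data gathering")
# Figma, GitHub, and documentation integrations would follow the same pattern.
```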
If recurring is configured in .ds-os-config.yml:
- Look for previous reports in `recurring.output_directory`.
- Retain the number of past reports set by `recurring.retain_count`.

Ask for or confirm (skip questions already answered by auto-pull):
Before proceeding, audit which adoption signals are available and their reliability:
Direct signals (measured data):
Indirect signals (structural inference):
Document which signals are available and which are unavailable. Adoption assessment is only as strong as the signals used — if only one signal is available, note that the adoption assessment is based on limited data and may be incomplete.
If data is limited: the adoption report can be conducted as a structured assessment based on available signals rather than hard metrics. Note clearly in the output where figures are estimated rather than measured.
Before calculating any metrics, calibrate the measurement against the system's maturity level and the current reporting period. Raw adoption percentages are misleading without context.
Maturity-appropriate adoption expectations:
| Maturity level | Expected adoption range | Interpretation guide |
|---|---|---|
| Level 1 (Ad-hoc) | 0–20% | Any adoption is a positive signal. Focus on whether early adopters are satisfied and whether the system is removing friction for them. Growth potential is high. |
| Level 2 (Managed) | 20–50% | Growth is the key metric. Is adoption increasing quarter-over-quarter? This is the growth phase — adoption should be clearly trending up. |
| Level 3 (Systematic) | 50–75% | Breadth matters. Are most teams using the system for most patterns? This is the critical transition phase — crossing 50% indicates the system is now central infrastructure. |
| Level 4 (Measured) | 75–90% | Depth matters. Are teams using the system deeply, not just superficially? At this level, drops in adoption are concerning. Focus on why teams are not using the system for the remaining 10–25% of patterns. |
| Level 5 (Optimised) | 85%+ | Maintenance matters. Is adoption holding steady without active promotion? At this level, the system should sustain adoption with minimal ongoing messaging. Declines are the primary concern. |
Calibration note for adoption interpretation: when writing the adoption picture synthesis in Step 5, frame the adoption percentage against the expected range for the system's maturity level. A Level 2 system at 35% adoption is healthy and growing — the focus is on acceleration. A Level 4 system at 35% adoption has a structural problem — the focus is on diagnosis. The same percentage means very different things depending on context.
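One way to apply the table above mechanically is a small helper that frames a raw adoption percentage against the expected range for the system's maturity level. The ranges are copied from the table; the function name and the wording of the returned strings are illustrative.

```python
# Expected adoption ranges per maturity level, taken from the table above.
EXPECTED_RANGE = {1: (0, 20), 2: (20, 50), 3: (50, 75), 4: (75, 90), 5: (85, 100)}

def frame_adoption(maturity_level: int, adoption_pct: float) -> str:
    low, high = EXPECTED_RANGE[maturity_level]
    if adoption_pct < low:
        verdict = "behind expectations"
    elif adoption_pct > high:
        verdict = "ahead of expectations"
    else:
        verdict = "within expectations"
    return f"{verdict} for Level {maturity_level} (expected {low}-{high}%)"

# The same percentage reads very differently at different maturity levels:
print(frame_adoption(2, 35))  # within expectations for Level 2 (expected 20-50%)
print(frame_adoption(4, 35))  # behind expectations for Level 4 (expected 75-90%)
```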
Before calculating any metrics, define what "coverage" and "adoption" mean for this reporting period.
Coverage = teams that have access to the system and can use it, whether they are using it or not.
Signals that indicate coverage:
Coverage is the easy metric. A system available to twenty product teams has 100% coverage if all twenty have access. The challenge is actual usage.
Adoption = teams that are actively consuming design system components in shipped product work, not just installed or in explorations.
Before calculating any adoption metrics, align on what "adoption" means for this context. Different definitions produce different numbers — and comparing reports that use different definitions creates misleading trends.
Adoption definition worksheet:
- What counts as "using the system"?
- What counts as "design adoption"?
- What counts as "engineering adoption"?
- What is the threshold for "partial" vs "full" adoption?
Document the chosen definitions at the top of the report. Use the same definitions for every subsequent report to enable meaningful trend comparison.
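If it helps, the agreed definitions can be captured as a small structured record that is repeated verbatim at the top of every report. The field names and example wording below are placeholders, not prescribed definitions.

```python
# Hypothetical example of a documented adoption definition; every value is a
# placeholder to be replaced with the definitions agreed for this organisation.
ADOPTION_DEFINITION = {
    "using_the_system":     "system components rendered in shipped product work",
    "design_adoption":      "primary Figma files consume the published library",
    "engineering_adoption": "production code imports the system package(s)",
    "partial_vs_full":      "full = system used for the large majority of applicable patterns",
}
```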
Design adoption and engineering adoption are independent. Common patterns:
High design adoption, low engineering adoption: The design side is working, but the engineering handoff is broken. The components are being designed with the system, but engineers are not implementing them from the code library. Investigate: are the code components available? Are they what designers think they are? Is there a documentation or discovery gap?
Low design adoption, high engineering adoption: Engineers are adopting the system's code, but designers are not using the Figma library. Investigate: is the design library up to date? Is it discoverable? Are there design tokens being used that are not reflected in the code?
Both low: System has not crossed the adoption threshold. The focus is on why — is there a blocker that explains both, or are design and engineering facing different problems?
For each team in scope, determine:
Sources for per-team data:
For each team, assign a status:
Actively adopting: Team is shipping products with system components regularly. Usage is recent (within the last 30 days). Both design and engineering are engaged or one is engaged and the other is appropriately delegated.
Partially adopting: Team has shipped with system components, but adoption is inconsistent. May be using the system for some patterns but building locally for others. Engagement is intermittent.
Not adopting: Team has access but is not actively using the system in shipped work. Usage may have been exploratory or one-time.
No engagement: Team has no known contact with the system. May not be aware it exists.
Document the assessment criteria you use so that this assessment can be repeated in future reporting periods.
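One way to make the criteria repeatable is to encode them. The sketch below uses the 30-day recency window from the definitions above; the signal fields and the treatment of "regular" versus intermittent usage are simplifying assumptions to adjust locally.

```python
from dataclasses import dataclass

@dataclass
class TeamSignals:
    known_contact: bool         # any known contact with the system
    shipped_with_system: bool   # system components present in shipped work
    days_since_last_usage: int  # recency of the most recent usage signal
    regular_usage: bool         # regular rather than intermittent or one-off usage

def adoption_status(t: TeamSignals) -> str:
    if not t.known_contact:
        return "No engagement"
    if t.shipped_with_system and t.days_since_last_usage <= 30 and t.regular_usage:
        return "Actively adopting"
    if t.shipped_with_system:
        return "Partially adopting"
    return "Not adopting"
```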
Flag teams where adoption is declining, where there has been no engagement for an extended period, or where known blockers exist.
For each at-risk team, identify:
At-risk teams are the most actionable section of the report. Adoption work is most effective early — a team that has disengaged for six months is significantly harder to re-engage than a team that has been quiet for six weeks.
Based on what is known from team interactions, support requests, and survey data: what are the most commonly cited reasons for non-adoption or partial adoption?
Group blockers into categories:
For each category: how many teams or incidents cite this as a blocker, and what would address it.
Flag if one blocker category dominates. If "missing components" is cited by 80% of at-risk teams, that is a different problem than if "awareness gaps" dominates. The remediation is category-specific.
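A quick way to check for a dominant category is to count citations per blocker category across the teams that reported one. The "more than half" threshold used here is an assumption, not a rule from this skill, and the citation data is hypothetical.

```python
from collections import Counter

# Hypothetical citations gathered from team interviews and support requests.
citations = [
    ("checkout", "missing components"),
    ("search",   "missing components"),
    ("growth",   "missing components"),
    ("billing",  "documentation gaps"),
    ("platform", "integration friction"),
]

by_category = Counter(category for _, category in citations)
teams_reporting = len({team for team, _ in citations})

for category, count in by_category.most_common():
    print(f"{category}: cited by {count} team(s)")

top_category, top_count = by_category.most_common(1)[0]
if top_count > teams_reporting / 2:
    print(f"'{top_category}' dominates; remediation should target it first")
```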
- Period: [reporting period]
- Date: [date]
- Previous report: [link or date, if applicable]
- Adoption definition: [the agreed definition from Step 2]
- Maturity level: [current system maturity — Level 1/2/3/4/5]
| Metric | This period | Previous period | Change | Trend |
|---|---|---|---|---|
| Teams in scope | [n] | [n] | [+/-] | |
| Coverage | [%] | [%] | [+/-] | |
| Active adoption | [%] | [%] | [+/-] | |
| Design adoption | [%] | [%] | [+/-] | |
| Engineering adoption | [%] | [%] | [+/-] | |
| At-risk teams | [n] | [n] | [+/-] | |
| | Design | Engineering | Combined |
|---|---|---|---|
| Teams in scope | [n] | [n] | [n] |
| Teams with access (coverage) | [n] ([%]) | [n] ([%]) | [n] ([%]) |
| Teams actively adopting | [n] ([%]) | [n] ([%]) | [n] ([%]) |
| Change from last period | [+/- n] | [+/- n] | [+/- n] |
One paragraph synthesising the adoption state. Include:
Example: "Design adoption is growing (55%, up from 45%) and ahead of Level 3 expectations. Engineering adoption is stable (42%, no change) and slightly below expected growth for this maturity level. The design side is working — designers are actively using the Figma library. The gap is in the engineering handoff: code components are available but not widely integrated into shipping products."
State the overall direction in one sentence: growing, stable, declining, or mixed. A mixed signal (design adoption growing while engineering adoption is flat) is worth naming explicitly.
If this is the first report: no trend direction is available. Flag that this report establishes the baseline and that trend analysis will be available from the next reporting period.
Frame the trend against maturity-level expectations. What is the expected direction for this system at this maturity level? Is the actual trend ahead of or behind expectations?
Document which signals informed this report:
Example: "Report is based on npm download statistics (reliable), Figma library usage (reliable), and interviews with 8 of 12 teams (partial coverage). Figures for the 4 teams not interviewed are estimated from code import analysis."
For each team in scope:
| Team | Design adoption | Engineering adoption | Status | Notes |
|---|---|---|---|---|
| [Team name] | Active / Partial / Not adopting | Active / Partial / Not adopting | On track / At risk / No engagement | [relevant context] |
For teams with "Partial" adoption: note which areas of the system are being used and which are not. Partial adoption often indicates that the system does not yet serve a specific use case this team has, which is actionable information.
Example: "Team Checkout — Partial / Active: Using form components and buttons (40% of shipping patterns). Building custom payment flow locally (not in system scope). Engagement high. No blockers identified."
Flag teams where adoption is declining, where there has been no engagement for an extended period, or where known blockers exist.
For each at-risk team:
| Team | Signal | Likely cause | Recommended action |
|---|---|---|---|
| [Team name] | Declining usage (from 60% to 20% in last quarter) | System does not have date picker; team building locally | Assess date picker as contribution candidate |
| [Team name] | No recent engagement (last contact 8 weeks ago) | Unknown | Reach out for a conversation |
| [Team name] | Active parallel solution being built | Framework incompatibility with system | Assess whether to support this team's framework or document local exception |
At-risk teams are the most actionable section of the report.
Which teams do not yet have access to the system? This is a different category from teams that have access but are not adopting — these teams have not been onboarded at all.
For each uncovered team: whether they have expressed interest, what their likely use case would be, and what would be required to extend coverage.
Example: "Team Platform — Not in scope. High interest in using design system for admin tools. Would require: framework support for Django templates (currently system supports React only), evaluation of how admin-specific patterns map to existing component library."
Based on what is known from team interactions, support requests, and survey data: what are the most commonly cited reasons for non-adoption or partial adoption?
Present as a table:
| Blocker category | Cited by | Example | Remediation |
|---|---|---|---|
| Missing components | 6 teams | Date picker, data table, drag-and-drop | Component audit + contribution planning |
| Documentation gaps | 3 teams | "How do I use the Button component in our framework?" | Improve component doc search and examples |
| Integration friction | 4 teams | Installation process, build integration | Simplify setup, provide templates |
| Awareness gaps | 2 teams | Teams did not know system existed | Onboarding outreach |
| Tooling misalignment | 1 team | System uses Vue, team uses Svelte | Assess Svelte support or document local exception |
For each category: how many teams or incidents cite this as a blocker, and what would address it.
Flag if one blocker category dominates. Focus remediation effort on the category affecting the most teams.
Prioritised actions based on the report findings:
1. Immediate actions for at-risk teams
2. Adoption blocker remediation in priority order
3. Coverage extension opportunities
4. Metrics improvements
At the staff level, adoption reports should include infrastructure reliability signals alongside usage metrics. These help explain WHY adoption is where it is.
Release reliability:
Documentation currency:
Integration friction score:
Frame these as leading indicators: reliability metrics predict future adoption trends. A system with two breaking changes, no migration paths, and stale documentation will lose adoption even if current numbers look healthy.
If the system has AI-ready metadata (structured descriptions, machine-readable manifests, MCP integration):
This is a forward-looking metric. Even if AI tooling adoption is currently zero, documenting AI readiness positions the system for the next wave of tooling.
For systems this size, the coverage-vs-adoption framing still works but the metrics change. Coverage is likely 100% — if you have three components, every team that uses the system has access to all of them. Adoption is better measured as scope coverage: what percentage of the team's actual interface needs does the system serve? A system with 3 components that covers 80% of a team's UI patterns has stronger adoption than a 30-component system covering 20%.
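As a sketch of the scope-coverage calculation (the pattern inventory below is hypothetical):

```python
# Hypothetical inventory of the consuming team's interface patterns and where each comes from.
team_patterns = {
    "buttons": "system",
    "form fields": "system",
    "modals": "system",
    "navigation": "local",
    "payment flow": "local",
}

served = sum(1 for source in team_patterns.values() if source == "system")
scope_coverage = round(100 * served / len(team_patterns))
print(f"The system serves {scope_coverage}% of this team's interface patterns")  # 60%
```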
The team-by-team breakdown may reduce to a single team or two — that is fine, but go deeper per team: which patterns are they using the system for, and which are they building locally?
The "at-risk teams" section may not apply if there is only one consuming team. Replace it with an "unserved needs" section listing the interface patterns the team is building outside the system. These are your roadmap.
For reporting, treat partial adoption as "using the system for [%] of patterns" rather than "using 3 of 5 components."