From ai-adoption-playbook
Generates a one-page AI adoption scorecard with metrics on tool access, weekly usage, and workflow changes for engineering teams. For board decks or leadership updates.
```shell
npx claudepluginhub adimango/ai-adoption-playbook
```

This skill uses the workspace's default tool permissions.
Produces a one-page adoption snapshot with hard numbers — who's using what, how often, and what's changed. This is a measurement tool, not a diagnostic. It reports the current state without analyzing why or recommending what to do next.
Core principle: Adoption is behavior change, not tool access. This scorecard measures what people actually do, not what licenses they have.
If not available from prior skills, ask for:
| Level | What it means | How to count |
|---|---|---|
| Access | Has a license or account | Count of provisioned seats |
| Usage | Opens the tool at least weekly | Count from admin dashboard or founder estimate |
| Adoption | Work has visibly changed — tasks start differently, output is different | Count of people whose workflow has shifted |
Most founders report access as if it were adoption. This scorecard separates the three so the board sees reality.
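The three-level separation above is simple to compute once each person is tagged with the three booleans. A minimal sketch (the `Engineer` fields and helper names are illustrative, not part of the skill):

```python
from dataclasses import dataclass

@dataclass
class Engineer:
    name: str
    has_access: bool        # Access: provisioned seat or account
    uses_weekly: bool       # Usage: opens the tool at least weekly
    workflow_changed: bool  # Adoption: tasks start differently, output is different

def level_counts(team: list[Engineer]) -> dict[str, tuple[int, int]]:
    """Return (count, % of team) for each of the three levels."""
    n = len(team)
    def tally(pred):
        c = sum(1 for e in team if pred(e))
        return c, round(100 * c / n)
    return {
        "access":   tally(lambda e: e.has_access),
        "usage":    tally(lambda e: e.uses_weekly),
        "adoption": tally(lambda e: e.workflow_changed),
    }

team = [
    Engineer("a", True, True, True),
    Engineer("b", True, True, False),
    Engineer("c", True, False, False),
    Engineer("d", False, False, False),
]
print(level_counts(team))
# access: (3, 75), usage: (2, 50), adoption: (1, 25)
```

Note that the three counts are strictly nested by definition (adoption implies usage implies access), which is what keeps the scorecard honest: each level can only shrink as the bar rises.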
Produce the scorecard in this exact format:
## AI Adoption Scorecard
**Company:** [name] | **Team:** [size] | **Date:** [date]
**Previous scorecard:** [date or "first measurement"]
### Adoption at a Glance
| Metric | Count | % of Team |
|--------|:-----:|:---------:|
| Team size (engineering) | [X] | 100% |
| Have AI tool access | [X] | [X]% |
| Use AI tools weekly | [X] | [X]% |
| Work has visibly changed | [X] | [X]% |
### Tool Breakdown
| Tool | Seats | Weekly Users | Primary Use Case | Cost/Month |
|------|:-----:|:------------:|-----------------|:----------:|
| [Tool 1] | [X] | [X] | [use case] | $[X] |
| [Tool 2] | [X] | [X] | [use case] | $[X] |
| **Total** | **[X]** | **[X]** | | **$[X]** |
### Unused Capacity
- [X] seats provisioned but not used weekly ($[X]/month)
- [Tools or use cases attempted and abandoned, if any]
### Adoption by Role
| Role | Total | Using AI Weekly | Notes |
|------|:-----:|:---------------:|-------|
| Senior engineers | [X] | [X] | |
| Mid-level engineers | [X] | [X] | |
| Junior engineers | [X] | [X] | |
| Engineering managers | [X] | [X] | |
| Non-engineering | [X] | [X] | |
### Use Case Coverage
| Workflow Stage | AI Used? | Tool | Who |
|---------------|:--------:|------|-----|
| Writing code | [yes/no] | | |
| Code review | [yes/no] | | |
| Testing | [yes/no] | | |
| Documentation | [yes/no] | | |
| Debugging | [yes/no] | | |
| Planning/architecture | [yes/no] | | |
### Change Since Last Measurement
[If prior scorecard exists: what moved, what didn't. If first measurement: "Baseline established."]
### Team Stability Note
[If team stability is not "Stable": 1-2 sentences explaining how the restructuring affects these numbers. E.g., "Sales team went from 20 to 15 after restructuring. 3 of the 5 departures were non-adopters — adoption percentage may overstate real behavior change." If team is stable, omit this section.]
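The fixed-format sections above can be rendered mechanically from raw counts, which removes any temptation to round in a flattering direction. A minimal sketch of two of the sections (function names, example numbers, and the per-seat cost are illustrative):

```python
def glance_table(team_size: int, access: int, weekly: int, changed: int) -> str:
    """Render the 'Adoption at a Glance' table from raw counts."""
    pct = lambda x: f"{round(100 * x / team_size)}%"
    rows = [
        ("Team size (engineering)", team_size, "100%"),
        ("Have AI tool access", access, pct(access)),
        ("Use AI tools weekly", weekly, pct(weekly)),
        ("Work has visibly changed", changed, pct(changed)),
    ]
    lines = ["| Metric | Count | % of Team |", "|--------|:-----:|:---------:|"]
    lines += [f"| {m} | {c} | {p} |" for m, c, p in rows]
    return "\n".join(lines)

def unused_capacity(seats: int, weekly_users: int, cost_per_seat: float) -> str:
    """Render the idle-seat line for the 'Unused Capacity' section."""
    idle = seats - weekly_users
    return f"- {idle} seats provisioned but not used weekly (${idle * cost_per_seat:.0f}/month)"

print(glance_table(20, 12, 7, 3))
print(unused_capacity(12, 7, 19))
# "- 5 seats provisioned but not used weekly ($95/month)"
```

The same pattern extends to the other tables: every cell is derived from a count, a name, or a yes/no, never typed in as free text.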
- **Symptom:** "60% of our team has Copilot" presented as adoption. **Consequence:** The board thinks adoption is further along than it is, and the reality gap grows. **Fix:** Separate access, usage, and adoption: "60% have access, 33% use it weekly, 15% say it's changed how they work."
- **Symptom:** The scorecard includes "next steps" or "areas for improvement." **Consequence:** Measurement gets mixed with prescription; the scorecard should be a mirror, not a coach. **Fix:** Just the numbers. Other skills handle what to do about it.
- **Symptom:** "Adoption is progressing well" or "team is engaged." **Consequence:** This means nothing; the board can't act on vibes. **Fix:** Every cell in the scorecard is a number, a name, or a yes/no.
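The access/usage/adoption fix is easiest to enforce by generating the summary sentence directly from the counts. A minimal sketch (the function name and example numbers are illustrative):

```python
def honest_summary(team_size: int, access: int, weekly: int, changed: int) -> str:
    """One-line summary that always separates access, usage, and adoption."""
    pct = lambda x: round(100 * x / team_size)
    return (f"{pct(access)}% have access, {pct(weekly)}% use it weekly, "
            f"{pct(changed)}% say it's changed how they work.")

print(honest_summary(20, 12, 7, 3))
# "60% have access, 35% use it weekly, 15% say it's changed how they work."
```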
- fluency-assessment — deeper diagnostic that scores fluency across three pillars
- roi-calculator — uses adoption data to calculate financial return
- quarterly-review — re-runs this scorecard and compares to previous period