Guides founders to pick the highest-probability first AI use case for visible results in 2-4 weeks, using workflow questions, pain-point discovery, and a team assessment, and produces a scored comparison of 2-3 options.
```
npx claudepluginhub adimango/ai-adoption-playbook
```

This skill uses the workspace's default tool permissions.
Helps a founder choose the right first AI use case for their team. The goal is maximum visible wins with minimum friction — not the theoretically best use case, but the one most likely to actually succeed and shift attitudes. Produces a scored comparison of 2-3 use cases with a clear recommendation.
Core principle: The first use case isn't about productivity. It's about proof. Pick the one that creates believers.
```dot
digraph usecase {
"Review scorecard + blockers" [shape=box];
"Understand workflow (3-4 Qs)" [shape=box];
"Identify pain points (2-3 Qs)" [shape=box];
"Assess team dynamics (2-3 Qs)" [shape=box];
"Score candidate use cases" [shape=box];
"Present 2-3 options with recommendation" [shape=box];
"Founder picks" [shape=diamond];
"Produce use case brief" [shape=box];
"Route to next skill" [shape=doublecircle];
"Review scorecard + blockers" -> "Understand workflow (3-4 Qs)";
"Understand workflow (3-4 Qs)" -> "Identify pain points (2-3 Qs)";
"Identify pain points (2-3 Qs)" -> "Assess team dynamics (2-3 Qs)";
"Assess team dynamics (2-3 Qs)" -> "Score candidate use cases";
"Score candidate use cases" -> "Present 2-3 options with recommendation";
"Present 2-3 options with recommendation" -> "Founder picks";
"Founder picks" -> "Produce use case brief" [label="chosen"];
"Founder picks" -> "Present 2-3 options with recommendation" [label="none fit, adjust"];
"Produce use case brief" -> "Route to next skill";
}
```
Reference the fluency scorecard and blocker report if available. If not, ask for a quick summary.
"Let's find the right place to start. I need to understand your team's workflow, where time gets wasted, and who's most likely to try something new. A few questions, then I'll give you 2-3 options scored against what matters for a first win."
You need to know how work actually flows through the team to find where AI fits naturally.
Questions (ask one at a time):
Find the specific friction points where AI could help.
Questions:
Understand who would actually try a new tool and who needs convincing.
Questions:
Based on what you've learned, identify 2-3 candidate use cases and score them against five criteria:
| Criterion | What it means | Why it matters for a first use case |
|---|---|---|
| Visibility | Will the result be seen by others? | First use case needs witnesses. Hidden wins don't spread. |
| Time to result | How fast will someone see value? | Must show results in 2-4 weeks or momentum dies. |
| Friction | How hard is it to start? | Every setup step loses people. Lower friction = higher adoption. |
| Skeptic-resistance | Can skeptics dismiss the result? | If the senior engineer can wave it away, it didn't work. |
| Failure cost | What happens if it doesn't work? | First use case must be safe to fail. Low stakes. |
Bonus lens: Does this eliminate blank-page work?
The highest-impact first use cases remove tasks that currently start from zero. If engineers are writing boilerplate, drafting PRs from scratch, or creating test skeletons by hand — that's blank-page work. AI eliminates it by providing a starting point that humans refine.
When scoring candidates, favor use cases where AI replaces "start from nothing" with "start from a draft." These produce the most visceral time savings and are hardest for skeptics to dismiss — the before/after is obvious.
Score each candidate 1-5 on each criterion. Higher = better.
Common use case candidates and their typical profiles:
| Use case | Visibility | Time to result | Friction | Skeptic-resistance | Failure cost |
|---|---|---|---|---|---|
| AI-assisted code review | 4 | 4 | 5 | 4 | 5 |
| Test generation | 3 | 3 | 3 | 3 | 4 |
| PR descriptions/summaries | 5 | 5 | 5 | 2 | 5 |
| Documentation drafts | 3 | 4 | 4 | 2 | 5 |
| Bug investigation/debugging | 4 | 3 | 3 | 4 | 4 |
| Code migration/refactoring | 3 | 2 | 2 | 4 | 2 |
| Boilerplate/scaffolding | 2 | 5 | 4 | 1 | 5 |
These are defaults. Adjust scores based on what you learned about this specific team. A team that hates writing docs will score documentation higher on pain-point relief. A team with flaky tests will score test generation higher.
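To make the scoring step concrete, here is a minimal Python sketch seeded with the default profiles from the table above. It is an illustration, not part of the skill: the plain unweighted sum is an assumption, and the numbers should be adjusted to the specific team before ranking.

```python
# Minimal sketch: rank candidate first use cases by summing the five criteria.
# Column order: visibility, time to result, friction, skeptic-resistance, failure cost.
# Default profiles come from the table above; adjust per team before ranking.
defaults = {
    "AI-assisted code review":     [4, 4, 5, 4, 5],
    "Test generation":             [3, 3, 3, 3, 4],
    "PR descriptions/summaries":   [5, 5, 5, 2, 5],
    "Documentation drafts":        [3, 4, 4, 2, 5],
    "Bug investigation/debugging": [4, 3, 3, 4, 4],
    "Code migration/refactoring":  [3, 2, 2, 4, 2],
    "Boilerplate/scaffolding":     [2, 5, 4, 1, 5],
}

def rank(candidates: dict[str, list[int]]) -> list[tuple[str, int]]:
    """Return (name, total) pairs sorted by total score; higher is better, max 25."""
    return sorted(((name, sum(scores)) for name, scores in candidates.items()),
                  key=lambda pair: pair[1], reverse=True)

for name, total in rank(defaults)[:3]:
    print(f"{name}: {total}/25")
```

An unweighted sum treats all five criteria as equally important; if visibility and skeptic-resistance deserve extra weight for a particular team, multiplying those columns before summing is a one-line change.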
Present 2-3 use cases with scores and a clear recommendation. Lead with your recommendation and explain why.
"Based on what you've told me, here are three options. I'd go with Option A, and here's why."
Present each option briefly:
Then ask: "Which of these feels right for your team?"
Once the founder picks, produce the brief.
**Symptom:** You suggest "code review" after hearing the company name and team size. **Consequence:** Generic recommendation that may not fit their specific workflow or blockers. **Fix:** Complete all discovery questions first. The best use case depends on their bottlenecks, team dynamics, and what's already been tried.

**Symptom:** Founder wants to start with "AI-powered feature for customers" or "rewrite our CI pipeline with AI." **Consequence:** Too complex, too long, too risky. Failure sets adoption back months. **Fix:** "That could be a great second or third use case. For the first one, we want something that proves value in 2-4 weeks with minimal risk. Let's start smaller and build credibility for that bigger project."

**Symptom:** The use case with the highest theoretical ROI wins. **Consequence:** High-ROI use cases are often high-friction and hard to demonstrate. The first use case needs to create believers, not maximize efficiency. **Fix:** Score on all five criteria, not just time savings. Visibility and skeptic-resistance matter more for the first use case than raw productivity.

**Symptom:** Recommending code completion when the blocker report says seniors have identity-based resistance to AI writing code. **Consequence:** The use case triggers the exact barrier that's already blocking adoption. **Fix:** Cross-reference the blocker report. If psychological barriers are high, favor use cases where AI reviews/assists rather than generates. If integration is the issue, favor use cases with zero setup.
Produce the use case brief in this exact format:
## First Use Case Brief
**Company:** [name] | **Date:** [date]
### Chosen Use Case
[Name of the use case — e.g., "AI-assisted code review on pull requests"]
### Why This One
[2-3 sentences tying the choice to this team's specific workflow, pain points, and dynamics]
### Criteria Scores
| Criterion | Score |
|-----------|:-----:|
| Visibility | X/5 |
| Time to result | X/5 |
| Friction | X/5 |
| Skeptic-resistance | X/5 |
| Failure cost | X/5 |
| **Total** | **X/25** |
### Who Runs the Pilot
[Specific role or person — the champion, a willing team, etc.]
### Quality Bar for Stage 1
[What "good enough" means for AI output in this use case. Set it low on purpose: "good enough to move work forward" not "ready to publish." Perfectionism kills early adoption — the team needs to see AI as a useful starting point, not a replacement for their judgment.]
### Baseline (Before Pilot Starts)
[The current number for the metric you'll use to prove success — e.g., "Average PR review time: 45 minutes. Engineers using AI tools weekly: 2 of 14." Without this, there's nothing to compare against at Week 4.]
### What Success Looks Like in 2 Weeks
[One concrete, measurable outcome — e.g., "AI catches at least 3 real issues in PRs that humans confirm were valid"]
### What Success Looks Like in 4 Weeks
[One concrete, measurable outcome focused on **repeat use** — e.g., "5+ engineers voluntarily use it on every PR, not just when reminded." Repeat use is the signal that adoption is real. One-off demos and novelty don't count.]
### Alternatives Considered
- [Option B — one line on why it wasn't chosen]
- [Option C — one line on why it wasn't chosen]
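The Baseline, 2-week, and 4-week sections above only prove anything if the metric is computed the same way before and after the pilot. Here is a hedged Python sketch of that computation, assuming you can export per-PR timestamps and an AI-usage flag from your own tooling; the record fields are illustrative, not a real API.

```python
from datetime import datetime
from statistics import mean

# Hypothetical export of PR data from your own tooling; field names are illustrative.
prs = [
    {"opened": "2025-01-06T09:00", "first_review": "2025-01-06T10:05", "used_ai": True},
    {"opened": "2025-01-06T11:00", "first_review": "2025-01-06T11:20", "used_ai": False},
    {"opened": "2025-01-07T14:00", "first_review": "2025-01-07T15:30", "used_ai": False},
]

def minutes(opened: str, reviewed: str) -> float:
    """Minutes between two ISO timestamps."""
    return (datetime.fromisoformat(reviewed) - datetime.fromisoformat(opened)).total_seconds() / 60

# Baseline: average time from PR opened to first review, in minutes.
baseline_review_minutes = mean(minutes(p["opened"], p["first_review"]) for p in prs)

# Repeat-use signal: share of PRs where AI assistance was actually used.
ai_share = sum(p["used_ai"] for p in prs) / len(prs)

print(f"Avg PR review time: {baseline_review_minutes:.0f} min; AI used on {ai_share:.0%} of PRs")
```

Run it once before the pilot to fill in the Baseline section, then again at weeks 2 and 4 against the same fields so the comparison is like-for-like.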
After the use case is picked:
| Situation | Recommended next skill |
|---|---|
| Founder wants a full rollout plan | 90-day-plan-builder |
| Founder needs board-ready story | board-narrative-coach |
| Founder wants to understand cost/benefit | roi-calculator |
| Default | 90-day-plan-builder |
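If this skill is wired into an automated flow rather than run conversationally, the routing step reduces to a lookup with a default. A minimal sketch, with skill names taken from the table above and lookup keys that are only illustrative:

```python
# Minimal routing sketch; skill names come from the table above, keys are illustrative.
NEXT_SKILL = {
    "full rollout plan": "90-day-plan-builder",
    "board-ready story": "board-narrative-coach",
    "cost/benefit": "roi-calculator",
}

def route(situation: str) -> str:
    # Fall back to the default next step when no situation matches.
    return NEXT_SKILL.get(situation, "90-day-plan-builder")
```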
- fluency-assessment — provides the scorecard that informs use case selection
- blocker-diagnosis — identifies which barriers the use case must avoid triggering
- 90-day-plan-builder — most common next step, builds the rollout plan around the chosen use case