From ai-adoption-playbook
Calculates board-ready ROI for AI tool adoption from founder-provided data across cost efficiency, revenue optimization, and new revenue buckets.
`npx claudepluginhub adimango/ai-adoption-playbook`

This skill uses the workspace's default tool permissions.
Produces a board-ready ROI calculation from the founder's actual data — not industry benchmarks. Separates ROI into three buckets so the founder can tell a complete story. This is a calculation tool, not a strategy session.
Core principle: Use the founder's real numbers. If a number is estimated, label it as estimated. Never substitute industry averages for missing data — flag the gap instead.
Every AI investment produces value in one or more of these buckets:
| Bucket | What it measures | Examples |
|---|---|---|
| Cost efficiency | Time saved, spend reduced | Hours saved per engineer per week, reduced contractor spend, fewer tools needed |
| Revenue optimization | Existing revenue protected or grown | Faster feature shipping, reduced churn from faster bug fixes, shorter sales cycles |
| New revenue | Revenue that wouldn't exist without AI | AI-powered product features, new service offerings, markets entered faster |
Most early-stage AI adoption lives in bucket 1. That's fine — but naming all three buckets shows the board you're thinking beyond cost cutting.
Ask the founder for these inputs if they are not already available from prior skills:
Cost efficiency ROI:
Annual tool cost = (seats × monthly price/seat × 12)
Annual time value saved = (active users × hours saved/week × 50 weeks × hourly rate)
Cost efficiency ROI = (time value saved - tool cost) / tool cost × 100
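The three formulas above, plus the breakeven figure used later in the output template, can be sketched as one function. This is an illustrative Python sketch, not part of the skill itself; the 50 working weeks and 5-day work week are assumptions, and every input should be the founder's own number:

```python
def cost_efficiency_roi(
    seats: int,
    price_per_seat_month: float,
    active_users: int,
    hours_saved_per_week: float,
    hourly_rate: float,
    work_weeks: int = 50,  # assumption; adjust to the founder's calendar
) -> dict:
    """Apply the cost efficiency formulas using the founder's real numbers."""
    annual_tool_cost = seats * price_per_seat_month * 12
    annual_time_value = active_users * hours_saved_per_week * work_weeks * hourly_rate
    roi_pct = (annual_time_value - annual_tool_cost) / annual_tool_cost * 100

    # Breakeven: minutes/day each active user must save for the tool to pay
    # for itself. Assumes 5 working days/week over the same work_weeks.
    daily_cost_per_active_user = annual_tool_cost / active_users / (work_weeks * 5)
    breakeven_minutes_per_day = daily_cost_per_active_user / hourly_rate * 60

    return {
        "annual_tool_cost": annual_tool_cost,
        "annual_time_value_saved": annual_time_value,
        "cost_efficiency_roi_pct": roi_pct,
        "breakeven_minutes_per_day": breakeven_minutes_per_day,
    }

# Hypothetical example: 20 seats at $19/seat/month, 14 active users
# saving 3 hours/week at a $90/hr fully loaded rate.
result = cost_efficiency_roi(20, 19.0, 14, 3.0, 90.0)
```

Label each input as estimated or measured when presenting the result; the math is only as defensible as its weakest input.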
Important notes for the founder:
Revenue optimization and new revenue: These require founder-specific data. If the founder has revenue impact data, include it. If not, note the bucket as "not yet measured" — don't estimate.
**Symptom:** "GitHub research shows 25-30% productivity improvement" used as the basis for ROI. **Consequence:** Board asks one follow-up question and the number falls apart. **Fix:** Use only this team's data. If they don't have productivity data, say "we measured time savings of X hours/week (self-reported)" — not "industry benchmarks suggest."
**Symptom:** ROI denominator includes all seats, but only half are used. **Consequence:** ROI looks lower than it is, and the real problem (unused seats) gets hidden. **Fix:** Calculate ROI on active users separately. Show unused seat cost as waste to address.
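A minimal sketch of that fix, with hypothetical numbers: compute ROI twice, once blended across all seats and once on active seats only, and surface unused-seat spend as its own waste line rather than burying it in the denominator.

```python
def seat_adjusted_view(
    total_seats: int,
    active_users: int,
    price_per_seat_month: float,
    annual_time_value_saved: float,
) -> dict:
    """Report ROI on active seats and expose unused-seat spend as waste."""
    annual_cost_all = total_seats * price_per_seat_month * 12
    annual_cost_active = active_users * price_per_seat_month * 12
    unused_seat_waste = annual_cost_all - annual_cost_active

    blended_roi = (annual_time_value_saved - annual_cost_all) / annual_cost_all * 100
    active_roi = (annual_time_value_saved - annual_cost_active) / annual_cost_active * 100

    return {
        "blended_roi_pct": blended_roi,
        "active_seat_roi_pct": active_roi,
        "unused_seat_waste_annual": unused_seat_waste,
    }

# Hypothetical: 20 seats, 10 active, $19/seat/month, $50K/year of time value.
view = seat_adjusted_view(20, 10, 19.0, 50_000.0)
```

The active-seat ROI will always be at least as high as the blended figure, which is the point: the gap between the two is the unused-seat waste the board should see directly.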
**Symptom:** ROI shows AI tools save money vs. doing nothing. **Consequence:** Board asks "what else could we spend $10K/year on?" and the founder has no answer. **Fix:** Include the cost-per-engineer-per-month and let the board compare it to other investments.
**Symptom:** A metric looks green — "95% of engineers have Copilot" — but the underlying reality hasn't changed. Access ≠ usage ≠ adoption. **Consequence:** Board thinks adoption is working. Next quarter the numbers don't move and credibility takes a hit. **Fix:** If the founder's data shows high access but low usage, flag it: "This number measures access, not behavior. The board will ask what people actually do with it."
**Symptom:** A metric becomes a target and people optimize the number instead of the outcome — e.g., the team accepts every AI suggestion to hit "AI-assisted commits" while code quality drops. **Consequence:** The metric improves, the goal doesn't. Board eventually notices. **Fix:** Pair every activity metric with an outcome metric. If you report "AI-assisted commits," pair it with "defect rate" or "review cycle time." If they move in opposite directions, flag the tension.
**Symptom:** Usage spikes during the pilot month when everyone's watching, then drops when attention moves elsewhere. **Consequence:** ROI calculated from the pilot month overstates the real impact. **Fix:** If usage data covers only a monitored period, label the number in the output: "(pilot period — not yet confirmed as sustained)." The board sees it's provisional. Don't present pilot-month data as a trend.
Produce the ROI calculation in this exact format:
## AI Investment ROI
**Company:** [name] | **Period:** [timeframe] | **Date:** [date]
### Investment Summary
| Item | Monthly | Annual |
|------|---------|--------|
| [Tool 1] — [X] seats | $[X] | $[X] |
| [Tool 2] — [X] seats | $[X] | $[X] |
| Unused seats ([X] of [Y]) | $[X] wasted | $[X] wasted |
| **Total investment** | **$[X]** | **$[X]** |
| **Effective investment** (active seats only) | **$[X]** | **$[X]** |
### ROI by Bucket
#### 1. Cost Efficiency
- Active users: [X] of [Y] engineers
- Time saved: [X] hours/engineer/week [estimated/measured]
- Total weekly capacity recovered: [X] hours
- Value of recovered capacity: $[X]/year (at $[X]/hr fully loaded)
- **Cost efficiency ROI: [X]%**
- **Breakeven:** Tool needs to save each active user [X] minutes/day to pay for itself
#### 2. Revenue Optimization
[Specific data if available, or: "Not yet measured. To calculate: track feature shipping velocity, bug resolution time, or customer-facing cycle times before and after AI adoption."]
#### 3. New Revenue
[Specific data if available, or: "Not applicable yet. This bucket activates when AI enables product features or services that generate revenue directly."]
### The Math
[Show each calculation step so the board can verify]
### What's Missing
[List any data gaps that would strengthen the case — e.g., "Objective time measurement (current numbers are self-reported)", "Quality impact data (bug rates before/after)"]
[If team stability is "Restructuring in progress": add this line:]
- **Team restructuring in progress.** Headcount changes may inflate or obscure AI-driven savings. If roles were eliminated alongside AI adoption, separate "capacity recovered through AI" from "capacity removed through restructuring" before presenting to the board. Mixed numbers invite hard follow-up questions.
[If team stability is "Restructuring completed": add this line instead:]
- **Team restructured during measurement period.** Baseline headcount changed. ROI calculations use the current team as the denominator, but prior-period comparisons should note the team was larger/smaller then.
### Board-Ready Summary
[2-3 sentences: investment amount, return, and what it means. Use only defended numbers.]
**Related skills:**
- board-narrative-coach — uses this calculation in the board update narrative
- 90-day-plan-builder — Phase 2 begins measuring ROI, Phase 3 consolidates it
- adoption-scorecard — provides the usage data that feeds the active user count