Audits pre-launch AI features across 6 dimensions—model selection, data quality, cost, monitoring, failure UX, optimization—grading readiness and blocking shipment of broken products.
npx claudepluginhub breethomas/bette-think --plugin bette-think

This skill uses the workspace's default tool permissions.
Before you ship an AI feature, it needs to pass 6 checks.
Most AI products fail because PMs skip the basics: no cost model, broken failure UX, terrible data quality. This skill stops you from launching garbage.
When this skill is invoked, start with:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI HEALTH CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Before you ship an AI feature, it needs to pass 6 checks.
What AI feature are you preparing to launch?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
/ai-health-check [feature-name]
Examples:

/ai-health-check "AI product recommendations" - Audit a specific feature
/ai-health-check "email composer AI" - Manual description
/ai-health-check --pre-launch - Full checklist for the current sprint

The 6 dimensions:

| Dimension | What It Checks |
|---|---|
| Model Selection | Did you try simple approaches first? |
| Data Quality | The thing you're probably ignoring |
| Cost Modeling | Can you afford this at scale? |
| Production Monitoring | How will you know if it breaks? |
| Failure UX | What happens when AI screws up? |
| System Optimization | Are you measuring the right things? |
Verdict rules:

| Condition | Verdict |
|---|---|
| Any Blocker | DON'T SHIP |
| 2+ Risks (no blockers) | NEEDS WORK |
| 0-1 Risks | READY |
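The verdict rules above can be sketched as a small function. This is purely illustrative (the skill applies these rules in conversation, not through code, and the names here are hypothetical):

```python
# Illustrative sketch of the verdict table; function and argument names are invented.
def verdict(blockers: int, risks: int) -> str:
    """Map counted blockers and risks to an overall ship/no-ship verdict."""
    if blockers > 0:
        return "DON'T SHIP"   # any blocker stops the launch outright
    if risks >= 2:
        return "NEEDS WORK"   # 2+ risks with no blockers
    return "READY"            # 0-1 risks

# The email composer example below has 1 blocker and 2 risks:
print(verdict(blockers=1, risks=2))  # -> DON'T SHIP
```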
Example output:

AI Health Check: Email Composer
Overall Readiness: NEEDS WORK (3/6 dimensions ready)
---
Ready: Model Selection, Production Monitoring, System Optimization
Risk: Data Quality, Failure UX
Blocker: Cost Modeling
VERDICT: DON'T SHIP YET
You have 1 blocker:
- No cost model -> Run /ai-cost-check RIGHT NOW
You have 2 risks:
- Data quality strategy undefined
- Failure UX is broken ("Something went wrong" isn't helpful)
---
What To Do Now:
Option A: Fix everything (RECOMMENDED)
1. Run /ai-cost-check (10 min)
2. Define data quality strategy (2 hours)
3. Build better failure UX (3 hours)
4. Rerun /ai-health-check
Option B: Ship with known risks
1. Fix the blocker only
2. Ship knowing data quality and failure UX are weak
3. Plan to fix in week 1
Cost Modeling missing:
"You're about to launch with zero idea if this bankrupts you at scale." Run
/ai-cost-checkfirst.
Failure UX broken:
"Something went wrong" tells users nothing. No confidence indicators = users don't know when to trust the AI.
No monitoring plan:
"Launching without monitoring = flying blind."
Related commands:

/ai-cost-check - Detailed cost modeling (run if the cost dimension is blocked)
/start-evals - Set up quality testing
/four-risks - Overall feature risk assessment

Best for: Pre-launch validation of AI features

Key insight: "Fine-tuning is the last resort. Data quality beats tool selection. Most AI failures are UX problems."