Generates project specs at optimal depth: quick Linear issues for tasks, lite PRDs for features, and AI specs with context requirements and evals. Use /spec shortcuts like --quick or --ai.
npx claudepluginhub breethomas/bette-think --plugin bette-think

This skill uses the workspace's default tool permissions.
**Write what's needed. Skip what's not.**
Writes structured feature specs or PRDs from problem statements or ideas. Covers problem, goals/non-goals, user stories, prioritized requirements, and success metrics.
Refines rough ideas into executable specifications via collaborative questioning, alternative exploration, and incremental validation. Invoke before creative work or implementation.
Generates 5-stage PRDs for complex features with 15-25 AI behavior examples, rollout plans, gates, and checklists. Invoke via /spec --deep full-prd for deep work.
Most specs fail because they're either too heavy (nobody reads them) or too light (nobody can act on them).
This skill routes you to the right depth.
The templates are already excellent. This skill helps you use them.
These principles guide every level:
- **Issues, not user stories** - Plain language wins. "Add export button to dashboard" beats "As a user, I want to export data so that I can..."
- **Scope down** - If it can't be done in 1-3 weeks by 1-3 people, break it down further.
- **Short specs get read** - Long specs get skipped. Write for clarity, not completeness.
- **Prototype > documentation** - A working demo + 3 paragraphs beats a 10-page spec.
- **Make decisions, not descriptions** - Every section should decide something.
See: skills/spec/references/philosophy.md for the full philosophy.
When this skill is invoked, start with:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SPEC
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What are you speccing?
1. Quick task (hours to days)
   → Clear title + optional description
   → If it fits in one sentence, just write an issue
2. Feature (1-3 weeks)
   → Problem, solution, success metric, scope
   → Use what's helpful, skip the rest
3. AI feature (any size)
   → Core AI questions + context requirements + behavior examples
   → Evals are non-negotiable. Model costs early.
4. Not sure
   → Tell me what you're building, I'll help you decide
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Parse intent from context:
Command-line shortcuts:
/spec --quick   → Skip to Level 1
/spec --feature → Skip to Level 2
/spec --ai      → Skip to Level 3
/spec LIN-123   → Fetch Linear issue, determine level

Use templates/linear-issue.md as reference.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LEVEL 1: Quick Task
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The goal: A title that makes it obvious what you're doing.
Everything else is optional.
Questions to ask:
- What's the action? (Add, Fix, Design, Refactor, Remove...)
- What's being changed? (The specific thing)
- Where? (Optional: location in product)
Good titles:
- Add CSV export to dashboard
- Fix: Login fails on Safari
- Design mobile navigation
- Refactor auth middleware

Bad titles:
- Export feature (vague)
- Bug (what bug?)
- Updates (what updates?)

When to add a description:
When to skip description:
Generate a clear issue ready for Linear:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ISSUE READY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Title: [Generated title]
Description:
[Optional description if needed]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What next?
1. Create in Linear
2. Edit title/description
3. Add more context (→ Level 2)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If the Linear MCP is available, offer to create the issue directly.
Use templates/lite-prd.md as reference.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LEVEL 2: Feature Spec
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
The goal: Shared understanding. Not completeness.
We'll answer 5 essential questions. Everything else is optional.
1. What problem are we solving?
2. For whom?
3. How do we know this matters?
4. What are we building?
5. How will we know it worked?
After the essentials, offer relevant optional sections:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
ESSENTIALS COMPLETE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
That might be all you need. Want to add any of these?
☐ Scope & Decisions (in/out of scope, open questions)
☐ Risks (assumptions, four risks check)
☐ Discovery Insights (research, data)
☐ Technical Notes (estimate, challenges, dependencies)
☐ Launch Notes (rollout strategy, communication)
☐ Timeline (Now/Next/Later)
Skip what doesn't help create shared understanding.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Generate the spec in markdown:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SPEC READY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# [Feature Name]
## The Essentials
**What problem:** [2-3 sentences]
**For whom:** [Specific segment]
**Evidence:** [What you know]
**Solution:** [What you're building + prototype link]
**Success:** [Metric with target]
[Optional sections if added]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What next?
1. Create Linear project (parent + child issues)
2. Export markdown
3. Go deeper (→ Level 4 options)
4. Start over
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Use templates/ai-product-spec.md + context requirements table.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
LEVEL 3: AI Feature Spec
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI products need more upfront thinking - but not overly complex docs.
We'll cover:
• Core AI questions (what, quality, testing, cost, failures)
• Context requirements (what data the AI needs)
• Behavior examples (what good/bad looks like)
Evals are non-negotiable. Model costs early.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Walk through these 5 questions (from templates/ai-product-spec.md):
1. What's the AI doing?
2. How will you know if it's good?
3. How will you test it?
4. What will it cost?
5. What happens when it's wrong?
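Question 4 is the one teams hand-wave, so write the arithmetic down early. A minimal sketch of a per-query cost model, assuming per-million-token pricing; the token counts and prices below are placeholders, not real model rates:

```python
def cost_per_query(input_tokens, output_tokens,
                   price_in_per_mtok, price_out_per_mtok):
    """Cost of one query in dollars, given per-million-token prices."""
    return (input_tokens * price_in_per_mtok +
            output_tokens * price_out_per_mtok) / 1_000_000

# Placeholder numbers: 2k input / 500 output tokens at $3 / $15 per Mtok.
per_query = cost_per_query(2_000, 500, 3.0, 15.0)
monthly = per_query * 100_000  # projected 100k queries/month
```

Multiply by expected volume before committing to a design, so a 10x change in traffic assumptions isn't a surprise at launch.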
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CONTEXT REQUIREMENTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
90% of AI quality comes from context quality.
What context does the AI need to do its job?
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Build a context requirements table:
| Data Needed | Source | Availability | Notes |
|---|---|---|---|
| [Entity/signal] | [DB/API/user] | [Always/Sometimes/Never] | [Sensitivity, freshness] |
See: skills/spec/references/context-table.md for the full format.
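To make the Availability column actionable, it can help to scan the table for rows that need a fallback plan. A hypothetical sketch, with the rows as invented example data (not from any real product):

```python
# Hypothetical sketch: flag context rows whose data is not reliably available.
# Column names mirror the context requirements table; row contents are invented.
rows = [
    {"data": "User account tier", "source": "DB", "availability": "Always"},
    {"data": "Recent support tickets", "source": "API", "availability": "Sometimes"},
    {"data": "Competitor pricing", "source": "user", "availability": "Never"},
]

def availability_gaps(table):
    """Return the data items that need a fallback plan ('Sometimes' or 'Never')."""
    return [r["data"] for r in table if r["availability"] != "Always"]
```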
Key questions:
Flag problems immediately:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
BEHAVIOR EXAMPLES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI behaves according to examples, not descriptions.
We need 5-10 examples minimum covering:
• Good responses (what should happen)
• Bad responses (common failure modes)
• Reject cases (when AI should refuse/defer)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Example format:
Scenario: [Brief description]
Input: [What the user provides]
Good: [Desired response]
Bad: [What to avoid]
Reject: [When to refuse - if applicable]
See: skills/spec/references/behavior-examples.md for guidance.
Coverage to aim for:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AGENCY PROGRESSION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI products earn autonomy. What's your ladder?
| Version | Capability | Control | Agency | What You're Testing |
|---------|------------|---------|--------|---------------------|
| V1 | [describe] | High | Low | [what you learn] |
| V2 | [describe] | Medium | Medium | [what you learn] |
| V3 | [describe] | Low | High | [what you learn] |
Which version are you speccing right now? (Usually V1)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Key questions:
See: skills/spec/references/agency-progression.md for examples and ladder patterns.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
CONTROL HANDOFFS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
How do humans take back control when the AI is wrong?
- Override mechanism: [how users correct/reject AI output]
- Escalation path: [when AI should defer to human]
- Feedback capture: [how corrections feed back into system]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Good control handoffs:
Bad control handoffs:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
REFERENCE DATASET
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Before building, you need 20-100 examples of expected behavior.
This forces alignment on what "good" looks like.
Where will reference examples come from?
- [ ] Historical data (logs, past interactions)
- [ ] Manual curation (team creates examples)
- [ ] User research (observed behaviors)
- [ ] Synthetic generation (for edge cases)
Target count: [X] examples before V1 launch
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Why this matters: Reference datasets force the team to align on expected behavior before writing prompts. Most AI features fail because teams skip this step.
Golden dataset = baseline for evals. Without it, you're testing against vibes.
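The target count is easier to hold the team to when it's checked mechanically. An illustrative readiness check; the source names, counts, and target below are placeholders:

```python
# Illustrative readiness check for the reference dataset.
TARGET = 50  # placeholder target count before V1 launch

# Placeholder counts per source, mirroring the checklist above.
collected = {
    "historical": 20,  # logs, past interactions
    "manual": 15,      # team-curated examples
    "synthetic": 5,    # generated edge cases
}

def dataset_ready(counts, target=TARGET):
    """Return (ready?, total collected) against the launch target."""
    total = sum(counts.values())
    return total >= target, total
```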
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
AI SPEC READY
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
# [Feature Name]
## What's the AI Doing?
[Precise task description]
## Quality Definition
**Good:** [Criteria]
**Bad:** [What to avoid]
## Eval Strategy
[Test approach + dataset categories]
## Cost Model
[Cost per query + projection]
## Failure Handling
[User controls + fallbacks]
## Context Requirements
| Data | Source | Availability | Notes |
|------|--------|--------------|-------|
[Table]
**When context is missing:** [Fallback behavior]
## Behavior Examples
[5-10 examples]
## Agency Progression Plan
| Version | Capability | Control | Agency | What You're Testing |
|---------|------------|---------|--------|---------------------|
| V1 (this spec) | ... | High | Low | ... |
| V2 (future) | ... | Medium | Medium | ... |
| V3 (future) | ... | Low | High | ... |
## Control Handoffs
**Override:** [mechanism]
**Escalation:** [path]
**Feedback:** [capture method]
## Reference Dataset
**Source:** [where examples come from]
**Target:** [X] examples before launch
**Status:** [X/Y collected]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
What next?
1. Create Linear project
2. Export markdown
3. Go deeper (→ Level 4 options)
4. Run /ai-health-check
5. Plan agency ladder (/agency-ladder)
6. Set up post-launch calibration (/calibrate)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
When user needs more depth, offer these expansions:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GO DEEPER
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Your spec is solid. Need more depth anywhere?
1. --deep context
Full 4D Canvas walkthrough (Demand, Data, Discovery, Defense)
2. --deep examples
Expand to 15-25 behavior examples
3. --deep rollout
Detailed phased rollout with gates
4. --deep full-prd
Complete PRD framework (5 stages)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
--deep context: Invoke the full 4D Context Canvas walkthrough. Reference the archived context-engineering skill for the full framework.
--deep examples: Expand behavior examples to 15-25.
--deep rollout: Plan a detailed phased rollout with gates.
--deep full-prd: Invoke the prd-writer skill for the complete 5-stage PRD framework.
When Linear MCP is available:
/spec LIN-123 → Fetch issue details, pre-populate what's available

Level 1: Create issue directly via Linear MCP
Level 2: Offer to create:
Level 3: Offer to create:
Before /spec:
/four-risks - Should we build this at all?

After /spec:
/ai-cost-check - Model the unit economics
/ai-health-check - Pre-launch validation
/ai-debug - If feature is underperforming
/context-check - Quick quality validation

Credits:
Linear Method: Linear team (issues not stories, scope down, momentum)
Lite PRD: Aakash Gupta (Product Growth)
AI Product Spec: Aakash Gupta (Product Growth)
Context Engineering: Aakash Gupta & Miqdad Jaffer (OpenAI) - 4D Context Canvas