From thinking-frameworks-skills
Anchors forecasts in historical base rates by identifying reference classes of similar past events before case-specific analysis. Useful when starting predictions, establishing base rates, testing 'this time is different' claims, or whenever reference classes or the outside view come up.
Install with `npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills`. This skill uses the workspace's default tool permissions.
What would you like to do?
1. Find My Base Rate - Identify reference class and get statistical baseline
2. Test "This Time Is Different" - Challenge uniqueness claims
3. Calculate Funnel Base Rates - Multi-stage probability chains
4. Validate My Reference Class - Ensure you chose the right comparison set
5. Learn the Framework - Deep dive into methodology
6. Exit - Return to main forecasting workflow
Let's establish your statistical baseline.
Tell me the specific event or outcome you're predicting.
Example prompts:
I'll help you identify what bucket this belongs to.
Framework:
Key Questions:
I'll work with you to refine this until we have a specific, searchable class.
I'll help you find the base rate using:
Search Strategy:
"historical success rate of [reference class]"
"[reference class] failure statistics"
"[reference class] survival rate"
"what percentage of [reference class]"
Once we find the base rate, that becomes your starting probability.
The Rule:
Treat this base rate as your starting point. Adjust only when you have specific, evidence-based reasons from your "inside view" analysis.
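A minimal sketch of this rule in Python. The adjustment names and magnitudes below are hypothetical examples, not part of the skill:

```python
def anchored_estimate(base_rate, adjustments):
    """Start from the reference-class base rate, then apply
    evidence-based inside-view adjustments (in probability points)."""
    p = base_rate
    for reason, delta in adjustments:
        p += delta
    return min(max(p, 0.0), 1.0)  # clamp to a valid probability

# Hypothetical: 20% base rate, +5 points for a strong founding team,
# -3 points for a crowded market.
p = anchored_estimate(0.20, [("strong team", 0.05), ("crowded market", -0.03)])
```

Each adjustment should correspond to a specific, named piece of inside-view evidence; unjustified deltas defeat the purpose of anchoring.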
Default anchors if no data found:
Next: Return to menu or proceed to inside view analysis.
Challenge uniqueness bias.
When someone (including yourself) believes "this case is special," we need to stress-test that belief.
Question 1: Similarity Matching
Question 2: The Reversal Test
Question 3: Burden of Proof
The base rate says [X]%. You claim it should be [Y]%.
Calculate the gap: |Y - X|
Required evidence strength:
I'll tell you:
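The burden-of-proof gap can be sketched in Python. The evidence-strength cutoffs here are illustrative assumptions, not thresholds defined by the skill:

```python
def burden_of_proof(base_rate, claimed_rate):
    """Gap between the outside view (base rate) and the claimed
    'this time is different' probability, with a rough evidence tier."""
    gap = abs(claimed_rate - base_rate)
    # Illustrative tiers; the cutoffs are assumptions for this sketch.
    if gap < 0.05:
        strength = "weak evidence may suffice"
    elif gap < 0.20:
        strength = "needs specific, verifiable evidence"
    else:
        strength = "needs strong, independent evidence"
    return gap, strength
```

The larger the gap between Y and X, the stronger and more specific the inside-view evidence has to be to justify it.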
Next: Return to menu
For multi-stage processes that lack a single direct base rate.
Example: "Will Bill X become law?"
No direct data on "Bill X success rate," but we can model the funnel:
Stage 1: Bills introduced → Bills that reach committee
Stage 2: Bills in committee → Bills that reach floor vote
Stage 3: Bills voted on → Bills that pass
Final Base Rate:
P(law) = P(committee) × P(floor) × P(pass)
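A quick sketch of the funnel multiplication. The stage rates below are hypothetical, not real legislative statistics:

```python
from math import prod

def funnel_base_rate(stage_rates):
    """Multiply the conditional pass-through rate of each stage.
    Each rate is P(reach next stage | reached this stage)."""
    return prod(stage_rates)

# Hypothetical rates for the bill example: 30% of bills reach
# committee, 40% of those reach a floor vote, 90% of those pass.
p_law = funnel_base_rate([0.30, 0.40, 0.90])
```

Note that each stage rate must be conditional on surviving the previous stage; using unconditional rates would double-count attrition.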
I'll help you:
Next: Return to menu
Ensure you chose the right comparison set.
Test 1: Homogeneity
Example: "Tech startups" is too broad (consumer vs B2B vs hardware are very different). Subdivide.
Test 2: Sample Size
Test 3: Relevance
I'll walk you through:
Output: Confidence level in your reference class (High/Medium/Low)
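Test 2's sample-size concern can be made concrete with a rough standard-error check. This is a simple binomial approximation offered as a sketch, not a method the skill prescribes:

```python
from math import sqrt

def base_rate_uncertainty(successes, n):
    """Observed base rate plus a rough standard error, as a quick
    sample-size check for a candidate reference class."""
    p = successes / n
    se = sqrt(p * (1 - p) / n)
    return p, se

# Hypothetical: 12 successes out of 40 similar past cases.
p, se = base_rate_uncertainty(12, 40)
# A large standard error means the anchor should be held loosely.
```

A reference class of only a handful of cases gives a base rate too noisy to anchor on; subdividing a broad class (Test 1) trades homogeneity against this sample-size problem.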
Next: Return to menu
Deep dive into the methodology.
📄 Reference Class Selection Guide
Next: Return to menu
Find what usually happens to things like this, start there, and only move with evidence.
- estimation-fermi: if you need to calculate a base rate from components
- bayesian-reasoning-calibration: to update from the base rate with new evidence
- scout-mindset-bias-check: to validate you're not cherry-picking the reference class
Ready to start? Choose a number from the menu above.