Use to detect and remove cognitive biases from reasoning. Invoke when a prediction feels emotional, you're stuck at 50/50, or you want to validate your forecasting process. Use when the user mentions scout mindset, soldier mindset, bias check, reversal test, scope sensitivity, or cognitive distortions.
Detects and removes cognitive biases from your reasoning using systematic tests such as reversal, scope sensitivity, and confidence interval audits. Use when a prediction feels emotional, you're stuck at 50/50, or you need to validate your forecasting process.
/plugin marketplace add lyndonkl/claude
/plugin install lyndonkl-thinking-frameworks-skills@lyndonkl/claude

This skill inherits all available tools. When active, it can use any tool Claude has access to.
resources/cognitive-bias-catalog.md
resources/debiasing-techniques.md
resources/scout-vs-soldier.md

Scout Mindset (Julia Galef) is the motivation to see things as they are, not as you wish them to be. Contrast with Soldier Mindset, which defends a position regardless of evidence.
Core Principle: Your goal is to map the territory accurately, not win an argument.
Why It Matters:
Use this skill when:
Do NOT skip this when stakes are high, when you have strong priors, or when the forecast affects you personally.
What would you like to do?
1. Run the Reversal Test - Check if you'd accept opposite evidence
2. Check Scope Sensitivity - Ensure probabilities scale with inputs
3. Test Status Quo Bias - Challenge "no change" assumptions
4. Audit Confidence Intervals - Validate CI width
5. Run Full Bias Audit - Comprehensive bias scan
6. Learn the Framework - Deep dive into methodology
7. Exit - Return to main forecasting workflow
Check whether you would accept evidence pointing in the opposite direction.
Reversal Test Progress:
- [ ] Step 1: State your current conclusion
- [ ] Step 2: Identify supporting evidence
- [ ] Step 3: Reverse the evidence
- [ ] Step 4: Ask "Would I still accept it?"
- [ ] Step 5: Adjust for double standards
What are you predicting?
List the evidence that supports your conclusion.
Example: Candidate A will win (75%), supported by polls showing A ahead, A having more funding, experts favoring A, and A having better approval ratings.
Imagine the same evidence pointed the OTHER way.
Reversed: What if polls showed B ahead, B had more funding, experts favored B, and B had better ratings?
The Critical Question:
If this reversed evidence existed, would I accept it as valid and change my prediction?
Three possible answers:
A) YES - I would accept reversed evidence ✓ No bias detected, continue with current reasoning
B) NO - I would dismiss reversed evidence ⚠ Warning: Motivated reasoning - you're accepting evidence when it supports you, dismissing equivalent evidence when it doesn't (special pleading)
C) UNSURE - I'd need to think about it ⚠ Warning: Asymmetric evidence standards suggest rationalizing, not reasoning
If you answered B or C:
Ask: Why do I dismiss this evidence in one direction but accept it in the other? Is there an objective reason, or am I motivated by preference?
Common rationalizations:
The Fix:
Probability adjustment: If you detected double standards, move your probability 10-15 percentage points toward 50%
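As a rough way to operationalize that rule, here is a minimal Python sketch; the function name and the 12.5-point default are illustrative assumptions, not part of the skill.

```python
def shrink_toward_half(probability: float, shift_points: float = 12.5) -> float:
    """Move a probability estimate toward 50% by a fixed number of
    percentage points, as a correction for detected double standards.

    probability: current estimate in [0, 1].
    shift_points: distance to move toward 0.5, in percentage points
                  (10-15 is the range suggested above).
    """
    shift = shift_points / 100.0
    if probability > 0.5:
        return max(0.5, probability - shift)
    return min(0.5, probability + shift)

# Example: a 75% prediction with a detected double standard -> 62.5%
print(shrink_toward_half(0.75))  # 0.625
```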
Next: Return to menu
Ensure your probabilities scale appropriately with magnitude.
Scope Sensitivity Progress:
- [ ] Step 1: Identify the variable scale
- [ ] Step 2: Test linear scaling
- [ ] Step 3: Check reference point calibration
- [ ] Step 4: Validate magnitude assessment
- [ ] Step 5: Adjust for scope insensitivity
What dimension has magnitude?
The Linearity Test: Double the input, check if impact doubles.
Example: Startup funding
Scope sensitivity check: Did probabilities scale reasonably? If they barely changed → Scope insensitive
The Anchoring Test: Did you start with a number (base rate, someone else's forecast, round number) and insufficiently adjust?
The fix:
The "1 vs 10 vs 100" Test: For your forecast, vary the scale by 10×.
Example: Project timeline
Expected: Probability should change significantly. If all three estimates are within 10 percentage points → Scope insensitivity
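A minimal sketch of that check, assuming you have written down one probability per scale point; the 10-point threshold comes from the rule above and the function name is illustrative.

```python
def scope_insensitive(estimates_by_scale: dict[float, float],
                      threshold_points: float = 10.0) -> bool:
    """Flag scope insensitivity: if the probabilities assigned across
    10x changes in scale all sit within `threshold_points` percentage
    points of each other, the forecast is not responding to magnitude.

    estimates_by_scale maps scale (e.g. 1, 10, 100) -> probability in [0, 1].
    """
    probs = estimates_by_scale.values()
    spread_points = (max(probs) - min(probs)) * 100
    return spread_points < threshold_points

# Example: finishing in 1, 10, or 100 weeks gets nearly the same
# probability, so scope insensitivity is flagged.
print(scope_insensitive({1: 0.70, 10: 0.65, 100: 0.62}))  # True
```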
The problem: Your emotional system responds to the category, not the magnitude.
The fix:
Method 1: Logarithmic scaling - Use log scale for intuition
Method 2: Reference class by scale - Don't use "startups" as reference class. Use "Startups that raised $1M" (10% success) vs "Startups that raised $100M" (60% success)
Method 3: Explicit calibration - Use a formula: P(success) = base_rate + k × log(amount)
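A minimal sketch of Method 3 in Python, assuming base-10 logs and using the $1M → 10% and $100M → 60% reference classes from Method 2 to pin down `base_rate` and `k`; the clamp to [0, 1] is an added safeguard, since the linear-in-log form can otherwise leave the probability range.

```python
import math

def calibrated_success_prob(amount: float, base_rate: float, k: float) -> float:
    """P(success) = base_rate + k * log10(amount), clamped to [0, 1]."""
    p = base_rate + k * math.log10(amount)
    return min(1.0, max(0.0, p))

# Fit the two parameters to the reference classes mentioned above:
# $1M -> 10% and $100M -> 60% gives k = 0.25 per order of magnitude.
k = (0.60 - 0.10) / (math.log10(100e6) - math.log10(1e6))   # 0.25
base_rate = 0.10 - k * math.log10(1e6)                      # -1.40

# Interpolated estimate for a $10M raise:
print(round(calibrated_success_prob(10e6, base_rate, k), 2))  # 0.35
```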
Next: Return to menu
Challenge the assumption that "no change" is the default.
Status Quo Bias Progress:
- [ ] Step 1: Identify status quo prediction
- [ ] Step 2: Calculate energy to maintain status quo
- [ ] Step 3: Invert the default
- [ ] Step 4: Apply entropy principle
- [ ] Step 5: Adjust probabilities
Are you predicting "no change"? Examples: "This trend will continue," "Market share will stay the same," "Policy won't change"
Status quo predictions often get inflated probabilities because change feels risky.
The Entropy Principle: In the absence of active energy input, systems decay toward disorder.
Question: "What effort is required to keep things the same?"
Examples:
Mental Exercise:
Bias check: If P(change) + P(same) ≠ 100%, you have status quo bias.
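A minimal sketch of that consistency check, assuming you elicited P(change) and P(no change) in two separate passes (e.g. once with each framed as the default); the renormalization step is an illustrative fix, not prescribed above.

```python
def status_quo_bias_check(p_change: float, p_same: float,
                          tolerance: float = 0.05) -> dict:
    """Compare independently elicited P(change) and P(no change).

    If they do not sum to roughly 100%, the status quo estimate is
    usually the inflated one; renormalizing is one simple correction.
    """
    total = p_change + p_same
    return {
        "biased": abs(total - 1.0) > tolerance,
        "excess_points": round((total - 1.0) * 100, 1),
        "renormalized_p_change": round(p_change / total, 2),
        "renormalized_p_same": round(p_same / total, 2),
    }

# Example: 40% change and 75% same sum to 115% -> status quo bias flagged
print(status_quo_bias_check(0.40, 0.75))
```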
Second Law of Thermodynamics (applied to forecasting):
Ask:
If you detected status quo bias:
For "no change" predictions that require high energy:
For predictions where inertia truly helps: No adjustment needed
The heuristic: If maintaining status quo requires active effort, decay is more likely than you think.
Next: Return to menu
Validate that your CI width reflects true uncertainty.
Confidence Interval Audit Progress:
- [ ] Step 1: State current CI
- [ ] Step 2: Run surprise test
- [ ] Step 3: Check historical calibration
- [ ] Step 4: Compare to reference class variance
- [ ] Step 5: Adjust CI width
Current confidence interval:
The Surprise Test: "Would I be genuinely shocked if the true value fell outside my confidence interval?"
Calibration:
Test: Imagine the outcome lands just below your lower bound or just above your upper bound.
Three possible answers:
Look at your past forecasts:
| CI Level | Expected Outside | Your Actual |
|---|---|---|
| 80% | 20% | ___% |
| 90% | 10% | ___% |
Diagnosis: Actual > Expected → CIs too narrow (overconfident) - Most common
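A minimal sketch of computing that "actual" column from a forecast log, assuming each past forecast is stored as a (lower, upper, actual) triple for the CI level being audited; the function name and record format are assumptions.

```python
def calibration_report(past_forecasts: list[tuple[float, float, float]],
                       ci_level: float = 0.80) -> dict:
    """Compare how often outcomes fell outside your intervals with how
    often they should have (20% for an 80% CI, 10% for a 90% CI).

    past_forecasts: list of (lower_bound, upper_bound, actual_value).
    """
    outside = sum(1 for lo, hi, actual in past_forecasts
                  if actual < lo or actual > hi)
    actual_rate = outside / len(past_forecasts)
    expected_rate = 1.0 - ci_level
    return {
        "expected_outside": f"{expected_rate:.0%}",
        "actual_outside": f"{actual_rate:.0%}",
        "overconfident": actual_rate > expected_rate,
    }

# Example: 4 of 10 past 80% intervals missed -> 40% actual vs 20% expected
history = [(10, 20, 15), (5, 8, 9), (100, 120, 130), (0, 50, 25),
           (30, 40, 45), (1, 3, 2), (60, 80, 85), (10, 12, 11),
           (200, 300, 250), (7, 9, 8)]
print(calibration_report(history))
```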
If you have reference class data:
Example: Reference class SD = 12%, your 80% CI ≈ Point estimate ± 15%
If your CI is narrower than reference class variance, you're claiming to know more than average. Justify why, or widen CI.
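A minimal sketch of turning reference-class spread into a CI width, assuming the reference class is roughly normal; the ±15% figure above is the 80% z-score (about 1.28) times a 12-point SD.

```python
from statistics import NormalDist

def ci_halfwidth_from_reference(sd: float, ci_level: float = 0.80) -> float:
    """Half-width of a symmetric CI implied by reference-class standard
    deviation, assuming an approximately normal spread of outcomes."""
    z = NormalDist().inv_cdf(0.5 + ci_level / 2)  # ~1.28 for an 80% CI
    return z * sd

# Reference class SD of 12 points -> 80% CI of roughly +/- 15 points
print(round(ci_halfwidth_from_reference(12.0), 1))  # 15.4
```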
Adjustment rules:
Next: Return to menu
Comprehensive scan of major cognitive biases.
Full Bias Audit Progress:
- [ ] Step 1: Confirmation bias check
- [ ] Step 2: Availability bias check
- [ ] Step 3: Anchoring bias check
- [ ] Step 4: Affect heuristic check
- [ ] Step 5: Overconfidence check
- [ ] Step 6: Attribution error check
- [ ] Step 7: Prioritize and remediate
See Cognitive Bias Catalog for detailed descriptions.
Quick audit questions:
If NO to any → Confirmation bias detected
If NO to any → Availability bias detected
If NO to any → Anchoring bias detected
If NO to any → Affect heuristic detected
If NO to any → Overconfidence detected
If NO to any → Attribution error detected
For each detected bias:
Remediation example:
| Bias | Severity | Direction | Adjustment |
|---|---|---|---|
| Confirmation | High | Up | -15% |
| Availability | Medium | Up | -10% |
| Affect heuristic | High | Up | -20% |
Net adjustment: -45 points → Move the probability down by 45 percentage points (e.g., 80% → 35%)
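A minimal sketch of that additive combination in Python, with a clamp added so the adjusted forecast stays a usable probability; the 1-99 bounds and the dictionary of adjustments are illustrative assumptions.

```python
def apply_bias_adjustments(probability_pct: float,
                           adjustments_pct: dict[str, float]) -> float:
    """Apply per-bias corrections (in percentage points) to a forecast
    expressed in percent, summing them as in the table above and
    clamping to 1-99 so the result never becomes a certainty."""
    adjusted = probability_pct + sum(adjustments_pct.values())
    return max(1.0, min(99.0, adjusted))

# Example from the table above: 80% with -15, -10, and -20 -> 35%
adjustments = {"confirmation": -15, "availability": -10, "affect_heuristic": -20}
print(apply_bias_adjustments(80, adjustments))  # 35.0
```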
Next: Return to menu
Deep dive into the methodology.
Next: Return to menu
Scout mindset is the drive to see things as they are, not as you wish them to be.
📁 resources/
Ready to start? Choose a number from the menu above.