Use when making predictions or judgments under uncertainty that require explicitly updating beliefs with new evidence. Invoke when forecasting outcomes, evaluating probabilities, testing hypotheses, calibrating confidence, assessing risks with uncertain data, or avoiding overconfidence bias. Use when the user mentions priors, likelihoods, Bayes' theorem, probability updates, forecasting, calibration, or belief revision.
Install:
/plugin marketplace add lyndonkl/claude
/plugin install lyndonkl-thinking-frameworks-skills@lyndonkl/claude

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Resources:
- resources/evaluators/rubric_bayesian_reasoning_calibration.json
- resources/examples/product-launch.md
- resources/methodology.md
- resources/template.md

Apply Bayesian reasoning to systematically update probability estimates as new evidence arrives. This helps make better forecasts, avoid overconfidence, and explicitly show how beliefs should change with data.
Trigger phrases: "What's the probability", "update my belief", "how confident", "forecast", "prior probability", "likelihood", "Bayes", "calibration", "base rate", "posterior probability"
A systematic way to update probability estimates using Bayes' Theorem:
P(H|E) = P(E|H) × P(H) / P(E)
Where:
- P(H|E) = posterior: probability that hypothesis H is true given evidence E
- P(E|H) = likelihood: probability of observing E if H is true
- P(H) = prior: probability of H before seeing E
- P(E) = total probability of the evidence: P(E|H)×P(H) + P(E|¬H)×P(¬H)
Quick Example:
# Should we launch Feature X?
## Prior Belief
Before beta testing: 60% chance of adoption >20%
- Base rate: Similar features get 15-25% adoption
- Our feature seems stronger than average
- Prior: 60%
## New Evidence
Beta test: 35% of users adopted (70 of 200 users)
## Likelihoods
If true adoption is >20%:
- P(seeing 35% in beta | adoption >20%) = 75% (likely to see high beta if true)
If true adoption is ≤20%:
- P(seeing 35% in beta | adoption ≤20%) = 15% (unlikely to see high beta if false)
## Bayesian Update
Posterior = (75% × 60%) / [(75% × 60%) + (15% × 40%)]
Posterior = 45% / (45% + 6%) = 45% / 51% ≈ 88%
## Conclusion
Updated belief: 88% confident adoption will exceed 20%
Evidence strongly supports launch, but is not conclusive.
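The same update is easy to check numerically. Below is a minimal Python sketch of this two-hypothesis update; the function name and structure are illustrative, not part of this skill's resources.

```python
def bayesian_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Two-hypothesis Bayesian update: returns P(H|E)."""
    # P(E) by total probability: P(E|H)P(H) + P(E|~H)P(~H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Quick Example numbers: prior 60%, P(E|H) = 75%, P(E|~H) = 15%
posterior = bayesian_update(0.60, 0.75, 0.15)
print(f"Posterior: {posterior:.1%}")  # Posterior: 88.2%
```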
Copy this checklist and track your progress:
Bayesian Reasoning Progress:
- [ ] Step 1: Define the question
- [ ] Step 2: Establish prior beliefs
- [ ] Step 3: Identify evidence and likelihoods
- [ ] Step 4: Calculate posterior
- [ ] Step 5: Calibrate and document
Step 1: Define the question
Clarify hypothesis (specific, testable claim), probability to estimate, timeframe (when outcome is known), success criteria, and why this matters (what decision depends on it). Example: "Product feature will achieve >20% adoption within 3 months" - matters for launch decision.
Step 2: Establish prior beliefs
Set the initial probability using base rates (general frequency), a reference class (similar situations), specific differences from that class, and an explicit probability assignment with justification. Good priors are based on base rates, account for differences, are honest about uncertainty, and include ranges if unsure (e.g., 40-60%). Avoid purely intuitive priors, ignoring base rates, or extreme values without justification.
Step 3: Identify evidence and likelihoods
Assess evidence (specific observation/data), diagnostic power (does it distinguish hypotheses?), P(E|H) (probability if hypothesis TRUE), P(E|¬H) (probability if FALSE), and calculate likelihood ratio = P(E|H) / P(E|¬H). LR > 10 = very strong evidence, 3-10 = moderate, 1-3 = weak, ≈1 = not diagnostic, <1 = evidence against.
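As a worked instance of these bands, the Quick Example's likelihoods give LR = 75% / 15% = 5, i.e. moderate evidence. A minimal sketch, with the classification thresholds taken from the bands above (helper names are illustrative):

```python
def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """LR = P(E|H) / P(E|~H)."""
    return p_e_given_h / p_e_given_not_h

def strength(lr: float) -> str:
    """Classify evidence strength using the Step 3 bands."""
    if abs(lr - 1) < 0.1:
        return "not diagnostic (LR ~ 1)"
    if lr > 10:
        return "very strong"
    if lr >= 3:
        return "moderate"
    if lr > 1:
        return "weak"
    return "evidence against"

lr = likelihood_ratio(0.75, 0.15)  # 5.0 for the Quick Example
print(lr, strength(lr))            # 5.0 moderate
```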
Step 4: Calculate posterior
Apply Bayes' Theorem: P(H|E) = [P(E|H) × P(H)] / P(E), or use odds form: Posterior Odds = Prior Odds × Likelihood Ratio. Calculate P(E) = P(E|H)×P(H) + P(E|¬H)×P(¬H), get posterior probability, and interpret change. For simple cases → Use resources/template.md calculator. For complex cases (multiple hypotheses) → Study resources/methodology.md.
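In odds form, the same update reduces to a single multiplication. A sketch using the Quick Example's numbers (helper names are illustrative):

```python
def to_odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

def to_prob(odds: float) -> float:
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior_odds = to_odds(0.60)               # 1.5
posterior_odds = prior_odds * 5.0        # multiply by LR = 5 from Step 3
print(f"{to_prob(posterior_odds):.1%}")  # 88.2%, matching the direct form
```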
Step 5: Calibrate and document
Check calibration (over/underconfident?), validate assumptions (are likelihoods reasonable?), perform sensitivity analysis, create bayesian-reasoning-calibration.md, and note limitations. Self-check using resources/evaluators/rubric_bayesian_reasoning_calibration.json: verify prior based on base rates, likelihoods justified, evidence diagnostic (LR ≠ 1), calculation correct, posterior calibrated, assumptions stated, sensitivity noted. Minimum standard: Score ≥ 3.5.
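A sensitivity analysis can be as simple as recomputing the posterior across a plausible range of likelihoods; if the conclusion holds over the whole range, it is robust to those assumptions. A sketch (the ranges below are illustrative, not prescribed by the rubric):

```python
def bayesian_update(prior, p_e_given_h, p_e_given_not_h):
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Vary both likelihoods around the point estimates from the Quick Example
# to see how fragile the ~88% posterior is.
for p_eh in (0.65, 0.75, 0.85):
    for p_enh in (0.10, 0.15, 0.25):
        post = bayesian_update(0.60, p_eh, p_enh)
        print(f"P(E|H)={p_eh:.2f}  P(E|~H)={p_enh:.2f}  posterior={post:.1%}")
```

Across this grid the posterior stays roughly between 80% and 93%, so the launch conclusion does not hinge on any single likelihood estimate.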
For forecasting:
For hypothesis testing:
For risk assessment:
For avoiding bias:
Do:
Don't:
Additional resources:
- resources/template.md
- resources/methodology.md
- resources/examples/product-launch.md
- resources/examples/medical-diagnosis.md
- resources/evaluators/rubric_bayesian_reasoning_calibration.json

Bayesian Formula (Odds Form):
Posterior Odds = Prior Odds × Likelihood Ratio
Likelihood Ratio:
LR = P(Evidence | Hypothesis True) / P(Evidence | Hypothesis False)
Output naming: bayesian-reasoning-calibration.md or {topic}-forecast.md