From thinking-frameworks-skills
Applies Bayesian reasoning to update priors with evidence for better forecasts, calibration, and decisions under uncertainty. Use for predictions, hypothesis testing, or risk assessment.
```shell
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
```

This skill uses the workspace's default tool permissions.
Core formula: P(H|E) = P(E|H) × P(H) / P(E), where P(H) = prior, P(E|H) = likelihood, P(H|E) = posterior.
Quick Example:
# Should we launch Feature X?
## Prior Belief
Before beta testing: 60% chance of adoption >20%
- Base rate: Similar features get 15-25% adoption
- Our feature seems stronger than average
- Prior: 60%
## New Evidence
Beta test: 35% of users adopted (70 of 200 users)
## Likelihoods
If true adoption is >20%:
- P(seeing 35% in beta | adoption >20%) = 75% (likely to see high beta if true)
If true adoption is ≤20%:
- P(seeing 35% in beta | adoption ≤20%) = 15% (unlikely to see high beta if false)
## Bayesian Update
Posterior = (75% × 60%) / [(75% × 60%) + (15% × 40%)]
Posterior = 45% / (45% + 6%) = 88%
## Conclusion
Updated belief: 88% confident adoption will exceed 20%
Evidence strongly supports launch, but not certain.
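The arithmetic above can be reproduced in a few lines. This is a minimal sketch; the function name `bayes_update` is illustrative, not part of the skill's resources:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) for a binary hypothesis via Bayes' theorem."""
    # Total probability of the evidence: P(E) = P(E|H)P(H) + P(E|~H)P(~H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Feature X example: prior 60%, likelihoods 75% (if true) and 15% (if false)
posterior = bayes_update(prior=0.60, p_e_given_h=0.75, p_e_given_not_h=0.15)
print(f"{posterior:.0%}")  # 88%
```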
Copy this checklist and track your progress:
Bayesian Reasoning Progress:
- [ ] Step 1: Define the question
- [ ] Step 2: Establish prior beliefs
- [ ] Step 3: Identify evidence and likelihoods
- [ ] Step 4: Calculate posterior
- [ ] Step 5: Calibrate and document
Step 1: Define the question
Clarify hypothesis (specific, testable claim), probability to estimate, timeframe (when outcome is known), success criteria, and why this matters (what decision depends on it). Example: "Product feature will achieve >20% adoption within 3 months" - matters for launch decision.
Step 2: Establish prior beliefs
Set an initial probability using base rates (general frequency), a reference class (similar situations), specific differences from that class, and an explicit probability assignment with justification. Good priors are grounded in base rates, account for relevant differences, are honest about uncertainty, and include ranges when unsure (e.g., 40-60%). Avoid purely intuitive priors, ignoring base rates, or extreme values without justification.
Step 3: Identify evidence and likelihoods
Assess evidence (specific observation/data), diagnostic power (does it distinguish hypotheses?), P(E|H) (probability if hypothesis TRUE), P(E|¬H) (probability if FALSE), and calculate likelihood ratio = P(E|H) / P(E|¬H). LR > 10 = very strong evidence, 3-10 = moderate, 1-3 = weak, ≈1 = not diagnostic, <1 = evidence against.
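The strength bands above can be applied mechanically. A sketch, with the beta-test numbers from the Quick Example; `lr_strength` is a hypothetical helper:

```python
def likelihood_ratio(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """LR = P(E | H true) / P(E | H false)."""
    return p_e_given_h / p_e_given_not_h

def lr_strength(lr: float) -> str:
    """Map a likelihood ratio to the strength bands in Step 3."""
    if lr > 10:
        return "very strong"
    if lr >= 3:
        return "moderate"
    if lr > 1:
        return "weak"
    if lr == 1:
        return "not diagnostic"
    return "evidence against"

lr = likelihood_ratio(0.75, 0.15)  # beta-test evidence from the example
print(lr, lr_strength(lr))  # 5.0 moderate
```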
Step 4: Calculate posterior
Apply Bayes' Theorem: P(H|E) = [P(E|H) × P(H)] / P(E), or use odds form: Posterior Odds = Prior Odds × Likelihood Ratio. Calculate P(E) = P(E|H)×P(H) + P(E|¬H)×P(¬H), get posterior probability, and interpret change. For simple cases → Use resources/template.md calculator. For complex cases (multiple hypotheses) → Study resources/methodology.md.
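The odds form is often easier to run in your head. Using the Quick Example's numbers again (a sketch, not the template.md calculator):

```python
prior = 0.60
lr = 0.75 / 0.15                  # likelihood ratio = 5.0

prior_odds = prior / (1 - prior)  # 0.6 / 0.4 = 1.5
posterior_odds = prior_odds * lr  # 1.5 * 5 = 7.5

# Convert odds back to a probability: odds / (1 + odds)
posterior = posterior_odds / (1 + posterior_odds)
print(f"{posterior:.0%}")  # 88%, matching the probability-form calculation
```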
Step 5: Calibrate and document
Check calibration (over/underconfident?), validate assumptions (are likelihoods reasonable?), perform sensitivity analysis, create bayesian-reasoning-calibration.md, and note limitations. Self-check using resources/evaluators/rubric_bayesian_reasoning_calibration.json: verify prior based on base rates, likelihoods justified, evidence diagnostic (LR ≠ 1), calculation correct, posterior calibrated, assumptions stated, sensitivity noted. Minimum standard: Score ≥ 3.5.
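A quick sensitivity analysis can be as simple as varying the likelihood estimates and watching how far the posterior moves. A sketch using the Quick Example's point estimates (the grid values are illustrative assumptions):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H|E) for a binary hypothesis via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Vary P(E|H) and P(E|~H) around the example's point estimates (0.75, 0.15)
for p_eh in (0.65, 0.75, 0.85):
    for p_enh in (0.10, 0.15, 0.25):
        post = bayes_update(0.60, p_eh, p_enh)
        print(f"P(E|H)={p_eh:.2f}  P(E|~H)={p_enh:.2f}  posterior={post:.0%}")
# If the posterior stays near or above 80% across the grid,
# the launch conclusion is robust to these likelihood assumptions.
```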
For forecasting:
For hypothesis testing:
For risk assessment:
For avoiding bias:
Do:
Don't:
Resources:
- resources/template.md
- resources/methodology.md
- resources/examples/product-launch.md
- resources/examples/medical-diagnosis.md
- resources/evaluators/rubric_bayesian_reasoning_calibration.json

Bayesian Formula (Odds Form):
Posterior Odds = Prior Odds × Likelihood Ratio
Likelihood Ratio:
LR = P(Evidence | Hypothesis True) / P(Evidence | Hypothesis False)
Output naming: bayesian-reasoning-calibration.md or {topic}-forecast.md