Orchestration workflow for Bayesian statistical analysis. Invoke when performing Bayesian modeling, statistical analysis with Stan, or building probabilistic models.
Orchestrates Bayesian modeling workflows from data exploration to final model selection. Triggers when you begin statistical analysis requiring posterior inference, prior specification, or model validation.
```
/plugin marketplace add sunxd3/bayesian-statistician-plugin
/plugin install sunxd3-bayesian-statistician@sunxd3/bayesian-statistician-plugin
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill defines a structured workflow for Bayesian statistical analysis. Follow these guidelines when building Bayesian models.
The final deliverable must be a Bayesian model: specify priors, perform posterior inference, and evaluate via posterior predictive checks. Non-Bayesian methods may be explored as baselines/context but must not be selected or reported as the solution.
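As a minimal illustration of those three requirements — not the Stan-based workflow this skill actually orchestrates — a conjugate Beta-Binomial model covers prior specification, posterior inference, and a posterior predictive check in a few lines (the data values here are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Observed data: 7 successes out of 20 trials (illustrative numbers only).
n_trials, n_successes = 20, 7

# Prior: Beta(2, 2), weakly informative and centered at 0.5.
alpha_prior, beta_prior = 2.0, 2.0

# Conjugate update gives the posterior in closed form: Beta(alpha + y, beta + n - y).
alpha_post = alpha_prior + n_successes
beta_post = beta_prior + n_trials - n_successes

# Posterior inference: draw samples of the success probability.
theta = rng.beta(alpha_post, beta_post, size=10_000)

# Posterior predictive check: simulate replicated datasets and compare
# them to the observed count via a simple tail probability.
y_rep = rng.binomial(n_trials, theta)
ppc_tail = np.mean(y_rep >= n_successes)

print(f"posterior mean: {theta.mean():.3f}")
print(f"PPC tail probability: {ppc_tail:.3f}")
```

In the real workflow the posterior comes from Stan via cmdstanpy and the checks from ArviZ, but every accepted model must exhibit these same three pieces.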
Be pragmatic, skeptical, and technically precise. Flag computational issues (identifiability, convergence, misspecification) when relevant.
Your outputs serve two purposes: terminal output for the user, and written artifacts (reports, logs) that persist across phases. Use the artifact-guidelines skill to get the full guidelines.

You have two complementary tools for tracking work: the TodoList (TodoWrite tool) and log.md (a file).
Use the TodoWrite tool VERY frequently to ensure you are tracking tasks and giving users visibility into progress. These tools are EXTREMELY helpful for planning and breaking down complex tasks into smaller steps. If you do not use this tool when planning, you may forget important tasks - and that is unacceptable.
It is critical that you mark todos as completed as soon as you are done with a task. Do not batch up multiple tasks before marking them as completed.
Users may configure 'hooks', shell commands that execute in response to events like tool calls, in settings. Treat feedback from hooks, including <user-prompt-submit-hook>, as coming from the user. If you get blocked by a hook, determine if you can adjust your actions in response to the blocked message. If not, ask the user to check their hooks configuration.
Use parallel subagents to explore multiple perspectives simultaneously, particularly for EDA and model design where uncertainty is high. Each instance needs isolated workspace and files to avoid conflicts. Launch all instances at once using multiple Task tool calls in a single message.
Setup: Prepare separate data copies if needed and assign each instance its own output directory (e.g., eda/analyst_1/, eda/analyst_2/). Give each instance a different focus area.
Execution: Typical count is 2-3 instances. If an instance fails, relaunch once; if it fails again, proceed with successful instances.
After completion: Synthesize findings from all instances and document convergent patterns (all agree) and divergent insights (unique to one).
Dependencies are defined in pyproject.toml. To set up:

```
uv init && uv add arviz cmdstanpy matplotlib numpy pandas seaborn
```

Or copy the pyproject.toml and run `uv sync`.
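If you write the pyproject.toml by hand instead, a minimal file matching the `uv add` command above might look like this (the project name and Python floor are placeholders, not requirements of this skill):

```toml
[project]
name = "bayesian-analysis"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "arviz",
    "cmdstanpy",
    "matplotlib",
    "numpy",
    "pandas",
    "seaborn",
]
```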
Use uv exclusively (never pip), and run all scripts with `uv run`. Every accepted Bayesian model must meet the deliverable requirements above: priors specified, posterior inference performed, and posterior predictive checks run.
Subagents are ephemeral: they finish their task and disappear forever. Files are the only persistent memory and communication channel between subagents and across phases.
Use this structure unless the task requires deviation:
```
data/                      # source data and copies
eda/                       # Phase 1: Data Understanding
  eda_report.md            # final synthesis (if solo) or consolidated report
  analyst_1/               # if parallel: each instance gets own folder
  analyst_2/
experiments/               # Phases 2-3: Model Design & Development
  experiment_plan.md       # Phase 2 output: proposed models
  experiment_1/            # one folder per model attempt
    prior_predictive/
    simulation/
    fit/
    posterior_predictive/
    critique/
  experiment_2/
model_assessment/          # Phase 4: quality metrics and comparison
  assessment_report.md
final_report.md            # Phase 6 output
log.md                     # running log of decisions and issues
```
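The top-level scaffold can be created up front with a short script, so every subagent can rely on its output directory existing. The directory names come from the layout above; the `workspace` root argument is an assumption for illustration:

```python
from pathlib import Path

# Top-level directories from the default workspace layout.
LAYOUT = ["data", "eda", "experiments", "model_assessment"]

def scaffold(root: str = ".") -> list[Path]:
    """Create the top-level workspace directories, returning the paths made."""
    created = []
    for rel in LAYOUT:
        path = Path(root) / rel
        path.mkdir(parents=True, exist_ok=True)
        created.append(path)
    return created

dirs = scaffold("workspace")
print([d.as_posix() for d in dirs])
```

Per-experiment subfolders (prior_predictive/, fit/, and so on) are better created lazily, one set per model attempt.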
Give each subagent explicit file paths in its prompt (e.g., "Read data/data.json and write outputs to eda/analyst_1/"). Point subagents to files produced by previous subagents rather than summarizing content yourself. Ask subagents to report what files they created, with brief descriptions, so you can keep records and pass information along the chain.
Example: Tell model-designer to "Read the EDA report at eda/eda_report.md" rather than summarizing the EDA findings yourself.
Invoke eda-analyst to explore the data, writing outputs under eda/. For complex datasets, run 1-3 instances in parallel with different focus areas, then synthesize results into eda/eda_report.md.
Invoke model-designer to propose models, writing the plan to experiments/experiment_plan.md. Run 2-3 instances in parallel. Assign each a distinct structural hypothesis (e.g., direct effects vs. hierarchical grouping vs. latent dynamics) rather than arbitrary model families. Synthesize their proposals into a unified experiment plan that covers competing mechanisms.
Working under experiments/, build a population of validated models and iteratively improve until you find the best variant of each model class.
For each model class from the experiment plan:
Initial variants: Start with variants proposed by model-designer (baseline, scientific, extensions)
Validate each variant by running stages sequentially:
- prior-predictive-checker: on failure, skip the variant
- recovery-checker: on failure, skip the variant
- model-fitter: on failure, try one fix with model-refiner, then skip
- posterior-predictive-checker: always run
- model-critique: assess and suggest improvements

Special case: if the baseline variant fails pre-fit validation (prior or recovery check), try one fix with model-refiner. If it still fails, skip the entire model class (this signals a fundamental mismatch).
Assess population: If at least one variant validated successfully, invoke model-selector
Follow the model-selector strategy:

- If it recommends further refinement: invoke model-refiner with the critique suggestions to generate new variants, then return to step 2.
- If it returns ADEQUATE or EXHAUSTED: invoke decision-auditor to verify EDA coverage before accepting.

Audit terminal decisions: when model-selector returns ADEQUATE or EXHAUSTED, invoke decision-auditor with the selector's decision, the path to the EDA report, the path to the experiment plan, and the list of validated experiments.

Invoke model-selector after completing the initial variants and after each refinement round.
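The per-variant gating in step 2 can be sketched as plain control flow. The checker functions below are hypothetical stand-ins for the real subagent invocations — each returns True on pass and False on fail — not an actual API of this skill:

```python
def validate_variant(variant, checkers, refiner=None):
    """Run validation stages in order, applying the skip/fix rules."""
    # Pre-fit gates: a failure skips the variant outright.
    for stage in ("prior_predictive", "recovery"):
        if not checkers[stage](variant):
            return "skipped"

    # Fitting: on failure, try one fix with the refiner, then give up.
    if not checkers["fit"](variant):
        if refiner is None or not checkers["fit"](refiner(variant)):
            return "skipped"

    # Post-fit stages always run, regardless of outcome.
    checkers["posterior_predictive"](variant)
    checkers["critique"](variant)
    return "validated"

# Toy usage: a variant whose every stage passes.
always_pass = {s: (lambda v: True) for s in
               ("prior_predictive", "recovery", "fit",
                "posterior_predictive", "critique")}
print(validate_variant("baseline", always_pass))  # → validated
```

The special case for baseline variants (one refiner attempt before abandoning the whole model class) would wrap this function at the call site.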
Invoke report-writer to generate the final report at final_report.md.