This skill should be used when the user asks to "design a Monte Carlo simulation", "validate my theoretical result", "set up a simulation study", "how many replications do I need", "plan a coverage study", or needs to design simulations for verifying theoretical results. Covers sample size determination, convergence diagnostics, and result presentation. Specialized for statistical methodology papers.
From papermill (`npx claudepluginhub queelius/claude-anvil --plugin papermill`). This skill uses the workspace's default tool permissions.
Help the researcher design rigorous Monte Carlo simulations to validate theoretical results. Simulations bridge the gap between theory and practice -- they demonstrate that analytical formulas work as predicted and reveal finite-sample behavior.
Read .papermill/state.md (Read tool) for:
If .papermill/state.md does not exist, ask the user which theoretical result needs validation. Simulation design can proceed without the state file — suggest running /papermill:init afterward.
Scan the repository for existing simulation code and results (Glob/Read tools).
Ask: "Which theoretical result are you validating with this simulation?"
Common simulation targets in methodology papers:
Specify exactly how to generate synthetic data:
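A fully specified data-generating process can be pinned down as a short function. The sketch below is illustrative only (the function name, normal-location model, and default parameter values are assumptions, not the paper's actual model); the key points are that every parameter is explicit and a fixed seed makes each replicate reproducible.

```python
import numpy as np

def generate_data(n, theta=1.5, sigma=1.0, seed=0):
    """Draw one synthetic dataset: y_i = theta + eps_i, eps_i ~ N(0, sigma^2).

    All parameters are explicit arguments, and a fixed seed per replicate
    makes every dataset exactly reproducible.
    """
    rng = np.random.default_rng(seed)
    return theta + sigma * rng.normal(size=n)
```

Passing the replicate index as the seed gives independent yet reproducible datasets across replications.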
| What to measure | How to summarize |
|---|---|
| Bias | Mean(estimate) - true value |
| Variance | Var(estimates) across replicates |
| MSE | Bias^2 + Variance |
| Coverage | Fraction of CIs containing true value |
| Convergence rate | Plot metric vs. n on log scale |
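The metrics in the table take only a few lines to compute once per-replicate point estimates and interval endpoints are stored. A minimal sketch (array and function names are illustrative):

```python
import numpy as np

def summarize(estimates, lower, upper, true_value):
    """Summarize R Monte Carlo replicates against the known true value.

    estimates, lower, upper: length-R arrays of point estimates and
    confidence-interval endpoints, one entry per replicate.
    """
    estimates = np.asarray(estimates, dtype=float)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    bias = estimates.mean() - true_value
    variance = estimates.var(ddof=1)               # sample variance across replicates
    mse = np.mean((estimates - true_value) ** 2)   # equals bias^2 + variance (ddof=0)
    coverage = np.mean((lower <= true_value) & (true_value <= upper))
    return {"bias": bias, "variance": variance, "mse": mse, "coverage": coverage}
```

Computing MSE directly as the mean squared deviation avoids a small discrepancy with the bias² + variance identity, which holds exactly only for the uncorrected (ddof=0) variance.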
Parallelize replications where possible: parallel (R), multiprocessing (Python), or OpenMP (C++).

Before presenting results, verify the simulation itself is reliable:
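One such reliability check is whether the number of replications R is large enough for the quantity being reported. For an estimated coverage probability p, the Monte Carlo standard error is sqrt(p(1-p)/R), so the required R for a target precision follows directly. A sketch (the default values are illustrative, not a recommendation):

```python
import math

def coverage_se(p, R):
    """Monte Carlo standard error of an estimated coverage probability p
    based on R independent replicates."""
    return math.sqrt(p * (1 - p) / R)

def replications_for_coverage(p=0.95, target_se=0.005):
    """Replications needed so the coverage SE falls below target_se
    (returned as a float; round up to the next integer)."""
    return p * (1 - p) / target_se ** 2
```

For nominal 95% coverage reported to half a percentage point, this gives R = 0.95 × 0.05 / 0.005² = 1900 replications.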
Design the output tables and figures:
Warn about:
If .papermill/state.md exists, register the simulation (Edit tool) under experiments. If it does not exist, skip registration and suggest running /papermill:init.
The entry uses the standard experiment schema with an optional config block for simulation-specific parameters:
experiments:
  - name: "simulation-name"
    type: "simulation"
    hypothesis: "Empirical covariance matches theoretical FIM as n grows"
    status: "planned"
    script: "research/simulate_covariance.R"
    last_run: null
    config: # simulation-specific extension, not in the base experiment schema
      replications: 5000
      sample_sizes: [50, 100, 200, 500, 1000]
      parameter_configs: 3
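As a toy end-to-end illustration of a hypothesis like the one above (not the registered experiment, which is an R script): for a N(theta, sigma²) mean with sigma known, the MLE is the sample mean and the inverse Fisher information is sigma²/n, so the empirical variance of the MLE across replicates should approach sigma²/n.

```python
import numpy as np

def empirical_vs_fim(n=200, reps=2000, theta=1.5, sigma=1.0, seed=0):
    """Compare the empirical variance of the MLE (the sample mean) to the
    inverse Fisher information sigma^2 / n for a N(theta, sigma^2) model."""
    rng = np.random.default_rng(seed)
    estimates = rng.normal(theta, sigma, size=(reps, n)).mean(axis=1)
    return estimates.var(ddof=1), sigma ** 2 / n
```

With these defaults the two numbers agree to within a few percent; the gap shrinks at the usual sqrt(2/reps) rate as replications grow.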
Append a timestamped note documenting the simulation design.
Based on the simulation status, suggest the most relevant next step:
- "/papermill:proof if the proof itself needs work."
- "/papermill:proof to re-examine the proof's assumptions."
- "/papermill:review to get feedback on the presentation of simulation results."