From open-science-skills
Guides design, estimation, and diagnostics for list experiments (item count technique) on sensitive survey questions. Use for suitability checks, item selection, design variants, estimators, assumption tests, and power analysis.
Install via:

```
npx claudepluginhub scdenney/open-science-skills --plugin open-science-skills
```

This skill uses the workspace's default tool permissions.
- **Assess sensitivity bias first:** Before committing to a list experiment, consult domain-specific evidence on sensitivity bias. Blair, Coppock, and Moor's (2020) meta-analysis of 30 years of list experiments shows that sensitivity biases are typically smaller than 10 percentage points. A list experiment is not automatically the right choice for any sensitive topic.
Guides design of survey instruments for experimental social science: question wording, response scales, flow organization, treatment delivery, pretesting, bias mitigation.
Matches research questions to appropriate designs, sampling strategies, and validity controls. Useful for experimental, qualitative, and mixed-methods guidance.
Provides UI/UX resources: 50+ styles, color palettes, font pairings, guidelines, charts for web/mobile across React, Next.js, Vue, Svelte, Tailwind, React Native, Flutter. Aids planning, building, reviewing interfaces.
- **Use the `list` R package:** The `list` R package (Blair, Chou & Imai) provides a unified interface for the difference-in-means, nonlinear least squares (NLS), maximum likelihood (ML), and combined estimators, plus Bayesian MCMC hierarchical models, along with all standard diagnostic tests.
- **Run diagnostics:** Use `ict.test()` from Blair & Imai's (2012) `list` package for assumption tests, and `ictreg()` for estimation.
- **Plan for power:** The `list` package's simulation tools support power analysis. Rule of thumb: assume effective sample sizes 5–10× below what a direct-question study would require.
- **Cite the implementation:** Is the `list` R package (Blair, Chou & Imai) cited as the implementation source?
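A minimal sketch of the difference-in-means estimator mentioned above: the estimated prevalence of the sensitive item is the mean item count in the treatment group (which sees the J baseline items plus the sensitive item) minus the mean count in the control group (J baseline items only). Written in Python rather than R purely for illustration; the function name and toy data are hypothetical, and real analyses should use `ictreg()` in the `list` package.

```python
import numpy as np

def list_experiment_dim(y_treat, y_control):
    """Difference-in-means estimator for a list experiment.

    y_treat:   item counts from the treatment group (0..J+1).
    y_control: item counts from the control group (0..J).
    Returns the estimated prevalence of the sensitive item and a
    simple unequal-variance standard error.
    """
    y_treat = np.asarray(y_treat, dtype=float)
    y_control = np.asarray(y_control, dtype=float)
    # Prevalence estimate: difference in mean item counts.
    est = y_treat.mean() - y_control.mean()
    # Independent-samples SE (Welch-style, no pooling).
    se = np.sqrt(y_treat.var(ddof=1) / len(y_treat)
                 + y_control.var(ddof=1) / len(y_control))
    return est, se

# Toy data (hypothetical): J = 3 baseline items.
control = [1, 2, 1, 0, 2, 1, 3, 2]   # mean 1.5
treat = [2, 2, 1, 1, 3, 2, 3, 2]     # mean 2.0
est, se = list_experiment_dim(treat, control)  # est = 0.5
```

Note that this estimator is unbiased only under the standard list-experiment assumptions (no design effects, no liars), which is exactly what the diagnostic tests above are meant to probe.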