By scdenney
Claude Code skills for experimental social science and computational text analysis: conjoint design, diagnostics, and data cleaning; survey design; list experiments; cross-national design; topic modeling; LLM text classification; VLM-based OCR pipelines; post-OCR cleanup; paper pre-submission review; hypothesis building; narrative building; pre-registration; and methods reporting. Invoke a skill as /skill-name or let Claude auto-trigger it based on context.
npx claudepluginhub scdenney/open-science-skills --plugin open-science-skills

Apply conjoint data cleaning expertise to the task below. Cover the relevant aspects of Qualtrics column conventions, wide-to-long reshaping, choice variable mapping, attribute translation, pilot data detection, reference category selection, and validation based on what is needed.
Apply conjoint experiment design expertise to the task below. Cover the relevant aspects of attribute architecture, statistical power, AMCE/AMIE estimation, design variants, and quality checks based on what is needed.
Run a conjoint diagnostics review of the task or study described below. Evaluate design integrity, estimation choices, measurement error, external validity, and interpretation against the diagnostic checklist.
Apply cross-national survey experiment design expertise to the task below. Cover per-country power, measurement equivalence, sensitivity bias auditing, instrument localization, and multi-country estimation as relevant.
Apply causal hypothesis architecture expertise to the task below. Cover falsifiability, counterfactuals, DAGs, FPCI, three-level hypothesis specification, equivalence testing, and SESOI as relevant.
Apply list experiment (item count technique) design, estimation, and diagnostic expertise to the task below. Cover pre-design sensitivity assessment, control list design, estimator choice, assumption testing, and common failure modes as relevant.
Apply methods reporting expertise to the task below. Run through the reporting checklist covering CONSORT standards, JARS pre-registration elements, DA-RT transparency requirements, and open science infrastructure as relevant.
Apply scientific narrative expertise to the task below. Cover introduction logic, literature framing, the Why-to-If-Then funnel, cumulative framing, and multi-experiment coherence as relevant.
Run a comprehensive pre-submission audit of the current paper using parallel review agents. Covers content and argument, numerical consistency, references and DOIs, writing quality, figures and formatting, and replication archive completeness. Returns a severity-ranked report with a journal-readiness checklist.
Apply post-OCR text cleanup expertise to the task below. Cover cleanup strategy selection, LLM-based and rule-based correction, quality diagnostics, multilingual considerations, corpus-level QA, and provenance tracking as relevant.
Apply pre-analysis plan expertise to the task below. Cover PAP structure, registry selection, analytical strategy specification, confirmatory vs. exploratory distinctions, and deviation documentation as relevant.
Apply survey instrument design expertise to the task below. Cover question construction, scale design, survey flow, social desirability mitigation, pretesting, and respondent burden as relevant.
Apply LLM-based text classification expertise to the task below. Cover codebook design (Halterman & Keith format), learning regime selection, human-LLM hybrid workflows, cross-model validation, and agreement statistics as relevant.
Apply structural topic model expertise to the task below. Cover STM specification with metadata covariates, topic count selection, coherence-exclusivity diagnostics, and reporting standards as relevant.
Apply VLM-based OCR pipeline expertise to the task below. Cover model selection, image handling, prompt engineering, pipeline architecture, batch strategy, accuracy evaluation, and reproducibility as relevant.
Specialized logic for cleaning and reshaping choice-based conjoint data from Qualtrics exports into analysis-ready long format. Use when (1) preparing conjoint survey data for analysis, (2) reshaping wide Qualtrics exports to long format, (3) mapping conjoint choice and rating variables to profile-level outcomes, (4) translating attribute labels across languages, (5) diagnosing pilot contamination or data quality issues in conjoint data, or (6) setting AMCE reference categories. Covers Qualtrics column conventions, existing R packages, wide-to-long reshaping, choice variable encoding, attribute-level translation, data validation, and analysis-ready output.
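The wide-to-long reshape described above can be sketched in Python with pandas. The column naming scheme below is a simplified stand-in, since real Qualtrics exports follow their own conventions:

```python
import pandas as pd

# Hypothetical wide export: one row per respondent. Real Qualtrics
# exports use their own column conventions; the names here are a
# simplified stand-in for illustration only.
wide = pd.DataFrame({
    "resp_id": [1, 2],
    "task1_prof1_party": ["Left", "Right"],
    "task1_prof2_party": ["Right", "Left"],
    "task1_choice": [1, 2],   # which profile the respondent chose
})

rows = []
for _, r in wide.iterrows():
    for prof in (1, 2):
        rows.append({
            "resp_id": r["resp_id"],
            "task": 1,
            "profile": prof,
            "party": r[f"task1_prof{prof}_party"],
            # binary outcome: 1 if this profile was the chosen one
            "chosen": int(r["task1_choice"] == prof),
        })
long_df = pd.DataFrame(rows)
```

Each respondent-task-profile combination becomes one row with a binary chosen outcome, the analysis-ready shape AMCE estimation expects.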
Specialized logic for designing conjoint and factorial vignette experiments. Use when (1) designing a new conjoint experiment, (2) selecting and structuring attributes and levels, (3) conducting a conjoint power analysis, (4) choosing between design variants (paired-choice, rating, factorial vignette), (5) writing conjoint regression specifications, or (6) drafting the conjoint portion of a pre-analysis plan. Covers attribute architecture, AMCE/MM estimation, interaction effects, power formulas, treatment validation, and design variants.
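The power logic mentioned above can be roughed out as a minimum detectable AMCE under a normal approximation. This sketch assumes a forced-choice design and ignores within-respondent clustering, so treat its answer as optimistic:

```python
from math import sqrt

def conjoint_mde(n_respondents, n_tasks, n_profiles, n_levels,
                 z_alpha=1.96, z_power=0.84):
    """Rough minimum detectable AMCE under a normal approximation.

    Assumes a forced-choice design (outcome variance ~ 0.25), uniform
    level randomization, and independent observations; ignoring
    within-respondent clustering makes the result optimistic.
    """
    obs = n_respondents * n_tasks * n_profiles   # profile-level observations
    per_level = obs / n_levels                   # observations per level
    se = sqrt(0.25 / per_level + 0.25 / per_level)
    return (z_alpha + z_power) * se              # 80% power, two-sided 0.05

# e.g. 1,000 respondents, 5 tasks, 2 profiles per task, 4-level attribute
mde = conjoint_mde(1000, 5, 2, 4)   # roughly a 4-point AMCE
```

Doubling respondents shrinks the detectable effect by roughly a factor of sqrt(2), which is why per-country power matters so much in multi-country designs.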
Systematic diagnostic checklist for evaluating choice-based conjoint experiments. Use when (1) reviewing a conjoint paper or manuscript, (2) auditing a conjoint analysis script or dataset, (3) assessing measurement error and IRR in conjoint data, (4) evaluating external validity of a conjoint design, or (5) checking interpretation of AMCEs, marginal means, and interaction effects. Covers design, estimation, measurement error correction, external validity, and reporting.
Guides the design of cross-national comparative survey experiments. Use when (1) selecting countries for a multi-country study, (2) localizing experimental instruments across languages and institutional contexts, (3) calibrating origin-country stimuli for immigration experiments, (4) conducting per-country power analyses, or (5) planning a cross-national analytical strategy with pooled and per-country models. Covers case selection, instrument localization, ecological validity, power management, and sensitivity bias auditing.
Guides the transformation of theoretical concepts into falsifiable, counterfactual-based hypotheses with formal estimands. Use when (1) drafting hypotheses for a pre-analysis plan, (2) specifying estimands and linking them to regression models, (3) choosing between NHST, equivalence, and minimum-effect tests, (4) structuring a multi-experiment hypothesis architecture, or (5) classifying hypotheses as primary, secondary, or exploratory. Ensures every claim has a named estimand, a SESOI, and a three-level specification (conceptual, operationalized, statistical).
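The equivalence-testing option listed above can be sketched as two one-sided tests (TOST) against a symmetric SESOI, using a normal approximation; the estimate, standard error, and SESOI in the example are hypothetical:

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def tost(estimate, se, sesoi):
    """Two one-sided tests for equivalence against a symmetric SESOI.

    H0: |effect| >= sesoi. Rejecting both one-sided nulls supports
    equivalence. Normal approximation; a sketch, not a full implementation.
    """
    z_lower = (estimate + sesoi) / se   # test against the lower bound
    z_upper = (estimate - sesoi) / se   # test against the upper bound
    p_lower = 1 - norm_cdf(z_lower)     # H0: effect <= -sesoi
    p_upper = norm_cdf(z_upper)         # H0: effect >= +sesoi
    return max(p_lower, p_upper)        # TOST equivalence p-value

# e.g. estimate 0.01, SE 0.02, SESOI of 0.10 -> equivalence supported
p = tost(0.01, 0.02, 0.10)
```

A small estimate with a precise standard error yields a tiny TOST p-value, while the same SESOI with a noisy estimate does not, which is exactly why the SESOI must be fixed before seeing the data.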
Guides design, estimation, and diagnostics for list experiments (item count technique, ICT). Use when (1) deciding whether a list experiment is warranted for a sensitive question, (2) designing the control list or choosing baseline items, (3) selecting between design variants (single, double, placebo), (4) choosing an estimator (difference-in-means, multivariate NLSreg/MLreg, combined), (5) testing the identifying assumptions (no design effect, no floor/ceiling), (6) assessing or diagnosing mechanical inflation or artificial deflation, or (7) interpreting list experiment results in relation to direct question estimates. Covers the full pipeline from pre-design sensitivity assessment through statistical inference and power analysis.
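The difference-in-means estimator named above fits in a few lines of Python; the item counts here are toy data, not real responses:

```python
from statistics import mean, variance

def list_experiment_dim(treated_counts, control_counts):
    """Difference-in-means estimator for a list experiment (ICT).

    The treatment list adds the sensitive item to the control list; under
    the no-design-effect and no-liars assumptions, the mean difference in
    item counts estimates the prevalence of the sensitive trait.
    """
    est = mean(treated_counts) - mean(control_counts)
    se = (variance(treated_counts) / len(treated_counts)
          + variance(control_counts) / len(control_counts)) ** 0.5
    return est, se

# toy counts: treatment list has one extra (sensitive) item
treat = [2, 3, 1, 4, 2, 3, 2, 1]
ctrl = [1, 2, 1, 3, 2, 1, 2, 1]
est, se = list_experiment_dim(treat, ctrl)   # est = 0.625
```

The multivariate estimators (NLSreg/MLreg) extend this with covariates, but the design-stage assumption checks apply to both.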
Implements high-transparency reporting standards for experimental social science. Use when (1) drafting or auditing a methods section, (2) preparing a pre-analysis plan or pre-registration, (3) documenting a conjoint or factorial vignette design, (4) building a CONSORT sample flow, or (5) ensuring compliance with APSA Experimental Section, JARS, and DA-RT standards. Includes a 45-item mandatory checklist.
Expert logic for drafting scientific introductions and literature reviews. Use when (1) transforming a research interest into a grounded scientific introduction, (2) writing or reviewing a literature review, (3) structuring a multi-experiment narrative, (4) establishing the "Why-to-If-Then" funnel from theory to hypothesis, or (5) framing a cumulative research program. Prevents "methods-driven" writing by ensuring substantive foundations precede design choices.
Runs a comprehensive pre-submission audit using parallel review agents. Covers content/argument, numerical consistency, references/DOIs, writing quality, figures/formatting, and replication archive. Use when (1) preparing a manuscript for journal submission, (2) checking internal consistency of numbers across abstract, body, tables, and SI, (3) auditing a bibliography for missing DOIs or formatting issues, (4) reviewing a replication archive for completeness, (5) verifying data availability, ethics/IRB, and funding statements, (6) running a cross-check on figures, tables, and formatting, or (7) assessing writing quality and terminology consistency.
Guides post-OCR text cleanup for research corpora. Covers LLM-based correction, rule-based fixes, quality diagnostics, multilingual considerations, and corpus-level quality assurance. Use when (1) choosing between LLM and rule-based OCR error correction, (2) designing prompts for LLM-based OCR cleanup, (3) applying constrained decoding to prevent correction hallucination, (4) building rule-based fixes for Unicode normalization or repetition artifacts, (5) evaluating cleanup quality beyond CER/WER, (6) handling diacritics restoration or script-specific spacing, (7) sampling and flagging documents for human review at corpus scale, or (8) tracking correction provenance for reproducibility.
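The rule-based tier (Unicode normalization plus repetition-artifact collapse) can be sketched minimally as follows; real pipelines layer many more rules on top:

```python
import re
import unicodedata

def rule_based_cleanup(text):
    """Minimal rule-based post-OCR pass: NFC normalization plus collapsing
    of common OCR artifacts. A sketch of the rule-based tier only.
    """
    # Normalize decomposed diacritics (e.g. 'e' + combining acute) to NFC
    text = unicodedata.normalize("NFC", text)
    # Collapse runs of 4+ identical characters, a common OCR artifact
    text = re.sub(r"(.)\1{3,}", r"\1", text)
    # Collapse repeated spaces and tabs
    text = re.sub(r"[ \t]+", " ", text)
    return text

cleaned = rule_based_cleanup("cafe\u0301  ----- nooooo")   # "café - no"
```

Rules like these are cheap and deterministic, which is why they usually run before any LLM-based correction pass and are easy to log for provenance.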
Guides writing a complete pre-analysis plan (PAP) for experimental social science. Use when (1) choosing a registry platform (OSF, EGAP, AsPredicted), (2) structuring a PAP document end-to-end, (3) specifying analytical strategies with locked, conditional, and exploratory tiers, (4) pre-registering analysis code on simulated data, (5) writing contingency plans for design failures, (6) documenting deviations from a registered plan, or (7) planning a registered report submission. Covers PAP structure, decision rules, code registration, contingency trees, deviation reporting, and timeline logistics.
Guides the design of survey instruments for experimental social science. Use when (1) writing survey questions or constructing response scales, (2) organizing survey flow and block ordering, (3) designing treatment delivery within a survey, (4) planning pretesting or cognitive interviews, (5) managing respondent burden and survey length, (6) mitigating social desirability bias in sensitive questions, or (7) choosing between direct and indirect measurement. Covers question wording, scale construction, ordering effects, attention checks, and treatment-outcome separation.
Guides LLM-based text classification for survey and experimental text data. Covers codebook design, learning regime selection, model choice, human-LLM hybrid workflows, and validation. Use when (1) designing an LLM classification scheme for open-ended survey responses, (2) writing a codebook for LLM text annotation, (3) choosing between zero-shot, few-shot, fine-tuning, or instruction-tuning, (4) selecting a model for classification, (5) validating LLM classifications against human-coded ground truth, (6) implementing hybrid human-LLM workflows, (7) addressing reproducibility concerns, or (8) reporting LLM classification methods and results.
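The agreement-statistics step can be illustrated with a small Cohen's kappa implementation; the labels below are hypothetical codes, not real annotations:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters (e.g. LLM vs. human codes)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # chance agreement from the two raters' marginal label frequencies
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

human = ["pos", "neg", "pos", "neu", "neg", "pos"]
llm = ["pos", "neg", "neu", "neu", "neg", "pos"]
kappa = cohens_kappa(human, llm)   # 0.75
```

Kappa corrects raw agreement for chance, which matters when label distributions are skewed; for more than two coders or ordinal codes, Krippendorff's alpha is the usual alternative.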
Guides structural topic model (STM) specification for survey and experimental text data. Covers model selection, preprocessing, topic count diagnostics, covariate effects on prevalence, and reporting. Use when (1) selecting a topic model for open-ended survey responses or text corpora, (2) specifying an STM with metadata covariates, (3) choosing the number of topics and evaluating diagnostics, (4) interpreting topic content and estimating covariate effects, (5) preprocessing text data for topic modeling, (6) validating output against treatment groups, or (7) reporting topic modeling results.
Guides setup and execution of a VLM-based OCR pipeline for scanned historical and multilingual documents. Covers model selection, image handling, prompt engineering, batch processing, and accuracy evaluation. Use when (1) selecting a vision-language model for document OCR, (2) deciding on DPI and image extraction strategy for scanned PDFs, (3) writing language-specific OCR prompts for a VLM, (4) designing a multi-stage OCR pipeline with diagnostics, (5) planning batch OCR on HPC or SLURM infrastructure, (6) estimating GPU-hours and VRAM for a document corpus, (7) evaluating OCR accuracy with CER/WER and dictionary proxies, or (8) documenting an OCR pipeline for reproducibility.
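The CER metric used for accuracy evaluation is just edit distance over reference length, computable with a plain dynamic-programming routine; the strings below are illustrative:

```python
def levenshtein(a, b):
    """Edit distance via row-by-row dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance / reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

rate = cer("historical document", "histor1cal docoment")   # 2/19
```

WER is the same computation over token lists instead of characters; dictionary-based proxies step in when no ground-truth reference transcription exists.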