From awesome-cognitive-and-neuroscience-skills
Guides selection, fitting, evaluation, and interpretation of drift-diffusion models for two-choice RT data to uncover cognitive processes like evidence accumulation and response caution.
npx claudepluginhub neuroaihub/awesome_cognitive_and_neuroscience_skills --plugin awesome-cognitive-and-neuroscience-skills

This skill uses the workspace's default tool permissions.
This skill encodes expert knowledge for applying drift-diffusion models (DDMs) to two-choice reaction time data. DDMs decompose observed accuracy and RT distributions into latent cognitive processes — evidence accumulation rate, response caution, and non-decision time. This skill guides researchers through model variant selection, parameter fitting, and result evaluation, encoding domain-specific judgment that requires specialized training in computational cognitive modeling.
Before executing the domain-specific steps below, review the research-literacy skill for detailed methodology guidance.
This skill was generated by AI from academic literature. All parameters, thresholds, and citations require independent verification before use in research. If you find errors, please open an issue.
The DDM assumes that on each trial, noisy evidence accumulates over time from a starting point toward one of two decision boundaries. The key insight: observed RT = decision time + non-decision time, and accuracy depends on which boundary is reached first (Ratcliff, 1978).
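The accumulation process described here can be sketched as a minimal Euler-Maruyama simulation (the function name is illustrative; note that the noise scaling s is a convention, with s = 1 used by fast-dm and HDDM and s = 0.1 in Ratcliff's papers):

```python
import numpy as np

def simulate_ddm_trial(v=1.0, a=1.5, z=None, t0=0.3, s=1.0, dt=0.001, rng=None):
    """Simulate one DDM trial via Euler-Maruyama.

    v: drift rate, a: boundary separation, z: starting point
    (defaults to a/2, i.e., unbiased), t0: non-decision time (s),
    s: within-trial noise SD (scaling convention), dt: step size (s).
    Returns (rt, choice), where choice is 1 (upper boundary) or 0 (lower).
    """
    rng = rng or np.random.default_rng()
    x = a / 2 if z is None else z
    t = 0.0
    # Accumulate noisy evidence until a boundary is crossed
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    # Observed RT = decision time + non-decision time
    return t + t0, int(x >= a)

rt, choice = simulate_ddm_trial(rng=np.random.default_rng(0))
```

With a positive drift rate, most trials terminate at the upper boundary, and every RT exceeds t0, matching the decomposition above.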
| Parameter | Symbol | Cognitive Interpretation | Typical Range | Source |
|---|---|---|---|---|
| Drift rate | v | Quality/strength of evidence accumulation | 0.1 – 5.0 (commonly 0.5–3.0) | Ratcliff & McKoon, 2008; Voss et al., 2004, Table 2 |
| Boundary separation | a | Response caution (speed-accuracy tradeoff) | 0.5 – 2.5 (commonly 0.8–2.0) | Ratcliff & McKoon, 2008; Voss et al., 2004, Table 2 |
| Non-decision time | t0 (or Ter) | Encoding + motor execution time | 0.1 – 0.6 s (commonly 0.2–0.5 s) | Ratcliff & McKoon, 2008; Matzke & Wagenmakers, 2009, Table 1 |
| Starting point | z | Response bias (relative to boundaries) | a/2 (unbiased) ± 20% | Ratcliff & McKoon, 2008; Voss et al., 2013 |
| Parameter | Symbol | Interpretation | Typical Range | Source |
|---|---|---|---|---|
| Drift rate variability | sv | Cross-trial variation in evidence quality | 0 – 2.0 | Ratcliff & McKoon, 2008 |
| Starting point variability | sz | Cross-trial variation in bias | 0 – 0.3 × a | Ratcliff & McKoon, 2008 |
| Non-decision time variability | st0 | Cross-trial variation in encoding/motor time | 0 – 0.3 s | Ratcliff & McKoon, 2008 |
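As a quick sanity screen, fitted parameters can be compared against the typical ranges tabulated above. This is a heuristic check, not a hard constraint; `flag_implausible` is an illustrative helper, and sz is omitted because its range is defined relative to a:

```python
# Typical ranges from the tables above (Ratcliff & McKoon, 2008;
# Voss et al., 2004; Matzke & Wagenmakers, 2009). Heuristics, not hard limits.
TYPICAL_RANGES = {
    "v": (0.1, 5.0),    # drift rate
    "a": (0.5, 2.5),    # boundary separation
    "t0": (0.1, 0.6),   # non-decision time (s)
    "sv": (0.0, 2.0),   # drift rate variability
    "st0": (0.0, 0.3),  # non-decision time variability (s)
}

def flag_implausible(params):
    """Return names of fitted parameters outside their typical range."""
    return [name for name, value in params.items()
            if name in TYPICAL_RANGES
            and not (TYPICAL_RANGES[name][0] <= value <= TYPICAL_RANGES[name][1])]

flag_implausible({"v": 0.8, "a": 3.4, "t0": 0.25})  # → ["a"]
```

Flagged parameters often indicate fitting problems (poor starting values, too few trials) rather than genuinely extreme cognition, so treat them as a prompt to inspect the fit.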
Is the goal to decompose RT data into cognitive components?
├── YES → Continue to Step 2
└── NO → DDM may not be needed; consider simpler analyses
How many trials per condition do you have?
├── < 20 trials → Insufficient for any DDM variant (Ratcliff & Childers, 2015)
├── 20-40 trials → Use EZ-diffusion only (Wagenmakers et al., 2007)
├── 40-100 trials → Classic 4-parameter DDM or EZ-diffusion
├── 100-200 trials → Full DDM possible but fix some variability parameters
└── > 200 trials → Full DDM with all 7 parameters estimable
(Trial count thresholds: Ratcliff & Childers, 2015, simulation study)
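The trial-count thresholds above can be codified as a small helper (illustrative; the bin boundaries are approximate guidelines from the simulation study, not sharp cutoffs):

```python
def recommend_variant(n_trials_per_condition):
    """Map trial count per condition to a recommended DDM variant,
    following Ratcliff & Childers (2015)."""
    if n_trials_per_condition < 20:
        return "insufficient for any DDM variant"
    if n_trials_per_condition < 40:
        return "EZ-diffusion only"
    if n_trials_per_condition < 100:
        return "classic 4-parameter DDM or EZ-diffusion"
    if n_trials_per_condition <= 200:
        return "full DDM with some variability parameters fixed"
    return "full DDM, all 7 parameters"

recommend_variant(30)  # → "EZ-diffusion only"
```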
Are you comparing groups or conditions at the population level?
├── YES, with moderate sample size (N > 15 participants)
│ └── Consider HDDM for hierarchical/Bayesian estimation (Wiecki et al., 2013)
├── YES, with large trial counts per person
│ └── Classic or Full DDM per participant, then group-level tests on parameters
└── Exploratory / individual differences focus
└── HDDM or hierarchical Bayesian approach
How many response alternatives?
├── 2 → Standard DDM variants
├── > 2 → LBA or Racing Diffusion Model (see references/model-variants.md)
└── Go/No-Go → Single-boundary model (not covered here)
See references/model-variants.md for detailed comparison of all variants.
What variant did you choose?
├── EZ-diffusion → Closed-form solution, no fitting needed (Wagenmakers et al., 2007)
├── Classic/Full DDM → Use fast-dm (Voss & Voss, 2007) or PyDDM (Shinn et al., 2020)
│ ├── MLE: Best for large trial counts (>100 per condition)
│ ├── Chi-square: Robust for moderate trial counts (Ratcliff & Tuerlinckx, 2002)
│ └── Quantile-based (QMP): Most robust to outliers (Heathcote et al., 2002)
└── HDDM → Use HDDM Python package, Bayesian estimation (Wiecki et al., 2013)
See references/fitting-guide.md for the complete fitting workflow and detailed guidance on each step.
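For the EZ-diffusion branch, the closed-form estimates of Wagenmakers et al. (2007) are simple enough to sketch directly (this assumes the conventional scaling s = 0.1; pc must not equal 0, 0.5, or 1, cases for which the original paper describes an edge correction):

```python
import numpy as np

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    pc: proportion correct, vrt: variance of correct RTs (s^2),
    mrt: mean of correct RTs (s), s: scaling parameter (0.1 by convention).
    Returns (v, a, t0). pc must not be exactly 0, 0.5, or 1.
    """
    L = np.log(pc / (1 - pc))                                # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = np.sign(pc - 0.5) * s * x**0.25                      # drift rate
    a = s**2 * L / v                                         # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    t0 = mrt - mdt                                           # non-decision time
    return v, a, t0

ez_diffusion(0.802, 0.112, 0.723)  # ≈ (0.0999, 0.140, 0.300)
```

Because no iterative optimization is involved, EZ-diffusion is fast and deterministic, which makes it well suited to the low-trial-count regimes identified in the decision tree above.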
Fitting too many free parameters with too few trials: The full 7-parameter DDM requires >200 trials per condition for stable estimates (Ratcliff & Childers, 2015). With fewer trials, fix variability parameters or use EZ-diffusion.
Ignoring RT outliers: Extremely fast (< 200 ms) or slow (> 3000–5000 ms) RTs likely reflect non-decision processes (guesses, lapses). If included, they distort parameter estimates (Ratcliff, 1993; Ratcliff & Tuerlinckx, 2002). Apply cutoffs BEFORE fitting.
Not checking parameter recovery: Always simulate data with known parameters using your exact pipeline and verify you can recover them. Poor recovery means your results are uninterpretable (Heathcote et al., 2015; White et al., 2018).
Confusing drift rate and boundary effects: Speed-accuracy tradeoff instructions should primarily affect boundary separation (a), not drift rate (v). If both change, the model may be misspecified or the manipulation has multiple effects (Ratcliff & McKoon, 2008).
Using mean RT instead of full RT distributions: DDMs leverage the shape of the entire RT distribution. Analyzing only mean RT discards the information DDMs are designed to capture (Ratcliff, 1978; Wagenmakers et al., 2007).
Neglecting error RT distributions: Correct and error RT distributions are jointly constrained by the DDM. Fitting only correct RTs loses critical information about the generative process (Ratcliff & McKoon, 2008).
Treating HDDM posterior modes as point estimates: Bayesian models yield posterior distributions. Report and interpret the full posterior, including credible intervals, rather than treating the mode as a frequentist point estimate (Wiecki et al., 2013).
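The RT cutoffs discussed above can be applied with a small helper before fitting (the default cutoff values are illustrative and should be tuned to the task and reported):

```python
import numpy as np

def apply_rt_cutoffs(rts, lower=0.2, upper=3.0):
    """Drop RTs outside [lower, upper] seconds before fitting.

    Defaults follow the ranges discussed above (fast guesses below ~200 ms,
    lapses above ~3-5 s); adjust per task and report the values used.
    """
    rts = np.asarray(rts, dtype=float)
    keep = (rts >= lower) & (rts <= upper)
    return rts[keep], keep  # the keep mask lets you filter choices too

rts, keep = apply_rt_cutoffs([0.15, 0.45, 0.62, 4.2])  # keeps 0.45 and 0.62
```

Returning the boolean mask alongside the trimmed RTs makes it easy to drop the corresponding accuracy entries, so correct and error trials stay aligned.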
See references/model-variants.md for DDM family details.
See references/fitting-guide.md for the complete fitting workflow.