From agent-almanac
Reviews data analysis pipelines for quality, correctness, and reproducibility. Assesses data quality, model validation, leakage detection, and verifies reproducibility. Use for pre-publication reviews, ML pipeline validation, or regulatory audits.
`npx claudepluginhub pjt222/agent-almanac`

This skill uses the workspace's default tool permissions.
---
Reviews data analysis methodology and quality in Phase 4 of /ds workflow. Chooses single or parallel reviewer strategies, monitors context, and controls tool usage via hooks.
Validates CSV/TSV/Excel files and data analyses for quality, completeness, uniqueness, accuracy, consistency, outliers, and bias using qsv stats and frequency tools.
QA data analyses for methodology, accuracy, biases, and pitfalls before stakeholder sharing. Spot-checks calculations, SQL results, visualizations, and conclusions.
Evaluate a data analysis pipeline for correctness, robustness, and reproducibility.
Review the input data before evaluating the analysis:
## Data Quality Assessment
### Completeness
- [ ] Missing data quantified (% by column and by row)
- [ ] Missing data mechanism considered (MCAR, MAR, MNAR)
- [ ] Imputation method appropriate (if used) or complete-case analysis justified
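The completeness checks above can be sketched with pandas. This is a minimal illustration on a toy frame with hypothetical columns, not a fixed recipe:

```python
import pandas as pd

# Toy frame standing in for the analysis input (hypothetical columns)
df = pd.DataFrame({
    "age": [34, None, 51, 28],
    "income": [None, 42000, None, 55000],
})

# Missing data quantified by column (%)
pct_missing_by_col = df.isna().mean().mul(100).round(1)

# Share of rows with at least one missing value (%)
pct_rows_any_missing = round(df.isna().any(axis=1).mean() * 100, 1)

print(pct_missing_by_col.to_dict())  # {'age': 25.0, 'income': 50.0}
print(pct_rows_any_missing)          # 75.0
```

Quantifying missingness both ways matters: a column-level view can look benign while row-level missingness still shrinks a complete-case analysis drastically.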
### Consistency
- [ ] Data types match expectations (dates are dates, numbers are numbers)
- [ ] Value ranges are plausible (no negative ages, future dates in historical data)
- [ ] Categorical variables have expected levels (no misspellings, consistent coding)
- [ ] Units are consistent across records
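The consistency checks can be expressed as simple assertions against an expected schema. A sketch, assuming hypothetical column names and a made-up study cutoff date:

```python
import pandas as pd

# Toy records standing in for the analysis input (hypothetical schema)
df = pd.DataFrame({
    "age": [34, -2, 51],
    "visit_date": ["2021-03-01", "2021-04-15", "2031-01-01"],
    "status": ["active", "Actve", "inactive"],
})

problems = []

# Plausible ranges: no negative ages
if (df["age"] < 0).any():
    problems.append("negative age values")

# No future dates in historical data (cutoff is an assumption)
dates = pd.to_datetime(df["visit_date"])
if (dates > pd.Timestamp("2025-01-01")).any():
    problems.append("dates after the study period")

# Expected categorical levels (catches misspellings and coding drift)
expected = {"active", "inactive"}
unexpected = set(df["status"]) - expected
if unexpected:
    problems.append(f"unexpected status levels: {sorted(unexpected)}")

print(problems)
```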
### Uniqueness
- [ ] Duplicate records identified and handled
- [ ] Primary keys are unique where expected
- [ ] Join operations produce expected row counts (no fan-out or drop)
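pandas can enforce the uniqueness and join checks directly: `validate=` raises on unexpected fan-out and `indicator=` surfaces dropped keys. A sketch on hypothetical patient/lab tables:

```python
import pandas as pd

patients = pd.DataFrame({"patient_id": [1, 2, 2, 3], "age": [34, 51, 51, 28]})
labs = pd.DataFrame({"patient_id": [1, 2, 4], "hba1c": [5.4, 6.1, 7.0]})

# Duplicate records identified (keep=False marks every copy)
dupes = patients[patients.duplicated(keep=False)]
print(len(dupes))  # 2

# Primary key unique where expected, after de-duplication
patients = patients.drop_duplicates()
assert patients["patient_id"].is_unique

# validate= raises MergeError on fan-out; indicator= shows unmatched rows
merged = patients.merge(labs, on="patient_id", how="outer",
                        validate="one_to_one", indicator=True)
print(merged["_merge"].value_counts().to_dict())
```

Comparing `_merge` counts against the expected row count makes silent drops from an inner join visible before they bias downstream results.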
### Timeliness
- [ ] Data vintage appropriate for the analysis question
- [ ] Temporal coverage matches the study period
- [ ] No look-ahead bias in time-series data
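A direct way to screen for look-ahead bias is to compare feature timestamps against the time each prediction is made. A sketch with hypothetical timestamp columns:

```python
import pandas as pd

# Each row's features should be stamped no later than its prediction time
df = pd.DataFrame({
    "feature_time": pd.to_datetime(["2021-01-05", "2021-02-10", "2021-03-20"]),
    "prediction_time": pd.to_datetime(["2021-01-31", "2021-02-01", "2021-03-31"]),
})

# Rows where a feature peeks into the future relative to its prediction
lookahead = df[df["feature_time"] > df["prediction_time"]]
print(len(lookahead))  # 1
```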
### Provenance
- [ ] Data source documented
- [ ] Extraction date/version recorded
- [ ] Any transformations between source and analysis input documented
Expected: Data quality issues documented with their potential impact on results. On failure: If data is not accessible for review, assess quality from the code (what checks and transformations are applied).
Check the assumptions of each statistical method or model used:
| Method | Key Assumptions | How to Check |
|---|---|---|
| Linear regression | Linearity, independence, normality of residuals, homoscedasticity | Residual plots, Q-Q plot, Durbin-Watson, Breusch-Pagan |
| Logistic regression | Independence, no multicollinearity, linear logit | VIF, Box-Tidwell, residual diagnostics |
| t-test | Independence, normality (or large n), equal variance | Shapiro-Wilk, Levene's test, visual inspection |
| ANOVA | Independence, normality, homogeneity of variance | Shapiro-Wilk per group, Levene's test |
| Chi-squared | Independence, expected frequency ≥ 5 | Expected frequency table |
| Random forest | Sufficient training data, feature relevance | OOB error, feature importance, learning curves |
| Neural network | Sufficient data, appropriate architecture, no data leakage | Validation curves, overfitting checks |
## Assumption Check Results
| Analysis Step | Method | Assumption | Checked? | Result |
|---------------|--------|------------|----------|--------|
| Primary model | Linear regression | Normality of residuals | Yes | Q-Q plot shows mild deviation — acceptable for n>100 |
| Primary model | Linear regression | Homoscedasticity | No | Not checked — recommend adding Breusch-Pagan test |
Expected: Every statistical method has its assumptions explicitly checked or acknowledged. On failure: If assumptions are violated, check whether the authors addressed this (robust methods, transformations, sensitivity analysis).
Data leakage occurs when information from outside the training set influences the model, leading to over-optimistic performance estimates:
## Leakage Assessment
| Check | Status | Evidence |
|-------|--------|----------|
| Target leakage | Clear | No features derived from target |
| Temporal leakage | CONCERN | Feature X uses 30-day forward average |
| Train-test contamination | Clear | StandardScaler fit on train only |
| Group leakage | CONCERN | Patient IDs not used for stratified split |
Expected: All common leakage patterns checked with clear/concern status. On failure: If leakage is found, estimate its impact by re-running without the leaked feature (if possible) or flag for the analyst to investigate.
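Two of the patterns above have standard structural fixes in scikit-learn: a `Pipeline` re-fits preprocessing inside each training fold (no train-test contamination), and `GroupKFold` keeps all records of an entity in one fold (no group leakage). A sketch on synthetic data with hypothetical patient groups:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = rng.integers(0, 2, size=120)
patient_ids = np.repeat(np.arange(30), 4)  # 4 records per patient

# Scaler is fit on each fold's training split only, never on test data
model = make_pipeline(StandardScaler(), LogisticRegression())

# GroupKFold keeps every record of a patient in the same fold
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                         groups=patient_ids)
print(scores.mean())
```

When a review finds scaling or imputation fit on the full dataset before splitting, restructuring it into a pipeline like this is the usual recommendation.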
Expected: Model validation appropriate for the use case (prediction vs. inference). On failure: If test set performance is suspiciously close to training performance, flag potential leakage.
## Reproducibility Checklist
| Item | Status | Notes |
|------|--------|-------|
| Code runs without errors | [Yes/No] | Tested on [environment description] |
| Random seeds set | [Yes/No] | Line [N] in [file] |
| Dependencies documented | [Yes/No] | requirements.txt / renv.lock present |
| Data loading reproducible | [Yes/No] | Path is [relative/absolute/URL] |
| Results match reported values | [Yes/No] | Verified: Table 1 ✓, Figure 2 ✗ (minor discrepancy) |
| Environment documented | [Yes/No] | Python 3.11 / R 4.5.0 specified |
Expected: Reproducibility verified by re-running the analysis (or assessing from code if data is unavailable). On failure: If results don't reproduce exactly, determine if differences are within floating-point tolerance or indicate a problem.
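The "random seeds set" item can be verified mechanically: seed every random source the analysis uses, then confirm that two runs produce identical draws. A minimal sketch (the seed value and the torch line are illustrative assumptions):

```python
import random
import numpy as np

SEED = 20240101  # hypothetical project-wide seed

random.seed(SEED)
np.random.seed(SEED)
# If torch or tensorflow are used, seed them here too, e.g.:
# torch.manual_seed(SEED)

a = np.random.rand(3)
np.random.seed(SEED)
b = np.random.rand(3)
assert (a == b).all()  # identical draws: the run is seed-reproducible
```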
## Data Analysis Review
### Overall Assessment
[1-2 sentences: Is the analysis sound? Does it support the conclusions?]
### Data Quality
[Summary of data quality findings, impact on results]
### Methodological Concerns
1. **[Title]**: [Description, location in code/report, suggestion]
2. ...
### Strengths
1. [What was done well]
2. ...
### Reproducibility
[Tier assessment: Gold/Silver/Bronze/Opaque with justification]
### Recommendations
- [ ] [Specific action items for the analyst]
Expected: Review provides actionable feedback with specific references to code locations. On failure: If time-constrained, prioritize data quality and leakage checks over style issues.
## Related Skills
- review-research — broader research methodology and manuscript review
- validate-statistical-output — double-programming verification methodology
- generate-statistical-tables — publication-ready statistical tables
- review-software-architecture — code structure and design review