From agent-almanac
Conducts structured peer reviews of research manuscripts, proposals, protocols, and reports. Evaluates methodology, statistics, reproducibility, bias, and scientific rigor.
`npx claudepluginhub pjt222/agent-almanac`

This skill uses the workspace's default tool permissions.
---
Conducts structured 7-stage peer review of scientific manuscripts and grants: initial assessment, section review, statistical rigor, reproducibility, figure integrity, ethics, writing. Covers CONSORT/STROBE/PRISMA.
Perform a structured peer review of research work, evaluating methodology, statistical choices, reproducibility, and overall scientific rigor.
Read the entire document once to understand its research question, claims, and overall structure:
## First Pass Assessment
- **Research question**: [Clear / Vague / Missing]
- **Novelty claim**: [Stated and supported / Overstated / Unclear]
- **Structure**: [Complete / Missing sections: ___]
- **Scope fit**: [Appropriate / Marginal / Not appropriate]
- **Recommendation after first pass**: [Continue review / Major concerns to flag early]
Expected: Clear understanding of the paper's claims and contribution. On failure: If the research question is unclear after a full read, note this as a major concern and proceed.
Assess the research design against standards for the field:
Expected: Methodology checklist completed with specific observations for each item. On failure: If critical methodology information is missing, flag as a major concern rather than assuming.
Common statistical red flags:
Expected: Statistical choices evaluated with specific concerns documented. On failure: If the reviewer lacks expertise in a specific method, acknowledge this and recommend a specialist reviewer.
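One frequently cited red flag is a cluster of reported p-values sitting just below the significance threshold. The sketch below is illustrative only; the threshold, window, and count are assumptions chosen for demonstration, not a validated statistical test.

```python
def near_threshold_p_values(p_values, alpha=0.05, window=0.01):
    """Return the p-values that fall just below alpha, within `window`."""
    return [p for p in p_values if alpha - window <= p < alpha]

def flag_p_clustering(p_values, alpha=0.05, window=0.01, min_count=3):
    """Flag a manuscript when several p-values cluster just under alpha.

    Returns (flagged, suspicious_values) so the reviewer can cite the
    exact values in the review.
    """
    suspicious = near_threshold_p_values(p_values, alpha, window)
    return len(suspicious) >= min_count, suspicious
```

For example, `flag_p_clustering([0.041, 0.048, 0.049, 0.20])` flags the manuscript because three values fall in [0.04, 0.05). A real assessment would also weigh sample sizes, pre-registration, and correction for multiple comparisons.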
Reproducibility tiers:
| Tier | Description | Evidence |
|---|---|---|
| Gold | Fully reproducible | Open data + open code + containerized environment |
| Silver | Substantially reproducible | Data available, analysis described in detail |
| Bronze | Potentially reproducible | Methods described but no data/code sharing |
| Opaque | Not reproducible | Insufficient method detail or proprietary data |
Expected: Reproducibility tier assigned with justification. On failure: If data cannot be shared (privacy, proprietary), synthetic data or detailed pseudocode is an acceptable alternative — note whether this is provided.
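The tier assignment above can be sketched as a simple decision rule. This is a hypothetical helper mapping the evidence criteria from the table to a tier; the boolean inputs are assumptions standing in for the reviewer's judgment on each criterion.

```python
def reproducibility_tier(open_data: bool, open_code: bool,
                         containerized: bool, methods_detailed: bool) -> str:
    """Assign a reproducibility tier from the evidence in the table above."""
    if open_data and open_code and containerized:
        return "Gold"      # open data + open code + containerized environment
    if open_data and methods_detailed:
        return "Silver"    # data available, analysis described in detail
    if methods_detailed:
        return "Bronze"    # methods described but no data/code sharing
    return "Opaque"        # insufficient method detail or proprietary data
```

The ordering matters: each rule is checked only after the stricter tiers above it have failed, so a paper receives the highest tier it qualifies for.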
Expected: Potential biases identified with specific examples from the manuscript. On failure: If biases cannot be assessed from the available information, recommend that the authors address this explicitly.
Structure the review constructively:
## Summary
[2-3 sentences summarizing the paper's contribution and your overall assessment]
## Major Concerns
[Issues that must be addressed before the work can be considered sound]
1. **[Concern title]**: [Specific description with reference to section/page/figure]
- *Suggestion*: [How the authors might address this]
2. ...
## Minor Concerns
[Issues that improve quality but are not fundamental]
1. **[Concern title]**: [Specific description]
- *Suggestion*: [Recommended change]
## Questions for the Authors
[Clarifications needed to complete the evaluation]
1. ...
## Positive Observations
[Specific strengths worth acknowledging]
1. ...
## Recommendation
[Accept / Minor revision / Major revision / Reject]
[Brief rationale for the recommendation]
Expected: Review is specific, constructive, and references exact locations in the manuscript. On failure: If the review is running long, prioritize major concerns and note minor issues in a summary list.
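The report template above can be represented as a small data structure that renders to markdown in the same section order. This is a minimal sketch, assuming a flat list of concerns; the class and field names are illustrative, not part of the skill itself.

```python
from dataclasses import dataclass, field

@dataclass
class Concern:
    title: str
    description: str   # should reference section/page/figure
    suggestion: str

@dataclass
class ReviewReport:
    summary: str
    recommendation: str  # Accept / Minor revision / Major revision / Reject
    rationale: str
    major: list = field(default_factory=list)
    minor: list = field(default_factory=list)
    questions: list = field(default_factory=list)
    strengths: list = field(default_factory=list)

    def _concerns(self, concerns):
        lines = []
        for i, c in enumerate(concerns, 1):
            lines.append(f"{i}. **{c.title}**: {c.description}")
            lines.append(f"   - *Suggestion*: {c.suggestion}")
        return lines

    def to_markdown(self) -> str:
        """Render the report in the section order of the template above."""
        lines = ["## Summary", self.summary, "", "## Major Concerns"]
        lines += self._concerns(self.major)
        lines += ["", "## Minor Concerns"]
        lines += self._concerns(self.minor)
        lines += ["", "## Questions for the Authors"]
        lines += [f"{i}. {q}" for i, q in enumerate(self.questions, 1)]
        lines += ["", "## Positive Observations"]
        lines += [f"{i}. {s}" for i, s in enumerate(self.strengths, 1)]
        lines += ["", "## Recommendation", self.recommendation, self.rationale]
        return "\n".join(lines)
```

Keeping the structure explicit makes it easy to enforce the "prioritize major concerns" fallback: when a review runs long, minor concerns can be summarized while the major list stays fully detailed.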
- review-data-analysis — deeper focus on data quality and model validation
- format-apa-report — APA formatting standards for research reports
- generate-statistical-tables — publication-ready statistical tables
- validate-statistical-output — statistical output verification