Generates conference-style peer reviews for academic paper PDFs, assessing novelty, technical soundness, clarity, significance, reproducibility, and experimental design, with scores and itemized issues.
npx claudepluginhub jeandiable/academic-research-plugin --plugin academic-research

This skill uses the workspace's default tool permissions.
Conduct a comprehensive conference-style peer review of an academic paper. This skill analyzes papers across six key dimensions: novelty, technical soundness, clarity, significance, reproducibility, and experimental design.
The review follows the format and checklist of a specified conference (NeurIPS, ICML, CVPR, ACL, AAAI, ICCV, ICLR) or a generic academic format. Severity can be adjusted (lenient, standard, strict) to calibrate tone and scoring standards.
`<pdf-path>` (required): Path to the paper PDF file to review. Can be an absolute or relative path.

`--conference` (optional): Target conference to match the review format. Options:
- NeurIPS - Neural Information Processing Systems
- ICML - International Conference on Machine Learning
- CVPR - IEEE/CVF Conference on Computer Vision and Pattern Recognition
- ACL - Association for Computational Linguistics
- AAAI - Association for the Advancement of Artificial Intelligence
- ICCV - IEEE/CVF International Conference on Computer Vision
- ICLR - International Conference on Learning Representations

`--severity` (optional, default: standard): Tone and scoring calibration:
- lenient - Focus on strengths, constructive framing, generous scoring
- standard - Balanced assessment, typical conference reviewer tone
- strict - Rigorous evaluation, all issues flagged, conservative scoring

Install dependencies:
python -m pip install -r "BASE_DIR/scripts/requirements.txt"
Required packages:
- PyPDF2 or pdfplumber - PDF parsing and text extraction
- requests - HTTP requests for arXiv and paper searches
- python-dateutil - Date handling

Read the entire PDF using the Read tool. For papers exceeding 20 pages, paginate the reading.
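A minimal pagination sketch using pdfplumber (one of the listed dependencies); the 10-page chunk size is an illustrative assumption, not part of the skill's spec:

```python
# Minimal sketch: extract text from a paper PDF in page chunks so long
# papers (> 20 pages) are read in manageable pieces. The 10-page chunk
# size is an illustrative assumption, not part of the skill's spec.
import pdfplumber

def read_pdf_paginated(pdf_path: str, chunk_size: int = 10):
    """Yield (first page number, text) for successive chunks of pages."""
    with pdfplumber.open(pdf_path) as pdf:
        for start in range(0, len(pdf.pages), chunk_size):
            chunk = pdf.pages[start:start + chunk_size]
            text = "\n".join(page.extract_text() or "" for page in chunk)
            yield start + 1, text  # 1-indexed first page of the chunk

for first_page, text in read_pdf_paginated("paper.pdf"):
    print(f"--- pages starting at {first_page} ---")
    print(text[:200])
```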
While reading, compile comprehensive notes covering each of the dimensions below.
For each dimension, write 2-3 sentences providing specific analysis:
Novelty: Is this genuinely new? What distinguishes it from prior work? Are the ideas incremental or transformative? Does it advance the field in a meaningful way?
Technical Soundness: Are mathematical proofs correct (if present)? Are assumptions justified? Are there logical gaps or unjustified leaps? Do the experiments actually validate the claims? Are there potential flaws in methodology?
Clarity: Is the writing clear and well-organized? Are key concepts explained before use? Are figures and tables informative with good captions? Is the main contribution easy to identify? Are mathematical notations consistent?
Significance: Would this work change how people think about the problem? How many researchers/practitioners would benefit? What is the potential real-world impact? Does it open new research directions?
Reproducibility: Is there sufficient implementation detail to reproduce results? Is code promised or available? Are hyperparameters and training procedures documented? Are computational requirements specified? Can someone independent reproduce the main results?
Experimental Design: Are baselines appropriate and state-of-the-art? Are comparisons fair (same hyperparameter tuning, computational budget)? Are ablations sufficient to understand component contributions? Are error bars/confidence intervals reported? Is evaluation on multiple datasets? Are failure cases discussed?
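To keep the notes consistent, the six dimensions above can be carried as a simple rubric structure. A minimal sketch; the field names and condensed questions are drawn from the guidance above, but the structure itself is an assumption, not the skill's internal format:

```python
# Illustrative rubric for the six review dimensions. The guiding
# questions are condensed from the descriptions above; the structure
# itself is an assumption, not the skill's internal format.
REVIEW_DIMENSIONS = {
    "novelty": "Is this genuinely new, and how does it differ from prior work?",
    "technical_soundness": "Are proofs, assumptions, and methodology valid?",
    "clarity": "Is the writing organized, with informative figures and consistent notation?",
    "significance": "Would this change how researchers approach the problem?",
    "reproducibility": "Could an independent group reproduce the main results?",
    "experimental_design": "Are baselines, ablations, and error bars adequate?",
}

def blank_notes() -> dict:
    """One empty slot (2-3 sentences expected) per dimension."""
    return {dim: "" for dim in REVIEW_DIMENSIONS}
```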
Extract 3-5 key technical concepts or method names from the paper. For each concept, execute:
python "BASE_DIR/scripts/paper_search.py" \
--query "[concept name or method]" \
--max-results 10 \
--sort citations
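The internals of paper_search.py are not shown in this document. As a rough illustration of the kind of lookup it might perform, here is a query against the public arXiv export API using requests; sorting by citation count would need a separate citation source, so this sketch returns results in arXiv's relevance order:

```python
# Rough sketch of a concept lookup against the public arXiv export API.
# paper_search.py's actual behavior (including its citation sorting) is
# not documented here; this only illustrates the query step.
import requests
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def arxiv_search(query: str, max_results: int = 10):
    resp = requests.get(
        "http://export.arxiv.org/api/query",
        params={"search_query": f"all:{query}", "max_results": max_results},
        timeout=30,
    )
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    for entry in root.findall(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title", "").split())
        year = entry.findtext(f"{ATOM}published", "")[:4]
        yield title, year

for title, year in arxiv_search("contrastive representation learning"):
    print(year, title)
```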
Identify papers that are highly relevant (by citation count, recency, or topical alignment) but NOT cited in the submission. Compile these into a "Missing References" section.
Use the conference format from references/conference_formats.md corresponding to the `--conference` argument. If no conference is specified, use the Default (Generic) format.
For specified conferences, follow their exact structure as defined in the reference file. All reviews include conference-specific checklist items from that file.
Adjust language, framing, and scoring based on the severity argument (a configuration sketch follows this list):

Lenient: emphasize strengths, frame criticism constructively, and score generously.

Standard: give a balanced assessment in a typical conference-reviewer tone.

Strict: evaluate rigorously, flag every issue, and score conservatively.
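The calibration can be thought of as a small configuration table; the tone strings and score-bias values below are purely illustrative assumptions, not the skill's actual internals:

```python
# Hypothetical severity calibration table; all values are illustrative.
SEVERITY_PROFILES = {
    "lenient":  {"tone": "constructive, strengths-first", "score_bias": +0.5},
    "standard": {"tone": "balanced conference reviewer",  "score_bias": 0.0},
    "strict":   {"tone": "rigorous, every issue flagged", "score_bias": -0.5},
}
```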
Generate the final review following the conference template. The review structure (when no conference is specified):
# Paper Review: <Paper Title>
## Summary
[2-3 sentence summary of what the paper does and its main contributions]
## Strengths
1. [First strength with specific example or evidence]
2. [Second strength with explanation]
3. [Additional strengths as applicable]
## Weaknesses
1. [First weakness with specific example]
2. [Second weakness with explanation]
3. [Additional weaknesses as applicable]
## Major Issues
1. **[Issue Title]**: [Detailed explanation of the problem]
→ Suggested fix: [Specific actionable suggestion]
2. **[Issue Title]**: [Detailed explanation]
→ Suggested fix: [Specific actionable suggestion]
[Continue for all major issues]
## Minor Issues
1. [Minor issue or suggestion with line/section reference]
2. [Typo or presentation issue]
3. [Additional minor items]
## Questions for Authors
1. [Specific question about methodology, results, or claims]
2. [Clarification requested about experimental setup]
3. [Request for additional analysis or results]
## Missing Related Work
| Paper Title | Key Contribution | Relevance | Should Be Cited In Section |
|------------|-----------------|-----------|---------------------------|
| [Title 1] | [Brief description] | [Why relevant to submission] | [Where in paper] |
| [Title 2] | [Brief description] | [Why relevant to submission] | [Where in paper] |
## Scores
- **Overall Assessment**: [Strong Accept / Accept / Weak Accept / Borderline / Weak Reject / Reject / Strong Reject]
- **Overall Score**: X/10 (or 1-6 for CVPR, etc.)
- **Confidence**: Low / Medium / High / Expert (1-5 scale)
- **Novelty**: Low / Medium / High
- **Technical Soundness**: Low / Medium / High
- **Significance**: Low / Medium / High
- **Clarity**: Low / Medium / High
## Additional Notes
[Any final comments about significance, presentation, or specific feedback]
For conference-specific formats, structure follows the exact template from references/conference_formats.md with appropriate section names and scoring scales.
Reviews are saved to: ./output/paper-reviewing/YYYY-MM-DD-HHMMSS/
Directory contents include a JSON log of the related-work searches, structured as follows:
{
"search_queries": ["concept1", "concept2", ...],
"results": [
{
"query": "concept name",
"papers": [
{
"title": "Paper Title",
"authors": "Author1, Author2",
"year": 2024,
"citations": 150,
"relevance_reason": "Why this is relevant"
}
]
}
]
}
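For example, the search log can be post-processed to surface candidate missing references. The filename related_papers.json and the example timestamp below are assumptions; the document does not name the file, only the directory pattern:

```python
# Load and summarize the related-papers search log. The filename
# "related_papers.json" is a guess; only the directory pattern
# (./output/paper-reviewing/YYYY-MM-DD-HHMMSS/) comes from the docs.
import json

run_dir = "./output/paper-reviewing/2025-01-15-093000"  # example timestamp
with open(f"{run_dir}/related_papers.json") as f:
    log = json.load(f)

for result in log["results"]:
    print(f"Query: {result['query']}")
    for paper in result["papers"]:
        print(f"  {paper['year']} | {paper['citations']} citations | {paper['title']}")
```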
# Review a paper with NeurIPS format and standard severity
paper-reviewing /path/to/paper.pdf --conference NeurIPS
# Review with strict tone and CVPR format
paper-reviewing paper.pdf --conference CVPR --severity strict
# Lenient review with generic format
paper-reviewing paper.pdf --severity lenient
# Default: generic format, standard severity
paper-reviewing paper.pdf
- paper-summarizing - Generate concise paper summaries
- literature-survey - Build comprehensive literature surveys
- experiment-analyzer - Analyze experimental results and methodology