Provides ICML-meta-review-style feedback on academic paper drafts supplied as PDFs. Analyzes correctness (equations, proofs), motivation, methodology gaps, presentation, visualizations, and citations. Outputs a structured assessment, strengths, issues, comments, and a revision checklist.
npx claudepluginhub jeandiable/academic-research-plugin --plugin academic-research

This skill uses the workspace's default tool permissions.
Unlike the paper-reviewing skill (which simulates a formal conference reviewer), this skill acts as a constructive advisor helping the author improve their draft. It provides actionable feedback with specific suggestions for improvement, not just criticism. The goal is to identify gaps in correctness, motivation, methodology, presentation, and citations, then guide the author toward a stronger submission.
Key Differences from Paper Review:
Before running this skill, ensure dependencies are installed:
pip install -r BASE_DIR/scripts/requirements.txt
This installs required libraries for PDF processing, citation retrieval, and analysis.
The paper polishing process consists of three main phases: comprehensive reading, multi-angle analysis, and synthesis.
Use the Read tool with pagination to process the entire PDF. As you read, track three kinds of notes: paper metadata, technical elements, and content coverage.
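One way to organize these three tracking buckets is a small dataclass; the field names here are illustrative, not prescribed by the skill:

```python
from dataclasses import dataclass, field

@dataclass
class PaperNotes:
    # Paper metadata
    title: str = ""
    authors: list[str] = field(default_factory=list)
    venue: str = ""
    # Technical elements encountered while reading
    equations: list[str] = field(default_factory=list)
    figures: list[str] = field(default_factory=list)
    tables: list[str] = field(default_factory=list)
    # Content tracking: which pages have been read so far
    pages_read: set[int] = field(default_factory=set)

notes = PaperNotes(title="Example Paper")
notes.pages_read.update(range(1, 11))  # record the first pagination batch
```

Updating one shared object per paper keeps the later analysis phases from re-reading pages they have already covered.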
Analyze the paper from six complementary angles: correctness (equations and proofs), motivation, methodology, presentation, visualizations, and citations:
For EACH figure and table, provide specific improvement suggestions:
For each major technical claim, baseline comparison, or methodological choice:
python BASE_DIR/scripts/paper_search.py --query "<claim-related-query>" --max-results 5 --sort citations

Compile all findings into the structured output format, organized by severity:
Priority Levels:
Create a prioritized revision checklist ordered from critical to minor issues.
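A minimal sketch of the ordering step, assuming each collected issue has been tagged with one of the three severities used in the report template below:

```python
SEVERITY_RANK = {"critical": 0, "major": 1, "minor": 2}

def make_checklist(issues):
    """Return issue actions ordered critical -> major -> minor, numbered as priorities."""
    ordered = sorted(issues, key=lambda item: SEVERITY_RANK[item["severity"]])
    return [
        f"Priority {i} ({item['severity'].capitalize()}): {item['action']}"
        for i, item in enumerate(ordered, start=1)
    ]

checklist = make_checklist([
    {"severity": "minor", "action": "Fix axis labels in Figure 2"},
    {"severity": "critical", "action": "Correct the bound in Eq. (3)"},
])
```

Because `sorted` is stable, issues of equal severity keep the order in which they were found.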
Generate a comprehensive report following this structure:
# Paper Polishing Report: <Paper Title>
## Overall Assessment
**Recommendation:** [Accept / Minor Revision / Major Revision / Not Ready]
**Justification:** [2-3 sentences explaining the assessment and what changes would strengthen it]
## Top 3 Strengths
1. [Specific strength with concrete evidence from the paper]
2. [Specific strength with concrete evidence from the paper]
3. [Specific strength with concrete evidence from the paper]
## Critical Issues (must fix before submission)
1. **[Issue Title]**: [Section/Page reference] — [Explanation of the problem] — [Concrete suggested fix]
2. **[Issue Title]**: [Section/Page reference] — [Explanation of the problem] — [Concrete suggested fix]
3. [Continue as needed...]
## Major Issues (significantly weaken the paper)
1. **[Issue Title]**: [Section/Page reference] — [Explanation of the problem] — [Concrete suggested fix]
2. **[Issue Title]**: [Section/Page reference] — [Explanation of the problem] — [Concrete suggested fix]
3. [Continue as needed...]
## Section-by-Section Feedback
### Abstract
- [Observation 1]: [Specific suggestion]
- [Observation 2]: [Specific suggestion]
- [If no issues, state "Well-written abstract that clearly conveys contributions"]
### 1. Introduction
- [Observation 1]: [Specific suggestion]
- [Observation 2]: [Specific suggestion]
- [Continue for all sections...]
### 2. Related Work
- [Observation 1]: [Specific suggestion]
### 3. Method
- [Observation 1]: [Specific suggestion]
### 4. Experiments
- [Observation 1]: [Specific suggestion]
### 5. Conclusion
- [Observation 1]: [Specific suggestion]
[Add sections for any additional major sections like Discussion, Appendix, etc.]
## Equation Review
| Eq. # | Issue | Suggestion |
|-------|-------|------------|
| (1) | [Issue if any] | [Corrected version or clarification] |
| (2) | [Issue if any] | [Corrected version or clarification] |
[Include all equations with issues; omit table if no issues found]
## Figure & Table Review
| Item | Current State | Suggested Improvement |
|------|--------------|----------------------|
| Figure 1 | [Description of what is shown] | [Specific actionable improvement] |
| Table 1 | [Description of what is shown] | [Specific actionable improvement] |
| Figure 2 | [Description of what is shown] | [Specific actionable improvement] |
[Include all figures and tables with suggested improvements]
## Missing Citations
| Location | Statement/Claim | Suggested Citation | Why It Should Be Cited |
|----------|-----------------|-------------------|----------------------|
| Sec 2, p.3 | "Method X is commonly used..." | Citation 1: Author et al. (Year) | Seminal work establishing method X |
| Sec 3, p.5 | "We compare against baseline Y" | Citation 2: Author et al. (Year) | Original paper introducing baseline Y |
[Include citations found to be missing; omit table if no critical gaps found]
## Revision Checklist
- [ ] **Priority 1 (Critical):** [Action item with specific location/instruction]
- [ ] **Priority 2 (Critical):** [Action item with specific location/instruction]
- [ ] **Priority 3 (Critical):** [Action item with specific location/instruction]
- [ ] **Priority 4 (Major):** [Action item with specific location/instruction]
- [ ] **Priority 5 (Major):** [Action item with specific location/instruction]
- [ ] **Priority 6 (Minor):** [Action item with specific location/instruction]
- [ ] **Priority 7 (Minor):** [Action item with specific location/instruction]
[Continue as needed; order strictly from critical to minor]
Save all outputs to ./output/paper-polishing/YYYY-MM-DD-HHMMSS/ with:
- feedback.md: The complete polishing report following the format above
- missing_citations.json: Structured citation search results in format:
{
  "paper_title": "...",
  "analysis_timestamp": "...",
  "missing_citations": [
    {
      "location": "Section 2, page 3",
      "claim": "Method X is the standard approach",
      "suggested_citations": [
        {
          "title": "...",
          "authors": "...",
          "year": 2023,
          "venue": "...",
          "reason": "Seminal work establishing this method",
          "search_query": "..."
        }
      ]
    }
  ]
}
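Creating the timestamped output directory and writing the JSON needs only the standard library; the directory layout follows the pattern above, and the payload here is purely illustrative:

```python
import json
import os
from datetime import datetime

# Timestamped run directory: ./output/paper-polishing/YYYY-MM-DD-HHMMSS/
out_dir = os.path.join("output", "paper-polishing",
                       datetime.now().strftime("%Y-%m-%d-%H%M%S"))
os.makedirs(out_dir, exist_ok=True)

# Illustrative payload following the schema above
payload = {
    "paper_title": "Example Paper",
    "analysis_timestamp": datetime.now().isoformat(),
    "missing_citations": [],
}
with open(os.path.join(out_dir, "missing_citations.json"), "w") as fh:
    json.dump(payload, fh, indent=2)
```

Writing `feedback.md` into the same `out_dir` keeps each run's report and citation data together.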
python BASE_DIR/scripts/paper_search.py \
--query "variational autoencoder training stability" \
--max-results 5 \
--sort citations
This returns the 5 most-cited papers matching the query, which you can then assess for relevance.
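To drive these searches programmatically, the argv can be assembled as below; the script path and flags are taken from the examples above, and BASE_DIR is a placeholder you must substitute with the skill's install location:

```python
def build_search_cmd(query: str, max_results: int = 5, sort: str = "citations"):
    """Assemble the argv for paper_search.py; run it with subprocess.run(cmd)."""
    return [
        "python", "BASE_DIR/scripts/paper_search.py",
        "--query", query,
        "--max-results", str(max_results),
        "--sort", sort,
    ]

cmd = build_search_cmd("variational autoencoder training stability")
```

Building the list once per claim avoids shell-quoting issues when queries contain spaces or quotes.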
Last Updated: 2026-03-03 Version: 1.0