Conducts topic-driven literature surveys for AI/ML research: searches arXiv, Semantic Scholar, DBLP for recent papers, identifies gaps, explores cross-domain ideas, proposes 2-3 innovations with feasibility. Useful for reviews, related work, or idea generation.
npx claudepluginhub jeandiable/academic-research-plugin --plugin academic-research

This skill uses the workspace's default tool permissions.
The literature-survey skill performs a comprehensive topic-driven literature survey for AI/ML research. It systematically searches multiple academic databases (arXiv, Semantic Scholar, DBLP), identifies research gaps, performs cross-domain exploration to discover transferable methods, and proposes 2-3 innovation directions with detailed feasibility assessments.
Parse $ARGUMENTS as follows:
Topic (required): The first argument is the research topic string to survey. Example: "vision transformer", "graph neural networks", "federated learning privacy".
--date-range (optional): Time window for paper search. Default: 1y (1 year). Supported values: 1y, 2y, 3y. Controls the lookback period from today.
--max-papers (optional): Maximum number of papers to retrieve per search query. Default: 50. Useful for scoping large topics. Values: 10-200.
--venues (optional): Comma-separated list of conference/journal abbreviations to filter results. Example: --venues NeurIPS,ICML,ICCV. If omitted, all venues are included.
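The argument handling above can be sketched with `argparse`; the flag names and defaults come from this skill, but the helper itself is hypothetical:

```python
import argparse

def parse_survey_args(argv):
    """Parse the skill's $ARGUMENTS into structured options (illustrative helper)."""
    parser = argparse.ArgumentParser(description="literature-survey arguments")
    parser.add_argument("topic", help="research topic to survey")
    parser.add_argument("--date-range", default="1y", choices=["1y", "2y", "3y"],
                        help="lookback window from today")
    parser.add_argument("--max-papers", type=int, default=50,
                        help="max papers per search query (10-200)")
    parser.add_argument("--venues", default=None,
                        help="comma-separated venue filter, e.g. NeurIPS,ICML")
    args = parser.parse_args(argv)
    # Normalize the venue filter into a list; empty means "all venues".
    args.venues = args.venues.split(",") if args.venues else []
    return args
```

For example, `parse_survey_args(["vision transformer", "--venues", "NeurIPS,ICML"])` yields the default `date_range` of `"1y"` and a two-entry venue list.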
This skill requires one-time dependency installation. Run:
pip install -r BASE_DIR/scripts/requirements.txt
Replace BASE_DIR with the base directory of the academic-research-plugin project (shown at the top of this skill's loaded context).
Required dependencies typically include:
- requests — for HTTP queries to research databases
- arxiv — Python client for the arXiv API
- bibtexparser — for parsing and generating BibTeX
- pandas — for data aggregation and analysis

Follow these 7 steps to complete a comprehensive literature survey:
Break the user's topic into 3-5 focused search queries to capture different aspects and terminology variations:
Example: For "vision transformer", generate:
- vision transformer
- ViT image classification
- self-attention computer vision
- visual attention mechanism
- transformer architecture image tasks

Document all queries for reproducibility.
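One crude way to seed this decomposition is to expand the topic through aspect templates; real query variants should also cover terminology differences, so treat this as a starting point, not the method itself:

```python
def decompose_topic(topic):
    """Expand a topic into focused query variants (illustrative templates only)."""
    templates = [
        "{t}",
        "{t} architecture",
        "{t} applications",
        "{t} survey",
        "{t} benchmark",
    ]
    return [tpl.format(t=topic) for tpl in templates]
```

Manually add synonym-based variants afterwards (e.g. "ViT" for "vision transformer"), since templates alone cannot capture field-specific terminology.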
For each decomposed query, execute a paper search:
python BASE_DIR/scripts/paper_search.py --query "<query>" --max-results 20 --output json
Set --max-results to the value of the --max-papers flag from the arguments. For each paper, record: title, authors, year, venue, abstract, url, citations.

Also perform targeted searches for foundational works in the field, even if they fall outside the date range.
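The per-query results then need to be merged and deduplicated before clustering. A sketch, assuming paper_search.py prints a JSON array with the fields listed above (the wrapper functions themselves are hypothetical):

```python
import json
import subprocess

def search_papers(query, max_results, script="scripts/paper_search.py"):
    """Invoke the plugin's paper_search.py and parse its JSON output."""
    out = subprocess.run(
        ["python", script, "--query", query,
         "--max-results", str(max_results), "--output", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

def merge_results(result_lists):
    """Merge per-query result lists, deduplicating by normalized title."""
    seen, papers = set(), []
    for results in result_lists:
        for paper in results:
            key = paper["title"].strip().lower()
            if key not in seen:
                seen.add(key)
                papers.append(paper)
    return papers
```

Title-based deduplication is approximate; arXiv IDs or DOIs, when present, are more reliable keys.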
Organize all papers into 3-6 coherent sub-themes. Sub-themes should cluster papers by their core contributions (for example, shared methodology, problem setting, or application area).

For each theme, write a 2-3 sentence summary of the papers it contains.
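Thematic grouping is ultimately a judgment call, but a first pass can be automated with keyword matching; this is a crude stand-in, with theme names and keywords supplied by you:

```python
def cluster_by_keywords(papers, theme_keywords):
    """Assign each paper to the first theme whose keywords appear in its title
    or abstract; unmatched papers fall into 'Other'."""
    themes = {name: [] for name in theme_keywords}
    themes["Other"] = []
    for paper in papers:
        text = (paper["title"] + " " + paper.get("abstract", "")).lower()
        for name, keywords in theme_keywords.items():
            if any(kw in text for kw in keywords):
                themes[name].append(paper)
                break
        else:
            themes["Other"].append(paper)
    return themes
```

Review and manually reassign the "Other" bucket; keyword matching misses papers that describe a method without naming it.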
Identify research gaps by examining what the surveyed literature does NOT cover.
Document at least 3-5 distinct gaps with concrete descriptions.
Systematically search adjacent fields for transferable methodologies:
Approach: formulate queries that pair the topic's core technique with adjacent domains. Example queries for vision transformers (ViT):

- "attention mechanism audio signal processing"
- "transformer architecture time series forecasting"
- "self-attention graph neural networks"

Document findings that show successful methodology transfer and what made the transfer effective.
Generate 2-3 concrete innovation proposals, each with:
Description: 1-2 sentences explaining the core idea. Combine insights from gaps + cross-domain findings.
Feasibility Assessment: data availability, compute requirements, novelty relative to existing work, and a realistic timeline
Potential Weaknesses: Critical evaluation of limitations, edge cases, or reasons the idea might not work
Landing Plan: Concrete first steps, success metrics, and a fallback strategy
For all papers mentioned in the report, fetch complete BibTeX entries:
python BASE_DIR/scripts/bibtex_utils.py fetch --title "<paper_title>"
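Citation keys follow the FirstAuthorYear convention used by this skill; a small helper for generating them (the function itself is hypothetical, and the surname heuristic is crude):

```python
import re

def citation_key(first_author, year):
    """Build a FirstAuthorYear BibTeX key, e.g. ('Ashish Vaswani', 2017) -> 'vaswani2017'."""
    surname = first_author.split()[-1]                 # last name token as surname
    surname = re.sub(r"[^a-z]", "", surname.lower())   # drop punctuation/non-letters
    return f"{surname}{year}"
```

Multi-word and particle surnames (e.g. "van der Maaten") need manual correction, since the last-token heuristic truncates them.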
Save all entries to a references.bib file, using citation keys in FirstAuthorYear format (e.g., vaswani2017).

Generate a comprehensive report file at ./output/literature-survey/YYYY-MM-DD-HHMMSS/survey_report.md using this exact structure:
# Literature Survey: <Topic>
**Date:** YYYY-MM-DD | **Papers Found:** N | **Date Range:** [e.g., "Last 1 year"] | **Search Queries:** K
## Paper Summary Table
| # | Title | Authors | Year | Venue | Citations | Notes |
|---|-------|---------|------|-------|-----------|-------|
| 1 | [Title with link to PDF/arXiv] | First Author et al. | YYYY | Conference/Journal | NNNN | Seminal / Key contribution |
| 2 | ... | ... | ... | ... | ... | ... |
---
## Theme Clusters
### Theme 1: <Name>
**Summary**: [2-3 sentences describing papers in this theme]
**Key Papers**:
- Paper A (Year)
- Paper B (Year)
- Paper C (Year)
**Contribution**: [What this theme contributes to the field]
### Theme 2: <Name>
[Same structure as Theme 1]
[Additional themes as needed...]
---
## Research Gaps
1. **Gap Name 1**: [Concrete description of missing research area or unresolved question]
2. **Gap Name 2**: [Another gap]
3. **Gap Name 3**: [Another gap]
[Additional gaps as identified...]
---
## Cross-Domain Findings
This section documents successful methodology transfer opportunities:
- **Finding 1**: [Methodology X from domain Y successfully applied to domain Z because of reason A. Example: Author Year]
- **Finding 2**: [Another cross-domain insight]
- **Finding 3**: [Another cross-domain insight]
---
## Innovation Proposals
### Proposal 1: <Innovation Title>
**Description**: [1-2 sentences of core idea]
**Feasibility**:
- Data: [Availability and requirements]
- Compute: [Estimated GPU/compute needs]
- Novelty: [How different from existing work, target venues]
- Timeline: [Realistic implementation duration]
**Potential Weaknesses**: [Critical evaluation of limitations and risks]
**Landing Plan**:
1. [First concrete step - MVP scope]
2. [Second step - intermediate milestone]
3. [Third step - validation/refinement]
- Success Metrics: [How to measure success]
- Fallback Strategy: [What to try if approach fails]
### Proposal 2: <Innovation Title>
[Same structure as Proposal 1]
### Proposal 3: <Innovation Title>
[Same structure as Proposal 1]
---
## References
See `references.bib` in this directory for complete BibTeX entries.
---
**Generated**: YYYY-MM-DD HH:MM:SS UTC | **Tool**: literature-survey skill v1.0
All outputs are saved to:
./output/literature-survey/YYYY-MM-DD-HHMMSS/
The directory contains:
- survey_report.md — Main report with all findings and proposals
- references.bib — Complete BibTeX file with all cited papers
- search_log.json — Metadata on all executed searches (queries, result counts, dates)
- papers_raw.json — Full JSON dump of all retrieved papers for reference

Topic Decomposition: Spend time on Step 1. Better queries lead to more relevant papers.
Date Range Selection: Use --date-range 3y for emerging fields (last 3 years of rapid innovation). Use 1y for stable fields with good coverage.
Venue Filtering: For rigorous surveys, use --venues NeurIPS,ICML,ICCV,ICLR to focus on top-tier venues.
Seminal Papers: Always manually verify that truly foundational papers are included, even if they're older.
Gap Identification: Gaps are most valuable when they're specific and actionable (i.e., suggest a concrete research direction rather than vague limitations).
Innovation Proposals: The best proposals combine insights from research gaps + cross-domain findings. Avoid purely speculative ideas.
Feasibility Assessment: Be honest about compute and data requirements. This makes proposals more credible and actionable.