Analyzes a paper from a PDF, an arXiv URL, or a tweet; surveys related work (including citations the paper missed), explores cross-domain applications, and proposes 2-3 innovations.
```
npx claudepluginhub jeandiable/academic-research-plugin --plugin academic-research
```

This skill uses the workspace's default tool permissions.
This skill performs a targeted literature survey anchored around a specific paper provided by the user, combining arXiv search, Semantic Scholar citation-graph analysis, and cross-domain exploration.
The skill systematically maps the paper's research landscape, identifies gaps in cited work, and proposes novel research directions based on the paper's core contributions and methods.
The skill automatically detects the input type from $ARGUMENTS:
- PDF file — Argument ends in .pdf
- arXiv URL — Argument contains arxiv.org/abs/ (e.g., arxiv.org/abs/2301.12345); runs paper_search.py with --arxiv-id to fetch the paper metadata and PDF
- Tweet URL — Argument contains twitter.com or x.com
- Search query — Any other format; runs paper_search.py --query to find the paper

Before running the skill, install dependencies:
```
pip install -r "BASE_DIR/scripts/requirements.txt"
```
Required packages:
- arxiv — arXiv API access
- requests — HTTP requests for paper fetching
- bibtexparser — BibTeX parsing and generation
- semanticscholar — Semantic Scholar API integration

To fetch the anchor paper, parse $ARGUMENTS to determine the input type, then run paper_search.py --arxiv-id <ID> to fetch it directly, or paper_search.py --query "<query>" --max-results 5 to find it.

From the paper's abstract, introduction, and methodology sections:
Create a structured summary (200-300 words) capturing these elements.
Parse the references section of the paper to extract the cited works and their metadata.
From extracted keywords and concepts, generate 3-5 targeted search queries:
Each query should be specific enough to find relevant papers while broad enough to discover related work.
For each query generated in Step 4:
```
python "BASE_DIR/scripts/paper_search.py" \
  --query "<query>" \
  --max-results 20 \
  --output json \
  --sort relevance
```
Collect results and deduplicate across queries. Keep papers with relevance scores above a threshold (e.g., 0.6).
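The collect-and-deduplicate step can be sketched as follows. The field names (title, arxiv_id, relevance) are assumptions about what paper_search.py writes with --output json, not its documented schema:

```python
import json

def merge_results(result_files, threshold=0.6):
    """Merge search results across queries, deduplicate, and filter by relevance."""
    seen = {}
    for path in result_files:
        with open(path) as f:
            for paper in json.load(f):
                # Normalize the title so trivial formatting differences
                # don't create duplicate entries across queries.
                key = paper.get("arxiv_id") or paper["title"].lower().strip()
                # Keep the highest relevance score seen for each paper.
                if key not in seen or paper["relevance"] > seen[key]["relevance"]:
                    seen[key] = paper
    # Drop papers below the relevance threshold, most relevant first.
    kept = [p for p in seen.values() if p["relevance"] >= threshold]
    return sorted(kept, key=lambda p: p["relevance"], reverse=True)
```

If the actual JSON schema differs, only the key lookups need to change.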
If Semantic Scholar paper ID available:
```
python "BASE_DIR/scripts/paper_search.py" \
  --paper-id <S2_ID> \
  --citing \
  --limit 50

python "BASE_DIR/scripts/paper_search.py" \
  --paper-id <S2_ID> \
  --cited-by \
  --limit 50
```
This finds both directions of the citation graph: papers that cite the anchor and papers the anchor cites.
Extract 10-15 most relevant papers from each direction.
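A minimal sketch of that per-direction extraction, assuming each direction's JSON file holds a list of paper objects with an optional citations count (an assumed field name):

```python
import json

def top_papers(path, n=15):
    """Pick the n most-cited papers from one direction of the citation graph."""
    with open(path) as f:
        papers = json.load(f)
    # Rank by citation count; papers missing the field sort last.
    return sorted(papers, key=lambda p: p.get("citations", 0), reverse=True)[:n]
```

Run it once per direction, e.g. top_papers("citing.json") and top_papers("cited-by.json").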
Compare the papers found in Steps 5-6 with the anchor paper's reference list to identify relevant work the paper did not cite (recent, concurrent, or prior; see Step 7 of the report).
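One way to sketch this comparison, using normalized titles as the join key (a heuristic; it will miss papers whose titles differ between sources):

```python
import re

def normalize(title):
    """Collapse case, punctuation, and stray whitespace so titles compare reliably."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def missing_citations(discovered, references):
    """Return discovered paper titles that do not appear in the reference list."""
    cited = {normalize(r) for r in references}
    return [p for p in discovered if normalize(p) not in cited]
```

The survivors are candidates for the Step 7 tables, to be sorted manually into recent, concurrent, and prior work.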
Identify the anchor paper's core methodology, then explore applications in other domains:
Example: for a paper on diffusion models for image generation, search for diffusion-based methods applied in other domains.
Generate 3-5 cross-domain search queries and run Step 5 for each. Output innovative combinations not explored by anchor paper.
Generate 2-3 innovation proposals extending the anchor paper. For each proposal include:
Base proposals on the anchor paper's core methods, the gaps identified in Step 7, and the cross-domain findings from Step 8.
For all papers in final output, collect BibTeX entries:
```
python "BASE_DIR/scripts/bibtex_utils.py" \
  fetch \
  --title "<paper-title>"
```
If bibtex_utils.py unavailable, manually format BibTeX from collected metadata.
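The manual fallback can be sketched as below; the metadata keys (title, authors, venue, year) are assumptions about what the collected metadata contains, not a fixed schema:

```python
def to_bibtex(meta):
    """Format one paper's metadata dict as a BibTeX @article entry.

    Expects keys: title, authors (list of full names), venue, year.
    """
    # Citation key: first author's surname + year + first title word.
    surname = meta["authors"][0].split()[-1].lower()
    key = f"{surname}{meta['year']}{meta['title'].split()[0].lower()}"
    fields = {
        "title": meta["title"],
        "author": " and ".join(meta["authors"]),
        "journal": meta["venue"],
        "year": str(meta["year"]),
    }
    body = ",\n".join(f"  {k}={{{v}}}" for k, v in fields.items())
    return f"@article{{{key},\n{body}\n}}"
```

Preprints without a venue would need @misc instead of @article; this sketch handles only the common case.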
Save results to a timestamped directory: ./output/paper-triggered-survey/YYYY-MM-DD-HHMMSS/
Create survey-report.md with the following structure:
# Paper-Triggered Survey
## Anchor Paper
**Title:** [Full title]
**Authors:** [Author list]
**Year:** [YYYY]
**Venue:** [Conference/Journal]
**arXiv ID:** [Optional]
**DOI:** [Optional]
### Problem Formulation
[2-3 sentences on the core problem]
### Core Methods
[Bullet list of main techniques introduced]
### Key Concepts
[Bullet list of important ideas/terms]
### Main Contributions
[2-3 sentence summary of contributions]
---
## Step 2: Extracted Concepts & Methods
**Problem Statement:** [Detailed problem description]
**Methodology Overview:**
[Technical approach description, 150-200 words]
**Key Terms & Concepts:**
- Term 1: Definition
- Term 2: Definition
- (Continue for 8-10 key terms)
**Novelty vs. Prior Work:** [How this differs from related work]
---
## Step 3: Referenced Works Summary
**Total References:** [N]
**Seminal Works (Highly Cited):**
| Title | Authors | Year | Citation Count |
|-------|---------|------|-----------------|
**Foundational Works:**
[Bullet list of 5-7 most important cited papers]
---
## Step 5: Related Work Discovered (N papers)
| # | Title | Authors | Year | Venue | Citations | Relevance | Relation to Anchor |
|---|-------|---------|------|-------|-----------|-----------|-------------------|
| 1 | ... | ... | ... | ... | ... | ... | ... |
| 2 | ... | ... | ... | ... | ... | ... | ... |
| (Continue for all discovered papers) |
**Search Queries Used:**
1. Query 1 → N results
2. Query 2 → N results
3. (List all queries and result counts)
---
## Step 6: Citation Graph Analysis
### Papers Citing the Anchor Paper (N papers)
[List 5-10 most relevant papers that cite anchor, with brief relevance notes]
### Papers Cited by the Anchor Paper (N papers)
[Sampling of most important foundational papers cited, organized by category]
---
## Step 7: Missing Related Work
Papers the anchor should have cited but didn't:
**Recent Work (Published After Anchor):**
| Title | Authors | Year | Why Relevant |
|-------|---------|------|--------------|
| ... | ... | ... | ... |
**Concurrent Work (Same Year as Anchor):**
| Title | Authors | Year | Why Relevant |
|-------|---------|------|--------------|
| ... | ... | ... | ... |
**Prior Work (Gaps in Literature Review):**
| Title | Authors | Year | Why Relevant |
|-------|---------|------|--------------|
| ... | ... | ... | ... |
---
## Step 8: Cross-Domain Exploration
### Domain Extension: [Domain 1]
**Query:** [Search query used]
**Findings:** [2-3 key papers showing how anchor's methodology applies]
**Innovation Opportunity:** [Brief insight on how methodology could transfer]
### Domain Extension: [Domain 2]
**Query:** [Search query used]
**Findings:** [2-3 key papers]
**Innovation Opportunity:** [Brief insight]
### Domain Extension: [Domain 3]
**Query:** [Search query used]
**Findings:** [2-3 key papers]
**Innovation Opportunity:** [Brief insight]
---
## Step 9: Innovation Proposals
### Proposal 1: [Direction Name]
**Description:**
[2-3 sentences describing the research direction]
**Connection to Anchor Paper:**
[How it extends/builds on the anchor work]
**Cross-Domain Insight:**
[Which cross-domain finding inspired this, if any]
**Feasibility:** [Low/Medium/High]
- **Timeline:** [Estimated months to validate]
- **Required Resources:** [Key skills, datasets, compute needed]
**Weaknesses to Overcome:**
1. Challenge 1: [Description and mitigation]
2. Challenge 2: [Description and mitigation]
3. Challenge 3: [Description and mitigation]
**Landing Plan:**
1. Month 1-2: [Validation step 1]
2. Month 3-4: [Implementation step 1]
3. Month 5-6: [Evaluation/publication]
**Key References:**
- [Related papers from discovery phase]
---
### Proposal 2: [Direction Name]
[Same structure as Proposal 1]
---
### Proposal 3: [Direction Name]
[Same structure as Proposal 1]
---
## Step 10: Bibliography
Complete BibTeX for all papers referenced in this survey:
```bibtex
@article{anchor2024title,
  title={Full Title},
  author={Author, A. and Author, B.},
  journal={Journal Name},
  year={2024},
  volume={1},
  pages={1-20}
}
[Continue for all papers]
```
---
## Metadata
**Survey Generated:** [Timestamp]
**Total Papers Found:** [N]
**Total Proposals Generated:** 3
**Base Directory:** [BASE_DIR]
```
./output/paper-triggered-survey/YYYY-MM-DD-HHMMSS/
├── survey-report.md          # Main report (above structure)
├── anchor-paper.pdf          # Copy of anchor paper if available
├── related-papers/
│   ├── cited-by.json         # Papers citing the anchor
│   └── citing.json           # Papers cited by anchor
├── search-results/
│   ├── query-1-results.json
│   ├── query-2-results.json
│   └── query-N-results.json
├── cross-domain/
│   ├── domain-1.json
│   ├── domain-2.json
│   └── domain-3.json
└── bibtex/
    └── all-papers.bib        # Complete bibliography
```
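Creating this timestamped layout can be sketched as:

```python
from datetime import datetime
from pathlib import Path

def make_output_dirs(root="./output/paper-triggered-survey"):
    """Create the timestamped output directory tree used by the skill."""
    stamp = datetime.now().strftime("%Y-%m-%d-%H%M%S")
    base = Path(root) / stamp
    for sub in ("related-papers", "search-results", "cross-domain", "bibtex"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    return base
```

The report and per-query JSON files are then written under the returned base path.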
Trigger 1: Direct PDF

```
/paper-triggered-survey /path/to/paper.pdf
```

Trigger 2: arXiv URL

```
/paper-triggered-survey https://arxiv.org/abs/2301.12345
```

Trigger 3: Tweet with paper

```
/paper-triggered-survey https://x.com/user/status/123456789
```

Trigger 4: Paper title

```
/paper-triggered-survey "Diffusion Models Beat GANs on Image Synthesis"
```
Trigger 5: Suggested usage
User: "Survey this paper: https://arxiv.org/abs/2306.06465"
→ Activates paper-triggered-survey skill
- paper_search.py — Core search utility with arXiv and Semantic Scholar integration
- bibtex_utils.py — BibTeX reference management (if available)
- requirements.txt — Python package dependencies