Read papers from a folder and extract structured knowledge into skills: methodology, experiments, and metrics.
/plugin marketplace add hdubey-debug/orion
/plugin install hdubey-debug-orion@hdubey-debug/orion
/literature-review <folder-path>
/literature-review research/papers
/literature-review ~/downloads/vlm-papers
/literature-review ./papers/tarsier
IMPORTANT: This command MUST use Plan Mode. Create a plan first, get user approval, then execute.
When the user invokes /literature-review, follow this process:
Use EnterPlanMode, then scan the folder:
ls <folder-path>/*.pdf
ls <folder-path>/*.md
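A minimal shell sketch of the same scan, assuming the two ls calls above are all the command strictly needs and that filenames may contain spaces (the loop and numbering are illustrative, not prescribed here):

```bash
# Enumerate PDF and Markdown papers in the target folder, numbered for the plan.
i=0
for f in "<folder-path>"/*.pdf "<folder-path>"/*.md; do
  [ -e "$f" ] || continue      # skip the literal glob pattern when nothing matches
  i=$((i + 1))
  printf '%d. %s\n' "$i" "$(basename "$f")"
done
```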
List all papers found:
Found papers in <folder>:
1. paper1.pdf
2. paper2.pdf
3. notes.md
For each paper, plan what to extract:
## Literature Review Plan
### Papers to Review
1. **paper1.pdf** - [Detected title if possible]
2. **paper2.pdf** - [Detected title if possible]
### For Each Paper, Extract:
#### A. Methodology Skills
- Core technique/approach
- Architecture details (if ML)
- Novel contributions
- How it differs from prior work
#### B. Experiment Skills
- Benchmarks used
- Datasets mentioned
- Evaluation setup
- Ablation studies performed
#### C. Metrics Skills
- Evaluation metrics used
- How metrics are computed
- Baseline comparisons
- State-of-the-art numbers
### Output Files
- research/skills/literature/paper_001/summary.md
- research/skills/literature/paper_001/methodology.md
- research/skills/literature/paper_001/experiments.md
- research/skills/literature/_overview.md (updated)
### Estimated Effort
- Papers: [N]
- Extraction per paper: ~5-10 minutes reading
Use ExitPlanMode to get user approval before reading papers.
For each paper in the folder, read it and create the skill files below.
First, create research/skills/literature/paper_XXX/methodology.md:
# Methodology: [Paper Title]
## Core Approach
[2-3 sentences describing the main technique]
## Architecture (if ML)
[Model architecture description]
## Key Innovations
1. [Innovation 1]
2. [Innovation 2]
## Comparison to Prior Work
- Differs from [Method X] by [difference]
- Builds on [Method Y] by [extension]
## Implementation Details
- Framework: [PyTorch/TensorFlow/etc]
- Key hyperparameters: [if mentioned]
- Training details: [if mentioned]
## When to Use This Method
[Conditions where this approach is applicable]
Create research/skills/literature/paper_XXX/experiments.md:
# Experiments: [Paper Title]
## Benchmarks Used
| Benchmark | Task | Size | Source |
|-----------|------|------|--------|
| [Name] | [Task] | [Size] | [URL/Citation] |
## Evaluation Setup
- Train/Val/Test split: [details]
- Preprocessing: [details]
- Input format: [details]
## Ablation Studies
| Ablation | Finding |
|----------|---------|
| [What was varied] | [What they found] |
## Baseline Comparisons
| Method | Score | Note |
|--------|-------|------|
| [Method] | [Score] | [Note] |
Create research/skills/literature/paper_XXX/metrics.md:
# Metrics: [Paper Title]
## Metrics Used
| Metric | What It Measures | How Computed |
|--------|------------------|--------------|
| [Name] | [Description] | [Formula/Method] |
## Reported Results
| Benchmark | Metric | Score |
|-----------|--------|-------|
| [Benchmark] | [Metric] | [Score] |
## State-of-the-Art Comparison
- Previous SOTA: [Method] at [Score]
- This paper: [Score] ([+X% improvement])
Create research/skills/literature/paper_XXX/summary.md:
# Paper: [Title]
**Authors**: [Names]
**Venue**: [Conference/Journal, Year]
**PDF**: [Local path]
## TL;DR
[One sentence summary]
## Abstract Summary
[2-3 sentences]
## Key Contributions
1. [Contribution 1]
2. [Contribution 2]
3. [Contribution 3]
## Relevance to Our Research
- [How this paper helps our goals]
- [Specific techniques we might use]
## Links
- Methodology: ./methodology.md
- Experiments: ./experiments.md
- Metrics: ./metrics.md
Update research/skills/literature/_overview.md:
# Literature Overview
Papers analyzed and key insights extracted.
## Papers
| ID | Title | Key Method | Status |
|----|-------|------------|--------|
| P001 | [Title] | [Method] | Reviewed |
| P002 | [Title] | [Method] | Reviewed |
## Methodologies Extracted
1. **[Method 1]** (P001): [Brief description]
2. **[Method 2]** (P002): [Brief description]
## Experiments & Benchmarks
| Benchmark | Used By | Task | Metrics |
|-----------|---------|------|---------|
| [Benchmark] | P001, P002 | [Task] | [Metrics] |
## Metrics Summary
| Metric | Papers Using It | Best Reported |
|--------|-----------------|---------------|
| [Metric] | P001 | [Score] |
## Key Insights Across Papers
1. [Pattern or insight from multiple papers]
2. [Another insight]
## Potential Benchmarks to Download
Based on papers reviewed:
- [ ] [Benchmark 1] - Used by P001, P002
- [ ] [Benchmark 2] - Used by P001
---
*Last updated: [timestamp]*
*Run /benchmark-setup to download and study benchmarks*
Update the project state to record completion:
{
"phases": {
"literature_review": "complete"
},
"papers_folder": "<folder-path>",
"papers_reviewed": ["paper1.pdf", "paper2.pdf"]
}
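A minimal sketch of applying this update with jq, assuming the state lives in a file such as research/state.json (hypothetical path; this document does not name the state file):

```bash
# Hypothetical state file path; adjust to wherever the project keeps its state.
STATE=research/state.json
jq --arg folder "<folder-path>" \
   '.phases.literature_review = "complete"
    | .papers_folder = $folder
    | .papers_reviewed = ["paper1.pdf", "paper2.pdf"]' \
   "$STATE" > "$STATE.tmp" && mv "$STATE.tmp" "$STATE"
```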
Literature Review Complete!
Papers reviewed: [N]
├── P001: [Title] - methodology, experiments, metrics extracted
├── P002: [Title] - methodology, experiments, metrics extracted
Skills created:
├── Methodologies: [count]
├── Experiment setups: [count]
├── Metrics: [count]
Benchmarks identified for download:
├── [Benchmark 1] - used by [papers]
├── [Benchmark 2] - used by [papers]
View knowledge: /knowledge literature
Next step: /benchmark-setup
If a PDF can't be read: note it in _overview.md with status "Unreadable" and continue with the remaining papers.
If a paper is too long to read in full: extract from the abstract, methodology, experiments, and conclusion sections, and note in summary.md that the review is partial.