From corbis-literature-starter-kit
Generates structured literature reviews on research topics via Corbis searches, organizes into themes with synthesized prose, outputs as Markdown, LaTeX section, or standalone document with BibTeX citations.
`npx claudepluginhub agentic-assets/corbis-literature-starter-kit`

This skill uses the workspace's default tool permissions.
Write a comprehensive, structured literature review on a user-specified topic. This skill produces a standalone review of what the field knows, where it disagrees, and what remains open. It is not for positioning a specific paper's contribution (use `literature-positioning-map` for that) or for writing a related-literature section within a manuscript (use `research-paper-writer` for that).
Before starting, confirm these with the user:
| Input | Required? | Default |
|---|---|---|
| Topic or research question | Yes | — |
| Output format: markdown / latex-section / latex-standalone | Yes | markdown |
| Scope: quick (~15 papers, field orientation) / focused (~25 papers) / comprehensive (~50 papers) | No | comprehensive |
| Target .tex file (if latex-section) | If applicable | — |
| Existing .bib file path | No | Auto-detect or create new |
| Known key papers to include | No | — |
| Time period filter | No | All years |
| Specific journals to emphasize | No | — |
If the user provides a topic and format in their initial message, proceed without asking. Fill defaults for anything not specified.
If scope is quick, skip the full review workflow. Instead:
sortBy: "citedByCount", matchCount: 15) and frontier search (minYear: 2020, matchCount: 15).get_paper_details_batch on the top 10 results from the architecture search.output/field_orientation.md:# Field Orientation: [Topic]
## 10 Must-Read Papers
[Ranked by citation count. For each: author (year), title, journal, 1-sentence contribution.]
## 3 Main Debates
[What the field disagrees about, with papers on each side.]
## 3 Dominant Methods
[How this field typically does empirical work.]
## 3 Common Datasets
[What data most papers use, via search_datasets.]
## 5 Frontier Questions
[What the recent papers (2020+) are working on that remains unresolved.]
4. Save the paper set to `output/paper_set.json` and log searches to `output/search_log.md`.
5. Suggest follow-ups: `/lit-review [topic]` with focused or comprehensive scope, `/brainstorm [topic]`, or `paper-reader` on the top 3 papers.

Target ~50 unique papers for comprehensive scope, ~25 for focused scope. Execute searches in this order:
Step 1 — Architecture search (always first):
`search_papers` (query: the core topic, sortBy: "citedByCount", matchCount: 15) to immediately see the field's citation hierarchy: which papers define it, which are the most influential.

Step 2 — Frontier search (always second):
`search_papers` (query: core topic, minYear: 2020, matchCount: 15) to find the latest published and working papers that represent the current edge of the field.

Step 3 — Thematic angles:
`search_papers` (query: first sub-question or thematic angle, matchCount: 15)
`search_papers` (query: second sub-question or thematic angle, matchCount: 15)

Step 4 — Topic-filtered journal search:
`top_cited_articles` (journalNames: relevant top journals, query: the specific topic, compact: false) to find seminal papers on the topic within those journals that may not have appeared in the keyword searches.

Step 5 — Adjacent or methodological:
`search_papers` (query: methodological approach or adjacent field angle, matchCount: 10) for breadth.

Step 6 — Verify and enrich:
`get_paper_details_batch` (up to 25 paper IDs per call) on the top 30-40 unique papers to read abstracts, confirm relevance, and extract key findings. Use 2 batch calls rather than 30+ individual calls; for a single uncertain paper, use `get_paper_details` to confirm details.
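A sketch of this search sequence, assuming hypothetical Python wrappers around the Corbis tools (in practice these are tool calls, not a library; the `corbis_tools` module, `TOPIC`, and journal names are placeholders):

```python
# Hypothetical wrappers; parameter names mirror the tool signatures above.
from corbis_tools import (  # assumed module, for illustration only
    search_papers, top_cited_articles, get_paper_details_batch,
)

TOPIC = "the core topic"  # placeholder

papers = []
# Step 1 -- architecture search: the field's citation hierarchy.
papers += search_papers(query=TOPIC, sortBy="citedByCount", matchCount=15)
# Step 2 -- frontier search: the current edge of the field.
papers += search_papers(query=TOPIC, minYear=2020, matchCount=15)
# Step 3 -- thematic angles, one call per sub-question.
for angle in ["first sub-question", "second sub-question"]:
    papers += search_papers(query=angle, matchCount=15)
# Step 4 -- topic-filtered journal search for seminal papers.
papers += top_cited_articles(journalNames=["relevant top journals"],
                             query=TOPIC, compact=False)
# Step 5 -- adjacent or methodological breadth.
papers += search_papers(query="methodological or adjacent angle", matchCount=10)

# Step 6 -- enrich the top 30-40 unique papers in 2 batch calls
# (up to 25 IDs each) rather than 30+ individual calls.
unique_ids = list(dict.fromkeys(p["id"] for p in papers))[:40]
details = []
for start in range(0, len(unique_ids), 25):
    details += get_paper_details_batch(unique_ids[start:start + 25])
```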
Save to shared data files: write every collected paper (id, title, authors, year, journal, citedByCount, abstract, fullText when available, doi, source_queries) to `output/paper_set.json`, merging and deduplicating by id if the file exists, and log every search to `output/search_log.md`.

Relative citation tiering: After deduplication, sort all collected papers by citedByCount and assign influence tiers using relative ranking within the collected set:
| Tier | Label | Rule | Treatment in the review |
|---|---|---|---|
| 1 | Foundational | Top 10% by citation count within the collected set | 3-5 sentences each. Describe what they found, how, and why it mattered. These anchor the review. |
| 2 | Established | Next 30% by citation count | 1-2 sentences each, or grouped into synthesized claims with 2-3 papers per sentence. |
| 3 | Emerging | Bottom 60%, especially papers published in the last 5 years | Grouped into frontier paragraphs. Cited parenthetically to support collective findings. |
A paper that appears across 3+ separate search queries is likely a network hub. Promote it one tier (e.g., Established to Foundational) regardless of citation rank.
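A minimal sketch of the deduplication-and-tiering pass, assuming `output/paper_set.json` holds a list of dicts with at least `id`, `citedByCount`, and `source_queries` (the queries that surfaced each paper):

```python
import json

with open("output/paper_set.json") as f:
    papers = json.load(f)

# Deduplicate by id, merging source_queries so papers surfaced by
# 3+ separate searches (network hubs) remain detectable.
by_id = {}
for p in papers:
    if p["id"] in by_id:
        merged = set(by_id[p["id"]]["source_queries"]) | set(p["source_queries"])
        by_id[p["id"]]["source_queries"] = sorted(merged)
    else:
        by_id[p["id"]] = p

# Relative ranking within the collected set, most-cited first.
papers = sorted(by_id.values(), key=lambda p: p["citedByCount"], reverse=True)

n = len(papers)
for rank, p in enumerate(papers):
    if rank < max(1, round(0.10 * n)):   # top 10%
        p["tier"] = "Foundational"
    elif rank < round(0.40 * n):         # next 30%
        p["tier"] = "Established"
    else:                                # bottom 60%
        p["tier"] = "Emerging"
    # Network hubs are promoted one tier regardless of citation rank.
    if len(p["source_queries"]) >= 3:
        p["tier"] = {"Emerging": "Established",
                     "Established": "Foundational"}.get(p["tier"], p["tier"])

with open("output/paper_set.json", "w") as f:
    json.dump(papers, f, indent=2)
```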
When a paper has fullText available in the paper set, use it (not just the abstract) to make more informed judgments about mechanism, method, and contribution.
Ranking criteria (for deciding which papers to keep when cutting to the target count): prefer higher influence tiers and network hubs, then relevance to the core topic and its sub-questions.
After collecting papers, propose 4-6 thematic strands. Present to the user for approval before writing.
Deliver this structure:
# Proposed Literature Review Structure
## Topic: [user's topic]
## Scope: [focused/comprehensive] — [N] unique papers collected
## Influence tiers: [X] Foundational / [Y] Established / [Z] Emerging
### Proposed strands:
1. **[Strand name]** — [1-sentence description]
Key papers: [3-5 author-year citations]
Narrative arc: [what story this strand tells, from early work to current state]
2. **[Strand name]** — [1-sentence description]
Key papers: [3-5 author-year citations]
Narrative arc: [story]
3. ...
### Cross-cutting themes:
- [e.g., methodological evolution from X to Y]
- [e.g., a key empirical debate between findings A and B]
### Identified gaps:
- [Gap 1: what the literature has not addressed]
- [Gap 2: where findings conflict without resolution]
### Papers that don't fit neatly:
- [Paper] — could go in strand X or Y; recommend [placement]
Strand organization principle: give each strand an internal narrative arc, from early work to current state. Never organize by topic label alone ("this literature relates to X, Y, and Z" with no internal structure).
Checkpoint: Wait for user approval or modifications. Do not proceed to writing until the user confirms the strand structure.
For each strand, write in this order:
Opening frame (1-2 sentences): State the strand's central question or contribution to understanding the topic. Why does this line of work exist?
Foundational work (Tier 1 papers, 3-5 sentences each): Describe the key papers that anchor this strand. These get the most individual attention. State what they found, how they found it, and why it mattered. Do not mention citation counts in the review prose; let the depth of treatment signal the paper's importance. Citation counts belong in the reading list, not in the narrative.
Established evidence (Tier 2 papers, synthesized, not enumerated): Group the body of work by finding, not by author. Write about what the literature collectively shows, with citations supporting claims. Example: "A substantial body of work documents [collective finding] (Author 2012; Author 2016; Author 2019), though estimates vary with [condition]."
Recent frontier (Tier 3 papers, 2-3 sentences): What has the last 2-3 years added? New data, new methods, new findings that shift understanding? These papers may have low citation counts simply because they are new.
Gaps, tensions, or open questions (1-2 sentences): What does this strand leave unresolved? Where do findings conflict? What has not been studied?
Transition (1 sentence): Connect to the next strand.
After all strands, write a synthesis section covering the cross-cutting themes, the gaps the literature has not addressed, and the conflicts that remain unresolved.
Follow all project writing norms (see references/writing-norms.md and references/banned-words.md).
After writing the review:
Call `export_citations` (list of paper IDs, format: bibtex) to generate BibTeX entries.

Locate or create the .bib file:
- Look for existing *.bib files in the project. If found, ask the user which to use.
- If no .bib exists, create one:
  - markdown format: notes/literature_review_references.bib
  - latex-section format: same directory as the target .tex file
  - latex-standalone format: paper/literature_review/references.bib
- Write the generated entries to the .bib file.

For papers where `export_citations` does not return a result (e.g., the paper was mentioned by the user but not found in Corbis), construct a manual BibTeX entry from known information and flag it for the user to verify.
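A sketch of that manual fallback, assuming a `paper` dict shaped like the entries in output/paper_set.json; the lastname-year citation key is a common convention assumed here, not a Corbis rule:

```python
def manual_bibtex_entry(paper: dict) -> str:
    """Build a fallback BibTeX entry from whatever fields are known;
    missing fields are simply omitted."""
    last_name = paper["authors"][0].split()[-1].lower()
    key = f"{last_name}{paper.get('year', '')}"
    fields = {
        "author": " and ".join(paper.get("authors", [])),
        "title": paper.get("title"),
        "journal": paper.get("journal"),
        "year": paper.get("year"),
        "doi": paper.get("doi"),
        # Flags the entry for the user to verify.
        "note": "UNVERIFIED: constructed manually, please check",
    }
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items() if v)
    return f"@article{{{key},\n{body}\n}}"
```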
Markdown (markdown):
- Write to notes/literature_review_[topic_slug].md
- List the .bib file path at the bottom for reference

LaTeX section (latex-section):
- Insert into the target .tex file (at the \section{} marker)
- Use \citet{} for textual citations ("Author (Year) finds...") and \citep{} for parenthetical citations ("...as shown in prior work \citep{author2020}")
- Write the generated entries to the project's .bib file

LaTeX standalone (latex-standalone):
- Copy latex_template/ to paper/literature_review/ if it does not exist
- Read the template's .tex file to understand its structure
- Write references.bib in the same directory
- Ensure \bibliography{references} points to the correct file
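The [topic_slug] placeholder in these paths is not pinned down by the skill; a minimal sketch of one reasonable derivation:

```python
import re

def topic_slug(topic: str) -> str:
    """Reduce a topic to a filesystem-safe slug, e.g.
    "Monetary Policy & Asset Prices" -> "monetary_policy_asset_prices"."""
    return "_".join(re.findall(r"[a-z0-9]+", topic.lower()))
```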
For all output formats, also produce a reading list using assets/reading-list-template.md: a curated table of the 10-15 most important papers with one-line descriptions of their key contributions. Write it to notes/reading_list_[topic_slug].md.

Lab notebook: Append an entry to notes/lab_notebook.md:
---
### [DATE] — Literature Review: [Topic]
**What was done**: Comprehensive literature review on [topic]. Collected [N] papers via Corbis searches, reviewed [M] abstracts, organized into [K] thematic strands.
**Strands covered**:
- [Strand 1]: [1-sentence summary]
- [Strand 2]: [1-sentence summary]
- ...
**Key gaps identified**:
- [Gap 1]
- [Gap 2]
**Key conflicts identified**:
- [Conflict 1]
**Output files**:
- Review: [path]
- Reading list: [path]
- Bibliography: [path]
**Next steps**: [e.g., use findings to inform research-idea-generator, or feed into literature-positioning-map for a specific paper]
Project state: If notes/project_state.md exists, update the literature positioning section with the strand names and key gaps.
After all outputs are written, present a brief report in chat:
# Literature Review — Coverage Report
## Topic: [topic]
## Scope: [focused/comprehensive]
## Papers cited: [N] (of [M] unique papers found)
## Influence tiers: [X] Foundational / [Y] Established / [Z] Emerging
## Strands:
1. [Strand 1] — [N papers]
2. [Strand 2] — [N papers]
...
## Key gaps identified:
- [Gap 1]
- [Gap 2]
## Key conflicts identified:
- [Conflict 1]
## Output files:
- Review: [path]
- Reading list: [path]
- Bibliography: [path]
This skill is not for:
- Positioning a specific paper's contribution (use literature-positioning-map)
- Writing a related-literature section within a manuscript (use research-paper-writer)
- Generating new research ideas (use research-idea-generator, though the gaps from this review are excellent inputs)
- Screening finance ideas (use finance-idea-screening)

This skill can feed into all of the above. A natural workflow is: literature-review to map the field, then research-idea-generator to brainstorm from the gaps, then literature-positioning-map to position a chosen idea.
If any paper's details are uncertain, use `get_paper_details` or `get_paper_details_batch` to verify before including it. If unavailable, flag it for the user.