From corbis-literature-starter-kit
Maps closest literature and sharpens contribution claims for finance or real-estate papers. Use for related-literature sections, novelty maps, and closest-paper comparisons.
npx claudepluginhub agentic-assets/corbis-literature-starter-kit

This skill uses the workspace's default tool permissions.
Your job is not to summarize everything ever written. Your job is to help the paper occupy a precise place in the literature.
Always search before writing. Do not rely on parametric knowledge alone. Corbis searches 250,000+ papers via hybrid semantic+keyword search.
Step 0 — Check existing data and run architecture + frontier searches:
- If output/paper_set.json exists, read it first. Papers already collected for this topic can inform the positioning without redundant searches.
- search_papers (query: the core topic, sortBy: "citedByCount", matchCount: 15) → immediately see the field's citation hierarchy. The most-cited papers are what referees will compare you to.
- search_papers (query: the core topic, minYear: 2020, matchCount: 15) → the recent frontier and scooping risks.
- Save results to output/paper_set.json (merge if it exists) and append queries to output/search_log.md.

Step 1 — Inner ring (direct competitors):
- search_papers (query: the exact question + method, matchCount: 15) → find papers doing the closest thing.
- get_paper_details_batch (paper IDs from top 5 results) → read abstracts to confirm true overlap.

Step 2 — Middle ring (same question, different methods OR same method, different question):
- search_papers (query: the same question with alternative methods, matchCount: 10)
- search_papers (query: the same method applied to related questions, matchCount: 10)

Step 3 — Outer ring (seminal and contextual):
- top_cited_articles (journalNames + query: topic) → identify canonical papers within key journals that may not have appeared in keyword searches.

Step 4 — Verify specific papers:
- get_paper_details or get_paper_details_batch (paper IDs) → use when the user mentions a specific paper or when you need to verify what a close paper actually does vs. what its title suggests.

The comparison set is what a referee would invoke when evaluating the paper's contribution, and what a referee invokes is heavily correlated with citation count:
When identifying the "closest 3-5 papers," include at least one high-citation anchor and at least one recent paper. Do not let the comparison set consist entirely of niche recent work that a referee has never heard of.
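The ring searches above can be kept honest by writing the full query plan down before issuing any calls. A minimal sketch, as plain data: the parameter names (query, sortBy, minYear, matchCount) come from the search_papers signatures described above, but the topic, the specific query strings, and the list-of-dicts representation are hypothetical illustrations, not the Corbis API itself.

```python
# Illustrative query plan for the architecture/frontier and ring searches.
# Parameter names follow the search_papers tool described above; the topic
# and query strings are made-up examples.
topic = "bank capital requirements and mortgage lending"  # hypothetical paper topic

query_plan = [
    # Step 0 — architecture (citation hierarchy) and frontier (scooping risk)
    {"ring": "architecture", "query": topic, "sortBy": "citedByCount", "matchCount": 15},
    {"ring": "frontier", "query": topic, "minYear": 2020, "matchCount": 15},
    # Step 1 — inner ring: the exact question + method
    {"ring": "inner", "query": topic + ", difference-in-differences", "matchCount": 15},
    # Step 2 — middle ring: same question/other method, same method/other question
    {"ring": "middle", "query": topic + ", structural estimation", "matchCount": 10},
    {"ring": "middle", "query": "difference-in-differences, bank regulation and small business lending", "matchCount": 10},
]

# One search_papers call per entry; each query string also goes to output/search_log.md.
for q in query_plan:
    print(q["ring"], "->", q["query"])
```

Writing the plan first makes it easy to spot a comparison set that is all frontier and no architecture (or vice versa) before any drafting begins.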
- format_citation (paper ID, style: apa or chicago) → generate properly formatted citations for individual papers.
- export_citations (list of paper IDs, format: bibtex) → batch-export references for the LaTeX bibliography file. Use this after completing the literature map to give the user a ready-to-use .bib file.

When comparing the current paper to the closest work, be specific about which dimension the novelty lies in:
| Dimension | Example claim |
|---|---|
| Mechanism | "Unlike X who study channel A, we identify channel B using..." |
| Identification | "X documents the correlation; we provide causal evidence using..." |
| Data/Setting | "X studies large public firms; we use novel private-credit data that reveals..." |
| Scope | "X examines one state; our national sample allows us to..." |
| Time period | "X's sample ends in 2005; we study the post-crisis regime where..." |
| Prediction | "X predicts effect A; our mechanism predicts the opposite in subgroup..." |
| Method | "X uses hedonic regressions; our repeat-sales design differences out..." |
Weak differentiators (be cautious):
Organize by intellectual contribution, not by topic label:
Option A — By disagreement: Group papers by which side of a debate they support, then explain where the current paper enters.
Option B — By mechanism: Group papers by the economic channel they emphasize, then explain the new channel or evidence.
Option C — By method/setting: Group papers by empirical approach, then explain why the new approach changes the answer.
Never use Option D — By topic label ("this paper relates to the literature on X, the literature on Y, and the literature on Z" with no differentiation within each bucket).
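One way to enforce Options A–C mechanically is to tag each collected paper with its organizing axis before drafting, so every bucket has a differentiating label rather than a topic label. A minimal sketch, assuming paper records like those in output/paper_set.json; the field names ("channel", "side") and the example papers are hypothetical.

```python
from collections import defaultdict

# Hypothetical paper records; in practice these would come from output/paper_set.json.
papers = [
    {"key": "smith2021", "channel": "collateral", "side": "amplification"},
    {"key": "lee2019", "channel": "collateral", "side": "no effect"},
    {"key": "garcia2023", "channel": "risk-taking", "side": "amplification"},
]

def group_by(papers, axis):
    """Group papers by one organizing axis: 'side' for Option A, 'channel' for Option B."""
    groups = defaultdict(list)
    for p in papers:
        groups[p[axis]].append(p["key"])
    return dict(groups)

# Option B grouping — each bucket is an economic channel, not a topic label.
print(group_by(papers, "channel"))
# Option A grouping — each bucket is a side of the disagreement.
print(group_by(papers, "side"))
```

If a paper cannot be assigned a side or a channel, that is usually a sign it belongs in the outer ring, not in the related-literature section.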
Produce:
# Literature positioning memo
## Closest papers (3-5, with specific differentiation)
## Comparison dimensions (which dimension of novelty is strongest)
## Where this paper overlaps (be honest)
## Where this paper differs (be specific)
## What claim is safe
## What claim is too strong
## Draft contribution paragraph
## Draft related-literature outline (organized by disagreement, mechanism, or method)
## Papers to watch (recent working papers that could scoop or complement)
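After the memo, hand the user the bibliography via the export_citations tool described earlier. A minimal sketch of that final step: the tool name and format value come from this document, while the paper IDs and the request-dict shape are illustrative placeholders, not a documented wire format.

```python
# Illustrative final hand-off: batch-export the comparison set to BibTeX.
# export_citations and format: bibtex are described above; the IDs are made up.
closest_paper_ids = ["id-001", "id-002", "id-003"]  # hypothetical paper IDs from the memo

export_request = {
    "tool": "export_citations",
    "args": {"paper_ids": closest_paper_ids, "format": "bibtex"},  # arg names are illustrative
}

print(export_request["tool"], "->", export_request["args"]["format"])
```

The resulting .bib file should contain every paper cited in the memo, so the user can drop it straight into the LaTeX project.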
Read if needed: