This skill should be used when the user asks to "survey related work", "find prior art", "literature review", "what papers are related to mine", "search for references", or needs to conduct a systematic literature survey. Reads thesis from .papermill/state.md, searches academic sources, classifies references, identifies gaps, generates BibTeX, and updates the state file. Can launch the surveyor agent for deep autonomous search.
From the papermill plugin: `npx claudepluginhub queelius/claude-anvil --plugin papermill`. This skill uses the workspace's default tool permissions.
Conduct a collaborative, iterative literature survey with the user. This is not a one-shot dump of references -- work together to map the landscape of existing work, identify gaps, and position the user's contribution.
Begin by gathering everything you need to understand the research context.
Read .papermill/state.md in the project root (Read tool), if it exists. Extract:
- prior_art entries (key references, gaps, last survey date).

If .papermill/state.md does not exist, the survey can still proceed -- ask the user to describe their research topic and thesis directly. Suggest running /papermill:init afterward to persist the results.
Read existing .bib file(s) (Glob/Read tools). Scan for all BibTeX files in the project (commonly references.bib, paper/references.bib, or similar). These are seed references. Parse out author names, titles, years, and keywords -- these seeds will anchor the search.
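As one way to extract the seeds, here is a minimal sketch: a regex scan rather than a full BibTeX parser, assuming one field per line and no nested braces (the function name and entry shape are illustrative, not part of the skill):

```python
import re

def extract_seed_refs(bib_text: str) -> list[dict]:
    """Pull citation key, title, author, and year from raw BibTeX text.

    A lightweight regex scan, not a full BibTeX parser: it assumes
    one field per line and no nested braces, which is enough to
    anchor search queries on seed references.
    """
    seeds = []
    # Match entries of the form @type{key, ... } with the closing
    # brace on its own line.
    for entry in re.finditer(r"@\w+\{([^,]+),(.*?)\n\}", bib_text, re.S):
        key, body = entry.group(1).strip(), entry.group(2)
        # Capture simple field = {value} or field = "value" pairs.
        fields = dict(re.findall(r"(\w+)\s*=\s*[{\"]([^}\"]*)[}\"]", body))
        seeds.append({
            "key": key,
            "title": fields.get("title", ""),
            "author": fields.get("author", ""),
            "year": fields.get("year", ""),
        })
    return seeds
```

In practice a dedicated parser (e.g. the bibtexparser package) is more robust against brace nesting and multi-line fields; the sketch above only shows the intent.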
Summarize your understanding back to the user before proceeding. Confirm: "Here is what I understand your paper is about, and here are the N existing references I found. Shall I begin the survey from this starting point?"
Do NOT skip the confirmation step. The user may want to adjust scope, add keywords, or exclude certain directions.
From the thesis and seed references, derive 5-10 search queries covering the paper's core concepts, methods, and application domains.
Present the proposed queries to the user. Ask: "Are there additional terms, subfields, or authors I should include? Any directions to exclude?"
Revise the query list based on feedback before searching.
Use WebSearch (WebSearch tool) to query academic sources, running several targeted searches for each query.
Use multiple formulations per concept -- rephrase, use synonyms, try with and without quotes around key phrases. Academic search is noisy; redundancy is essential.
For each query, collect the top results. Do not chase every link. Focus on results that appear in multiple searches or that are highly cited.
For each candidate result, extract and present:
| Field | Description |
|---|---|
| Title | Full title of the work |
| Authors | First author et al. (or all if few) |
| Year | Publication year |
| Venue | Journal, conference, or preprint server |
| Summary | 1-2 sentence abstract summary focused on relevance to the user's thesis |
| Relevance | Brief note on why this may matter for the user's paper |
Do NOT fabricate citations. If you cannot verify a reference exists, say so explicitly. Mark any reference you are uncertain about with "[unverified]" and suggest the user confirm it.
Categorize each relevant reference into one of four classes:
Foundational: Established the field, method, or theoretical framework the user builds on. These typically appear in the introduction and background sections.
Competing: Addresses the same problem as the user but with a different approach. These require the most careful discussion. They appear in the related work section and sometimes in the discussion.
Complementary: Addresses a related but distinct problem. The user's work could combine with theirs, or theirs provides tools/data the user leverages.
Tangential: Loosely related. Useful for context but not central. Include sparingly.
Present your classification rationale. The user may reclassify references, and that is expected and valuable.
Show 3-5 references at a time. For each batch, present the classifications and gather the user's confirm, reject, or reclassify decisions before moving on.
Iterate. Run additional searches based on what you learn. Follow citation chains: if a confirmed reference cites something interesting, pursue it. If a confirmed reference is cited by many others, examine the citing papers.
Continue until the user says the coverage is sufficient or you have exhausted productive search directions.
After accumulating confirmed references, synthesize a gap analysis: what the existing literature covers well, what remains open, and where the user's contribution fits.
Present this as a structured summary. This analysis directly feeds into the paper's introduction and related work sections.
For each confirmed reference that is not already in the .bib file, draft a BibTeX entry and present it to the user for approval.
After approval, append the new entries to the appropriate .bib file (Edit tool).
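For instance, an appended entry might follow this shape (a sketch only: the key, authors, and venue are placeholders rather than a real citation, and the note field is one optional way to record the classification):

```bibtex
@inproceedings{doe2023example,
  author    = {Doe, Jane and Roe, Richard},
  title     = {A Placeholder Title for a Confirmed Reference},
  booktitle = {Proceedings of an Example Conference},
  year      = {2023},
  note      = {Classification: competing},
}
```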
Update .papermill/state.md (Edit tool) with:
- prior_art.key_references: Add each confirmed reference with citation key, classification, and a one-sentence relation description.
- prior_art.last_survey: Set to today's date.
- prior_art.gaps: A concise summary of identified gaps.

Append a timestamped note to the markdown body documenting the survey.
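For illustration, the updated prior_art block might look like the following sketch, assuming state.md keeps its structured fields in YAML frontmatter (all keys, dates, and values here are placeholders; mirror the file's actual layout):

```yaml
prior_art:
  key_references:
    - key: doe2023example        # citation key (placeholder)
      class: competing           # foundational | competing | complementary | tangential
      relation: "Tackles the same problem with a different method."
  last_survey: 2025-01-15        # today's date
  gaps: "No existing work evaluates X under constraint Y."
```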
If the user wants broader coverage, offer to launch the surveyor agent (Task tool with subagent_type: "papermill:surveyor"):
I can launch the surveyor agent for a deeper autonomous search. It will systematically explore citation networks and compile an extended reference list. Would you like me to launch it?
Only offer this after the interactive survey has established a solid baseline.
When launching, pass the agent:
- The confirmed references so far (.bib entries or key papers).

The agent writes its results to .papermill-survey-results.md in the project root. After it completes:
- Read .papermill-survey-results.md (Read tool).
- After user approval, append new entries to the .bib file (Edit tool).
- Update prior_art in .papermill/state.md with any new key references and the refined gap analysis (Edit tool).

Close with a structured summary:
Survey Summary
- References found: N total (F foundational, C competing, X complementary)
- New BibTeX entries added: M
- Key gaps identified: [list]
- Suggested positioning: [1-2 sentences]
- Coverage assessment: [honest assessment of completeness]
Based on what the survey revealed, suggest the most relevant next step:
- /papermill:thesis to sharpen the claim.
- /papermill:outline
- /papermill:experiment
- /papermill:review