Systematic multi-database literature survey. 3 researchers cover non-overlapping sources (databases, time periods, or methodologies) with citation-chain knowledge transfer. TRIGGER when: the user wants a literature survey, systematic review, or paper survey, or wants to cover multiple databases or time periods. DO NOT TRIGGER when: the user wants to optimize a metric (use autoconference) or read a single paper (use scientific-reading).
`npx claudepluginhub wjgoarxiv/autoconference-skill`
*Run a multi-researcher literature survey where each researcher covers non-overlapping sources, then synthesizes findings via citation-chain knowledge transfer.*
The Session Chair is relentless. Once the survey begins:
- If max_rounds is 2, run 2 complete rounds.
- Ending because max_rounds is exhausted means the budget was spent; this is normal, not failure.

Before launching any survey, the Session Chair MUST ask the user these questions. Do NOT assume defaults — present options and wait for the user's answers.
Ask: "What is the topic or research question for this literature survey?"
Accept a free-form description. This becomes the lens for all searches and the basis for auto-generating taxonomy categories.
Ask: "How should researchers divide their search coverage?"
Present options:
- By database (recommended for broad coverage)
- By time period (recommended for tracking evolution of a field)
- By methodology (recommended for contrasting research designs)
- Custom: the user defines their own partitioning scheme (e.g., by language, geography, or application domain)
Record the confirmed partitioning before proceeding.
Ask: "What is the minimum number of papers each researcher should find per taxonomy category? (Default: 5)"
If the user accepts the default, use 5. Record this as min_papers.
Ask: "What categories should papers be organized into? You can provide a list, or I can auto-generate categories from your research topic."
Record the confirmed taxonomy as a numbered list.
Ask: "How many survey rounds should be run? (Default: 2 — surveys usually need fewer rounds than optimization loops)"
If the user accepts the default, set max_rounds = 2. Record this value.
The Session Chair (you, the orchestrating agent) owns the survey lifecycle, from the pre-survey interview through writing the final survey-report.md.

Each round consists of 4 phases executed in sequence. Do not skip phases.
Spawn 3 researcher agents in parallel. Each agent receives the research topic, its assigned non-overlapping sources, the taxonomy, and the per-category coverage target (min_papers).

Each researcher MUST:
Search their assigned sources using the research topic. Use WebSearch and WebFetch to find papers. Try multiple search queries — not just the exact topic phrase. Use synonyms, sub-topics, and related terms to maximize coverage.
For each paper found, extract and record: title, authors, year, venue, a summary, and key findings.
Organize findings into the taxonomy. Produce a table or structured list grouped by category.
Self-assess coverage using a qualitative 1–10 score per category.
Log results to researcher_{ID}_results.tsv in the conference working directory. Format:
category\ttitle\tauthors\tyear\tvenue\tsummary\tkey_findings\tcoverage_score
Report citation leads — a list of papers encountered in references or related work sections that the researcher could not access from their assigned sources but may be findable by other researchers.
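As a minimal sketch of the logging step (assuming each paper is held as a plain dict whose keys mirror the TSV columns above; the helper name `log_result` and the absence of a header row are illustrative choices, not requirements of the protocol):

```python
import csv
from pathlib import Path

# Column order must match the agreed TSV format exactly.
FIELDS = ["category", "title", "authors", "year", "venue",
          "summary", "key_findings", "coverage_score"]

def log_result(researcher_id: str, workdir: Path, paper: dict) -> None:
    """Append one paper record to researcher_{ID}_results.tsv (no header row assumed)."""
    path = workdir / f"researcher_{researcher_id}_results.tsv"
    with path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter="\t")
        writer.writerow({k: paper.get(k, "") for k in FIELDS})
```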
The Session Chair collects all researcher outputs and runs the poster session analysis:
2a. Aggregate results
Build a unified table: for each taxonomy category, list all papers found across all researchers, with researcher ID noted.
2b. Citation overlap analysis
Identify papers that appear in multiple researchers' results (cited by multiple researchers or independently discovered). These high-overlap papers are likely foundational — flag them as anchor papers.
2c. Coverage gap analysis
For each taxonomy category, count the unique papers found and flag categories still below min_papers as coverage gaps.

2d. Citation chain knowledge transfer
This is the key differentiator from independent search. The Session Chair synthesizes citation leads across researcher boundaries:
Produce a Citation Lead Dossier listing:
2e. Print summary to terminal:
=== POSTER SESSION: ROUND {N} ===
Papers found this round: {total}
Anchor papers (multi-researcher): {count}
Coverage gaps: {list of categories below min_papers}
Citation leads generated: {count}
Coverage metric: {categories_at_or_above_min} / {total_categories}
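A minimal sketch of the poster-session bookkeeping, assuming the three researcher TSV files have already been parsed into lists of dicts keyed by the same field names; the dedup key (lowercased title) and the function name are illustrative assumptions:

```python
from collections import defaultdict

def poster_session_summary(results_by_researcher: dict[str, list[dict]],
                           taxonomy: list[str], min_papers: int,
                           citation_leads: list[str], round_n: int) -> None:
    found_by = defaultdict(set)       # dedup key -> researcher IDs that found it
    per_category = defaultdict(set)   # category -> dedup keys
    for rid, rows in results_by_researcher.items():
        for row in rows:
            key = row["title"].strip().lower()   # crude dedup across researchers
            found_by[key].add(rid)
            per_category[row["category"]].add(key)

    anchors = [t for t, rids in found_by.items() if len(rids) >= 2]
    gaps = [c for c in taxonomy if len(per_category[c]) < min_papers]
    covered = len(taxonomy) - len(gaps)

    print(f"=== POSTER SESSION: ROUND {round_n} ===")
    print(f"Papers found this round: {len(found_by)}")
    print(f"Anchor papers (multi-researcher): {len(anchors)}")
    print(f"Coverage gaps: {', '.join(gaps) if gaps else 'none'}")
    print(f"Citation leads generated: {len(citation_leads)}")
    print(f"Coverage metric: {covered} / {len(taxonomy)}")
```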
Spawn a Reviewer agent (use highest available model for rigor) to review the accumulated findings:
The Reviewer checks:
Accuracy of summaries — spot-check 3–5 papers by fetching their abstracts directly. Do the researcher summaries accurately reflect the paper content? Flag any misrepresentation.
Categorization correctness — are papers assigned to the right taxonomy category? Flag any papers that seem miscategorized.
Obvious missing papers — given the research topic and taxonomy, are there well-known papers or research groups that should appear but don't? List up to 5 specific suggestions.
Balance across categories — is coverage suspiciously skewed (e.g., one category has 20 papers while others have 0)? If so, flag it.
Quality of citation leads — are the citation lead dossier suggestions actionable and relevant?
The Reviewer records these flags and suggestions in a short peer-review-round-{N}.md.
The Session Chair processes the peer review and prepares for the next round:
Update the shared citation graph — merge all newly discovered papers and citation relationships into a running graph (text format):
[Paper A] --cites--> [Paper B]
[Paper C] --cites--> [Paper A]
Incorporate peer review flags — if accuracy issues were found, add a correction task for the next round. If miscategorizations were found, move papers to correct categories now.
Assign gap-filling tasks — based on the Citation Lead Dossier and peer review missing-paper suggestions, assign specific search tasks to each researcher for the next round:
Researcher A (next round): Chase citation lead [Paper Y]; try search query "[alternative term]" in PubMed
Researcher B (next round): Investigate Category Z gap; search for [specific suggested papers]
Researcher C (next round): Verify claim in [Paper X]; search for RCT cited in [meta-analysis title]
Compute convergence metric:
coverage_metric = (categories with >= min_papers) / total_categories
If coverage_metric == 1.0, the coverage target is met. Note this — the survey may conclude after this round completes.
Print status:
=== ROUND {N} COMPLETE ===
Coverage metric: {coverage_metric:.2f} ({categories_at_min}/{total})
Target met: YES / NO
Budget remaining: {max_rounds - N} rounds
Next round: AUTO-STARTING
If the target is met (early or on the final round), or the budget is exhausted: proceed to Final Output. If budget remains and the target is not met: begin Round N+1 immediately with the gap-filling assignments.
Stop the survey when ANY of the following is true:
- coverage_metric == 1.0 (all categories have >= min_papers papers) — coverage target met
- N == max_rounds — budget exhausted (this is a normal, expected stopping point)

On stopping, proceed immediately to Final Output.
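A minimal sketch of the convergence check and stopping rule (the per-category counts would come from the poster-session aggregation; `should_stop` is an illustrative name):

```python
def coverage_metric(counts_per_category: dict[str, int], min_papers: int) -> float:
    """Fraction of taxonomy categories that have reached min_papers."""
    met = sum(1 for n in counts_per_category.values() if n >= min_papers)
    return met / len(counts_per_category)

def should_stop(counts_per_category: dict[str, int], min_papers: int,
                round_n: int, max_rounds: int) -> bool:
    # Stop on full coverage OR when the round budget is spent.
    return (coverage_metric(counts_per_category, min_papers) == 1.0
            or round_n >= max_rounds)
```

For example, coverage_metric({"A": 6, "B": 3}, 5) returns 0.5, so with budget remaining the survey would continue into another round.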
Write survey-report.md to the conference working directory. The report must contain all of the following sections:
3–5 paragraphs covering:
A complete table with one row per paper, organized by category. Columns:
| Category | Title | Authors | Year | Venue | Key Findings | Methodology |
|---|---|---|---|---|---|---|
Sort within each category by year (descending).
Text-format citation graph showing the key citation chains discovered across researchers. Format:
[CATEGORY: {name}]
[Paper A (Year)] --cites--> [Paper B (Year)] --cites--> [Paper C (Year)]
[Paper D (Year)] --cites--> [Paper A (Year)]
[CROSS-CATEGORY LINKS]
[Paper E (Category 1)] --cites--> [Paper F (Category 3)]
List anchor papers (cited by 2+ researchers or discovered via multiple chains) with a note: (ANCHOR).
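A minimal sketch of how the text-format graph could be rendered from accumulated (citing, cited) title pairs, assuming each title's category and the anchor set are known from earlier phases; all names here are illustrative:

```python
from collections import defaultdict

def render_citation_graph(edges: list[tuple[str, str]],
                          category_of: dict[str, str],
                          anchors: set[str]) -> str:
    """Group '--cites-->' lines by category and list cross-category links separately."""
    def label(title: str) -> str:
        return f"[{title}]" + (" (ANCHOR)" if title in anchors else "")

    by_category = defaultdict(list)
    cross_links = []
    for citing, cited in edges:
        cat_a, cat_b = category_of.get(citing), category_of.get(cited)
        line = f"{label(citing)} --cites--> {label(cited)}"
        if cat_a == cat_b:
            by_category[cat_a or "UNCATEGORIZED"].append(line)
        else:
            cross_links.append(line)

    blocks = [f"[CATEGORY: {cat}]\n" + "\n".join(lines)
              for cat, lines in by_category.items()]
    if cross_links:
        blocks.append("[CROSS-CATEGORY LINKS]\n" + "\n".join(cross_links))
    return "\n\n".join(blocks)
```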
A text-format table showing coverage per category per researcher:
| Category | Researcher A | Researcher B | Researcher C | Total |
|---|---|---|---|---|
| Category 1 | 5 | 3 | 4 | 12 |
| Category 2 | 2 | 6 | 1 | 9 |
| ... |
Color coding (in markdown bold/italics):
- Bold: count at or above min_papers (coverage target met)
- Italics: count below min_papers (gap)
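A minimal sketch of tallying the coverage matrix from the merged researcher results (plain dict counting; field names follow the TSV format defined earlier, and the function name is illustrative):

```python
from collections import Counter

def coverage_matrix(results_by_researcher: dict[str, list[dict]],
                    taxonomy: list[str]) -> dict[str, Counter]:
    """Return category -> Counter of paper counts per researcher, ready for a markdown table."""
    matrix = {cat: Counter() for cat in taxonomy}
    for rid, rows in results_by_researcher.items():
        for row in rows:
            if row["category"] in matrix:
                matrix[row["category"]][rid] += 1
    return matrix
```

Each cell can then be wrapped in bold or italics according to the color-coding rule above before the row is written into the report table.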
A structured list of:
- Categories still below the coverage target (fewer than min_papers papers) — with specific search suggestions

A brief description of the survey protocol used.
This skill produces a literature foundation that feeds naturally into other skills:
- autoconference:survey → autoconference: Use the survey's taxonomy and anchor papers as the background section of a conference.md to bootstrap an optimization conference with an established literature base.
- autoconference:survey → autoconference:ship: Pass survey-report.md to the ship skill to format the survey findings as a publication-ready systematic review.

When chaining, note which papers are anchor papers — these are the highest-priority references to include in the downstream conference or publication.