Initializes a Zotero-based research project: creates collections, searches for and imports papers on the topic, analyzes them, and generates a literature review and/or a research proposal.

Arguments: `topic` (required), `scope`, `output_type`.
From claude-scholar. Install with:

```
npx claudepluginhub galaxy-dawn/claude-scholar --plugin claude-scholar
```

Launch a complete literature survey workflow for the research topic "$topic", with scope "$scope" and output type "$output_type".
```
/research-init "transformer interpretability"
/research-init "few-shot learning" focused
/research-init "neural architecture search" broad both
```
Execute the following steps in order:
**Step 1: Create the collection structure**

- Use `zotero_create_collection` to create the main collection, named `Research-{Topic}-{YYYY-MM}` (extract a short PascalCase keyword from the topic; use the current year and month).
- Create the sub-collections `Core Papers`, `Methods`, `Applications`, `Baselines`, and `To-Read`.
- Record the `collection_key` for each sub-collection (needed for import in Step 2).

**Step 2: Search and import papers**

- Use `zotero_search_items` with the DOI string, when available, to find potential matches, then call `zotero_get_item_metadata` on the results to confirm the DOI field matches exactly.
- Before calling `zotero_add_items_by_identifier`, check whether the chosen URL is likely an abstract-only page: the URL contains `abstract`, the page title/heading is an abstract listing, the page body lacks PDF/full-text links, and no DOI/arXiv identifier is visible. Route such sources to `To-Read` only, never as a confirmed paper source for `Core Papers`, `Methods`, `Applications`, or `Baselines`, and report `Skipped abstract-only page; searching better source`.
- Import with `zotero_add_items_by_identifier`, passing the target sub-collection's `collection_key`, `attach_pdf=true`, and `fallback_mode="webpage"`.
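The naming rule in Step 1 can be sketched as a small helper. This is an illustrative sketch only: the function name and the keyword heuristic (up to three title-cased words from the topic) are assumptions, not part of the plugin.

```python
import re
from datetime import date
from typing import Optional

def collection_name(topic: str, today: Optional[date] = None) -> str:
    """Build Research-{Topic}-{YYYY-MM} from a free-text topic.

    Keyword extraction here is a simple heuristic: take up to three
    alphabetic words from the topic and join them in PascalCase.
    """
    today = today or date.today()
    words = re.findall(r"[A-Za-z]+", topic)[:3]
    keyword = "".join(w.capitalize() for w in words)
    return f"Research-{keyword}-{today:%Y-%m}"
```

For example, the topic `"transformer interpretability"` in March 2025 would yield `Research-TransformerInterpretability-2025-03`.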
Use `fallback_mode="webpage"` only when no reliable DOI/arXiv identifier is found. Entries saved as `webpage` must never be counted as paper imports in `Core Papers`, `Methods`, `Applications`, or `Baselines`.
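The abstract-only check from Step 2 can be sketched as a pure predicate. The signal names mirror the checklist above; the regexes are simplified illustrative assumptions, not the plugin's actual detection logic.

```python
import re

def looks_abstract_only(url: str, title: str, body: str) -> bool:
    """Heuristic from the checklist: route True results to To-Read only."""
    url_hit = "abstract" in url.lower()
    title_hit = "abstract" in title.lower()
    no_fulltext = not re.search(r"\b(pdf|full[- ]?text)\b", body, re.I)
    no_identifier = not re.search(
        r"\b(10\.\d{4,9}/\S+|arxiv:\s*\d{4}\.\d{4,5})\b", body, re.I
    )
    # Abstract-only when it matches the listing signals and offers
    # neither a full-text link nor a citable identifier.
    return (url_hit or title_hit) and no_fulltext and no_identifier
```

A page that passes this check should produce the `Skipped abstract-only page; searching better source` status rather than a confirmed paper import.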
- If an entry's status is `Saved as webpage`, move or create that entry only in `To-Read`.
- `zotero_add_items_by_identifier(..., attach_pdf=true)` already runs the PDF cascade by default. If the import result says the PDF was not attached, optionally call `zotero_find_and_attach_pdfs({ item_keys: [...] })` for a second pass. If it still fails, log it and continue.
- Run `zotero_reconcile_collection_duplicates` on the main research collection with:
  - `collection_key` = {main research collection key}
  - `include_subcollections` = `true`
  - `dry_run` = `false`
  - `reconcile_local_only` = `true`
  - `local_db_fallback` = `false`

  Note that `dry_run=false` requires Zotero MCP write/delete permission, which means `UNSAFE_OPERATIONS=items` must already be enabled. Never set `local_db_fallback=true` automatically inside `/research-init`; only mention it in debug/recovery mode if residual duplicates remain after the standard pass.

Report the import results in a table:

| Input | Zotero Key | Collection | Status |
|-------|------------|------------|--------|
| ... | ABC123 | Core Papers | Imported as paper + PDF attached |
Status should use only user-facing phrases:
- `Imported as paper + PDF attached`
- `Imported as paper`
- `Saved as webpage + PDF attached`
- `Saved as webpage`
- `Import failed`
- `Skipped abstract-only page; searching better source`
- `Collection dedupe summary: duplicate groups 0, duplicates trashed 0`
- `Collection dedupe summary: duplicate groups N, duplicates trashed M`
- `Missing PDF postpass: repaired 0 items`
- `Missing PDF postpass: repaired N items`

Take the dedupe numbers from the `zotero_reconcile_collection_duplicates` summary; do not invent them. Do not print `route=...`, `pdf_source=...`, `fallback_reason=...`, `local_item_key=...`, or other reconcile internals in the default terminal output; reserve those details for runs with `ZOTERO_MCP_DEBUG_IMPORT=1`.

Note: Zotero items can still be added to or removed from collections later, but `/research-init` should prefer correct `collection_key` assignment during import so analytical sub-collections stay clean. The default command path should rely on `zotero_reconcile_collection_duplicates` as the standard postpass cleanup, not the older item-by-item local reconcile helper.
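One way to keep the terminal output honest is to validate every printed status against the fixed vocabulary above. `is_user_facing_status` is a hypothetical helper sketched for illustration, not part of the plugin; the parameterized phrases (`N`, `M`) are matched with `\d+`.

```python
import re

# The exact user-facing phrases listed above; N/M become digit patterns.
ALLOWED_STATUSES = [
    r"Imported as paper \+ PDF attached",
    r"Imported as paper",
    r"Saved as webpage \+ PDF attached",
    r"Saved as webpage",
    r"Import failed",
    r"Skipped abstract-only page; searching better source",
    r"Collection dedupe summary: duplicate groups \d+, duplicates trashed \d+",
    r"Missing PDF postpass: repaired \d+ items",
]

def is_user_facing_status(status: str) -> bool:
    """True only for the exact phrases the command may print."""
    return any(re.fullmatch(p, status) for p in ALLOWED_STATUSES)
```

Debug-only strings such as `route=...` or `pdf_source=...` fail this check and should never reach the default output.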
When in doubt between a full-paper source and an abstract-only page for the same title, always prefer the full-paper source, even if the abstract-only page ranks higher in search results. Use canonical Zotero MCP tool names consistently in this workflow: `zotero_create_collection`, `zotero_search_items`, `zotero_get_item_metadata`, `zotero_add_items_by_identifier`, `zotero_find_and_attach_pdfs`, and `zotero_reconcile_collection_duplicates`.
**Step 3: Analyze the papers**

- Use `zotero_get_collection_items` to list the imported papers.
- Use `zotero_get_item_metadata` with `include_abstract: true` to get metadata and abstracts (this ensures abstracts are available as a fallback if full-text retrieval fails).
- Use `zotero_get_item_fulltext` to read the full text of papers with PDFs.
- Write the analysis notes into `literature-review.md` (they are not a separate output file).

**Step 4: Generate outputs**

Generate the corresponding files based on output_type "$output_type":
Use the Zotero REST API with `?format=bibtex` to export accurate, complete BibTeX entries:

```
GET https://api.zotero.org/users/{user_id}/collections/{collection_key}/items?format=bibtex
```
Note: the REST API `?format=bibtex` on a collection only exports items directly in that collection, not items in sub-collections. You must iterate over each sub-collection key individually, or collect all item keys and use the items endpoint:

```
GET https://api.zotero.org/users/{user_id}/items?itemKey=KEY1,KEY2,...&format=bibtex
```

As a last resort, build entries from `zotero_get_item_metadata` metadata (note: the volume, issue, pages, and publisher fields are not available via this tool, so entries will be incomplete).

Use TodoWrite to track progress throughout the workflow.
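Batching collected item keys against the items endpoint can be sketched as a URL builder. This is a sketch under assumptions: the 50-keys-per-request chunk size is an assumption about the API's request limits, and `user_id` and the keys are placeholders.

```python
from typing import Iterable, List

BASE = "https://api.zotero.org/users/{user_id}/items"

def bibtex_urls(user_id: str, item_keys: Iterable[str], chunk: int = 50) -> List[str]:
    """Build one ?format=bibtex URL per chunk of item keys."""
    keys = list(item_keys)
    urls = []
    for i in range(0, len(keys), chunk):
        joined = ",".join(keys[i:i + chunk])
        urls.append(f"{BASE.format(user_id=user_id)}?itemKey={joined}&format=bibtex")
    return urls
```

Fetching each URL and concatenating the responses yields a `references.bib` that covers items in every sub-collection, which the per-collection endpoint alone cannot do.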
If the default Zotero MCP path fails during execution, use these workflow fallbacks:
- `zotero_create_collection` fails → create the collection via the Zotero REST API directly.
- `zotero_add_items_by_identifier` fails → retry with a narrower identifier (an explicit DOI or arXiv ID). If the source is a publisher landing page or a direct PDF, let the smart importer try connector/browser-session rescue and optional Playwright-assisted PDF rescue first. If smart import still fails, use an out-of-band fallback such as a CrossRef metadata lookup (`https://api.crossref.org/works/{DOI}`) and retry the DOI-specific path, or save the page as a manual webpage.
- `zotero_get_item_fulltext` fails → use WebFetch on the paper's DOI URL to scrape the abstract, then fall back to the `abstractNote` from `zotero_get_item_metadata` plus domain knowledge.
- `zotero_find_and_attach_pdfs` fails → log and continue; PDFs are not required for analysis. If a needed paper still lacks a PDF, ask the user to attach it manually in Zotero Desktop and rerun the analysis later.
- `zotero_reconcile_collection_duplicates` fails → keep the import results, log that the postpass dedupe failed, and continue with analysis. In debug mode, inspect the tool's summary and consider rerunning with `local_db_fallback=true` only if local-only duplicates remain and the user explicitly wants aggressive cleanup.

Before finishing, verify:
- `Research-{Topic}-{YYYY-MM}` created with sub-collections
- `literature-review.md`, `references.bib`, and optionally `research-proposal.md` generated

The command generates the following files:
```
{project_dir}/
├── literature-review.md    # Structured literature review (with Zotero citations)
├── research-proposal.md    # Research proposal (if requested)
└── references.bib          # BibTeX references
```
This command will use:

- `research-ideation` - Research ideation methodology
- `literature-reviewer` - Literature search and analysis

Related commands:

- `/zotero-review` - Analyze existing Zotero collections
- `/zotero-notes` - Batch generate reading notes