From scientific-method
Search for relevant academic and technical sources on a topic, build a structured bibliography in references.md, and download available articles. Use this skill whenever the user wants to find papers, survey existing research, build a reading list, gather citations, do a literature review, or explore what's been published on a topic -- even if they don't say 'literature review' explicitly.
npx claudepluginhub pipemind-com/pipemind-marketplace --plugin scientific-method

This skill is limited to using the following tools:
Searches for relevant sources on a given topic, builds a structured bibliography in `references.md`, and downloads available articles to `references/`. Works standalone or as a step in a larger research pipeline.
references.md is the single source of truth for all sources in a problem directory. Every source gets a stable REF-NNN ID that other skills and documents can cite.
Parse from the skill argument:
Create references/ if it does not exist:
bash -c "mkdir -p <problem-dir>/references"
Read existing <problem-dir>/references.md if present. Note the highest REF-NNN number so new entries continue the sequence. Collect all URLs already listed so you skip duplicates in later steps. If no file exists, start from REF-001.
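A minimal sketch of this bookkeeping step. The `problems/demo` directory and the seeded `references.md` below are hypothetical, included only so the extraction has input:

```shell
# Hypothetical problem directory, seeded so the extraction has input.
dir="problems/demo"
mkdir -p "$dir/references"
cat > "$dir/references.md" <<'EOF'
# References

## REF-001: Example Paper
- **URL:** https://example.org/paper1
EOF

# Highest existing REF-NNN; new entries continue from the next number.
last=$(grep -oE 'REF-[0-9]{3}' "$dir/references.md" | sort | tail -n 1)

# URLs already recorded, used to skip duplicates in later steps.
known_urls=$(grep -oE 'https?://[^ )>]+' "$dir/references.md" | sort -u)

echo "resume after: ${last:-REF-000}"
```

If `references.md` is absent, `last` stays empty and numbering starts at REF-001, matching the rule above.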
Tool selection: If mcp__mcp-semantic-scholar__search_papers is available (test by calling it — if it errors or is missing, fall back), use the MCP path below. Otherwise use the WebSearch path. Both paths feed the same Step 3.
Run 3-4 search_papers calls across two tracks:
Recency track (2 calls, limit 10 each):
Landmark track (1-2 calls, limit 10 each):
No date filter.
Citation snowball (MCP-only bonus): For the 2-3 most relevant papers found so far, call get_references (backward snowball) and get_citations (forward snowball) with limit 5-10. This surfaces connected papers that keyword search misses. Deduplicate against papers already collected.
For any paper where search_papers returned incomplete metadata, call get_paper with its ID to fill in authors, year, and open-access PDF links.
Run 4-6 WebSearch queries across two tracks. Both tracks use standard queries — no date operators. Recency is determined post-hoc in Step 3 by reading the publication year from each fetched source.
Recency track (target: cutting-edge work from the last 3 years). Run 2-3 queries approaching the topic from different angles:
Landmark track (target: foundational papers with no recency filter). Run 2-3 queries that surface older, definitional work:
No date filter on the landmark track.
For recency-track sources, keep only those published in the last 3 years. Older sources found in the recency track are discarded unless they also surface in the landmark track. Collect all result URLs across both tracks. Deduplicate against URLs already in references.md.
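The post-hoc recency filter can be sketched as follows; the tab-separated year/URL pairs are hypothetical stand-ins for fetched search results:

```shell
# Keep only recency-track sources published within the last 3 years.
year_now=$(date +%Y)
cutoff=$((year_now - 3))

# Hypothetical fetched results as "year<TAB>url" pairs: one recent, one old.
recent_year=$((year_now - 1))
old_year=$((year_now - 10))

kept=$(printf '%s\thttps://example.org/recent\n%s\thttps://example.org/old\n' \
    "$recent_year" "$old_year" |
  awk -F'\t' -v cutoff="$cutoff" '$1 >= cutoff { print $2 }')

# Only the recent source survives; the old one is dropped from this track.
echo "$kept"
```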
For each new source (up to 10 per run):
Quality gate: Before recording any source, assess its type. Peer-reviewed journal articles, conference papers, and established technical reports are included. Blog posts, opinion pieces, press releases, and low-signal web pages are silently dropped — they do not appear in references.md and no exclusion log is kept. Prefer depth over breadth: 5 authoritative sources outweigh 10 mediocre ones. Every included source must directly advance understanding of the topic.
If source came from MCP (search_papers, get_references, get_citations): metadata (title, authors, year, citation count, open-access URL) is already structured. Skip to relevance assessment (step 2 below). Use get_paper only if critical fields are missing.
If source came from WebSearch: follow the full fetch pipeline below.
Download the PDF when one is available (prefer the openAccessPdf URL when present):
- Filename: <first-author-lastname>-<year>-<slug>.pdf
- Save to: <problem-dir>/references/<filename>

Append new entries to <problem-dir>/references.md. If the file does not exist, create it with a header first.
Each entry format:
## REF-NNN: <Title>
- **Authors:** <names, or "Unknown">
- **Year:** <year, or "n.d.">
- **Type:** <Peer-reviewed article | Preprint | Web resource | Book | Other>
- **URL:** <url>
- **Downloaded:** <filename in references/ | Not available>
**Relevance:** <2-3 sentences: what this covers and why it matters to the topic>
---
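As an illustration, the `<first-author-lastname>-<year>-<slug>.pdf` download filename could be derived like this. The author, year, and title are made up, and the 40-character slug cap is an assumption, not something the skill specifies:

```shell
# Hypothetical source metadata.
author="Smith"
year="2023"
title="Large Language Models for Literature Review"

# Lowercase the title, map runs of non-alphanumerics to single hyphens,
# trim edge hyphens, and cap the slug length (cap chosen arbitrarily).
slug=$(printf '%s' "$title" |
  tr '[:upper:]' '[:lower:]' |
  tr -cs 'a-z0-9' '-' |
  sed 's/^-*//; s/-*$//' |
  cut -c1-40)

fname="$(printf '%s' "$author" | tr '[:upper:]' '[:lower:]')-$year-$slug.pdf"
echo "$fname"
```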
Output a summary listing the new REF-NNN entries added to references.md and the files downloaded to references/.