Provides Semantic Scholar Graph API deep research: backward citations, recommendations, batch paper lookups (up to 500 IDs), snippet search, multi-hop BFS citation graph traversal.
Purpose: fill the gaps that `semantic-scholar-lookup` (allenai) leaves — `references`, `recommendations`, `batch`, and multi-hop citation-graph traversal.
The skill ships two scripts, `ss_client.py` and `citation_graph.py` (detailed under Scripts below), and supports two execution modes:
**Direct `ss_client.py` calls.** Use when the user asks for one specific endpoint:

- `ss_client.py references <id>`
- `ss_client.py recommendations <id>`
- `ss_client.py batch ...`
- `ss_client.py snippets "..."`

Fast, cheap, no orchestration overhead.
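For example, a single direct call might look like this (the paper ID and field list are illustrative; subcommands and flags are those documented in the table below):

```bash
# One endpoint, one call; redirect because the script prints raw JSON
python3 ${SKILL_DIR}/scripts/ss_client.py references ARXIV:2005.11401 \
  --fields title,year,citationCount --limit 100 > refs.json
```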
**`deep-paper-researcher` subagent.** Use when the task is multi-step or would otherwise flood the context.
**Mandatory prompt contents.** The subagent runs in an isolated context with no access to this conversation's system reminders. Include exactly these two things:
1. The anchor date: `Today is YYYY-MM-DD.` Pull it from the `currentDate` system-reminder field, or run `date -I` via Bash before delegating if it's missing. Never rely on training-data intuitions about the current year.
2. The user's request itself (see the call template below).

Do NOT do any of these:
Call:

```python
Agent(
    subagent_type="deep-paper-researcher",
    description="<3–5 word task>",
    prompt="Today is 2026-04-22.\n\nUser's request: find 10 recent papers on AI Code Review on arXiv.\n\n<optional: output format hints, language preference>",
    # model: "opus"  ← add only when the user opts in (see below)
)
```
The subagent's Freshness Mode section handles classification; keep this layer thin.
The subagent's model frontmatter is sonnet — that's the default.
Override to Opus by passing `model: "opus"` to the Agent tool only if the user explicitly requests deeper reasoning. Triggers (any of):
Never auto-upgrade to Opus without a user signal — Sonnet handles the default literature-review workflow fine and costs less.
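When the user does opt in, the override is a single extra argument to the same call (a sketch mirroring the template above):

```python
Agent(
    subagent_type="deep-paper-researcher",
    description="deep novelty check",
    prompt="Today is 2026-04-22.\n\nUser's request: ...",
    model="opus",  # only on an explicit user signal
)
```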
Trigger this skill for: backward citations/references, recommendations, batch paper lookups, snippet search, and multi-hop citation-graph traversal.

Do NOT use it for requests better served by one of:

- `semantic-scholar-lookup` (faster, no Python)
- `web_search_advanced_exa` with `category: "research paper"` (Exa MCP)
- the `deep-paper-researcher` subagent, which orchestrates all three tools

Both scripts live under `${SKILL_DIR}/scripts/`.
`ss_client.py` — raw API client. Subcommands (all output JSON on stdout):
| Command | Endpoint | Notes |
|---|---|---|
| `search <query>` | `/graph/v1/paper/search` | `--bulk` switches to `/search/bulk` (up to 1000/page) |
| `paper <id>` | `/graph/v1/paper/{id}` | ID forms: raw, `DOI:`, `ARXIV:`, `CorpusId:`, `PMID:`, `URL:` |
| `citations <id>` | `/graph/v1/paper/{id}/citations` | paginated; up to 1000 per page |
| `references <id>` | `/graph/v1/paper/{id}/references` | paginated; up to 1000 per page |
| `recommendations <id>` | `/recommendations/v1/papers/forpaper/{id}` | `--pool recent\|all-cs` |
| `batch <id1> <id2> ...` | `POST /graph/v1/paper/batch` | up to 500 IDs |
| `author-search <query>` | `/graph/v1/author/search` | |
| `author <id>` | `/graph/v1/author/{id}` | |
| `author-papers <id>` | `/graph/v1/author/{id}/papers` | |
| `snippets <query>` | `/graph/v1/snippet/search` | full-text snippets |
Common flags: `--limit`, `--offset`, `--fields`, `--year`, `--fields-of-study`, `--venue`, `--min-citation-count`.
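A usage sketch tying the subcommands and flags together (query, IDs, and flag values are illustrative; `--year` is assumed to pass through the API's range syntax such as `2023-`):

```bash
# Recent, highly cited papers on a topic
python3 ${SKILL_DIR}/scripts/ss_client.py search "AI code review" \
  --year 2023- --min-citation-count 10 --limit 20 \
  --fields title,year,citationCount,externalIds > search.json

# Resolve several known papers in one request (up to 500 IDs)
python3 ${SKILL_DIR}/scripts/ss_client.py batch ARXIV:2005.11401 ARXIV:1706.03762 \
  --fields title,abstract,externalIds > batch.json
```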
`citation_graph.py` — BFS traversal:

```bash
python3 ${SKILL_DIR}/scripts/citation_graph.py <paperId> \
    --direction both \
    --depth 2 \
    --max-nodes 200 \
    --per-hop-limit 50 \
    --output graph.json
```
Directions: `forward` (citations), `backward` (references), `both`. The output schema is described in the script docstring — nodes: `{paperId → metadata+depth}`, edges: `[{src, dst, direction}]`.
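To summarize a finished traversal without dumping it into context, the schema above can be inspected with `jq` (assuming it is installed):

```bash
# Node count per BFS depth
jq '[.nodes[].depth] | group_by(.) | map({depth: .[0], count: length})' graph.json
# Edge count per direction
jq '.edges | group_by(.direction) | map({direction: .[0].direction, count: length})' graph.json
```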
`SEMANTIC_SCHOLAR_API_KEY` env var: much higher rate limits. On 429 responses, back off for the duration given in the `Retry-After` header.

- `references/endpoints.md` — complete field list per endpoint + query examples
- `references/workflows.md` — lit-review, novelty-check, seed-expansion patterns

Scripts emit raw JSON — redirect to files for anything beyond ~20 results. For graphs >50 nodes, always pass `--output graph.json` to avoid flooding the conversation context.
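When hitting the API directly rather than through the scripts, a minimal backoff loop honoring `Retry-After` could look like this (the endpoint is from the table above; `x-api-key` is Semantic Scholar's documented auth header):

```bash
url='https://api.semanticscholar.org/graph/v1/paper/search?query=ai+code+review&limit=5'
for attempt in 1 2 3; do
  status=$(curl -s -o body.json -D headers.txt -w '%{http_code}' \
    -H "x-api-key: ${SEMANTIC_SCHOLAR_API_KEY}" "$url")
  [ "$status" != "429" ] && { cat body.json; break; }
  # Back off for the server-suggested duration, defaulting to 5s
  wait=$(grep -i '^retry-after:' headers.txt | tr -d '\r' | awk '{print $2}')
  sleep "${wait:-5}"
done
```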
Typical pipeline inside the `deep-paper-researcher` subagent:

1. `mcp__exa__web_search_advanced_exa` (neural + multi-source)
2. `ss_client.py search` / `batch` to get `paperId`s from titles or DOIs
3. `citation_graph.py` with the top 3–5 seeds

A paired subagent definition ships alongside the skill at `agents/deep-paper-researcher.md`. It orchestrates Exa MCP + allenai `semantic-scholar-lookup` + this skill's scripts into a token-isolated research agent with:
- an Anchor date / Mode / Window header

To install for Claude Code (manual, one-time):
```bash
cp ~/.agents/skills/semantic-scholar-deep/agents/deep-paper-researcher.md ~/.claude/agents/
```
(Path may differ on other agents — copy to the agent's subagents directory, then restart the session.)
Prerequisites for the full pipeline: Exa MCP connected, and the `allenai/asta-plugins@"Semantic Scholar Lookup"` skill installed.
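A quick post-install smoke test, assuming the paths above (the paper ID is an arbitrary public example):

```bash
ls ~/.claude/agents/deep-paper-researcher.md   # subagent definition in place
python3 ${SKILL_DIR}/scripts/ss_client.py paper ARXIV:2005.11401 --fields title,year   # API reachable
```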