Two-stage paper screening - abstract scoring then deep dive for specific data extraction
Two-stage paper screening: quick abstract scoring followed by deep data extraction from promising papers. Use when you have search results and need to identify which papers contain specific measurements, protocols, or datasets.
/plugin marketplace add kthorn/research-superpower
/plugin install kthorn-research-superpowers@kthorn/research-superpower

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Two-stage screening process: quick abstract scoring followed by deep dive into promising papers.
Core principle: Precision over breadth. Find papers that actually contain the specific data or methods the user needs, not just topically related papers.
Use this skill when:
Small searches (<50 papers):
Large searches (50-150 papers):
Very large searches (>150 papers):
Goal: Quickly identify promising papers
Score 0-10 based on:
Decision rules:
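One way to sketch the 0-10 rubric in code, using the component breakdown from the quick reference (Keywords 0-3 + Data type 0-4 + Specificity 0-3). The term lists are user-supplied assumptions, not part of the skill; simple substring matching is a minimal stand-in for real judgment:

```python
def score_abstract(abstract: str, keywords: list[str], data_terms: list[str],
                   specific_terms: list[str]) -> int:
    """Score an abstract 0-10: Keywords (0-3) + Data type (0-4) + Specificity (0-3)."""
    text = abstract.lower()
    # Count matching terms, capping each component at its maximum.
    keyword_pts = min(3, sum(1 for k in keywords if k.lower() in text))
    data_pts = min(4, 2 * sum(1 for d in data_terms if d.lower() in text))
    specific_pts = min(3, sum(1 for s in specific_terms if s.lower() in text))
    return keyword_pts + data_pts + specific_pts
```

A score of 7 or above triggers the deep dive per the scoring reference below.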
IMPORTANT: Report to user for EVERY paper:
🔍 [N/Total] Screening: "Paper Title"
Abstract score: 8 → Fetching full text...
or
🔍 [N/Total] Screening: "Paper Title"
Abstract score: 4 → Skipping (insufficient relevance)
Never screen silently - user needs to see progress happening
Goal: Extract specific data/methods from promising papers
If paper describes medicinal chemistry / SAR data:
Use skills/research/checking-chembl to check if paper is in ChEMBL database:
curl -s "https://www.ebi.ac.uk/chembl/api/data/document.json?doi=$doi"
If found in ChEMBL:
Continue to full text fetch for context, methods, discussion.
Try in order:
A. PubMed Central (free full text):
# Check if available in PMC
curl "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pmc&term=PMID[PMID]&retmode=json"
# If found, fetch full text XML via API
curl "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=pmc&id=PMCID&rettype=full&retmode=xml"
# Or fetch HTML directly (note: use pmc.ncbi.nlm.nih.gov, not www.ncbi.nlm.nih.gov/pmc)
curl "https://pmc.ncbi.nlm.nih.gov/articles/PMCID/"
B. DOI resolution:
# Try publisher link
curl -L "https://doi.org/10.1234/example.2023"
# May hit paywall - check response
C. Unpaywall (MANDATORY if paywalled): If step B hits a paywall, you MUST immediately try Unpaywall before giving up.
Use skills/research/finding-open-access-papers to find free OA version:
curl "https://api.unpaywall.org/v2/DOI?email=USER_EMAIL"
# Often finds versions in repositories, preprint servers, author copies
# IMPORTANT: Ask user for their email if not already provided - do NOT use claude@anthropic.com
Report to user:
⚠️ Paper behind paywall, checking Unpaywall...
✅ Found open access version at [repository/preprint server]
or
⚠️ Paper behind paywall, checking Unpaywall...
❌ No open access version available - continuing with abstract only
D. Preprints (direct):
https://www.biorxiv.org/content/10.1101/{doi}

If full text is unavailable AFTER trying Unpaywall:
CRITICAL: Do NOT skip Unpaywall check. Many paywalled papers have free versions in repositories.
Focus on sections:
What to look for (adapt to research domain):
Use grep/text search (adapt search terms):
# Examples for different domains
grep -i "IC50\|Ki\|MIC" paper.xml # Medicinal chemistry
grep -i "expression\|FPKM\|RNA-seq" paper.xml # Genomics
grep -i "abundance\|population\|sampling" paper.xml # Ecology
grep -i "algorithm\|github\|code" paper.xml # Computational
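A Python equivalent of the grep pass, for when the full text is already in memory. The domain patterns mirror the grep examples above; adapt them to the research question:

```python
import re

# Domain keyword patterns (illustrative; adapt to the research domain).
DOMAIN_TERMS = {
    "medchem": r"IC50|Ki\b|MIC\b",
    "genomics": r"expression|FPKM|RNA-seq",
    "ecology": r"abundance|population|sampling",
    "computational": r"algorithm|github|code",
}

def find_data_mentions(fulltext: str, pattern: str) -> list[str]:
    """Return the lines of the full text that mention data-bearing terms."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [line.strip() for line in fulltext.splitlines() if rx.search(line)]
```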
Create structured extraction (adapt to research domain):
Example 1: Medicinal chemistry
{
"doi": "10.1234/medchem.2023",
"title": "Novel kinase inhibitors...",
"relevance_score": 9,
"findings": {
"data_found": [
"IC50 values for compounds 1-12 (Table 2)",
"Selectivity data (Figure 3)",
"Synthesis route (Scheme 1)"
],
"key_results": [
"Compound 7: IC50 = 12 nM",
"10-step synthesis, 34% yield"
]
}
}
Example 2: Genomics
{
"doi": "10.1234/genomics.2023",
"title": "Gene expression in disease...",
"relevance_score": 8,
"findings": {
"data_found": [
"RNA-seq data for 50 samples (GEO: GSE12345)",
"Differential expression results (Table 1)",
"Gene set enrichment analysis (Figure 4)"
],
"key_results": [
"123 genes upregulated (FDR < 0.05)",
"Pathway enrichment: immune response"
]
}
}
Example 3: Computational methods
{
"doi": "10.1234/compbio.2023",
"title": "Novel alignment algorithm...",
"relevance_score": 9,
"findings": {
"data_found": [
"Algorithm pseudocode (Methods)",
"Code repository (github.com/user/tool)",
"Benchmark results (Table 2)"
],
"key_results": [
"10x faster than BLAST",
"98% accuracy on test dataset"
]
}
}
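All three examples share one record shape, so a single hypothetical helper can build them regardless of domain:

```python
def extraction_record(doi: str, title: str, score: int,
                      data_found: list[str], key_results: list[str]) -> dict:
    """Build the structured extraction record used in the examples above."""
    return {
        "doi": doi,
        "title": title,
        "relevance_score": score,
        "findings": {
            "data_found": data_found,
            "key_results": key_results,
        },
    }
```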
PDFs:
# If PDF available (sanitize the DOI's slashes for a safe filename)
curl -L -o "papers/$(echo "$doi" | tr '/' '_').pdf" "https://doi.org/$doi"
Supplementary data:
# Download SI files if URLs found (sanitize the DOI's slashes here too)
curl -o "papers/$(echo "$doi" | tr '/' '_')_supp.zip" "https://publisher.com/supp/file.zip"
CRITICAL: Use ONLY papers-reviewed.json and SUMMARY.md. Do NOT create custom tracking files.
CRITICAL: Add EVERY paper to papers-reviewed.json, regardless of score. This prevents re-reviewing papers and tracks complete search history.
Add to papers-reviewed.json:
For relevant papers (score ≥7):
{
"10.1234/example.2023": {
"pmid": "12345678",
"status": "relevant",
"score": 9,
"source": "pubmed_search",
"timestamp": "2025-10-11T10:30:00Z",
"found_data": ["IC50 values", "synthesis methods"],
"has_full_text": true,
"chembl_id": "CHEMBL1234567"
}
}
For not-relevant papers (score <7):
{
"10.1234/another.2023": {
"pmid": "12345679",
"status": "not_relevant",
"score": 4,
"source": "pubmed_search",
"timestamp": "2025-10-11T10:31:00Z",
"reason": "no activity data, review paper"
}
}
Always add papers even if skipped - this prevents re-processing and documents what was already checked.
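A minimal sketch of that tracking update, keyed by DOI so an already-reviewed paper is never processed twice. The function name and signature are hypothetical, not part of the skill:

```python
import json
from pathlib import Path

def record_paper(path: Path, doi: str, entry: dict) -> bool:
    """Add a paper to papers-reviewed.json; return False if already tracked."""
    reviewed = json.loads(path.read_text()) if path.exists() else {}
    if doi in reviewed:
        return False  # already reviewed - skip, don't overwrite history
    reviewed[doi] = entry
    path.write_text(json.dumps(reviewed, indent=2))
    return True
```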
Add to SUMMARY.md (examples for different domains):
Medicinal chemistry example:
### [Novel kinase inhibitors with improved selectivity](https://doi.org/10.1234/medchem.2023) (Score: 9)
**DOI:** [10.1234/medchem.2023](https://doi.org/10.1234/medchem.2023)
**PMID:** [12345678](https://pubmed.ncbi.nlm.nih.gov/12345678/)
**ChEMBL:** [CHEMBL1234567](https://www.ebi.ac.uk/chembl/document_report_card/CHEMBL1234567/)
**Key Findings:**
- IC50 values for 12 inhibitors (Table 2)
- Compound 7: IC50 = 12 nM, >80-fold selectivity
- Synthesis route (Scheme 1, page 4)
**Files:** PDF, supplementary data
Genomics example:
### [Transcriptomic analysis of disease progression](https://doi.org/10.1234/genomics.2023) (Score: 8)
**DOI:** [10.1234/genomics.2023](https://doi.org/10.1234/genomics.2023)
**PMID:** [23456789](https://pubmed.ncbi.nlm.nih.gov/23456789/)
**Data:** [GEO: GSE12345](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE12345)
**Key Findings:**
- RNA-seq data: 50 samples, 3 conditions
- 123 differentially expressed genes (FDR < 0.05)
- Immune pathway enrichment (Figure 3)
**Files:** PDF, supplementary tables with gene lists
Computational methods example:
### [Fast sequence alignment with novel algorithm](https://doi.org/10.1234/compbio.2023) (Score: 9)
**DOI:** [10.1234/compbio.2023](https://doi.org/10.1234/compbio.2023)
**Code:** [github.com/user/tool](https://github.com/user/tool)
**Key Findings:**
- New alignment algorithm (pseudocode in Methods)
- 10x faster than BLAST, 98% accuracy
- Benchmark datasets available
**Files:** PDF, code repository linked
IMPORTANT: Always make DOIs and PMIDs clickable links:
[10.1234/example.2023](https://doi.org/10.1234/example.2023)
[12345678](https://pubmed.ncbi.nlm.nih.gov/12345678/)

CRITICAL: Report to user as you work - never work silently!
For every paper, report:
🔍 [N/Total] Screening: "Title..."
Abstract score: X/10

For relevant papers, report findings immediately (adapt to domain):
Medicinal chemistry example:
🔍 [15/127] Screening: "Selective BTK inhibitors..."
Abstract score: 8 → Fetching full text...
✅ Found IC50 data for 8 compounds (Table 2)
✅ Selectivity data vs 50 kinases (Figure 3)
✅ Added to SUMMARY.md
Genomics example:
🔍 [23/89] Screening: "Gene expression in liver disease..."
Abstract score: 9 → Fetching full text...
✅ RNA-seq data available (GEO: GSE12345)
✅ 123 DEGs identified (Table 1, FDR < 0.05)
✅ Added to SUMMARY.md
Computational methods example:
🔍 [7/45] Screening: "Novel phylogenetic algorithm..."
Abstract score: 8 → Fetching full text...
✅ Code available (github.com/user/tool)
✅ Benchmark results (10x faster, Table 2)
✅ Added to SUMMARY.md
Update user every 5-10 papers with summary:
📊 Progress: Reviewed 30/127 papers
- Highly relevant: 3
- Relevant: 5
- Currently screening paper 31...
Why this matters: User needs to see work happening and provide feedback/corrections early
For medicinal chemistry papers:
skills/research/checking-chembl to find curated SAR data

During full text fetching:
skills/research/finding-open-access-papers (Unpaywall)

After finding relevant paper:
| Score | Meaning | Action |
|---|---|---|
| 0-4 | Not relevant | Skip, brief note in summary |
| 5-6 | Possibly relevant | Note for later, skip deep dive for now |
| 7-8 | Relevant | Deep dive, extract data, add to summary |
| 9-10 | Highly relevant | Deep dive, extract data, follow citations, highlight in summary |
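The table above maps directly to a dispatch function; the action names here are illustrative labels, not part of the skill:

```python
def action_for_score(score: int) -> str:
    """Map an abstract score (0-10) to the screening action from the table."""
    if score <= 4:
        return "skip"                            # not relevant; brief note only
    if score <= 6:
        return "note_for_later"                  # possibly relevant
    if score <= 8:
        return "deep_dive"                       # extract data, add to summary
    return "deep_dive_and_follow_citations"      # highly relevant; highlight
```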
When screening many papers (>20), consider creating a helper script:
Benefits:
Create in research session folder:
# research-sessions/YYYY-MM-DD-query/screen_papers.py
Key components:
For large-scale screening, use two-script pattern:
Script 1: Abstract Screening (screen_papers.py)
evaluated-papers.json with basic metadata

Script 2: Deep Dive (deep_dive_papers.py)
Benefits:
Script design:
When NOT to create helper script:
- **Not tracking all papers:** Only adding relevant papers to papers-reviewed.json → Add EVERY paper regardless of score to prevent re-review
- **Skipping Unpaywall:** Hitting a paywall and giving up → ALWAYS check Unpaywall first; many papers have free versions
- **Creating unnecessary files for small searches:** For <50 papers, use ONLY papers-reviewed.json and SUMMARY.md. For large searches (>100 papers), a structured evaluated-papers.json and auxiliary files (README.md, TOP_PRIORITY_PAPERS.md) add significant value and should be used
- **Too strict:** Skipping papers that mention data indirectly → Re-read the abstract carefully
- **Too lenient:** Deep diving into tangentially related papers → Focus on the specific data the user needs
- **Missing supplementary data:** Many papers hide key data in the SI → Always check for supplementary files
- **Silent screening:** User can't see progress → Report EVERY paper as you screen it
- **No periodic summaries:** User loses the big picture → Update every 5-10 papers
- **Non-clickable DOIs/PMIDs:** Plain-text identifiers → Always use markdown links
- **Re-reviewing papers:** Wastes time → Always check papers-reviewed.json first
- **Not using helper scripts:** Manually screening 100+ papers → Consider a batch script
| Task | Action |
|---|---|
| Check if reviewed | Look up DOI in papers-reviewed.json |
| Score abstract | Keywords (0-3) + Data type (0-4) + Specificity (0-3) |
| Get full text | Try PMC → DOI → Unpaywall → Preprints |
| Find data | Grep for terms, focus on Methods/Results/Tables |
| Download PDF | curl -L -o papers/FILE.pdf URL |
| Update tracking | Add to papers-reviewed.json + SUMMARY.md |
After evaluating paper:
skills/research/traversing-citations

Use this structure for research projects with 100+ papers:
Project Overview
Quick Start Guide
File Inventory
Key Findings Summary
Methodology
Next Steps
For datasets with >50 relevant papers, create curated priority list:
Example structure:
# Top Priority Papers
## Tier 1: Must-Read (Score 10)
### [Paper Title](https://doi.org/10.xxxx/yyyy) (Score: 10)
**DOI:** [10.xxxx/yyyy](https://doi.org/10.xxxx/yyyy)
**PMID:** [12345678](https://pubmed.ncbi.nlm.nih.gov/12345678/)
**Full text:** ✅ PMC12345678
**Key Findings:**
- Finding 1
- Finding 2
---
## Tier 2: High-Value (Score 8-9)
[Additional papers organized by priority...]