Generates concise summaries of research papers' core ideas and key points from PDF paths, arXiv URLs, or paper URLs. Auto-triggers on those inputs for quick understanding without deep analysis.
Install:

```bash
npx claudepluginhub alaliqing/claude-paper --plugin claude-paper
```
This skill generates a **concise summary** of a research paper's core ideas and key points.
When to use:
- The user provides a research paper as a local PDF path, arXiv URL, or paper URL and wants a quick grasp of its core ideas.

When NOT to use:
- The user needs deep analysis or comprehensive study materials (use /claude-paper:study instead).

Language Detection: Detect the user's language from their input and generate ALL materials in that language.
First-run setup: install the skill's dependencies if they are not present yet.

```bash
if [ ! -f "${CLAUDE_PLUGIN_ROOT}/.installed" ]; then
  echo "First run - installing dependencies..."
  cd "${CLAUDE_PLUGIN_ROOT}"
  npm install || exit 1
  # Install Python dependencies for image extraction
  python3 -m pip install pymupdf --user 2>/dev/null || pip3 install pymupdf --user 2>/dev/null || echo "Warning: Failed to install pymupdf"
  touch "${CLAUDE_PLUGIN_ROOT}/.installed"
  echo "Dependencies installed!"
fi
```
Supports multiple input formats:
- Local PDF path: ~/Downloads/paper.pdf
- arXiv PDF URL: https://arxiv.org/pdf/1706.03762.pdf
- arXiv abstract URL: https://arxiv.org/abs/1706.03762

```bash
USER_INPUT="<user-input>"

# Check if input is a URL (starts with http:// or https://)
if [[ "$USER_INPUT" =~ ^https?:// ]]; then
  # Download PDF from URL
  INPUT_PATH=$(node ${CLAUDE_PLUGIN_ROOT}/skills/study/scripts/download-pdf.cjs "$USER_INPUT")
else
  # Use local path directly
  INPUT_PATH="$USER_INPUT"
fi
```
For URLs, the download script will:
- Save the downloaded PDF to /tmp/claude-paper-downloads/
- Convert arXiv /abs/ URLs to PDF URLs automatically
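As an illustration only, a minimal sketch of that abs-to-PDF rewrite and download; the real logic lives in download-pdf.cjs, and the URL pattern and output directory shown here are assumptions taken from the bullets above:

```bash
# Hypothetical sketch of the download step; download-pdf.cjs may differ.
URL="https://arxiv.org/abs/1706.03762"
mkdir -p /tmp/claude-paper-downloads

# Rewrite arXiv /abs/<id> links to the direct /pdf/<id>.pdf form
if printf '%s' "$URL" | grep -q 'arxiv\.org/abs/'; then
  URL="$(printf '%s' "$URL" | sed -E 's#arxiv\.org/abs/([^/?]+)#arxiv.org/pdf/\1.pdf#')"
fi

# Download into the shared temp directory and print the local path
OUT="/tmp/claude-paper-downloads/$(basename "$URL")"
curl -fsSL "$URL" -o "$OUT"
echo "$OUT"
```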
Extract structured information:

```bash
node ${CLAUDE_PLUGIN_ROOT}/skills/study/scripts/parse-pdf.js "$INPUT_PATH"
```
Output includes:
- Paper metadata (title, authors, abstract, year)
- Extracted text content
- GitHub and code links found in the paper
Save to:
~/claude-papers/papers/{paper-slug}/meta.json
Copy original PDF:
cp <pdf-path> ~/claude-papers/papers/{paper-slug}/paper.pdf
Create the paper folder:
mkdir -p ~/claude-papers/papers/{paper-slug}
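Taken together, and in execution order, the storage steps look roughly like this; the slug value is a made-up example, and the assumption that parse-pdf.js prints JSON to stdout is mine, not documented above:

```bash
SLUG="attention-is-all-you-need"              # hypothetical slug derived from the paper title
PAPER_DIR="$HOME/claude-papers/papers/$SLUG"

mkdir -p "$PAPER_DIR"                         # create the paper folder first
cp "$INPUT_PATH" "$PAPER_DIR/paper.pdf"       # keep a copy of the original PDF

# Assumption: the parser writes the structured metadata as JSON to stdout
node ${CLAUDE_PLUGIN_ROOT}/skills/study/scripts/parse-pdf.js "$INPUT_PATH" > "$PAPER_DIR/meta.json"
```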
Generate quick-summary.md with the following structure:
# Quick Summary: [Paper Title]
## One Sentence
[One sentence that captures what the paper is about]
## Problem
[What problem does this paper solve? Why is it important?]
## Core Idea
[The key innovation explained in 2-3 sentences. What makes this paper novel?]
## Key Contributions
- [Contribution 1]
- [Contribution 2]
- [Contribution 3]
- [Contribution 4 if applicable]
## Main Results
| Metric | Value | Dataset/Benchmark |
|--------|-------|-------------------|
| [metric1] | [value] | [dataset] |
| [metric2] | [value] | [dataset] |
## Why It Matters
[Practical implications. How does this advance the field? What can we now do that we couldn't before?]
## Limitations
- [Limitation 1]
- [Limitation 2]
Guidelines for each section:
| Section | Length | Focus |
|---|---|---|
| One Sentence | 1 sentence | High-level summary |
| Problem | 2-3 sentences | Context and motivation |
| Core Idea | 2-3 sentences | The main innovation |
| Key Contributions | 3-5 bullets | What's new/novel |
| Main Results | 1 table | Quantitative metrics from the paper |
| Why It Matters | 2-3 sentences | Practical value |
| Limitations | 2-3 bullets | What the paper doesn't solve |
Total length: ~300-500 words (excluding results table)
CRITICAL: Read existing index.json first, then append the new paper. Never overwrite the entire file.
If index.json does not exist, create:
{"papers": []}
Append new entry to the papers array:
{
"id": "paper-slug",
"title": "Paper Title",
"slug": "paper-slug",
"authors": ["Author 1", "Author 2"],
"abstract": "Paper abstract...",
"year": 2024,
"date": "2024-01-01",
"tags": ["quick-summary"],
"githubLinks": ["https://github.com/..."],
"codeLinks": ["https://..."]
}
IMPORTANT: The index.json file must be located at:
~/claude-papers/index.json
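A minimal sketch of a read-then-append update using jq; jq availability and the /tmp/new-entry.json staging file are assumptions for illustration, not part of the skill:

```bash
INDEX="$HOME/claude-papers/index.json"

# Create the file with an empty papers array if it does not exist yet
[ -f "$INDEX" ] || echo '{"papers": []}' > "$INDEX"

# /tmp/new-entry.json holds the single new entry shown above (hypothetical staging file).
# Appending via a temp file preserves existing entries instead of overwriting the index.
jq --slurpfile entry /tmp/new-entry.json '.papers += $entry' "$INDEX" > "${INDEX}.tmp" \
  && mv "${INDEX}.tmp" "$INDEX"
```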
To open the web UI, invoke:
/claude-paper:webui
After generating the summary:
- Show the user the quick-summary.md content (display the full summary).
- Offer next steps: suggest /claude-paper:study for comprehensive materials.

File location reminder:
- Summary: ~/claude-papers/papers/{paper-slug}/quick-summary.md
- Web UI: http://localhost:5815

Example:

# Quick Summary: Attention Is All You Need
## One Sentence
This paper introduces the Transformer, a neural network architecture based entirely on attention mechanisms, achieving state-of-the-art results in machine translation.
## Problem
Sequence transduction models at the time (RNNs, LSTMs, GRUs) process data sequentially, limiting parallelization and struggling with long-range dependencies.
## Core Idea
Replace recurrent layers with self-attention mechanisms, enabling full parallelization during training and direct modeling of dependencies regardless of distance. The Transformer uses multi-head attention to jointly attend to information from different representation subspaces.
## Key Contributions
- First transduction model relying entirely on self-attention, no recurrence
- Multi-head attention mechanism for joint attention across subspaces
- Positional encodings to inject sequence order information
- Achieved 28.4 BLEU on WMT 2014 English-to-German (2+ BLEU improvement)
- Training was significantly faster than previous state-of-the-art
## Main Results
| Metric | Value | Dataset/Benchmark |
|--------|-------|-------------------|
| BLEU (EN-DE) | 28.4 | WMT 2014 |
| BLEU (EN-FR) | 41.8 | WMT 2014 |
| Training cost | 3.3 × 10^18 FLOPs | WMT 2014 EN-DE |
| Training time | 12 hours on 8× P100 GPUs | WMT 2014 EN-DE |
## Why It Matters
The Transformer eliminated recurrence, enabling massive parallelization and scaling. This architecture became the foundation for BERT, GPT, and virtually all modern large language models, fundamentally changing NLP and beyond.
## Limitations
- Self-attention has O(n²) complexity, limiting sequence length
- No explicit modeling of position beyond learned encodings
- Requires large amounts of training data
Next step: run /claude-paper:study to generate comprehensive materials.