By trapoom555
Ingest research papers from URLs, arXiv IDs, DOIs, or PDFs into a local Obsidian vault. Auto-generate 4-section summaries, extract atomic findings with metadata, propose typed knowledge-graph edges (supports/contradicts/etc.), query the graph for cited answers, and audit the vault for orphans, duplicates, and staleness.
`npx claudepluginhub trapoom555/claude-paperloom`

This plugin requires configuration values that are prompted when the plugin is enabled. Sensitive values are stored in your system keychain.
- `vault_path` — local directory for the research vault. Created by `/paperloom:init` if it does not exist. Referenced as `${user_config.vault_path}`.
- `open_in_obsidian` — if true, ingest opens the new paper page via the `obsidian://` URI scheme. Referenced as `${user_config.open_in_obsidian}`.

Lite ingest — fetch a paper and write a short 4-section triage summary (Key Takeaways, Background, Main Idea & Summary, Critique). No figures, fast.
Scaffold a new PaperLoom vault at the configured path (or at an optional path argument). Idempotent — fills in missing files without clobbering existing ones.
Scan the vault for orphan pages, frontmatter schema drift, duplicate findings, unmarked contradictions, and stale wikilinks. Reports issues without auto-fixing.
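The stale-wikilink part of the lint pass can be pictured as a scan for links whose target page does not exist. This is a minimal sketch under assumed conventions (flat `.md` pages, `[[Target]]` / `[[Target|alias]]` link syntax), not the plugin's actual implementation:

```python
import re
from pathlib import Path

# Captures the target of [[Target]], [[Target|alias]], or [[Target#heading]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def find_stale_wikilinks(vault: Path) -> dict[str, list[str]]:
    """Map each vault page to the wikilinks that resolve to no existing page."""
    pages = {p.stem for p in vault.rglob("*.md")}
    stale: dict[str, list[str]] = {}
    for page in vault.rglob("*.md"):
        targets = WIKILINK.findall(page.read_text(encoding="utf-8"))
        missing = [t.strip() for t in targets if t.strip() not in pages]
        if missing:
            stale[str(page.relative_to(vault))] = missing
    return stale
```

A report built from this mapping matches the lint contract above: issues are surfaced, nothing is auto-fixed.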
Ask a question of the research vault. Searches papers and findings, synthesizes an answer with wikilink citations.
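The retrieval side of such a query can be sketched as naive keyword scoring over vault pages, returned as wikilink citations. The actual answer synthesis is an LLM call; this toy shows only the search step, with an assumed flat-vault layout:

```python
from pathlib import Path

def search_vault(vault: Path, question: str) -> list[str]:
    """Rank pages by keyword overlap with the question; return wikilink citations."""
    words = set(question.lower().split())
    hits = []
    for page in vault.rglob("*.md"):
        text = page.read_text(encoding="utf-8").lower()
        score = sum(1 for w in words if w in text)
        if score:
            hits.append((score, f"[[{page.stem}]]"))
    # Highest-scoring pages first.
    return [link for _, link in sorted(hits, reverse=True)]
```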
Extracts atomic, testable findings from a single research paper. Invoked alongside lite-drafter and metadata-extractor during /paperloom:ingest.
Compares new findings against a shortlist of existing findings in the vault and proposes typed edges (supports / contradicts / extends / uses / similar-to). Invoked by /paperloom:ingest.
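Finding-level edges are later rolled up to paper-level relations. A hedged sketch of that aggregation — the rollup mapping below is invented for illustration and simplifies edges to (source paper, target paper, type) triples:

```python
from collections import Counter

# Hypothetical rollup: which paper-level relation a finding-level edge implies.
ROLLUP = {
    "supports": "builds-on",
    "extends": "builds-on",
    "uses": "cites",
    "contradicts": "disputes",
    "similar-to": "related",
}

def aggregate_edges(finding_edges):
    """finding_edges: [(src_paper, dst_paper, edge_type), ...] at finding level.
    Returns paper-level edges with multiplicity counts."""
    counts: Counter = Counter()
    for src, dst, etype in finding_edges:
        counts[(src, dst, ROLLUP.get(etype, "related"))] += 1
    return dict(counts)
```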
Produces a short, triage-grade paper summary — Key Takeaways, Background, Main Idea & Summary, Critique. Invoked alongside metadata-extractor and finding-extractor during /paperloom:ingest. Returns JSON only; page assembly is handled by scripts/assemble_paper.py.
Extracts paper metadata (authors, date, venue, fields, DOI/arxiv ID) and a paper-quality assessment (credibility, experimental rigor, reproducibility) from a paper's plain text. Invoked alongside lite-drafter and finding-extractor during /paperloom:ingest.
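The extracted metadata presumably lands in each page's YAML frontmatter. A minimal sketch of that rendering step — the field names in the usage example are illustrative, not the plugin's actual schema:

```python
def to_frontmatter(meta: dict) -> str:
    """Render a flat metadata dict as a YAML frontmatter block."""
    lines = ["---"]
    for key, value in meta.items():
        if isinstance(value, list):
            # Inline YAML list, e.g. authors: [Vaswani, Shazeer]
            lines.append(f"{key}: [{', '.join(str(v) for v in value)}]")
        else:
            lines.append(f"{key}: {value}")
    lines.append("---")
    return "\n".join(lines)
```

For example, `to_frontmatter({"title": "Attention Is All You Need", "authors": ["Vaswani"], "year": 2017})` yields a block Dataview can index.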
A Claude Code plugin: a self-maintaining research knowledge graph for Claude Code + Obsidian.
| *(screenshot)* | *(screenshot)* |
|---|---|
| Knowledge graph view — 🟡 papers · 🟢 findings · 🟣 fields · 🔴 authors. Example shown: Typhoon.AI research papers. | Per-paper info page — 4-section summary (incl. critique), atomic findings, metadata + quality scores, and typed edges to related work. |
Keep every paper you care about. See how they connect. Drop in a URL, arXiv ID, DOI, or PDF — Claude files it into an Obsidian vault, extracts the atomic claims, and wires them to everything you've read before with typed edges: supports, contradicts, extends, uses, similar-to.
Based on Andrej Karpathy's LLM Wiki pattern, tuned for research papers. 4 LLM calls per paper. Zero manual filing. Linking cost stays constant as the graph grows.
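The constant linking cost follows from the shortlist step: each new finding is compared against a fixed-size candidate set rather than the whole vault. A toy sketch of that shortlisting — the lexical similarity here is a stand-in, not the plugin's actual retrieval:

```python
from difflib import SequenceMatcher

def shortlist(new_finding: str, existing: list[str], k: int = 5) -> list[str]:
    """Return the k existing findings most similar to the new one, so the
    number of downstream LLM comparisons stays constant as the vault grows."""
    scored = sorted(
        existing,
        key=lambda f: SequenceMatcher(None, new_finding, f).ratio(),
        reverse=True,
    )
    return scored[:k]
```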
Typed edges: supports, contradicts, extends, uses, similar-to; auto-aggregated to paper-level (cites, builds-on, …).

```shell
# 1. Install the plugin
claude --plugin-dir /path/to/claude-paperloom

# 2. Scaffold your vault (default ~/PaperLoom)
# On first run, this creates `.venv` at the plugin root and installs
# dependencies from `requirements.txt` automatically.
/paperloom:init

# 3. Ingest your first paper
/paperloom:ingest https://arxiv.org/abs/1706.03762
```
Open the vault folder in Obsidian and turn off Restricted Mode once to activate Dataview.
| You run | Claude does |
|---|---|
| `/paperloom:init [path]` | Scaffold the vault. Idempotent. Seeds Dataview. |
| `/paperloom:ingest <url\|arxiv-id\|doi\|pdf>` | Fetch, summarize, extract findings, link. Skips papers already in the vault. |
| `/paperloom:query <question>` | Search + synthesize across the vault with wikilink citations. |
| `/paperloom:lint` | Health check. Auto-wires near-duplicate findings with bidirectional similar-to. |