By ctoth
Build annotated collections of research papers: retrieve PDFs from arXiv URLs, DOIs, titles, or Sci-Hub; extract structured notes, claims, concepts, and justifications; cross-reference citations; identify new leads; tag and audit paper directories; and integrate everything into propstore knowledge stores via CLI workflows.
npx claudepluginhub ctoth/research-papers-plugin --plugin research-papers

Systematically adjudicate disagreements across a paper collection. Produces ruthless verdicts on who was wrong, what supersedes what, and what the best current understanding is. Organized by topic clusters with actionable replacement values for implementation.
Enrich an existing claims.yaml generated by generate_claims.py. Fixes page numbers, aligns concept references with the paper-local concepts.yaml inventory, converts SymPy expressions, adds variable bindings, conditions, notes, uncertainty, and missing claims.
Extract propositional claims from a paper directory, building claims.yaml from scratch using notes.md. Produces machine-readable claims conforming to the propstore claim schema. If a concepts.yaml exists (from register-concepts), uses canonical concept names.
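The propstore claim schema itself is not shown on this page; as a rough illustration, a claims.yaml entry might look something like the following (all field names here are hypothetical, not the actual schema):

```yaml
# Hypothetical claims.yaml fragment -- field names are illustrative only,
# not the real propstore claim schema.
claims:
  - id: smith2021-c1
    text: "Batch normalization reduces training time by roughly 30%."
    page: 4
    concepts: [batch-normalization, training-time]
    uncertainty: reported by the authors, not independently reproduced
```

If a concepts.yaml exists, the names under `concepts` would use its canonical entries.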
Extract intra-paper justification structure from a paper's notes.md and claims.yaml. Produces justifications.yaml mapping premise sets to conclusions via typed inference rules. Requires claims to already exist.
Extract inter-claim stances from a paper collection. Reads each paper's notes.md and claims.yaml, identifies argumentative relationships between claims across papers, and writes standalone stances.yaml files. Requires claims to already exist.
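The shape of a standalone stances.yaml is likewise not documented here; a sketch of what one inter-claim stance might look like (structure and relation names are assumptions, not the plugin's real schema):

```yaml
# Hypothetical stances.yaml fragment -- illustrative structure only.
stances:
  - source: smith2021-c1
    target: jones2019-c3
    relation: contradicts
    note: "Reports the opposite effect under larger batch sizes."
```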
Orchestrate a full knowledge store rebuild from a paper collection. Four-phase pipeline: per-paper finalize, concept alignment, first promote, then cross-paper stances with re-promote. Builds sidecar at the end.
Check paper directories for completeness, format compliance, and index consistency. Run on a single paper or --all for the entire collection.
Create new skills from existing prompts or workflow patterns. Analyzes prompt files to extract reusable structure, determines appropriate frontmatter settings, and generates properly formatted SKILL.md files.
Retrieve a paper, extract notes, and ingest into propstore. Combines paper-retriever, paper-reader, register-concepts, extract-claims, and extract-justifications into one pks-aware pipeline. Give it a URL, DOI, or title.
Read scientific papers and extract implementation-focused notes. Converts PDFs to page images, then reads them. Papers of 50 pages or fewer are read directly; longer papers are chunked into 50-page ranges for thorough parallel extraction. Creates structured notes in the papers/ directory.
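The skill's own chunking code isn't shown here, but the split it describes (direct read at 50 pages or fewer, 50-page ranges above that) can be sketched as:

```python
def page_chunks(num_pages: int, chunk_size: int = 50) -> list[tuple[int, int]]:
    """Split a paper into inclusive, 1-based page ranges of at most
    chunk_size pages each. A paper at or under chunk_size yields a
    single range, matching the direct-read case described above."""
    return [
        (start, min(start + chunk_size - 1, num_pages))
        for start in range(1, num_pages + 1, chunk_size)
    ]
```

For example, a 120-page paper would be split into (1, 50), (51, 100), and (101, 120), each range handed to a parallel extraction pass.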
Retrieve a scientific paper PDF given an arXiv URL, DOI, or paper title. Downloads to the papers/ directory. Uses direct download for arXiv, and Chrome + Sci-Hub for paywalled papers.
Extract all "New Leads" from the paper collection and process them via paper-process. Retrieves and reads papers that other papers in your collection cite but you don't have yet. Use --all to process everything, or pass a number to limit (e.g., "10" for first 10). Add --parallel N to process N leads concurrently via subagents (default: sequential).
Process all unprocessed PDF files in the papers/ root directory. If subagents are available, parallelize across papers immediately after listing them; otherwise process sequentially. Any PDF in papers/ root is unprocessed by convention (processed papers live in subdirectories). Invokes paper-reader on each PDF.
Reconcile paper-local concept inventories across a paper collection. Identifies collision groups, proposes shared canonical names, and optionally rewrites per-paper concepts.yaml files.
Register concepts needed by a paper into a propstore source branch. Runs propose_concepts.py to extract concept inventory from claims, then enriches definitions via notes.md, and calls pks source add-concepts.
Research a topic using web search and create structured findings. Use when you need to investigate approaches, find papers, compare implementations, or gather knowledge on a topic. Creates structured notes in reports/ directory.
Add tags to papers that are missing them. Reads notes.md and description.md to pick 2-5 tags, preferring tags already in use. Run on a single paper directory or use --all for the entire collection.
A research infrastructure for AI agents. Search, read, and analyze papers from your local knowledge base while coding. Includes arXiv discovery, layered reading, ingestion, topic modeling, citation graphs, insights analytics, Office document inspection, scientific tool docs, and academic writing workflows. Requires Python 3.10+ and pip install.
Semi-automated research assistant for academic research and software development, with skills for literature review, experiments, analysis, writing, and project knowledge management
Automated research paper discovery, PDF monitoring, and AI-powered summarization for academic and technical literature
Karpathy-style LLM wiki for research papers. Ingest a URL / arXiv ID / DOI / PDF, write a structured summary into a local Obsidian vault, and maintain a finding-level knowledge graph via wikilinks + Dataview.
9 research-hub skills: literature search, comparison matrix, planning manifests, design dialog, multi-AI routing, NotebookLM brief verification, paper-memory builder, Zotero curator. Auto-discovered from skills/<name>/SKILL.md.
A plugin for studying research papers with automated material generation and code demos