Ingests sources like URLs, YouTube videos, GitHub repos/PRs/issues, PDFs, or text into a vault, creating SHA-256 hashed raw files and Markdown summaries with key claims.
`npx claudepluginhub ignromanov/llm-obsidian-wiki --plugin llm-obsidian-wiki`

This skill uses the workspace's default tool permissions.
Ingest one source into `raw/<type>/<slug>.<ext>` (immutable, SHA-256 hashed) and create `wiki/sources/<slug>.md` (source-summary with `key_claims[]`).
Ingests content from Confluence, Google Docs, GitHub repos, remote URLs, or local files (DOCX, PDF, etc.) into the Second Brain vault. Converts sources to Markdown via docling, runs graphify extraction, and persists entities.
Ingests files, URLs, and images into Obsidian wiki vault: extracts entities/concepts, creates/updates Markdown pages, cross-references, tracks deltas via manifest, supports batch mode.
Ingests source files from raw/ into wiki: reads content, discusses takeaways, creates summary pages for sources/entities/concepts, updates index/log.
Used by wiki-scribe (primary, passive intake) and wiki-researcher (via research workflow).
The input is one of: a web URL, a YouTube video, a GitHub issue/PR (or a repository's PR list), a PDF, or pasted text.
Two files per capture:
- `raw/<type>/<slug>.<ext>` — immutable, SHA-256 in frontmatter, full content
- `wiki/sources/<slug>.md` — source-summary with `key_claims: [{quote, anchor, confidence}]`, `quality`, `captured_by`

Detect source type from URL or path:

- `*.pdf` → `capture-pdf.sh`
- `youtube.com/*` or `youtu.be/*` → `capture-youtube.sh`
- `github.com/*/issues/*` or `*/pull/*` → `capture-github.sh`
- `github.com/*/pulls` → `capture-prs.sh`

Invoke the appropriate script with `--vault $VAULT_PATH` and `--source $INPUT`. Scripts handle SHA-256 and frontmatter.
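A minimal sketch of this dispatch in shell, assuming `$INPUT` and `$VAULT_PATH` are set by the invoking agent; routing plain web URLs to `capture-url.sh` and everything else to `capture-text.sh` is an assumption based on the script list at the end of this page:

```bash
#!/usr/bin/env bash
# Sketch of the source-type detection described above; the skill performs
# this routing itself. The last two branches are assumptions.
case "$INPUT" in
  *.pdf)                             script=capture-pdf.sh ;;
  *youtube.com/*|*youtu.be/*)        script=capture-youtube.sh ;;
  *github.com/*/pulls)               script=capture-prs.sh ;;
  *github.com/*/issues/*|*/pull/*)   script=capture-github.sh ;;
  http://*|https://*)                script=capture-url.sh ;;   # assumed default for URLs
  *)                                 script=capture-text.sh ;;  # assumed fallback for text/paths
esac
"${CLAUDE_PLUGIN_ROOT}/scripts/$script" --vault "$VAULT_PATH" --source "$INPUT"
```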
Extract `key_claims` for the source-summary: each claim records a verbatim `quote`, an `anchor` into the source, and a `confidence`.
Set the `quality` field in the source-summary:

- `high` if extraction succeeded with >500 chars
- `medium` if 200-500 chars (paywall preview, partial)
- `low` if <200 chars or extraction errors

Set `captured_by` to the invoking agent name (`wiki-scribe` or `wiki-researcher`).
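Putting these fields together, a hypothetical frontmatter for `wiki/sources/<slug>.md` (the field names `key_claims`, `quality`, and `captured_by` come from the spec above; the `type` field, anchor format, and 0-1 confidence scale are assumptions for illustration):

```yaml
---
type: source-summary       # note type; exact field name is an assumption
quality: high              # extraction succeeded with >500 chars
captured_by: wiki-scribe   # invoking agent
key_claims:
  - quote: "Verbatim sentence copied from the source."
    anchor: "#background"  # hypothetical anchor into the source
    confidence: 0.9        # hypothetical 0-1 scale
---
```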
For URL captures, the script tries three extractors in order:
- `defuddle` (Node CLI, MIT, local) — primary
- `trafilatura` (Python CLI, Apache-2.0, local) — fallback if defuddle fails OR content <500 chars
- `r.jina.ai` (cloud) — opt-in via env `WIKI_ALLOW_CLOUD=1`

If all three fail, exit 3 and tell the user to provide content via `capture-text.sh` (manual clipboard).
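A sketch of this cascade, assuming each extractor prints the article text to stdout; the thresholds, exit code, and `WIKI_ALLOW_CLOUD` gate come from the list above, while the exact CLI flags for `defuddle` and `trafilatura` are assumptions and may differ from the real script:

```bash
#!/usr/bin/env bash
# Sketch of the three-step extractor cascade for a URL in $1.
url="$1"
content="$(defuddle parse "$url" 2>/dev/null || true)"       # 1. primary (flags assumed)
if [ "${#content}" -lt 500 ]; then                           # failed OR <500 chars
  content="$(trafilatura -u "$url" 2>/dev/null || true)"     # 2. local fallback
fi
if [ -z "$content" ] && [ "${WIKI_ALLOW_CLOUD:-0}" = "1" ]; then
  content="$(curl -fsSL "https://r.jina.ai/$url" || true)"   # 3. cloud, opt-in only
fi
if [ -z "$content" ]; then
  echo "All extractors failed; provide content via capture-text.sh" >&2
  exit 3
fi
printf '%s\n' "$content"
```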
Invariants:

- `wiki/sources/<slug>.md` is in tier 4
- `key_claims` is never empty for sources >500 words (if empty, log warning)
- `captured_by` is set

Scripts:

- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-url.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-youtube.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-github.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-prs.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-pdf.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-text.sh`
- `${CLAUDE_PLUGIN_ROOT}/scripts/capture-git-log.sh`
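For reference, a direct invocation of one of these scripts, using the flags from the detection step above (the URL is illustrative):

```bash
"${CLAUDE_PLUGIN_ROOT}/scripts/capture-url.sh" \
  --vault "$VAULT_PATH" \
  --source "https://example.com/some-article"
```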