Generate AI summaries for markdown notes using Ollama and populate the frontmatter `summary` property. Use hierarchical map-reduce for notes exceeding model context. Trigger when asked to summarize notes, generate note abstracts, add AI summaries to frontmatter, or batch-summarize Obsidian vault notes.
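The hierarchical map-reduce step can be sketched as follows. This is a minimal illustration, not the script's actual implementation: `summarize` is a hypothetical stand-in for an Ollama call, and the chunking is plain character slicing.

```python
# Minimal sketch of hierarchical map-reduce summarization.
# `summarize` is a hypothetical callable (e.g. wrapping an Ollama chat call)
# that must return text shorter than its input, or the recursion won't shrink.
CHUNK_SIZE = 50_000  # the script's default --chunk-size, in characters

def chunk(text: str, size: int = CHUNK_SIZE) -> list[str]:
    # Split the note into fixed-size character chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def map_reduce_summary(text: str, summarize, size: int = CHUNK_SIZE) -> str:
    # Base case: the note fits in one model call.
    if len(text) <= size:
        return summarize(text)
    # Map: summarize each chunk independently.
    partial = [summarize(c) for c in chunk(text, size)]
    # Reduce: recurse on the concatenated partial summaries until they fit.
    return map_reduce_summary("\n".join(partial), summarize, size)
```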
npx claudepluginhub yixin0829/semantic-obsidian --plugin semantic-skills

This skill is limited to using the following tools:
1. Ask the user which Ollama model to use (e.g., `qwen3:8b`, `llama3`, `gemma2`). The model must already be pulled in Ollama.
2. Dry-run first to preview summaries without modifying files:
uv run --with ollama,pyyaml \
skills/summarize-note/scripts/summarize_note.py <model> --dry-run <file_path> [...]
3. If summaries look good, run without `--dry-run` to write them:
uv run --with ollama,pyyaml \
skills/summarize-note/scripts/summarize_note.py <model> <file_path> [...]
4. Review the JSON output to confirm summaries were generated and written correctly.
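For reference, writing a summary into a note's frontmatter can be sketched with pyyaml (which the script depends on). This mirrors the general idea only; field handling in the actual script may differ.

```python
# Sketch: set the `summary` property in a markdown note's YAML frontmatter.
import yaml

def set_summary(note: str, summary: str) -> str:
    if note.startswith("---\n"):
        # Split into the empty prefix, the frontmatter block, and the body.
        _, fm, body = note.split("---\n", 2)
        meta = yaml.safe_load(fm) or {}
    else:
        # No frontmatter yet: create one around the whole note body.
        meta, body = {}, note
    meta["summary"] = summary
    return "---\n" + yaml.safe_dump(meta, sort_keys=False) + "---\n" + body
```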
- Notes with existing summaries (marked by the `[AI]` prefix) are skipped automatically.
- Model reasoning blocks (`<think>`) are stripped automatically.
- Use `--chunk-size` to adjust for models with smaller context windows (default: 50000 chars, ~12K tokens, sized for 32K+ context models).
- Use `--base-url` to point to a remote Ollama instance.
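Stripping reasoning blocks amounts to removing `<think>…</think>` spans from the model output. A one-line regex sketch (the script's exact cleanup logic is an assumption here):

```python
# Sketch: remove <think>…</think> reasoning blocks from model output.
import re

def strip_think(text: str) -> str:
    # DOTALL lets the block span multiple lines; trailing whitespace is trimmed.
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)
```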