Performs local keyword, semantic, and hybrid searches on Markdown notes, docs, and knowledge bases using the qmd CLI. Use it for searching notes, finding related content, or retrieving from indexed collections.
npx claudepluginhub joshuarweaver/cascade-ai-ml-agents-misc-2 --plugin sundial-org-awesome-openclaw-skills-4

This skill uses the workspace's default tool permissions.
Local search engine for Markdown notes, docs, and knowledge bases. Index once, search fast.
Which search to use:

- Default to `qmd search` (BM25). It's typically instant and should be the default.
- Use `qmd vsearch` only when keyword search fails and you need semantic similarity (can be very slow on a cold start).
- Skip `qmd query` unless the user explicitly wants the highest quality hybrid results and can tolerate long runtimes/timeouts.

Prerequisites (macOS):

- `brew install sqlite` (SQLite extensions)
- Install Bun: `brew install oven-sh/bun/bun`, and make sure `$HOME/.bun/bin` is on your PATH.

Then install qmd and set up a collection:
bun install -g https://github.com/tobi/qmd
qmd collection add /path/to/notes --name notes --mask "**/*.md"
qmd context add qmd://notes "Description of this collection" # optional
qmd embed # one-time to enable vector + hybrid search
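To sanity-check a fresh index end to end, a minimal sketch (the vault path and collection name here are placeholders, not defaults):

```bash
#!/usr/bin/env bash
# Index a hypothetical notes vault, enable embeddings, and run a smoke-test search.
export PATH="$HOME/.bun/bin:$PATH"

qmd collection add ~/vault --name vault --mask "**/*.md"
qmd embed                           # one-time: enables vsearch/query
qmd status                          # confirm the index looks healthy
qmd search "hello" -c vault -n 3    # smoke test: any keyword hit at all
```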
Each collection indexes the files matched by its mask (e.g., `**/*.md`).

- `qmd search` (default): fast keyword match (BM25)
- `qmd vsearch` (last resort): semantic similarity (vector). Often slow due to local LLM work before the vector lookup.
- `qmd query` (generally skip): hybrid search + LLM reranking. Often slower than vsearch and may timeout.

Performance notes:

- `qmd search` is typically instant.
- `qmd vsearch` can take ~1 minute on some machines because query expansion may load a local model (e.g., Qwen3-1.7B) into memory per run; the vector lookup itself is usually fast.
- `qmd query` adds LLM reranking on top of vsearch, so it can be even slower and less reliable for interactive use.

qmd search "query" # default
qmd vsearch "query"
qmd query "query"
qmd search "query" -c notes # Search specific collection
qmd search "query" -n 10 # More results
qmd search "query" --json # JSON output
qmd search "query" --all --files --min-score 0.3
Useful flags:

- `-n <num>`: number of results
- `-c, --collection <name>`: restrict to a collection
- `--all --min-score <num>`: return all matches above a threshold
- `--json` / `--files`: agent-friendly output formats
- `--full`: return full document content

qmd get "path/to/file.md" # Full document
qmd get "#docid" # By ID from search results
qmd multi-get "journals/2025-05*.md"
qmd multi-get "doc1.md, doc2.md, #abc123" --json
qmd status # Index health
qmd update # Re-index changed files
qmd embed # Update embeddings
Set up a cron job or hook to automatically re-index. For example, a daily 5 AM reindex:
# Via Clawdbot cron (isolated job, runs silently):
clawdbot cron add \
--name "qmd-reindex" \
--cron "0 5 * * *" \
--tz "America/New_York" \
--session isolated \
--message "Run: export PATH=\"\$HOME/.bun/bin:\$PATH\" && qmd update && qmd embed"
# Or via system crontab:
0 5 * * * export PATH="$HOME/.bun/bin:$PATH" && qmd update && qmd embed
This ensures your vault search stays current as you add or edit notes.
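If your vault is a git repository, a post-commit hook is a lighter-weight alternative to cron (a sketch; assumes commits are how you save notes):

```bash
#!/usr/bin/env bash
# Save as .git/hooks/post-commit and `chmod +x` it.
# Re-index after every commit so search always reflects the latest notes.
export PATH="$HOME/.bun/bin:$PATH"
qmd update && qmd embed
```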
Local models are cached in `~/.cache/qmd/models/` (override with `XDG_CACHE_HOME`).

qmd vs. memory_search:

- `qmd` searches your local files (notes/docs) that you explicitly index into collections.
- `memory_search` searches agent memory (saved facts/context from prior interactions).
- Use `memory_search` for "what did we decide/learn before?", and `qmd` for "what's in my notes/docs on disk?".