Show what's indexed in the neo-research knowledge store — sources, chunk counts, file sizes. Use when the user asks what docs are available, what's been indexed, or wants a status check before searching.
npx claudepluginhub shihwesley/shihwesley-plugins --plugin neo-research

This skill uses the workspace's default tool permissions.
Check and report the current state of the neo-research knowledge store — what's indexed, how much, and what you can search.
Use this skill when the user asks what docs are available, wants a status check before searching, or is about to call rlm_search, /neo-research:research, or rlm_fetch before starting work.

Call the rlm_knowledge_status() MCP tool. No arguments needed — it uses the current project's hash automatically.
rlm_knowledge_status()
The tool returns a structured report with:
- the path to the .mv2 index file (e.g. ~/.neo-research/knowledge/a3f8c1...mv2)
- the raw .md files stored under .claude/docs/{library}/

This is the normal state. Report it to the user with the library breakdown. The user can now call rlm_search("query") for any indexed library.
A store under 100 KB usually means only a few pages got indexed — maybe a manual rlm_ingest call or a partial fetch. Warn the user that search coverage is thin. Suggest running /neo-research:research <topic> to build up the index.
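The size heuristic can be sketched as a small check against the .mv2 path the status tool reports. The helper name and threshold argument below are illustrative, not part of the plugin:

```python
from pathlib import Path

def store_coverage(mv2_path: str, thin_threshold_kb: int = 100) -> str:
    """Classify a knowledge store by its on-disk size.

    Returns "empty" (no index yet), "thin" (under the threshold,
    search coverage is likely poor), or "ok".
    """
    p = Path(mv2_path).expanduser()
    if not p.exists():
        return "empty"  # no indexing has happened for this project
    size_kb = p.stat().st_size / 1024
    return "thin" if size_kb < thin_threshold_kb else "ok"
```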
The .mv2 file doesn't exist. No indexing has happened for this project. This is normal for a fresh project or first-time plugin install.
Tell the user: rlm_search will return nothing right now.

Another possible state: raw .md files exist in .claude/docs/ (from prior rlm_fetch calls) but they weren't ingested into the .mv2 index. This can happen if the server crashed during indexing or the store was cleared without clearing the raw files.
Suggest running rlm_load_dir(".claude/docs/**/*.md") to re-ingest the existing markdown files.
When the store is empty or missing a library the user needs, recommend one of these approaches (in order of preference):
/neo-research:research <topic>

Best for common libraries. This skill handles everything — finds the doc site, fetches pages, indexes them. Works for any topic in the KNOWN_DOCS list (fastapi, dspy, pydantic, httpx, pytest, django, numpy, pandas, pytorch, transformers, etc.) and attempts pattern-matching for unknown topics.
Example:
/neo-research:research fastapi
rlm_fetch(url)

When you know the exact documentation URL.
Example:
rlm_fetch("https://htmx.org/docs/")
rlm_fetch_sitemap(sitemap_url)

When a doc site has a sitemap. Fetches all pages listed in the sitemap XML and indexes each one. Best for comprehensive coverage of an entire doc site.
Example:
rlm_fetch_sitemap("https://fastapi.tiangolo.com/sitemap.xml")
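For intuition, extracting page URLs from sitemap XML is a few lines of stdlib code. This is a sketch of the general sitemap format, not the plugin's actual fetcher:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace from sitemaps.org.
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text: str) -> list[str]:
    """Extract every <url><loc> entry from sitemap XML."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(f"{SITEAP_NS}loc") ] if False else [
        loc.text for loc in root.iter(f"{SITEMAP_NS}loc")
    ]
```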
rlm_load_dir(glob_pattern)

When documentation is already on disk — downloaded repos, exported docs, or markdown collections. The glob pattern is relative to the project root.
Example:
rlm_load_dir("vendor/some-lib/docs/**/*.md")
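A rough sketch of how a project-relative glob like this resolves, using only pathlib (the function name is illustrative, not the plugin's implementation):

```python
from pathlib import Path

def resolve_docs(glob_pattern: str, project_root: str = ".") -> list[Path]:
    """Expand a project-relative glob such as 'vendor/some-lib/docs/**/*.md'."""
    root = Path(project_root)
    # Path.glob handles the recursive ** segment natively,
    # matching zero or more intermediate directories.
    return sorted(root.glob(glob_pattern))
```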
rlm_ingest(title, text)

Last resort for small content. Paste specific text directly into the index. Useful for API responses, error messages, or snippets from paywalled sites.
Format the status report as a concise summary. The user cares about what's searchable and what's missing, not internal details like project hashes.
Good report format:
Knowledge store: 3 libraries indexed (2.4 MB)
- fastapi: 47 pages
- pydantic: 23 pages
- httpx: 12 pages
Ready to search. Use rlm_search("your query") to find relevant docs.
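A report in that shape could be assembled from a status payload like the dict below. The payload layout is an assumption for illustration — the real tool's return shape may differ:

```python
def format_status(libraries: dict[str, int], total_bytes: int) -> str:
    """Render a concise status summary: library name -> indexed page count."""
    if not libraries:
        return "Knowledge store is empty — no docs indexed yet."
    mb = total_bytes / 1_000_000
    lines = [f"Knowledge store: {len(libraries)} libraries indexed ({mb:.1f} MB)"]
    # Largest libraries first, matching the example report.
    for name, pages in sorted(libraries.items(), key=lambda kv: -kv[1]):
        lines.append(f"- {name}: {pages} pages")
    lines.append('Ready to search. Use rlm_search("your query") to find relevant docs.')
    return "\n".join(lines)
```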
If the store is empty:
Knowledge store is empty — no docs indexed yet.
To populate it:
/neo-research:research fastapi (automatic — finds and indexes docs)
rlm_fetch("https://url") (manual — fetch a specific page)
This is the most common trigger. The user is starting implementation and wants to confirm they can look up API details as they go. Run the status check, then tell them which libraries are indexed and whether the coverage is sufficient. If a key dependency is missing, suggest the research skill before they start coding.
When rlm_search returns irrelevant results despite the store having content, run status to check that the store has real coverage and that sentence-transformers is installed.

When the user wants to index their own content — meeting notes, architecture decisions, internal docs — point them to rlm_ingest(title, text) for small items or rlm_load_dir("path/**/*.md") for bulk loading. These get the same hybrid search treatment as fetched documentation.
Call rlm_knowledge_clear() to wipe the .mv2 index. This preserves raw .md files in .claude/docs/, so previously fetched content can be re-ingested. If the user also wants to clear the raw cache, they need to delete .claude/docs/ manually.
When suggesting follow-up searches after checking status, the user may want to know about search modes:
| Mode | What it does | Best for |
|---|---|---|
| auto | BM25 + vector fusion (default) | General queries — "how does X work?" |
| vec | Vector similarity only | Conceptual queries — "patterns like dependency injection" |
| lex | BM25 keyword match only | Exact term lookups — "ValidationError class" |
The default auto mode works well for most queries. Suggest lex when the user searches for specific class names, function signatures, or error messages.
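One common way to fuse a BM25 ranking with a vector ranking is reciprocal-rank fusion. This is a sketch of that general technique, an assumption about what "fusion" could mean here rather than the plugin's actual code:

```python
def rrf_fuse(lex_ranked: list[str], vec_ranked: list[str], k: int = 60) -> list[str]:
    """Merge two ranked doc-id lists with reciprocal-rank fusion.

    Each document scores 1/(k + rank + 1) in each list it appears in;
    documents ranked well by both retrievers rise to the top.
    """
    scores: dict[str, float] = {}
    for ranked in (lex_ranked, vec_ranked):
        for rank, doc_id in enumerate(ranked):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)
```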
The knowledge store falls back to lexical-only (BM25) search when the sentence-transformers embedder can't load. This is noted in server logs but not always visible to the user. If search results seem poor despite having content indexed, mention this possibility β reinstalling sentence-transformers or running scripts/setup.sh again usually fixes it.
The knowledge directory defaults to ~/.neo-research/knowledge/. If the user gets permission errors, the directory either doesn't exist (run scripts/setup.sh) or has wrong ownership.
Rare, but if the .mv2 file is corrupted (partial write during crash), rlm_knowledge_clear() wipes it so a fresh index can be built. The raw .md files in .claude/docs/ survive clearing, so they can be re-ingested with rlm_load_dir.
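The clear-then-reingest recovery path can be sketched as follows. The paths come from this doc; the helper itself is hypothetical:

```python
from pathlib import Path

def clear_index(mv2_path: str, raw_docs_dir: str = ".claude/docs") -> list[str]:
    """Delete the .mv2 index but keep raw .md files for re-ingestion.

    Returns the surviving raw files, which can then be re-loaded
    with rlm_load_dir(".claude/docs/**/*.md").
    """
    index = Path(mv2_path).expanduser()
    if index.exists():
        index.unlink()  # wipe the (possibly corrupted) index
    return sorted(str(p) for p in Path(raw_docs_dir).glob("**/*.md"))
```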
For convenience, here are all knowledge-related MCP tools the user might ask about after checking status:
| Tool | Purpose | Needs Docker? |
|---|---|---|
| rlm_search(query, top_k, mode) | Hybrid search over indexed docs | No |
| rlm_ask(question, context_only) | RAG Q&A or context-only retrieval | No |
| rlm_timeline(since, until) | Browse docs by recency | No |
| rlm_ingest(title, text) | Manually add content to index | No |
| rlm_fetch(url) | Fetch URL → .md + .mv2 | No |
| rlm_fetch_sitemap(sitemap_url) | Bulk fetch from sitemap | No |
| rlm_load_dir(glob) | Bulk load local files | No |
| rlm_research(topic) | Find, fetch, and index docs | No |
| rlm_knowledge_status() | This tool — show what's indexed | No |
| rlm_knowledge_clear() | Wipe the .mv2 index | No |
None of the knowledge tools require Docker. They work on any machine with the Python venv set up.
Call rlm_knowledge_status() once per status check — don't call it repeatedly in the same interaction.