From neo-research
Research any topic — builds a question tree, discovers sources, fetches to disk (zero context cost), indexes into .mv2, and distills an expertise artifact. The agent becomes a domain expert. Use when you need to learn about a technology, protocol, framework, or domain before working with it.
npx claudepluginhub shihwesley/shihwesley-plugins --plugin neo-research

This skill uses the workspace's default tool permissions.
Turn "$ARGUMENTS" into genuine expertise using the unified research pipeline.
Output lives at ~/.claude/research/<topic-slug>/.
HOME_DIR=$(echo ~)
ls "$HOME_DIR/.claude/research/" 2>/dev/null
If the topic (or something close) already has a directory containing expertise.md, read that file and report the cached expertise instead of re-running the pipeline.
The research agent handles the full pipeline. Spawn it with the user's input:
Task(
subagent_type="general-purpose",
name="researcher",
description="Research pipeline for topic",
model="sonnet",
prompt="""
You are the research-agent. Follow the pipeline in agents/research-agent.md exactly.
## Input
$ARGUMENTS
## Instructions
Run all 6 phases (Phase 0 is new and critical):
0. Search existing knowledge stores FIRST (~/.neo-research/knowledge/*.mv2)
1. Parse input → build question tree, annotate branches as [COVERED]/[PARTIAL]/[MISSING]
2. Discover sources (WebSearch) ONLY for [PARTIAL] and [MISSING] branches
3. Fetch → disk → index into .mv2 (zero context cost)
4. Distill: query .mv2 systematically → write expertise.md
5. Report results
Write all artifacts to ~/.claude/research/<slug>/.
Return the expertise.md content and a summary report when done.
## MCP Tools Available
Use ToolSearch to load: rlm_search, rlm_ask, rlm_ingest, rlm_exec, rlm_knowledge_status
## BM25 Query Rules (CRITICAL)
The .mv2 stores use Tantivy BM25. Multi-word queries silently return 0 hits
because Tantivy treats them as boolean AND. Use SINGLE KEYWORDS or OR-joined terms:
- GOOD: mem.find("MeshResource", k=5)
- GOOD: mem.find("MeshResource OR generateSphere OR texture", k=5)
- BAD: mem.find("MeshResource generateSphere texture", k=5) → 0 results
- BAD: mem.find("how to create a sphere mesh?", k=5) → 0 results
## Rules
- Search existing knowledge stores before any web research.
- Never read fetched content. Files go disk → knowledge store.
- Never use WebFetch. Use curl via Bash for fetching, WebSearch for discovery.
- Question tree before searching. Structure first.
- Be honest about gaps.
"""
)
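The BM25 query rules embedded in the prompt above can also be enforced mechanically before calling the store. A sketch of a query-shaping helper — the function name and stopword list are illustrative, not part of the skill or the rlm API:

```python
import re

# Words that carry no BM25 signal in a natural-language question (illustrative list)
STOPWORDS = {"how", "to", "a", "the", "an", "of", "in", "for", "is", "what"}

def bm25_query(text: str) -> str:
    """Turn a natural-language question into an OR-joined keyword query,
    since Tantivy's default AND semantics make multi-word queries return 0 hits."""
    words = re.findall(r"[A-Za-z0-9_]+", text)
    keywords = [w for w in words if w.lower() not in STOPWORDS]
    return " OR ".join(dict.fromkeys(keywords))  # dedupe while keeping order
```

With this, `bm25_query("how to create a sphere mesh?")` yields `"create OR sphere OR mesh"` — the GOOD shape from the rules above — instead of the zero-hit BAD form.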
When the agent returns, read ~/.claude/research/<slug>/expertise.md and report:

Research complete: <topic>
- Expertise: ~/.claude/research/<slug>/expertise.md
- Knowledge store: ~/.claude/research/<slug>/knowledge.mv2
- Deep-dive: rlm_search(query="...", project="<slug>")
- Reload later: /research load <topic>
After the agent returns, check sources.json for a coupling assessment. If coupling.recommendation == "skill-graph", append to your report:
Domain coupling: high (score N/5)
→ Create navigable skill graph: /create-skill-graph <slug>
If the user says /research load <topic>:
Read ~/.claude/research/<topic-slug>/expertise.md. No need to re-fetch or re-index. The knowledge store is also available for rlm_search queries.
If the user says /research with no arguments or /research list:
HOME_DIR=$(echo ~)
ls -1 "$HOME_DIR/.claude/research/" 2>/dev/null
List what topics have been researched with their dates and sizes.
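A minimal sketch of what that listing could compute — not the skill's actual implementation; the date and size formatting here is illustrative:

```python
import datetime
from pathlib import Path

def list_research() -> list[str]:
    """List researched topics with last-modified date and total size on disk."""
    root = Path.home() / ".claude" / "research"
    rows = []
    if not root.is_dir():
        return rows  # nothing researched yet
    for topic in sorted(p for p in root.iterdir() if p.is_dir()):
        size = sum(f.stat().st_size for f in topic.rglob("*") if f.is_file())
        mtime = datetime.date.fromtimestamp(topic.stat().st_mtime)
        rows.append(f"{topic.name}  {mtime}  {size / 1024:.0f} KB")
    return rows
```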