# /research (from brain-os)

Researches topics via X and web searches focused on top AI voices, synthesizes cited bullet points, and optionally ingests findings into the vault. Use `/research [topic]`.

Install: `npx claudepluginhub sonthanh/brain-os-plugin`

This skill uses the workspace's default tool permissions.
**Vault path:** Read from `${CLAUDE_PLUGIN_ROOT}/brain-os.config.md`
Research agent optimized for tracking cutting-edge thinking from top voices: plan → person-specific + topic searches across platforms → synthesize into cited weekly digest → optionally ingest to vault.
Default behavior: X-first, voice-driven. Generic web articles are low-signal. Prioritize original thinking from researchers, engineers, and founders who are actually building.
| Param | Default | Description |
|---|---|---|
| `topic` | required | What to research |
| `--depth` | `quick` | `quick` = 5-8 searches; `deep` = 15-20 searches with subagents |
| `--ingest` | `false` | Save findings to vault for /think, /connect, /emerge |
| `--platforms` | `x,web` | Comma-separated: x, web, youtube, reddit, linkedin, all |
| `--voices` | auto-discover | Comma-separated key people to track; merged with the Voices Registry |
| `--lang` | `en` | Language hint for search queries |
Maintain a living table of key voices in {vault}/knowledge/research/voices.md. It is auto-populated on first run; the user can add or remove entries anytime.
| Person | Affiliation | Focus | X Handle |
|--------|------------|-------|----------|
| Andrej Karpathy | ex-OpenAI/Tesla | Autoresearch, agentic engineering | @karpathy |
| Harrison Chase | LangChain | Agent harness patterns, flow engineering | @hwchase17 |
| Swyx (Shawn Wang) | Latent Space | AI Engineer role, agent engineering | @swyx |
| Simon Willison | Independent | LLM tooling, practical AI, MCP | @simonw |
| Jediah Katz | Cursor | Context engineering, dynamic discovery | @jediahkatz |
| Tobi Lütke | Shopify | AI-first org transformation | @tobi |
| Dex Horthy | HumanLayer | 12-Factor Agents, production patterns | @dexhorthy |
| Anthropic team | Anthropic | Claude, MCP, multi-agent research | @AnthropicAI |
| OpenAI team | OpenAI | Codex, deep research, reasoning | @OpenAI |
User adds voices via: /research --add-voice "Name, Affiliation, Focus, @handle"
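A minimal sketch of the --add-voice bookkeeping, assuming a `VAULT` variable and an `add_voice` helper (both illustrative, not part of the skill spec):

```shell
# Sketch of --add-voice bookkeeping; VAULT and add_voice are assumptions.
VAULT="${VAULT:-$(mktemp -d)/vault}"
VOICES="$VAULT/knowledge/research/voices.md"
mkdir -p "$(dirname "$VOICES")"

# Seed the registry table on first run
if [ ! -f "$VOICES" ]; then
  printf '| Person | Affiliation | Focus | X Handle |\n'  >  "$VOICES"
  printf '|--------|------------|-------|----------|\n'   >> "$VOICES"
fi

add_voice() {
  # $1 has the same shape as --add-voice: "Name, Affiliation, Focus, @handle"
  row="$(printf '%s' "$1" | awk -F', *' '{printf "| %s | %s | %s | %s |", $1, $2, $3, $4}')"
  # Append only if this exact row is not already present
  grep -qF "$row" "$VOICES" || printf '%s\n' "$row" >> "$VOICES"
}

add_voice "Yann LeCun, Meta, AI skeptic counterpoint, @ylecun"
add_voice "Yann LeCun, Meta, AI skeptic counterpoint, @ylecun"  # duplicate, skipped
```

The duplicate guard keeps re-running the command idempotent.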
Process:
1. Load tracked voices from {vault}/knowledge/research/voices.md; apply any --voices override.
2. Break the topic into 2-4 angles (not generic subtopics — think: paradigm shifts, tools, architecture patterns, org transformation).
3. Seed query templates: "{person_name} {topic_angle}" site:x.com and "{topic_angle} 2026 latest" site:x.com

Round 1: Voice tracking (highest signal). For each tracked voice relevant to the topic:
WebSearch: "{person_name} {topic}" site:x.com
This catches their latest takes, threads, and announcements.
Round 2: Topic scanning
WebSearch: "{topic_angle} site:x.com 2026"
WebSearch: "{topic_angle} new approach paradigm 2026"
This discovers new voices and conversations not yet in registry.
Round 3: Deep-dives (web) For high-signal X posts that reference blogs/repos/papers:
WebFetch: extract key points from linked blog/repo
WebSearch: "{specific_concept_from_X} explained 2026"
Round 4 (if --platforms includes youtube/reddit):
WebSearch: "{topic} site:youtube.com podcast 2026"
WebSearch: "{topic} site:reddit.com discussion 2026"
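Rounds 1 and 2 are mechanical enough to sketch. The topic, voices, and angles below are illustrative placeholders, not defaults of the skill:

```shell
# Sketch of rounds 1-2 query expansion; all values are placeholders.
TOPIC="agentic engineering"
QUERIES=""

# Round 1: one voice-tracking query per tracked voice
for person in "Andrej Karpathy" "Simon Willison"; do
  QUERIES="$QUERIES
WebSearch: \"$person $TOPIC\" site:x.com"
done

# Round 2: two topic-scanning queries per angle
for angle in "agent harness patterns" "context engineering"; do
  QUERIES="$QUERIES
WebSearch: \"$angle site:x.com 2026\"
WebSearch: \"$angle new approach paradigm 2026\""
done

printf '%s\n' "$QUERIES"
```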
If the topic search surfaces a YouTube watch link, podcast episode, or audio URL with direct verbatim signal (e.g. a key talk by a tracked voice), DO NOT extract quotes from same-cycle aggregator articles. Aggregators conflate quotes from different talks by the same person — citing them as primary source is a recurring failure mode.
Instead, invoke /transcribe-video (skill) before quote extraction:
bun ${CLAUDE_PLUGIN_ROOT}/scripts/transcribe-video.ts <URL> --out {vault}/knowledge/research/findings/{slug}/
This writes _transcript-verbatim.md to the findings folder. ALL subsequent finding files must cite quotes that grep verbatim against this file; aggregator articles inform structure (which topics matter) but never source quotes. Source-tag such findings as [primary — verbatim], not [paraphrase].
When the topic is technical and aggregator articles agree on framing, this step is optional. But when the user pushes back ("did they really say that?") or quotes are load-bearing for content extraction, transcribe-video is mandatory.
See [[skills/transcribe-video/SKILL.md]] for usage contract, two transcription paths (auto-subs vs whisper-cpp), and known-typo replacement table.
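The "quotes must grep into the transcript" rule can be enforced mechanically. A sketch under assumed paths, with placeholder transcript content and a hypothetical `tag_quote` helper:

```shell
# Sketch of the quote-verification rule; content and helper are placeholders.
DIR="$(mktemp -d)"
TRANSCRIPT="$DIR/_transcript-verbatim.md"
printf 'we should treat context as a scarce resource\n' > "$TRANSCRIPT"

tag_quote() {
  # Exact-substring match (-F) against the verbatim transcript decides the tag
  if grep -qF "$1" "$TRANSCRIPT"; then
    printf '[primary — verbatim]\n'
  else
    printf '[paraphrase]\n'
  fi
}

tag_quote "context as a scarce resource"
tag_quote "context is all you need"
```

Any quote that fails the check must be downgraded to [paraphrase] before it reaches a finding file.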
--depth quick: 5-8 WebSearch calls, focused on voice tracking + topic scanning.
--depth deep: spawn up to 3 subagents in parallel:
Each subagent writes to {vault}/knowledge/research/reports/research-findings-{slug}-{n}.md.

Report template:

# {Topic}: State of the Art — Week of YYYY-MM-DD
## Paradigm Shifts
- **{shift name}** — {one-line explanation} ([Person on X](url))
- ...
## Key Approaches & Patterns
- **{pattern name}** — {what it is, why it matters} ([Source](url))
- ...
## Tools & Infrastructure
- **{tool}** — {what's new} ([Source](url))
- ...
## Org Transformation Signals
- **{company/person}** — {what they did/said} ([Source](url))
- ...
## Key Voices & Current Focus
| Person | Affiliation | Current Focus | Follow |
|--------|------------|---------------|--------|
| ... | ... | ... | [@handle](url) |
*Add your own voices to this table as you discover them.*
## Open Questions
- {what remains unclear, contradictory, or worth watching}
---
*Sources: {n} searches across {platforms} | Voices tracked: {n}*
Save to: {vault}/knowledge/research/reports/YYYY-MM-DD-research-{slug}.md
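Deep mode's subagent fan-out can be approximated with shell background jobs; `search_angle`, the angle values, and the output paths below are stand-ins, not the skill's real mechanism:

```shell
# Sketch of deep-mode fan-out: up to 3 angle searches run in parallel.
# search_angle is a placeholder for a real subagent.
OUT="$(mktemp -d)"   # stands in for {vault}/knowledge/research/reports/
search_angle() { printf 'findings for: %s\n' "$1"; }

i=1
for angle in "paradigm shifts" "tooling" "org transformation"; do
  search_angle "$angle" > "$OUT/research-findings-demo-$i.md" &
  i=$((i+1))
done
wait   # block until all subagent jobs finish before synthesizing the report
```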
If --ingest flag is set:
Write each finding to {vault}/knowledge/research/findings/{slug}/:

---
source: "research-{date}"
topic: "{topic}"
url: "{source_url}"
voice: "{person_name}"
signal: {high|medium|low}
tags: [{tags}]
---
# {Finding Title}
{1-2 sentences: the finding, precise and factual}
## Implication
{Why this matters — one sentence}
## Related
- [[related-vault-note-if-exists]]
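A sketch of materializing one finding file in the layout above; every field value is a placeholder:

```shell
# Sketch of one ingested finding file; all values are placeholders.
DIR="$(mktemp -d)"   # stands in for {vault}/knowledge/research/findings/{slug}/
cat > "$DIR/example-finding.md" <<'EOF'
---
source: "research-2026-02-16"
topic: "context engineering"
url: "https://example.com/post"
voice: "Example Person"
signal: high
tags: [context, agents]
---
# Example finding title

Placeholder for the finding: 1-2 precise, factual sentences.

## Implication
Placeholder for why it matters, in one sentence.

## Related
- [[related-vault-note-if-exists]]
EOF
```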
| After research... | Use skill | Why |
|---|---|---|
| Reflect on findings | /think | Deep thinking about implications |
| Connect to existing knowledge | /connect {topic} {domain} | Find bridges between research and vault |
| Surface hidden patterns | /emerge | Let vault + new research reveal insights |
| Generate ideas from research | /ideas-gen | Cross-domain brainstorming |
| Write content from research | Use writing skills | Research as input for content |
# Monday morning: scan what happened last week
/research "AI engineering" --depth deep --platforms x,web --ingest
# Midweek: deep dive on something specific from Monday's scan
/research "autoresearch autonomous loops" --depth deep --platforms x,web,youtube
# Ad-hoc: check a specific person's latest thinking
/research "context engineering" --voices "Jediah Katz, Anthropic"
/research "AI agents state of the art"
/research "agentic engineering" --depth deep --platforms x,web,youtube
/research "MCP protocol ecosystem" --voices "Simon Willison, Anthropic" --ingest
/research "AI transformation enterprise" --platforms all --lang vi
/research --add-voice "Yann LeCun, Meta, AI skeptic counterpoint, @ylecun"
Follow skill-spec.md § 11. Append one line to {vault}/daily/skill-outcomes/research.log:
{date} | research | {action} | ~/work/brain-os-plugin | {vault}/knowledge/research/reports/{date}-research-{slug}.md | commit:{hash} | {result}
- action: quick, deep, or add-voice
- result: pass if the report was written with citations; partial if low signal, few sources, or some platforms failed; fail on search errors
- args="{topic}", score={sources_found}
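The log append can be sketched as a one-line printf; `VAULT`, the helper name, and all field values below are placeholders:

```shell
# Sketch of the outcome-log append; VAULT and field values are placeholders.
VAULT="$(mktemp -d)"
mkdir -p "$VAULT/daily/skill-outcomes"

log_outcome() {
  # $1=date $2=action $3=report-path $4=commit $5=result
  printf '%s | research | %s | ~/work/brain-os-plugin | %s | commit:%s | %s\n' \
    "$1" "$2" "$3" "$4" "$5" >> "$VAULT/daily/skill-outcomes/research.log"
}

log_outcome "2026-02-16" "deep" \
  "$VAULT/knowledge/research/reports/2026-02-16-research-ai-engineering.md" \
  "abc1234" "pass"
```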