From domain-intel
Use when the user says 'research', 'deep research', 'research topic', or wants comprehensive internet-wide investigation of a specific topic. Supports full deep research, incremental updates, and evolving focus profiles. Entry point for targeted topic intelligence.
npx claudepluginhub n0rvyn/indie-toolkit --plugin domain-intel
This skill uses the workspace's default tool permissions.
Targeted deep research on a specific topic. Searches as many internet sources as possible, produces comprehensive reports, and supports incremental updates with an evolving focus profile (FOCUS.md).
Uses sonnet because the 3-tier filter requires precise arithmetic (Jaccard similarity, weighted scoring) and the multi-phase pipeline requires careful orchestration.
Bash(command="pwd")
Store the result as WD. All file paths in this skill are relative to WD — prefix every ./ path with {WD}/ when calling Read, Write, Glob, or Grep. Bash commands can use relative paths as-is.
Extract from user input:
topic — the research subject (e.g., "OpenCLaw"). If no topic provided, go to Action: help.
subcommand — optional: refine or update. If absent, no subcommand.
Slugify the topic for directory name: slug
Set RESEARCH_DIR = {WD}/Research/{slug}
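The slugification step can be sketched as follows. The helper name and exact rules (lowercase, collapse non-alphanumeric runs to hyphens) are assumptions — the skill doesn't pin down the algorithm:

```python
import re

def slugify(topic: str) -> str:
    """Lowercase the topic, collapse non-alphanumeric runs to hyphens, trim."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower())
    return slug.strip("-")

# e.g. slugify("OpenCLaw") -> "openclaw"
```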
Check if {RESEARCH_DIR}/FOCUS.md exists → route:
| State | Subcommand | Action |
|---|---|---|
| No profile | (none) | → Action: init + full research |
| Has profile | (none) | → Action: status |
| Has profile | refine | → Action: refine |
| Has profile | update | → Action: update |
| No profile | refine/update | → Error: [research] No research profile for "{topic}". Run /research {topic} first. → stop |
| No topic | (none) | → Action: help |
Output directly:
[research] Deep topic research with evolving focus
Usage:
/research <topic> — Start new research (or show status if exists)
/research <topic> refine — Update research focus based on your interests
/research <topic> update — Incremental scan with evolved focus
Concepts:
FOCUS.md — your research profile for a topic (evolves over time)
Findings — collected and analyzed items in ./Research/<topic>/findings/
Reports — comprehensive research reports in ./Research/<topic>/reports/
Then list active research profiles:
Glob(pattern="{WD}/Research/*/FOCUS.md")
For each found FOCUS.md:
Read topic and created, then output: {topic} (created {date}) — ./Research/{slug}/
If no profiles found: No active research profiles in this directory.
→ stop
Show overview of existing research profile.
Read {RESEARCH_DIR}/FOCUS.md:
topic, created, aliases, key_entities
Read {RESEARCH_DIR}/state.yaml:
last_scan, total_findings, total_scans
Count findings:
Grep(pattern="read: false", path="{RESEARCH_DIR}/findings/", output_mode="count")
Glob(pattern="{RESEARCH_DIR}/findings/**/*.md")
Find latest report:
Glob(pattern="{RESEARCH_DIR}/reports/*.md")
Take the most recent by filename.
Check pending signals:
Read {RESEARCH_DIR}/.focus-signals.yaml if it exists, count entries.
Output:
[research] {topic}
Created: {date}
Aliases: {comma-separated aliases}
Last scan: {last_scan}
Total scans: {total_scans}
Findings: {total} ({unread} unread)
Key entities: {N} people, {N} orgs, {N} projects, {N} papers
Latest report: {path} ({date})
Angles of Interest:
{list from FOCUS.md}
Active Questions:
{list from FOCUS.md}
If signals pending: Evolution signals: {N} pending — run /research {topic} refine to review
→ stop
Create directories:
Bash(command="mkdir -p \"./Research/{slug}/findings\" \"./Research/{slug}/reports\"")
Ask for core research question:
AskUserQuestion: "What's your core question about {topic}? What are you trying to understand?"
Ask for initial angles:
AskUserQuestion: "What specific angles or dimensions do you want to explore? (List the aspects you care about most, in priority order)"
Auto-discover aliases:
WebSearch(query="{topic}" also known as OR abbreviation OR alias OR alternative name)
Extract candidate aliases (alternative names, abbreviations, translations). Present discovered aliases via AskUserQuestion for user confirmation (multiSelect). Add confirmed aliases to the list. Always include the original topic name.
Read templates:
Read ${CLAUDE_PLUGIN_ROOT}/templates/default-focus.md
Read ${CLAUDE_PLUGIN_ROOT}/templates/default-research-config.yaml
Generate {RESEARCH_DIR}/FOCUS.md from template:
topic with display name, created with today's date, aliases with confirmed aliases.
Copy default config:
Write {RESEARCH_DIR}/config.yaml from template (no modifications needed — all sources enabled by default).
Initialize state:
Write {RESEARCH_DIR}/state.yaml:
last_scan: "never"
total_findings: 0
total_scans: 0
seen_urls: []
Create empty timeline:
Write {RESEARCH_DIR}/timeline.md:
# {topic} — Timeline
*Auto-generated and updated by /research. Newest entries first.*
---
Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_url.py")
Store as fetch_script_path.
Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_rendered.py")
Store as fallback_script_path.
Read {RESEARCH_DIR}/config.yaml to get browser_fallback setting.
Read {RESEARCH_DIR}/FOCUS.md:
aliases
Dispatch the research-scanner agent with:
topic field, "broad"
Wait for completion. The agent returns:
items:
- url, title, source, snippet, metadata, collected_at
failed_sources:
- url, source_type, error
stats:
search: N, github: N, academic: N, youtube: N, community: N, media: N, official: N, institution: N, failed: N, total: N
If total items == 0 → output [research] No items collected for "{topic}". Check your topic name and try again. → stop
Output progress: [research] Collected {total} items from {N} sources. Filtering...
Apply filters sequentially. Track counts at each stage.
Skip Tier 1 on first run (no prior findings exist). Set after_url_dedup = total items and proceed to Tier 2.
For subsequent runs (update action):
For each item, normalize the URL:
Strip tracking parameters: utm_*, ref=, source=
Regex-escape the normalized URL: replace . with \\., + with \\+, ? with \\?, [ with \\[, ] with \\].
Check if the normalized URL exists in existing findings:
Grep(pattern="{escaped_url}", path="{RESEARCH_DIR}/findings/", output_mode="files_with_matches", head_limit=1)
Also check against seen_urls from state.yaml (catches items that were previously collected but didn't pass analysis threshold).
Remove items whose URL already exists. Track: after_url_dedup = N
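The normalization and escaping above can be sketched like this. Dropping the URL fragment along with the tracking parameters is an assumption, as is escaping only the five metacharacters the skill lists:

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    """Drop utm_*, ref, and source query params (plus the fragment)."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if not k.startswith("utm_") and k not in {"ref", "source"}]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), ""))

def escape_for_grep(url: str) -> str:
    """Escape only the metacharacters the skill calls out: . + ? [ ]"""
    return re.sub(r"([.+?\[\]])", r"\\\1", url)
```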
Get titles from existing findings:
Grep(pattern="^title:", path="{RESEARCH_DIR}/findings/", output_mode="content")
For each remaining item, compare its title against existing titles:
|intersection| / |union|
Remove duplicates. Track: after_title_dedup = N
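The title comparison is plain token-set Jaccard similarity; a minimal sketch (the duplicate threshold, e.g. 0.8, is not fixed by this document):

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two titles (case-insensitive)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not (ta or tb):
        return 1.0  # two empty titles count as identical
    return len(ta & tb) / len(ta | tb)
```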
For each remaining item, compute relevance score:
score = 1 # Baseline: all research-scanner items were query-targeted for this topic
# Topic/alias matching
For each alias in FOCUS.md aliases[]:
if alias appears in item.title OR item.snippet (case-insensitive):
score += 1
# Angle of Interest matching
For each angle in FOCUS.md "Angles of Interest":
if any keyword from angle appears in item.title OR item.snippet (case-insensitive):
score += 1
# Active Question matching (higher weight)
For each question in FOCUS.md "Active Questions":
if any keyword from question appears in item.title OR item.snippet (case-insensitive):
score += 2
# De-prioritized blacklist
For each deprioritized in FOCUS.md "De-prioritized":
if deprioritized appears in item.title OR item.snippet (case-insensitive):
score -= 3
Drop items with score <= 0. Sort remaining by score descending.
Take top 50 items. Hard cap at 50 to stay within agent turn budgets while allowing deeper research than /scan's 30.
Track: after_relevance = N
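The scoring rules above, as runnable Python. The dict shapes and the naive whitespace split used for "keywords" are illustrative assumptions:

```python
def relevance_score(item: dict, focus: dict) -> int:
    """Score one item against the FOCUS.md profile per the rules above."""
    text = f"{item['title']} {item['snippet']}".lower()
    score = 1  # baseline: every scanner item was query-targeted already
    score += sum(1 for a in focus["aliases"] if a.lower() in text)
    score += sum(1 for angle in focus["angles"]          # +1 per matching angle
                 if any(w.lower() in text for w in angle.split()))
    score += sum(2 for q in focus["questions"]           # +2 per matching question
                 if any(w.lower() in text for w in q.split()))
    score -= sum(3 for d in focus["deprioritized"] if d.lower() in text)
    return score

def filter_and_rank(items: list, focus: dict, cap: int = 50) -> list:
    """Drop non-positive scores, sort descending, take the top `cap`."""
    kept = [(relevance_score(i, focus), i) for i in items]
    kept = sorted((p for p in kept if p[0] > 0), key=lambda p: p[0], reverse=True)
    return [i for _, i in kept][:cap]
```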
Output progress: [research] Filtered: {after_url_dedup} → {after_title_dedup} → {after_relevance} items. Analyzing...
Get today's date:
Bash(command="date +%Y-%m-%d")
Store as today. Also get month: Bash(command="date +%Y-%m")
Ensure findings month directory:
Bash(command="mkdir -p \"./Research/{slug}/findings/{YYYY-MM}\"")
Map research source categories to insight-analyzer source types:
| Research source | Analyzer source_type |
|---|---|
| search | web |
| github | github |
| academic | academic |
| youtube | youtube |
| community | community |
| media | web |
| official | web |
| institution | web |
| figure | figure |
Group filtered items by their mapped source_type.
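The mapping table and the grouping step, sketched as a dict. Falling back to web for an unknown category is an assumption, not stated by the table:

```python
from collections import defaultdict

SOURCE_TYPE_MAP = {
    "search": "web", "media": "web", "official": "web", "institution": "web",
    "github": "github", "academic": "academic", "youtube": "youtube",
    "community": "community", "figure": "figure",
}

def group_by_source_type(items: list) -> dict:
    """Bucket filtered items by the analyzer source_type they map to."""
    groups = defaultdict(list)
    for item in items:
        groups[SOURCE_TYPE_MAP.get(item["source"], "web")].append(item)
    return dict(groups)
```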
Read FOCUS.md body and remap section names to LENS.md format so insight-analyzer's LENS-aware screening activates correctly:
## Angles of Interest → ## What I Care About
## Active Questions → ## Current Questions
## De-prioritized → ## What I Don't Care About
## Core Question → prepend to ## What I Care About as context
Store the remapped text as focus_context.
For each non-empty group, dispatch one insight-analyzer agent with:
[{name: "{topic}"}] (research uses topic as the primary domain)
Dispatch all groups in parallel (multiple Agent tool calls in one message).
Wait for all to complete. Merge results. Each analyzer returns:
insights:
- id, source, url, title, significance, tags, category, domain,
problem, technology, insight, difference, selection_reason
dropped:
- url, reason
For each insight with significance >= significance_threshold:
Verify the ID doesn't collide with existing files. If collision, increment sequence number.
Write finding file to {RESEARCH_DIR}/findings/{YYYY-MM}/{id}.md:
---
id: {id}
source: {source}
url: "{url}"
title: "{title}"
significance: {N}
tags: [{tags joined by comma}]
category: {category}
domain: {topic}
date: {YYYY-MM-DD}
read: false
---
# {title}
**Problem:** {problem}
**Technology:** {technology}
**Insight:** {insight}
**Difference:** {difference}
---
*Selection reason: {selection_reason}*
Track: stored = N
Add stored URLs to the seen_urls list for state.yaml.
Output progress: [research] Stored {stored} findings. Running depth pass...
Skip this step if fewer than 5 findings were stored in Step 8.
Extract key entities from stored findings:
problem, technology, insight, difference fields
Filter to high-signal entities: mentioned in 2+ findings OR found in a finding with significance >= 4.
For each high-signal entity (budget: max 10 WebSearch calls + 5 fetch calls total):
For people:
WebSearch(query="{entity_name}" "{topic}" opinion OR perspective OR position OR interview)
WebSearch(query="{entity_name}" blog OR talk OR keynote about "{topic}")
For orgs/projects:
WebSearch(query="{entity_name}" "{topic}" announcement OR analysis OR report)
If a URL was discovered in findings, fetch with:
Bash(command="python3 \"{fetch_script_path}\" \"{url}\" --timeout 30")
Collect second-pass items. Apply Tier 1 + Tier 2 filtering against existing findings (URL + title dedup).
If second-pass items remain after filtering:
Analyze them with source_type figure for people, web for orgs/projects. Update seen_urls and the stored count.
Output progress: [research] Depth pass: found {N} additional findings from {M} key entities.
Collect all stored findings (first pass + second pass).
Read them from files (use ** to capture all months):
Glob(pattern="{RESEARCH_DIR}/findings/**/*.md")
Read all matching files.
Dispatch the research-synthesizer agent with:
"comprehensive"
Wait for completion. The agent returns the structured report data (overview, findings_by_category, entity_graph, opinion_spectrum, timeline, information_gaps, suggested_next_steps).
Update FOCUS.md key_entities with discovered entities from synthesizer's entity_graph:
key_entities in frontmatter: merge new people, orgs, projects, papers
Write timeline:
Read existing {RESEARCH_DIR}/timeline.md.
Prepend new timeline entries from synthesizer output (newest first, after the header).
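Prepending after the header can be done by splitting on the --- separator written at init; a sketch, assuming the file keeps exactly that header shape:

```python
def prepend_timeline(path: str, new_entries: list) -> None:
    """Insert new entries immediately after the '---' header separator."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    head, sep, tail = text.partition("\n---\n")
    with open(path, "w", encoding="utf-8") as f:
        # header, separator, new entries (newest first), then existing entries
        f.write(head + sep + "\n".join(new_entries) + "\n" + tail)
```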
Write report to {RESEARCH_DIR}/reports/{YYYY-MM-DD}-report.md:
---
date: {YYYY-MM-DD}
topic: "{topic}"
finding_count: {N}
mode: comprehensive
---
# Research Report — {topic}
*Generated: {YYYY-MM-DD} | Findings: {N} | Sources: {source_count}*
## Overview
{overview from synthesizer}
## Key Findings
{For each category in findings_by_category:}
### {category}
{For each finding: significance badge + title + summary}
## Entity Graph
### Key People
| Name | Role/Affiliation | Referenced In |
|------|-----------------|---------------|
{people entries}
### Organizations
| Name | Type | Referenced In |
|------|------|---------------|
{org entries}
### Projects
| Name | Description | URL | Referenced In |
|------|-------------|-----|---------------|
{project entries}
### Papers
| Title | Authors | Year | URL |
|-------|---------|------|-----|
{paper entries}
## Opinion Spectrum
### Supportive
{supportive positions with evidence}
### Neutral
{neutral positions with evidence}
### Critical
{critical positions with evidence}
## Timeline
{timeline entries, newest first}
## Information Gaps
{list of what couldn't be found or verified}
## Suggested Next Steps
{specific actions for deeper research}
---
*Generated by /research — domain-intel*
Update {RESEARCH_DIR}/state.yaml:
last_scan: "{YYYY-MM-DD}T{HH:MM:SS}"
total_findings: {stored count}
total_scans: 1
seen_urls:
- {all collected URLs}
last_scan_stats:
collected: {raw items from scanner}
after_url_dedup: {N}
after_title_dedup: {N}
after_relevance: {N}
analyzed: {sent to analyzers}
stored: {above threshold}
depth_pass_findings: {N}
failed_sources: {N}
Output summary:
[research] Research complete — {topic}
Collected: {N} → Filtered: {N} → Analyzed: {N} → Stored: {N}
Depth pass: {N} additional findings from {M} entities
Key entities: {N} people, {N} orgs, {N} projects, {N} papers
Report: {report_path}
Failed sources: {N}
Next steps:
Read the full report: {report_path}
Refine your focus: /research {topic} refine
Run incremental update: /research {topic} update
→ stop
Interactive FOCUS.md update based on user's evolving interests.
{RESEARCH_DIR}/FOCUS.md (full content)
Glob(pattern="{RESEARCH_DIR}/reports/*.md")
Read the most recent by filename.
{RESEARCH_DIR}/.focus-signals.yaml if it exists.
Output current focus summary:
[research] Current focus for "{topic}"
Core Question:
{from FOCUS.md}
Angles of Interest:
{numbered list from FOCUS.md}
Active Questions:
{list from FOCUS.md, or "(none)"}
De-prioritized:
{list from FOCUS.md, or "(none)"}
Key Entities:
People: {names}
Orgs: {names}
Projects: {names}
If .focus-signals.yaml has entries:
Pending evolution signals: {N}
Then prompt the user:
What aspects interest you more? What do you want to explore further, or stop tracking? Express your thoughts naturally.
After user responds with natural language feedback:
Dispatch the focus-evolver agent with:
Wait for completion. The agent returns:
proposed_changes:
- section, action, current (for remove/reword), proposed, reason
summary: "..."
For each proposed change, present using AskUserQuestion:
For approved changes: replace the current text with the proposed text in FOCUS.md.
Clear processed signals from .focus-signals.yaml (remove entries that were presented, whether approved or skipped).
Output:
[research] Focus updated for "{topic}"
Applied: {N} changes
Skipped: {N}
Updated focus:
Angles: {updated list}
Questions: {updated list}
De-prioritized: {updated list}
Run /research {topic} update to scan with your refined focus.
→ stop
Incremental scan based on evolved FOCUS.
{RESEARCH_DIR}/FOCUS.md:
topic, aliases, key_entities
{RESEARCH_DIR}/config.yaml
{RESEARCH_DIR}/state.yaml:
seen_urls, total_findings, total_scans
Glob(pattern="{RESEARCH_DIR}/reports/*.md")
Same as Step 4 in full research:
Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_url.py")
Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_rendered.py")
Weight angles of interest by position (first = highest weight):
This weighting shapes the angles input passed to the scanner — include weight hints so the scanner allocates its query budget accordingly.
Dispatch research-scanner with:
"targeted"
Output progress: [research] Incremental scan for "{topic}"...
Apply 3-tier filtering (same as Step 6 in full research), with one addition:
Tier 1 enhancement: In addition to Grep against existing findings, also check each normalized URL against seen_urls from state.yaml. This catches items that were previously collected but didn't pass analysis threshold.
Same as Step 7 in full research: group by source_type, dispatch insight-analyzer in parallel, store findings.
Same as Step 8.5 in full research: extract entities from new findings, do targeted second pass. Skip if fewer than 3 new findings stored (lower threshold for incremental since we have existing context).
Read previous report (latest from Step 3u).
Dispatch research-synthesizer with:
"incremental"
The agent returns:
new_findings_summary: "..."
connections_to_previous: [{new_finding_id, previous_finding_id, relationship}]
entity_updates: {new_people: [], new_orgs: [], ...}
focus_signals: [{type, value, evidence}]
updated_timeline_entries: [{date, event, source_id}]
Update FOCUS.md key_entities: merge new entities from entity_updates.
Append timeline: Prepend new entries to {RESEARCH_DIR}/timeline.md (after header, before existing entries).
Write incremental report to {RESEARCH_DIR}/reports/{YYYY-MM-DD}-update.md:
---
date: {YYYY-MM-DD}
topic: "{topic}"
finding_count: {N}
mode: incremental
---
# Research Update — {topic}
*Generated: {YYYY-MM-DD} | New findings: {N}*
## New Findings Summary
{new_findings_summary}
## Connections to Previous Research
{For each connection: new finding → previous finding, relationship}
## New Entities Discovered
{people, orgs, projects added}
## New Timeline Entries
{entries}
---
*Generated by /research update — domain-intel*
Collect focus signals: Append focus_signals from synthesizer to {RESEARCH_DIR}/.focus-signals.yaml.
Update state.yaml:
seen_urls
total_findings and total_scans
last_scan and last_scan_stats
Output summary:
[research] Update complete — {topic}
New findings: {N}
Connections to previous: {N}
New entities: {N}
Report: {report_path}
If focus_signals > 0:
Focus signals: {N} — run /research {topic} refine to review
→ stop