# domain-intel
Use when the user says 'scan', 'collect intel', 'run scan', or when invoked by cron. Orchestrates the full domain intelligence pipeline: collect from sources, filter duplicates, analyze insights, detect convergence signals, store results. Primary cron target.
Install: `npx claudepluginhub n0rvyn/indie-toolkit --plugin domain-intel`

This skill uses the workspace's default tool permissions.
Pipeline orchestrator for domain-intel. Reads config, dispatches collection and analysis agents, applies 3-tier filtering, stores results, and detects cross-source convergence signals.
Uses sonnet because the 3-tier filter requires precise arithmetic (Jaccard similarity, weighted keyword scoring) and convergence signal detection requires topic clustering — haiku is unreliable for these.
Designed for automated cron execution — minimal output, no interactive prompts, fail-safe.
If the invocation prompt contains [cron]: set CRON_MODE = true.

Run `Bash(command="pwd")`.
Store the result as WD. All file paths in this skill are relative to WD — prefix every ./ path with {WD}/ when calling Read, Write, Glob, or Grep. Bash commands can use relative paths as-is.
Read {WD}/config.yaml
If the file doesn't exist, output `[domain-intel] Not initialized. Run /intel setup in this directory.` → stop.

Extract from config:

- domains[] — each with name
- sources.github — enabled flag, languages, min_stars
- sources.rss[] — list of {name, url}
- sources.official[] — list of {name, url, paths[]}
- sources.external[] — list of {name, path, pre_collect (optional)}
- sources.producthunt — enabled flag, client_id, client_secret, topics[]
- scan.max_items_per_source (default: 20)
- scan.significance_threshold (default: 2)
- scan.auto_digest (default: false)

Read {WD}/LENS.md if it exists:

- figures[] and companies[] from frontmatter
- lens_context

Get today's date and month:
date +%Y-%m-%d
date +%Y-%m
Ensure month directory exists:
mkdir -p ./insights/{YYYY-MM}
Read {WD}/state.yaml if it exists (for stats tracking).
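Taken together, the setup step expects a config shaped like this minimal sketch (keys follow the schema above; the domain, feed, and threshold values are purely illustrative):

```yaml
domains:
  - name: local-first software   # illustrative domain
sources:
  github:
    enabled: true
    languages: [rust, typescript]
    min_stars: 200
  rss:
    - name: Hacker News
      url: https://news.ycombinator.com/rss
  official: []
  external: []
  producthunt:
    enabled: false
scan:
  max_items_per_source: 20
  significance_threshold: 2
  auto_digest: false
```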
If sources.external[] is empty or not defined → skip to Step 2.
For each external source that has a pre_collect field:
Output `[domain-intel] Pre-collecting: {name} via {pre_collect}`, then invoke the command named in pre_collect (e.g., /youtube-scan).

In [cron] mode: append [cron] to the skill invocation.

If pre-collection fails, log `[domain-intel] Pre-collect failed for {name}: {reason}. Continuing without it.` → continue to next external source.

Dispatch the source-scanner agent with:

- sources.rss[].url (for source signal detection)
- scan.browser_fallback (default: false) — resolve the renderer script via `Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_rendered.py")` and pass the absolute path. If CLAUDE_PLUGIN_ROOT is empty, set browser_fallback to false and log a warning.
- The fetch script — resolve via `Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_url.py")` and pass the absolute path. This is always required — it replaces WebFetch for all page fetching (with explicit timeout control).
- The Product Hunt script — resolve via `Bash(command="echo ${CLAUDE_PLUGIN_ROOT}/scripts/fetch_producthunt.py")` and pass the absolute path.
- sources.producthunt (if present). Pass the full block: enabled, client_id, client_secret, topics[]. If the section is missing or enabled is false, omit this input.

Wait for completion. The agent returns:
items:
- url, title, source, snippet, metadata, collected_at
failed_sources:
- url, source_type, error
source_signals:
- type, value, reason
stats:
github: N, producthunt: N, rss: N, official: N, figure: N, company: N, failed: N, total: N
Save source_signals for merging in Step 6.5.
If total items == 0 AND sources.external[] is empty or not defined → output [domain-intel] Scan complete — no items collected. Check source configuration. → update state → stop
If total items == 0 AND sources.external[] is defined → log [domain-intel] No items from built-in sources. Proceeding to external import. → skip Steps 3-5 → jump to Step 5.5.
Apply filters sequentially. Track counts at each stage.
For each item, normalize the URL:
Strip tracking parameters: utm_*, ref=, source=.

Regex-escape the normalized URL before using it as a Grep pattern: replace . with \\., + with \\+, ? with \\?, [ with \\[, ] with \\].
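The normalization and escaping rules can be sketched in Python (stdlib only; the function names are illustrative, not part of the skill):

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    """Drop utm_*, ref, and source query parameters; keep everything else."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not (k.startswith("utm_") or k in ("ref", "source"))]
    return urlunsplit(parts._replace(query=urlencode(kept)))

def escape_for_grep(url: str) -> str:
    """Escape . + ? [ ] so the URL matches literally as a Grep pattern."""
    return re.sub(r"([.+?\[\]])", r"\\\1", url)
```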
Check if the normalized URL exists in recent insight files only (current month + previous month):
Grep(pattern="{escaped_url}", path="{WD}/insights/{YYYY-MM}/", output_mode="files_with_matches", head_limit=1)
If current day is within the first 7 days of the month, also check previous month (only if it exists):
Glob(pattern="{WD}/insights/{PREV-YYYY-MM}/*.md", head_limit=1)
If glob returns results:
Grep(pattern="{escaped_url}", path="{WD}/insights/{PREV-YYYY-MM}/", output_mode="files_with_matches", head_limit=1)
Remove items whose URL already exists. Track: after_url_dedup = N
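The two-month lookback window used above can be computed like this (a sketch; `dedup_months` is an illustrative name):

```python
from datetime import date

def dedup_months(today: date) -> list[str]:
    """Months whose insight folders to grep: the current month, plus the
    previous month during the first 7 days (catches cross-month duplicates)."""
    months = [today.strftime("%Y-%m")]
    if today.day <= 7:
        prev = (date(today.year - 1, 12, 1) if today.month == 1
                else date(today.year, today.month - 1, 1))
        months.append(prev.strftime("%Y-%m"))
    return months
```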
Get titles from recent insight files (current month + previous month if applicable):
Grep(pattern="^title:", path="{WD}/insights/{YYYY-MM}/", output_mode="content")
If within first 7 days and {WD}/insights/{PREV-YYYY-MM}/ exists (checked via Glob in Tier 1), also check previous month.
For each remaining item, compare its title against existing titles:
Jaccard similarity over title tokens: |intersection| / |union|. Remove duplicates. Track: after_title_dedup = N
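The Jaccard check might look like this (Python sketch; the stop-word set is an illustrative subset, and the dedup threshold is not fixed by this skill):

```python
import re

STOP_WORDS = {"a", "an", "the", "and", "of", "for", "in", "to", "with", "on"}

def title_tokens(title: str) -> set[str]:
    """Lowercased word tokens, minus stop words."""
    return {w for w in re.findall(r"[a-z0-9]+", title.lower()) if w not in STOP_WORDS}

def jaccard(a: str, b: str) -> float:
    """|intersection| / |union| over the two titles' token sets."""
    ta, tb = title_tokens(a), title_tokens(b)
    return len(ta & tb) / len(ta | tb) if ta or tb else 0.0
```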
For each remaining item, compute relevance score using domain names and LENS context:
score = 0
# Domain name matching — item title/snippet matches a configured domain name
For each domain in domains:
if domain.name appears in item.title OR item.snippet (case-insensitive):
score += 1
# LENS "What I Don't Care About" blacklist (if LENS.md exists)
For each anti_interest extracted from LENS.md body:
if anti_interest appears in item.title OR item.snippet (case-insensitive):
score -= 3
# Source-type baseline — figure, company, and producthunt items have inherent relevance:
# figure/company were explicitly requested via LENS.md;
# producthunt items were already topic-filtered by the API script
if item.source == "figure" OR item.source == "company" OR item.source == "producthunt":
score += 1
Drop items with score <= 0. Sort remaining by score descending.
Take top N items where N = min(max_items_per_source * number of enabled source types, 30).
Hard cap at 30 total items to stay within agent turn budgets.
Track: after_keyword = N
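The scoring pseudocode above translates directly (Python sketch; `relevance_score` and `item_cap` are illustrative names):

```python
def relevance_score(item: dict, domains: list[str], anti_interests: list[str]) -> int:
    """+1 per matched domain name, -3 per LENS anti-interest, +1 source baseline."""
    text = f"{item['title']} {item['snippet']}".lower()
    score = sum(1 for d in domains if d.lower() in text)
    score -= 3 * sum(1 for a in anti_interests if a.lower() in text)
    if item["source"] in ("figure", "company", "producthunt"):
        score += 1
    return score

def item_cap(max_items_per_source: int, enabled_source_types: int) -> int:
    """N = min(max_items_per_source * number of enabled source types, 30)."""
    return min(max_items_per_source * enabled_source_types, 30)
```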
Group filtered items by source type (github, producthunt, rss, official, figure, company).
For each non-empty group, dispatch one insight-analyzer agent with:
Dispatch all groups in parallel (multiple Agent tool calls in one message).
Wait for all to complete. If a single analyzer fails, log the failure and continue with the results from the others. Each successful analyzer returns:
insights:
- id, source, url, title, significance, tags, category, domain,
problem, technology, insight, difference, selection_reason
dropped:
- url, reason
Merge results from all analyzers. For each insight with significance >= significance_threshold:
Verify the ID doesn't collide with existing files. If it does, increment the sequence number.
Write insight file to {WD}/insights/{YYYY-MM}/{id}.md:
---
id: {id}
source: {source}
url: "{url}"
title: "{title}"
significance: {N}
tags: [{tags joined by comma}]
category: {category}
domain: {domain}
date: {YYYY-MM-DD}
read: false
---
# {title}
**Problem:** {problem}
**Technology:** {technology}
**Insight:** {insight}
**Difference:** {difference}
---
*Selection reason: {selection_reason}*
Track: stored = N
If sources.external[] is empty or not defined → skip to Step 6.
For each external source in sources.external[]:
Resolve ~ in path to an absolute path: `Bash(command="echo {path}")`. Glob {resolved_path}/*.md.

If no files match, output `[domain-intel] External source {name}: no files at {path}` → skip.

For each .md file:
a. Read and parse YAML frontmatter
b. Validate required fields: id, source, url, title, significance, date
c. Drop files whose significance is below scan.significance_threshold
d. Determine the target month from the file's date field (not the current scan month): {file_YYYY-MM}
e. Bash(command="mkdir -p {WD}/insights/{file_YYYY-MM}")
f. Copy file to {WD}/insights/{file_YYYY-MM}/{id}.md

Track: imported = N. Output `[domain-intel] Imported {N} external insights from {name}`.

Include imported insights in the pool for Step 6 (Convergence Signal Detection) and Step 6.5 (Lens Signal Collection).
Read all insights stored today (from Step 5 and Step 5.5 combined). Glob {WD}/insights/*/ for files with today's date prefix {YYYY-MM-DD}-*.
Group by normalized topic:
Two insights share a topic when their problem fields share 2+ non-stop-words (using the same stop word list as Tier 2).

For each topic that appears across 2+ different source types (e.g., github + rss):
Write a convergence signal file to {WD}/insights/{YYYY-MM}/{YYYY-MM-DD}-convergence.md:
---
id: {YYYY-MM-DD}-convergence
type: signal
date: {YYYY-MM-DD}
---
# Convergence Signals — {YYYY-MM-DD}
| Topic | Sources | Insight IDs | Summary |
|-------|---------|-------------|---------|
| {topic} | {source1}, {source2} | {id1}, {id2} | {1-sentence cross-source synthesis} |
If no convergence detected, skip this file. Track: convergence_signals = N
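The cross-source grouping can be sketched as (Python; stop-word set illustrative, field names per the insight schema above):

```python
import re
from itertools import combinations

STOP_WORDS = {"a", "an", "the", "and", "of", "for", "in", "to", "with", "on"}

def problem_tokens(problem: str) -> set[str]:
    """Lowercased word tokens of a problem field, minus stop words."""
    return {w for w in re.findall(r"[a-z0-9]+", problem.lower()) if w not in STOP_WORDS}

def convergence_pairs(insights: list[dict]):
    """Yield (id_a, id_b, shared_words) for insights from different source
    types whose problem fields share 2+ non-stop-words."""
    for a, b in combinations(insights, 2):
        if a["source"] == b["source"]:
            continue
        shared = problem_tokens(a["problem"]) & problem_tokens(b["problem"])
        if len(shared) >= 2:
            yield a["id"], b["id"], shared
```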
Skip this step if LENS.md does not exist.
Check today's stored insights for evolution signals — topics or entities that appear frequently but aren't reflected in LENS.md.
New interest detection: Extract all tags from today's insights with significance >= 4. If any tag appears 3+ times but is NOT mentioned in LENS.md "What I Care About" section → record as new-interest signal.
New figure detection: Scan problem, technology, insight, and difference fields across today's insights. Look for capitalized multi-word names that appear to reference a person (e.g., "Andrej Karpathy", "Tim Cook"). Exclude known technical terms (framework names, language names, domain names). If a person name appears in 2+ insights and is NOT in LENS.md figures[] frontmatter → record as new-figure signal. This is best-effort detection; false negatives are acceptable.
New company detection: Same field scan as above. Look for capitalized names that appear to reference an organization or company (e.g., "Mistral AI", "Hugging Face"). If an organization name appears in 2+ insights and is NOT in LENS.md companies[] frontmatter → record as new-company signal. Best-effort; false negatives acceptable.
New RSS detection: Group today's stored insights by URL domain (extract hostname from url field). If 3+ insights with significance >= 4 share the same URL domain, and that domain is NOT in sources.rss[].url or sources.official[].url → record as suggest-rss signal with value = the domain URL.
New domain detection: Group today's insights by primary tag (first tag). If a tag appears on 3+ insights but does NOT match any domains[].name (case-insensitive) → record as suggest-domain signal.
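The suggest-rss rule, for instance, reduces to hostname counting (Python sketch; `suggest_rss_domains` is an illustrative name):

```python
from collections import Counter
from urllib.parse import urlsplit

def suggest_rss_domains(insights: list[dict], configured_urls: list[str],
                        min_count: int = 3, min_sig: int = 4) -> list[str]:
    """Hostnames shared by 3+ significance>=4 insights that aren't already
    in sources.rss[] or sources.official[]."""
    known = {urlsplit(u).netloc for u in configured_urls}
    counts = Counter(urlsplit(i["url"]).netloc
                     for i in insights if i["significance"] >= min_sig)
    return [host for host, n in counts.items()
            if n >= min_count and host not in known]
```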
Merge source-scanner signals: Append any source_signals returned by the source-scanner in Step 2 (suggest-rss, suggest-official-path).
Append all signals to {WD}/.lens-signals.yaml:
- date: YYYY-MM-DD
type: new-interest # or new-figure, new-company, suggest-rss, suggest-official-path, suggest-domain
value: "{tag, name, or URL}"
evidence: [insight IDs or source description]
If {WD}/.lens-signals.yaml doesn't exist, create it. If it does, append to the existing list.
Track: lens_signals = N
Write {WD}/state.yaml:
last_scan: "{YYYY-MM-DD}T{HH:MM:SS}"
total_insights: {previous_total + stored + imported}
total_scans: {previous_scans + 1}
last_scan_stats:
collected: {raw items from scanner}
after_url_dedup: {N}
after_title_dedup: {N}
after_keyword: {N}
analyzed: {sent to analyzers}
stored: {above threshold}
imported: {N} # external insights imported
convergence_signals: {N}
lens_signals: {N}
failed_sources: {N}
Output a concise summary:
[domain-intel] Scan complete — {YYYY-MM-DD}
Collected: {N} → Filtered: {N} → Analyzed: {N} → Stored: {N}
Convergence signals: {N}
By domain: {domain1}: {N}, {domain2}: {N}
Failed sources: {N}
If failed_sources > 0, list them.
If imported > 0, include in summary:
External imports: {source1}: {N}, {source2}: {N}
If lens_signals > 0, append:
LENS evolution signals: {N} (run /intel evolve to review)
If scan.auto_digest is false or not defined → skip.
If stored + imported == 0 → skip (nothing new to digest).
Invoke /digest for today's date (daily mode).
In [cron] mode: append [cron] to the invocation.

On success, output `[domain-intel] Auto-digest generated. See digests/ directory.`

On failure, output `[domain-intel] Auto-digest failed: {reason}.` Do not fail the scan.