Orchestrates OSINT investigations: preflight checks search tools and plugins, delegates research to agents, enforces user approval gates, archives findings to Obsidian vaults or directories.
npx claudepluginhub buriedsignals/skills --plugin spotlight

This skill uses the workspace's default tool permissions.
You are now orchestrating an OSINT investigation using Spotlight.
Share bugs, ideas, or general feedback.
This skill instructs. You — the host session — execute. You call Agent(), read files, evaluate criteria, and manage gates. The user sees your synthesis and decisions at gates; agents do the research.
Two absolute rules:
Run these checks in order. Stop at the first failure.
Read .spotlight-config.json in the working directory. If it exists and contains valid search_library + vault_path fields, update last_used to the current timestamp and skip to step 5 (project setup).
command -v firecrawl
command -v exa
command -v tavily
First found wins. If none found:
"No search library detected. Spotlight needs one to search and scrape. Install one of: `firecrawl`, `exa`, or `tavily`."
STOP. Do not proceed without a search library.
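The first-found-wins check can be folded into a small helper; `detect_search_library` is a hypothetical name, and the candidate order mirrors the three checks above:

```shell
# Return the first installed search library, in priority order.
detect_search_library() {
  for lib in firecrawl exa tavily; do
    if command -v "$lib" >/dev/null 2>&1; then
      echo "$lib"
      return 0
    fi
  done
  return 1  # none found: caller must stop
}

if SEARCH_LIBRARY="$(detect_search_library)"; then
  echo "search_library=$SEARCH_LIBRARY"
else
  echo "No search library detected." >&2
fi
```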
Check if the /osint skill is available (it should appear in the host session's loaded skills). If not:
"Spotlight requires the OSINT toolkit for agent investigation techniques. Install with:
/plugin install osint@buriedsignals"
STOP.
Also confirm /social-media-intelligence is available — it ships with the osint@buriedsignals plugin. If the OSINT plugin is installed, this skill is already present.
No user action required. This step establishes what capabilities your agents have access to before you spawn them.
The following skills are pre-loaded by agents (via their frontmatter) as part of the installed plugins:
| Skill | Plugin | Agent(s) | Purpose |
|---|---|---|---|
| spotlight:web-archiving | spotlight | investigator, fact-checker | Archive all evidence before citing |
| spotlight:content-access | spotlight | investigator, fact-checker | Work through paywall hierarchy before marking sources inaccessible |
| osint:social-media-intelligence | osint | investigator | Account authenticity, coordination detection, narrative tracking |
You do not need to check for these individually. If both plugins are installed, your agents have full access. When building spawn prompts, explicitly remind agents these are available and expected to be used.
Ask the user:
"Where should findings be archived when the investigation completes? (a) Obsidian vault — enter path (b) Local directory (defaults to `./vault/`)"
If the user provides a path, check for .obsidian/ inside it to detect whether it's an Obsidian vault. Set vault_type to "obsidian" or "directory" accordingly.
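The vault check reduces to one test; `detect_vault_type` is a hypothetical helper name, and the `.obsidian/` marker is the detection rule stated above:

```shell
# Classify a destination path: Obsidian vaults carry a .obsidian/ directory.
detect_vault_type() {
  if [ -d "$1/.obsidian" ]; then
    echo "obsidian"
  else
    echo "directory"
  fi
}
```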
Derive a project slug from the user's lead (lowercase, hyphens, no spaces). Create:
cases/{project}/
cases/{project}/data/
cases/{project}/research/
If cases/{project}/ already exists, prompt:
"An investigation named `{project}` already exists. Resume the existing investigation, or start fresh?"
If resume: read existing state files and determine where the pipeline left off. If fresh: back up the existing directory to cases/{project}-{timestamp}/ and create a new one.
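The slug rule (lowercase, hyphens, no spaces) and the directory scaffold might look like this; `slugify` is a hypothetical helper, and the sample lead is illustrative:

```shell
# Lowercase, collapse anything non-alphanumeric to a hyphen, trim the edges.
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/^-*//; s/-*$//'
}

project="$(slugify "Acme Corp shell companies")"
mkdir -p "cases/$project/data" "cases/$project/research"
```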
Scan cases/ for directories that do NOT contain summary.md. If any are found:
"Note: {N} investigation(s) in progress without a completed summary: {names}. Continuing with `{project}`."
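The scan is a single loop over `cases/` (a sketch, assuming the directory layout above):

```shell
# An investigation is "in progress" if its directory lacks summary.md.
in_progress=""
for d in cases/*/; do
  [ -d "$d" ] || continue
  [ -f "${d}summary.md" ] || in_progress="$in_progress $(basename "$d")"
done
echo "in progress:${in_progress:- none}"
```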
Write .spotlight-config.json:
{
"search_library": "<detected library>",
"vault_path": "<user-provided path or ./vault/>",
"vault_type": "obsidian | directory",
"cases_root": "cases/",
"integrations": {
"osint_navigator": false,
"cojournalist": false
},
"created_at": "<ISO timestamp>",
"last_used": "<ISO timestamp>",
"active_project": "<project slug>"
}
Check for optional API integrations. None are required — investigations work without them.
Step 1 — Detect env vars:
# OSINT Navigator
test -n "$OSINT_NAV_API_KEY" && echo "osint_navigator=true" || echo "osint_navigator=false"
# coJournalist monitoring
test -n "$COJOURNALIST_KEY" && echo "cojournalist=true" || echo "cojournalist=false"
Step 2 — Verify connectivity and version via OpenAPI spec:
For each integration where the env var is set, fetch the OpenAPI spec to confirm the API is reachable and extract the version:
# OSINT Navigator — verify API is live
curl -s -H "Authorization: Bearer $OSINT_NAV_API_KEY" \
"https://navigator.indicator.media/api/openapi.json" | python3 -c "
import sys, json
spec = json.load(sys.stdin)
print(f'navigator: {spec[\"info\"][\"title\"]} v{spec[\"info\"][\"version\"]}')"
# coJournalist — verify API is live
curl -s -H "Authorization: Bearer $COJOURNALIST_KEY" \
"https://www.cojournalist.ai/api/openapi.json" | python3 -c "
import sys, json
spec = json.load(sys.stdin)
print(f'cojournalist: {spec[\"info\"][\"title\"]} v{spec[\"info\"][\"version\"]}')"
If a spec fetch fails (non-200 or parse error), mark the integration as "degraded" in config and warn:
"Warning: $OSINT_NAV_API_KEY is set but Navigator API did not respond. Integration marked as degraded."
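The three resulting states (unset key, live spec, failed fetch) can be folded into one helper; `check_integration` is a hypothetical name, and the verdict strings match the config semantics above:

```shell
# disabled: no key set; active: spec fetched and parsed; degraded: fetch or parse failed.
check_integration() {
  key="$1"; spec_url="$2"
  if [ -z "$key" ]; then
    echo "disabled"
    return
  fi
  if curl -sf -H "Authorization: Bearer $key" "$spec_url" \
      | python3 -c 'import sys, json; json.load(sys.stdin)' 2>/dev/null; then
    echo "active"
  else
    echo "degraded"
  fi
}
```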
Step 3 — Write to config:
Add the results to .spotlight-config.json as an integrations block:
{
"integrations": {
"osint_navigator": true,
"cojournalist": false
}
}
Report to user:
"Integrations: OSINT Navigator v1.0.0 (active), coJournalist monitoring (not configured)"
If resuming an existing project AND integrations.cojournalist is true AND cases/{project}/data/monitoring.json exists:
curl -s -H "Authorization: Bearer $COJOURNALIST_KEY" \
"https://www.cojournalist.ai/api/v1/units?topic=spotlight:{project}"
If new units are found, present a monitoring briefing:
"Monitoring check — your [scout type] Scout on [target] has returned N new results since your last cycle:
- [date] Summary
- [date] Summary
These may be relevant to your investigation. Want to review before starting the next cycle?"
Wait for user response before proceeding.
If no data/monitoring.json exists or no new units, skip silently.
This is a conversation between you and the user. Do NOT spawn agents.
If the lead includes a URL, scrape it first:
{search_library} scrape '<URL>' -o cases/{project}/research/lead-source.md
Read the scraped content to understand the source material.
Restate the lead in one sentence.
Ask 1-3 clarifying questions if scope, angle, or priority is unclear. Keep it tight — the investigator agent handles planning, not you.
Summarize the agreed direction in a few sentences.
Gate: user approves the brief direction.
Write the approved direction to cases/{project}/brief-directions.txt.
After brief approval, spawn the investigator in PLANNING mode:
Agent(
subagent_type: "investigator",
prompt: "MODE: PLANNING\nPROJECT: {project}\nSEARCH_LIBRARY: {search_library}\nVAULT_PATH: {vault_path or 'none'}\nINTEGRATIONS: osint_navigator={config.integrations.osint_navigator}, cojournalist={config.integrations.cojournalist}\nSKILLS: web-archiving (archive all evidence before citing), content-access (paywalled sources — work through access hierarchy before marking inaccessible), social-media-intelligence (load when investigation touches social media accounts, coordination, or narrative spread)\n\nApproved brief directions:\n{directions}\n\nYou have OSINT Navigator access — use $OSINT_NAV_API_KEY for tool discovery.\nYou may recommend monitoring targets in your methodology.\nIf the investigation involves social media, plan to use Skill(osint:social-media-intelligence) for account authenticity and coordination detection.\n\nWrite methodology to cases/{project}/data/methodology.json.\nDo NOT execute the investigation.",
model: "opus",
run_in_background: true
)
When the agent completes:
cases/{project}/data/methodology.json

With approved methodology, begin the execution loop. No user involvement between cycles — decide autonomously.
CYCLE N (N starts at 1):
1. Spawn investigator (EXECUTION mode):
Agent(
subagent_type: "investigator",
prompt: "MODE: EXECUTION\nPROJECT: {project}\nSEARCH_LIBRARY: {search_library}\nVAULT_PATH: {vault_path or 'none'}\nINTEGRATIONS: osint_navigator={config.integrations.osint_navigator}, cojournalist={config.integrations.cojournalist}\nCYCLE: {N}\nSKILLS: web-archiving (archive all evidence before citing — use Skill(spotlight:web-archiving)), content-access (paywalled sources — use Skill(spotlight:content-access) before marking inaccessible), social-media-intelligence (use Skill(osint:social-media-intelligence) for account authenticity, coordination detection, and narrative tracking when social media is involved)\n\n{if N > 1: Previous findings gaps:\n{gaps}\n\nFact-check gaps:\n{fc_gaps}}\n\n{if monitoring_units: Monitoring results since last cycle:\n{monitoring_summary}}\n\nYou have OSINT Navigator access — use $OSINT_NAV_API_KEY.\nWhen you identify targets worth persistent monitoring, add them to monitoring_recommendations[] in data/findings.json.\n\nRead methodology from cases/{project}/data/methodology.json.\nWrite to cases/{project}/data/findings.json.\nAppend to cases/{project}/data/investigation-log.json.",
model: "opus",
run_in_background: true
)
2. When complete: read data/findings.json, verify data/investigation-log.json was appended.
3. Spawn fact-checker:
Agent(
subagent_type: "fact-checker",
prompt: "PROJECT: {project}\nSEARCH_LIBRARY: {search_library}\nINTEGRATIONS: osint_navigator={config.integrations.osint_navigator}\nSKILLS: web-archiving (archive sources before issuing verdict — use Skill(spotlight:web-archiving)), content-access (paywalled sources — use Skill(spotlight:content-access) before marking inaccessible)\n\nYou have OSINT Navigator access — use $OSINT_NAV_API_KEY for finding verification tools.\nApply SIFT source credibility check before searching for corroborating evidence.\nArchive every source before citing it. Work through the content-access hierarchy before marking any source inaccessible.\nIf you identify sources worth monitoring for ongoing verification, add them to monitoring_recommendations[] in data/findings.json.\n\nFact-check all claims in cases/{project}/data/findings.json.\nWrite to cases/{project}/data/fact-check.json.",
model: "opus",
run_in_background: true
)
4. When complete: read data/fact-check.json.
5. Run editorial standards check:
- Do findings have sources with URLs, timestamps, and local_file?
- Does data/investigation-log.json have substance (techniques, queries, failed approaches)?
- Do high-confidence findings have 2+ fact-check sources?
- Are there findings with no fact-check verdict?
If any fail: spawn the responsible agent again with specific fix instructions.
This counts as a cycle.
5.5. Process monitoring recommendations:
If `data/findings.json` contains `monitoring_recommendations[]` AND `integrations.cojournalist` is true:
1. Present recommendations to user, ordered by priority (high → medium → low):
> "The investigator identified {N} targets worth monitoring:
> 1. [HIGH] {target} — {rationale}
> 2. [MEDIUM] {target} — {rationale}
>
> Approve, modify, or skip each? (Creating scouts uses coJournalist credits.)"
2. For approved recommendations, create scouts via coJournalist V1 API:
```bash
curl -s -H "Authorization: Bearer $COJOURNALIST_KEY" \
-X POST https://www.cojournalist.ai/api/v1/scouts \
-H "Content-Type: application/json" \
-d '{"name":"...","type":"...","topic":"spotlight:{project}",...}'
```
3. Log results to `cases/{project}/data/monitoring.json`
If `integrations.cojournalist` is false, skip this step.
6. Evaluate readiness criteria (see references/pipeline.md):
| Criterion | Threshold |
|-----------|-----------|
| Minimum findings | 3+ at high confidence |
| Source independence | 2+ independent sources per key claim |
| No unresolved disputes | 0 claims with "disputed" verdict and no resolution path |
| Affected perspective | At least 1 finding from affected community/person |
| Document trail | Primary source documents cited (not just news reports) |
| Gap assessment | All gaps resolved or explicitly noted as limitations |
7. If ALL criteria met: proceed to Gate 1.
8. If NOT met AND N < 5: identify specific gaps, increment N, loop.
9. If NOT met AND N >= 5: trigger Stall Protocol.
"Investigation stalled after {N} cycles. Missing: {gaps}. Options: continue with more cycles, pivot angle, or review current findings as-is."
STOP and wait for the user's decision. Do not auto-advance.
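Two of the readiness criteria (minimum findings, unresolved disputes) can be checked mechanically. A sketch, assuming a hypothetical findings shape where each entry carries a confidence level and a fact-check verdict; the real data/findings.json schema may differ, and the inline sample stands in for the file:

```shell
python3 - <<'EOF'
import json

# Hypothetical shape; inline sample stands in for cases/{project}/data/findings.json.
findings = json.loads("""[
  {"claim": "F1", "confidence": "high",   "verdict": "verified"},
  {"claim": "F2", "confidence": "high",   "verdict": "verified"},
  {"claim": "F3", "confidence": "medium", "verdict": "disputed"}
]""")

high = sum(1 for f in findings if f["confidence"] == "high")
disputed = [f["claim"] for f in findings if f["verdict"] == "disputed"]
print(f"minimum findings: {high} high-confidence (need 3+)")
print(f"unresolved disputes: {len(disputed)} {disputed}")
EOF
```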
Write cases/{project}/summary.md as a human-readable markdown document:
# {Investigation Title}
**Date:** YYYY-MM-DD | **Cycles:** N | **Status:** Pending review
## Overview
2-3 paragraph narrative overview.
## Scope
What was investigated and what was out of scope.
## Key Conclusions
- Conclusion 1
- Conclusion 2
## Findings
| # | Claim | Confidence | Verdict | Sources |
|---|-------|------------|---------|---------|
| F1 | ... | high | verified | 3 |
## Limitations
- Limitation 1
- Limitation 2
Headline: "{N} verified findings across {M} cycles"
Findings table:
| # | Claim | Confidence | Fact-Check Verdict | Source Count |
|---|---|---|---|---|
Methods summary: Techniques and tools used, drawn from data/investigation-log.json.
Limitations: Unresolved gaps from data/findings.json, noted as limitations.
Confidence assessment: Overall investigation strength — not just pass/fail on criteria, but how strongly each was met.
The user can request follow-up cycles targeting specific findings. If so, re-enter the execution loop with targeted gap instructions.
Gate: user approves the investigation.
After Gate 1 approval:
"Investigation complete. Ingest confirmed findings into your knowledge base?"
Skill(spotlight:ingest) — pass project path and vault config from .spotlight-config.json.

| Task | Agent | Model | Mode |
|---|---|---|---|
| Design methodology | investigator | opus | PLANNING |
| Execute investigation | investigator | opus | EXECUTION |
| Verify findings | fact-checker | opus | -- |
Model fallback: If an Opus spawn fails, warn:
"Spotlight agents are designed for Opus. Running on a lighter model will reduce investigation depth."
Then re-spawn without the model parameter.
All state lives in files. If context is lost mid-investigation, re-read:
cases/{project}/
  brief-directions.txt — Approved brief directions
  summary.md — Investigation summary (generated at Gate 1)
  data/
    methodology.json — Approved investigation plan
    findings.json — Investigator output (cumulative)
    fact-check.json — Fact-checker output
    investigation-log.json — Append-only cycle log
    monitoring.json — Scout state and check results
Determine where the pipeline left off:
- brief-directions.txt → restart at Phase 1
- data/methodology.json → restart at Phase 2
- data/findings.json → restart at Phase 3, cycle 1
- data/findings.json but no summary.md → restart at Phase 3, evaluate current cycle
- summary.md → Gate 1 review
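One reading of this mapping, checking the most-advanced artifact first (`resume_point` is a hypothetical helper; verify the phase labels against the mapping above before relying on them):

```shell
# Resume at the point implied by the newest state file present.
resume_point() {
  p="cases/$1"
  if   [ -f "$p/summary.md" ];            then echo "Gate 1 review"
  elif [ -f "$p/data/findings.json" ];    then echo "Phase 3"
  elif [ -f "$p/data/methodology.json" ]; then echo "Phase 2"
  elif [ -f "$p/brief-directions.txt" ];  then echo "Phase 1"
  else echo "project setup"
  fi
}
```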