Generate a multi-agent research report using parallel web research with structural review. Three modes: basic (fast single-pass), detailed (multi-section with outline), deep (recursive tree exploration). Claims verification runs separately via verify-report. Supports configurable writing tones, auto/manual researcher roles, source URL pre-fetch, domain-restricted search, custom sub-question counts, and local document research. Three source modes: web (default), local (analyze user's documents), hybrid (web + documents). Use when the user asks to "research report", "investigate", "deep research", "write a report", "gpt-researcher", "multi-agent research", "analyze these documents", "research from my files", or requests comprehensive topic analysis with citations. Also use when the user wants to "resume research", "continue research report", "pick up the research", "finish the report", "what happened to my report", or resume an interrupted research run.
From the cogni-research plugin. Install: npx claudepluginhub cogni-work/insight-wave --plugin cogni-research
When this skill loads:
User: "Write a research report on quantum computing's impact on cryptography"
Skill presents:
Research Configuration Topic: "quantum computing's impact on cryptography" Detected: type = basic
Depth (research scope):
- basic = 3-5K words, 5 sub-questions — standard report
- detailed = 5-10K words, up to 10 sub-questions — comprehensive
- deep = 8-15K words, recursive tree — maximum depth
- outline = structured framework only (no prose)
- resource = annotated source list / bibliography

Tone: objective (default) | analytical | critical | persuasive | formal | informative | explanatory | descriptive | comparative | speculative | narrative | optimistic | simple
Citations: APA (default) | MLA | Chicago | Harvard | IEEE | Wikilink
Market: global (default) | dach | de | us | uk | fr (localizes search queries + authority sources)
Advanced: output language, sub-question count, source mode (web/local/hybrid), domain filter, researcher role — ask about any of these
Reply with your choices, or "go" for defaults.
User: "detailed, analytical" (or just "go")
Result: A 5000-10000 word report with inline citations, produced via:
Output: output/report.md

Then run /verify-report to verify claims against cited sources in a fresh context window.
| Type | Trigger | Sub-Questions | Words | Use Case |
|---|---|---|---|---|
| basic | "research report on X" | 5 | 3000-5000 | Standard research report |
| detailed | "detailed research report on X" | 5-10 | 5000-10000 | Comprehensive analysis |
| deep | "deep research on X" | 10-20 (tree) | 8000-15000 | Maximum depth + breadth |
| outline | "outline on X" | 5 | 1000-2000 | Structured framework, no prose |
| resource | "sources on X", "reading list" | 5 | 1500-3000 | Annotated bibliography |
Default: basic unless user specifies otherwise.
Read these reference files when the corresponding phase needs them:
| Reference | Read When |
|---|---|
references/report-types.md | Phase 1 — choosing report type and planning |
references/sub-question-generation.md | Phase 1 — decomposing user query |
references/deep-research-tree.md | Phase 1 (deep mode) — building research tree |
references/agent-roles.md | Phase 1 — auto-selecting researcher role |
references/writing-tones.md | Phase 0 — resolving tone parameter |
references/citation-formats.md | Phase 0 — resolving citation format |
references/review-criteria.md | Phase 5 — understanding review scoring |
A well-structured project directory is the foundation for resumability and cross-agent coordination. Without it, agents cannot find each other's outputs, the review loop cannot track iterations, and a crash mid-research loses all progress.
Scan the user's request and extract any options they already specified. These become "detected" settings that won't be re-asked in the configuration menu.
- Report type: see references/report-types.md
- Tone: see references/writing-tones.md. Default: "objective"
- Citation format: see references/citation-formats.md

Present the user with a configuration menu using AskUserQuestion so they can see what options exist and choose before research starts. This is the heart of Phase 0 — it makes the skill's capabilities discoverable rather than hidden behind keyword detection.
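As a rough sketch, option detection can look like the following. The keyword lists here are illustrative only, not the skill's actual trigger vocabulary:

```python
import re

# Illustrative keyword-to-setting maps; the real detection reads the reference files.
TYPE_KEYWORDS = {"deep research": "deep", "detailed": "detailed", "outline": "outline",
                 "reading list": "resource", "sources on": "resource"}
TONES = {"objective", "analytical", "critical", "persuasive", "formal"}

def detect_options(prompt: str) -> dict:
    """Extract pre-specified settings so the menu does not re-ask them."""
    detected = {"type": "basic"}  # default per the report-type table
    lower = prompt.lower()
    for phrase, report_type in TYPE_KEYWORDS.items():
        if phrase in lower:
            detected["type"] = report_type
            break
    for tone in TONES:
        if re.search(rf"\b{tone}\b", lower):
            detected["tone"] = tone
            break
    return detected

print(detect_options("Write a detailed research report, analytical tone"))
```

Anything the user already specified this way is shown as "detected" in the menu rather than asked again.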
Assemble the menu dynamically:
Reply with your choices, or "go" for defaults.

Conditional skip: If the user's prompt already specified ALL primary options (type + tone + citations) OR included urgency signals ("just go", "start now", "defaults are fine"), collapse the menu to a compact confirmation:
"Starting detailed research on X — analytical tone, IEEE citations, English. Change anything? (or 'go')"
Handling user responses:
- If the user asks about an option: read the relevant reference file (references/agent-roles.md, references/writing-tones.md, etc.), explain the option, then re-present the menu

After research configuration is confirmed, always ask where to store the project — even if the user said "go" or "defaults". The only exception is when the user already explicitly specified a location in their original prompt or config responses (e.g., "save in standard", "put it in ~/research", "here").
Use AskUserQuestion to present:
Where should I store this project?
- standard (recommended) — cogni-research/{project-slug} (organized under plugin namespace)
- here — current directory ({cwd}/{project-slug})
- Or provide a custom path
The reason this is a separate, explicit question: reports that land in the wrong directory are hard to find later and break the user's workspace organization. Asking once upfront avoids that.
Handling location responses:
- standard → cogni-research/ relative to the current working directory

Once configuration is confirmed and the user has answered the location question (Step 2b), resolve the workspace path:
- standard → cogni-research/ relative to the current working directory

Then initialize:
bash "${CLAUDE_PLUGIN_ROOT}/scripts/initialize-project.sh" \
--topic "<user topic>" --type <basic|detailed|deep|outline|resource> \
--workspace "<workspace path>" \
[--market "<region-code>"] \
[--output-language "<lang>"] \
[--tone "<tone>"] \
[--citation-format "<apa|mla|chicago|harvard|ieee>"] \
[--source-urls "<url1,url2,...>"] \
[--query-domains "<domain1,domain2,...>"] \
[--max-subtopics <N>] \
[--report-source "<web|local|hybrid>"] \
[--document-paths "<path1,path2,...>"] \
[--curate-sources]
Check the already_exists field in the JSON output before proceeding.
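A sketch of that check; the field names follow the already_exists and project_path fields described here, and the example payload is illustrative (real output comes from initialize-project.sh):

```python
import json

# Illustrative init-script output for the happy path.
init_output = json.loads(
    '{"already_exists": false, "project_path": "cogni-research/quantum-crypto"}'
)

if init_output["already_exists"]:
    # Collision: stop and ask the user (Resume / New project / Different location).
    choice_needed = True
    project_path = None
else:
    # Fresh directory: store the path for all later phases.
    choice_needed = False
    project_path = init_output["project_path"]

print(choice_needed, project_path)
```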
If already_exists is false: store the returned project_path and continue to Phase 0.1.
If already_exists is true: a project with the same slug already exists at that location. Do NOT silently continue — this would overwrite or mix into the user's prior research. Present a choice via AskUserQuestion:
A research project already exists at
{project_path}
- Topic: "{existing_topic}"
- Completed phases: {completed_phases}
What would you like to do?
- Resume — continue this existing project from where it left off
- New project — create a separate project alongside the existing one
- Different location — save the new project somewhere else
Handle the user's choice:
- Resume: read .metadata/execution-log.json from the existing project and jump to the Resumption logic (skip remaining Phase 0 steps)
- New project: re-run initialize-project.sh with --suffix 2. If that also collides, increment the suffix (3, 4, ...) until a fresh directory is created
- Different location: ask for a new path, then re-run initialize-project.sh with the new --workspace value

Read market and output_language from project-config.json (stored by initialize-project.sh). These control search localization and report output language respectively.
MARKET=$(jq -r '.market // empty' "${PROJECT_PATH}/.metadata/project-config.json" 2>/dev/null)
if [[ -z "$MARKET" ]]; then
# Backward compat: derive market from legacy language field
  LEGACY_LANG=$(jq -r '.language // "en"' "${PROJECT_PATH}/.metadata/project-config.json")
  MARKET=$( [[ "$LEGACY_LANG" == "de" ]] && echo "dach" || echo "global" )
fi
OUTPUT_LANGUAGE=$(jq -r '.output_language // empty' "${PROJECT_PATH}/.metadata/project-config.json" 2>/dev/null)
if [[ -z "$OUTPUT_LANGUAGE" ]]; then
# Derive from market config default_output_language
OUTPUT_LANGUAGE=$(jq -r --arg m "$MARKET" '.[$m].default_output_language // ._default.default_output_language // "en"' "${CLAUDE_PLUGIN_ROOT}/references/market-sources.json" 2>/dev/null || echo "en")
fi
MARKET controls search localization for researcher agents:
- Researchers read ${CLAUDE_PLUGIN_ROOT}/references/market-sources.json and use the market entry to generate intent-based bilingual queries, boost authority sources, and apply geographic modifiers
- Unknown or unlisted markets fall back to _default (English-only, no authority boosts)

OUTPUT_LANGUAGE controls report output for writer/reviewer/revisor:
Available markets: global (default), dach, de, us, uk, fr. The market and output language are usually aligned (e.g., market=dach → output_language=de) but can diverge (e.g., market=fr, output_language=en for an English report about the French market).
Before generating sub-questions, gather context about what information is actually available online. Sub-questions generated in a vacuum often target angles that have no searchable content, wasting researcher agents on dead ends.
The quality of sub-questions determines the quality of the entire report. Orthogonal decomposition prevents researchers from duplicating each other's work, while collectively exhaustive coverage prevents blind spots. Poor sub-questions produce redundant contexts and missing perspectives that the review loop cannot fix — it can only catch factual errors, not structural gaps.
Read: references/sub-question-generation.md for decomposition patterns.
Use the preliminary search context from Phase 0.5 to inform sub-question generation — ensure questions target angles that have actual web content available.
Agent Role Selection: If researcher_role is already set in project-config.json (user-specified via --researcher-role), use it directly. Otherwise, auto-select the best-fit persona by reading ${CLAUDE_PLUGIN_ROOT}/references/agent-roles.md and matching the topic's domain signals against the role catalog. Store the selected role in project-config.json as researcher_role and pass it to the writer agent as RESEARCHER_ROLE.
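The auto-selection step can be sketched as simple signal matching. The role names and signal words below are illustrative; the real catalog lives in references/agent-roles.md:

```python
# Illustrative role catalog: role -> domain signal words.
ROLE_SIGNALS = {
    "security-analyst": ["cryptography", "vulnerability", "encryption"],
    "financial-analyst": ["market", "revenue", "investment"],
    "science-writer": ["quantum", "physics", "biology"],
}

def select_role(topic: str, default: str = "generalist-researcher") -> str:
    """Pick the role whose signal words best match the topic, else a default."""
    lower = topic.lower()
    scores = {role: sum(word in lower for word in words)
              for role, words in ROLE_SIGNALS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(select_role("database encryption vulnerability trends"))
```

The selected role is then persisted to project-config.json so re-runs and resumption use the same persona.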
Generate sub-questions based on report type. If max_subtopics is set in project-config.json, use that count instead of the defaults below:
Basic (default 5 sub-questions, or max_subtopics if set):
Detailed (default 5-10 section outline, or max_subtopics if set):
Outline (default 5 sub-questions, or max_subtopics if set):
- Include search_guidance emphasizing key findings and structure over depth

Resource (default 5 sub-questions, or max_subtopics if set):
- Include search_guidance emphasizing source diversity and quality over depth of individual findings

Deep (research tree, max_subtopics controls leaf count if set):
Read references/deep-research-tree.md for tree decomposition algorithm.
For each sub-question, create entity:
bash "${CLAUDE_PLUGIN_ROOT}/scripts/create-entity.sh" \
--project-path "${PROJECT_PATH}" \
--entity-type sub-question \
--data '{"frontmatter": {"query": "...", "parent_topic": "...", "section_index": N, "report_type": "basic", "search_guidance": "...", "status": "pending"}, "content": ""}' \
--json
Parallel execution is the key throughput optimization — a basic report with 5 sub-questions completes research in the time of one. All researchers use sonnet for richer source extraction and better findings quality. Batching at 4-5 agents prevents overwhelming the host with concurrent WebFetch requests and avoids rate limiting from search providers. Each agent runs independently, so a failure in one does not block the others.
Read report_source from project-config.json (default: "web"). This determines which researcher agents to spawn.
Resolve the current year for recency-aware search queries:
CURRENT_YEAR=$(date +%Y)
Spawn section-researcher agents in parallel batches (max 5 per batch):
Basic/Detailed/Outline/Resource mode:
For each sub-question entity in 00-sub-questions/data/:
Task(section-researcher,
SUB_QUESTION_PATH=<path>,
PROJECT_PATH=<project_path>,
MARKET=<market>,
CURRENT_YEAR=<current_year>,
SOURCE_URLS=<from project-config.json, if set>,
QUERY_DOMAINS=<from project-config.json, if set>,
run_in_background=true)
Deep mode:
For each leaf sub-question in 00-sub-questions/data/:
Task(deep-researcher,
SUB_QUESTION_PATH=<path>,
PROJECT_PATH=<project_path>,
MARKET=<market>,
CURRENT_YEAR=<current_year>,
SOURCE_URLS=<from project-config.json, if set>,
QUERY_DOMAINS=<from project-config.json, if set>,
DEPTH=2,
run_in_background=true)
Local mode: Spawn local-researcher agents instead of section-researchers. All sub-questions research from the same document set, but each agent extracts findings relevant to its specific sub-question.
For each sub-question entity in 00-sub-questions/data/:
Task(local-researcher,
SUB_QUESTION_PATH=<path>,
PROJECT_PATH=<project_path>,
DOCUMENT_PATHS=<from project-config.json document_paths>,
OUTPUT_LANGUAGE=<output_language>,
run_in_background=true)
Deep mode with local sources: use local-researcher (not deep-researcher). The recursive tree algorithm is designed for web search breadth — local documents don't benefit from recursive decomposition. If the user requests deep + local, run local-researchers with the deep sub-question tree but without internal recursion.
Hybrid mode: Run both local and web researchers for each sub-question, then merge their findings in Phase 3. This produces the richest context but uses 2x the agents.
For each sub-question entity in 00-sub-questions/data/:
# Local research first (documents)
Task(local-researcher,
SUB_QUESTION_PATH=<path>,
PROJECT_PATH=<project_path>,
DOCUMENT_PATHS=<from project-config.json document_paths>,
OUTPUT_LANGUAGE=<output_language>,
run_in_background=true)
# Web research in parallel
Task(section-researcher,
SUB_QUESTION_PATH=<path>,
PROJECT_PATH=<project_path>,
MARKET=<market>,
CURRENT_YEAR=<current_year>,
SOURCE_URLS=<from project-config.json, if set>,
QUERY_DOMAINS=<from project-config.json, if set>,
run_in_background=true)
Both agents create separate context entities for the same sub-question. The merge-context script (Phase 3) handles deduplication across local and web sources.
Batch in groups of 4-5 to respect concurrency limits. Wait for each batch before starting next.
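The batching loop can be sketched as follows, using a batch size of 5 per the limits above (the sub-question IDs are placeholders):

```python
from typing import Iterable, List

def batches(items: List[str], size: int = 5) -> Iterable[List[str]]:
    """Yield groups of at most `size` sub-questions for parallel dispatch."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

sub_questions = [f"sq-{i:02d}" for i in range(12)]  # e.g. a deep run with 12 leaves
for batch in batches(sub_questions):
    # Spawn one researcher Task per entry with run_in_background=true,
    # then wait for the whole batch to finish before starting the next.
    print(len(batch), batch[0])
```

Because each agent runs independently, a failed entry in one batch does not block dispatching the next batch.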
After all researchers complete:
- Log outcomes to .logs/phase-2-research.jsonl

Source curation ranks sources by quality, relevance, authority, and recency before the writer sees them. This prevents the writer from treating all sources equally when some are clearly more authoritative.
Activation rules (check in order):
1. curate_sources is explicitly false in project-config.json → skip
2. curate_sources is explicitly true → run (any report type, any source count)
3. Report type is detailed or deep AND source entity count >= 8 → run

When activated:
Task(source-curator,
PROJECT_PATH=<project_path>,
MARKET=<market>)
The source-curator produces .metadata/curated-sources.json with quality rankings (primary/secondary/supporting tiers) and diversity analysis. The writer agent reads this in Phase 4 to prioritize citations.
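The activation rules can be sketched as a predicate. The config field names ("curate_sources", "type") are assumed from project-config.json usage elsewhere in this skill:

```python
def should_curate(config: dict, source_count: int) -> bool:
    """Apply the activation rules in order: an explicit flag wins, then heuristics."""
    flag = config.get("curate_sources")
    if flag is False:
        return False      # rule 1: explicit opt-out
    if flag is True:
        return True       # rule 2: explicit opt-in, any type and count
    # rule 3: heuristic for larger reports with enough sources to rank
    return config.get("type") in ("detailed", "deep") and source_count >= 8

print(should_curate({"type": "deep"}, 12))
```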
Aggregation deduplicates sources and enforces a context word limit (25,000 words) to prevent writer overload. Without this step, deep reports with 15+ researchers can produce far more raw context than a single writer agent can meaningfully synthesize, leading to shallow treatment of all topics rather than deep treatment of each.
python3 "${CLAUDE_PLUGIN_ROOT}/scripts/merge-context.py" \
--project-path "${PROJECT_PATH}" --json
Verify output: contexts count, sources count, total words. If too few sources (< 3), consider re-running failed sub-questions.
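That sanity check can be sketched as below; the summary field names are assumptions about merge-context.py's output, and the payload is illustrative:

```python
# Illustrative merge summary; real values come from merge-context.py --json.
merge_summary = {"contexts": 5, "sources": 23, "total_words": 18400}

MIN_SOURCES = 3       # below this, consider re-running failed sub-questions
WORD_LIMIT = 25_000   # context cap that protects the writer from overload

enough_sources = merge_summary["sources"] >= MIN_SOURCES
within_limit = merge_summary["total_words"] <= WORD_LIMIT
print(enough_sources and within_limit)
```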
Spawn writer agent:
Task(writer,
PROJECT_PATH=<project_path>,
DRAFT_VERSION=1,
REPORT_TYPE=<type>,
RESEARCHER_ROLE=<role from project-config.json>,
TONE=<tone from project-config.json, default "objective">,
CITATION_FORMAT=<citation_format from project-config.json, default "apa">,
OUTPUT_LANGUAGE=<output_language>)
Verify: draft written to output/draft-v1.md, reasonable word count.
This phase runs a lightweight structural-only review to catch organizational and stylistic issues before finalization. Claims verification — the factual accuracy check — runs separately via the verify-report skill in a dedicated context window. This architectural split ensures claims verification gets full context attention rather than competing with research data from Phases 0-4.
Read: references/review-criteria.md for scoring rubric.
Outline and Resource modes: Accept the draft if structural score >= 0.65 or iterate once. Then proceed to Phase 6.
Basic, Detailed, and Deep modes: Run one structural review iteration as described below.
Task(reviewer,
PROJECT_PATH=<project_path>,
DRAFT_PATH="output/draft-v{N}.md",
REVIEW_ITERATION=1,
OUTPUT_LANGUAGE=<output_language>)
Note: no CLAIMS_DASHBOARD parameter — the reviewer runs structural criteria only (completeness, coherence, source diversity, depth, clarity). The higher accept threshold (0.82) for structural-only review applies automatically.
Task(revisor,
PROJECT_PATH=<project_path>,
DRAFT_PATH="output/draft-v{N}.md",
VERDICT_PATH=".metadata/review-verdicts/v1.json",
NEW_DRAFT_VERSION=N+1,
OUTPUT_LANGUAGE=<output_language>,
MARKET=<market>)
Maximum 1 structural review iteration. After revision (or if the first review accepts), proceed to Phase 5.5.
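The accept-or-revise decision combines the thresholds named above (0.82 for structural-only review, 0.65 for outline/resource modes) with the one-iteration cap, and can be sketched as:

```python
def review_decision(report_type: str, structural_score: float, revisions_done: int) -> str:
    """Accept or revise using the thresholds above, with at most one revision."""
    threshold = 0.65 if report_type in ("outline", "resource") else 0.82
    if structural_score >= threshold or revisions_done >= 1:
        return "accept"
    return "revise"

print(review_decision("basic", 0.79, 0))
```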
Ask the user whether to generate a themed HTML version of the report with interactive charts and diagrams. This transforms the markdown report into a polished, presentation-ready HTML deliverable.
- Check that cogni-visual:enrich-report is available. If not installed, display a warning and skip to Phase 6.
- Ask: "Generate themed HTML with interactive charts and diagrams? (cogni-visual:enrich-report)"
- If accepted, invoke the cogni-visual:enrich-report skill with source_path pointing to the final accepted draft (output/report.md if already copied, otherwise the latest output/draft-v{N}.md). The enrich-report skill handles theme selection, enrichment planning, and interactive review — do not duplicate that logic here.
- Copy the final accepted draft to output/report.md
- Final deliverable: {project_path}/output/report.md — the self-contained project directory is the unit of output (report + sources + metadata, all Obsidian-browsable). If the user wants a different format or location, the enrich-report phase (Phase 5.5) handles that.
- Sum cost_estimate.estimated_usd from all agent outputs collected during Phases 2-5. Group by agent role (researchers, writer, reviewer, revisor, claim_extractor, source_curator). Write the cost summary to execution-log.json.
- Update .metadata/execution-log.json with:
  - phase_5_review.claims_verification: "deferred to verify-report"
  - cost summary: {"total_estimated_usd": N, "breakdown": {"researchers": N, "writer": N, ...}}
  - enrich_report_applied: true/false
  - enrich_report_path: path to enriched HTML or null
- Point the user to the finished report at output/report.md

Next steps:
- /verify-report — Verify claims against cited sources. Runs in a clean context window for thorough fact-checking.
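The cost roll-up from the finalization steps can be sketched as follows. The per-agent output shape and role names here are assumptions derived from the summary format described above:

```python
from collections import defaultdict

# Illustrative per-agent outputs; real values come from agent results in Phases 2-5.
agent_outputs = [
    {"role": "researchers", "cost_estimate": {"estimated_usd": 0.12}},
    {"role": "researchers", "cost_estimate": {"estimated_usd": 0.15}},
    {"role": "writer", "cost_estimate": {"estimated_usd": 0.40}},
    {"role": "reviewer", "cost_estimate": {"estimated_usd": 0.08}},
]

breakdown = defaultdict(float)
for out in agent_outputs:
    breakdown[out["role"]] += out["cost_estimate"]["estimated_usd"]

summary = {"total_estimated_usd": round(sum(breakdown.values()), 2),
           "breakdown": dict(breakdown)}
print(summary["total_estimated_usd"])
```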
If a project directory already exists at init:
- Read .metadata/execution-log.json for phase completion state

| Scenario | Recovery |
|---|---|
| All researchers fail | Ask user to rephrase topic or try different sub-questions |
| Most researchers fail | Proceed with available contexts, note gaps in report |
| Writer produces empty draft | Re-run with more explicit instructions |
| Claims verification needed | Handled by verify-report skill in a separate context window — not run here |
| Review loop reaches max (3) | Accept current draft with quality warning |
| Local documents unreadable | Log skipped files, proceed with readable ones. If none readable, ask user for alternative paths |
| No relevant content in local docs | Suggest switching to web mode or providing different documents |
| Hybrid mode: local fails, web succeeds | Proceed with web-only context, note in report that local sources were unavailable |
| Document path glob matches nothing | Report error, ask user to verify paths |
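The resumption check at init can be sketched as below. The phase names and the completed_phases field are assumptions modeled on the phase numbering used throughout this skill:

```python
import json

# Assumed phase order, matching this skill's phase numbering.
PHASES = ["phase_0_config", "phase_1_questions", "phase_2_research",
          "phase_3_merge", "phase_4_draft", "phase_5_review", "phase_6_finalize"]

def next_phase(log_text: str) -> str:
    """Return the first phase not yet marked complete in execution-log.json."""
    completed = set(json.loads(log_text).get("completed_phases", []))
    for phase in PHASES:
        if phase not in completed:
            return phase
    return "done"

log = '{"completed_phases": ["phase_0_config", "phase_1_questions", "phase_2_research"]}'
print(next_phase(log))
```

A crash mid-research therefore costs only the incomplete phase, not the whole run, because each phase's outputs live on disk in the project directory.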