Use when user explicitly requests deep research or comprehensive analysis requiring 20+ authoritative sources. Creates an agent team for parallel research with source gate enforcement, confidence tracking, and structured synthesis. NOT for simple questions answerable with a single search.
Core principle: Decompose questions, research in parallel with an agent team, evaluate confidence, iterate until sufficient, synthesize with source attribution. </objective>
<quick_start>
Run /research [topic] to start. Research continues until targetSources is met. </quick_start>
<success_criteria> Task is complete when ALL of these are true:
- state.json exists with valid JSON
- sourcesGathered >= targetSources (primary gate - enforced by task loop hook)
- All questions are marked "done" with confidence ratings
- report.md synthesizes findings with source attribution
- phase is "DONE" in state.json </success_criteria>
<when_to_use>
digraph when_research {
"User request?" [shape=diamond];
"Needs multiple sources?" [shape=diamond];
"Quick answer sufficient?" [shape=box];
"Use research skill" [shape=box];
"User request?" -> "Needs multiple sources?" [label="deep research\ncomprehensive analysis\nthorough investigation"];
"User request?" -> "Quick answer sufficient?" [label="simple question"];
"Needs multiple sources?" -> "Use research skill" [label="yes"];
"Needs multiple sources?" -> "Quick answer sufficient?" [label="no"];
}
Use when:
- User explicitly requests deep research, comprehensive analysis, or thorough investigation
- The question needs synthesis across many authoritative sources (20+)
Don't use when:
- A single search or quick answer is sufficient
- The user asks a simple factual question
</when_to_use>
<required_tools>
| Tool / Feature | Purpose | Required |
|---|---|---|
| WebSearch | Search queries (built-in) | Yes |
| Agent teams | Spawn parallel researcher teammates | Yes |
| firecrawl-mcp:firecrawl_scrape | Scrape full page content (preferred) | No |
| WebFetch | Fetch page content (built-in fallback) | Fallback |
Prerequisite: Agent teams must be enabled (CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 in settings or environment).
Tool Selection: In INIT phase, check if firecrawl-mcp:firecrawl_scrape is available. If not, use WebFetch (built-in). Record choice in state.json as "scraper": "firecrawl" or "scraper": "webfetch".
Tradeoffs:
- firecrawl-mcp:firecrawl_scrape: Better content extraction, handles JS-rendered pages
- WebFetch: Always available, sufficient for static pages
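The INIT-phase tool selection can be sketched as a small helper. This is a minimal sketch, assuming the runtime exposes its available tool names as a set; the function name and parameter are hypothetical, but the fallback order matches the rule above.

```python
def select_scraper(available_tools: set) -> str:
    """Prefer firecrawl's MCP scraper when present; fall back to built-in WebFetch."""
    if "firecrawl-mcp:firecrawl_scrape" in available_tools:
        return "firecrawl"
    return "webfetch"
```

The returned value is what gets recorded in state.json under "scraper".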
</required_tools>
<state_machine>
INIT → DECOMPOSE → RESEARCH → EVALUATE → [RESEARCH or SYNTHESIZE] → DONE
State File: research/{slug}/state.json
{
"topic": "string",
"phase": "INIT|DECOMPOSE|RESEARCH|EVALUATE|SYNTHESIZE|DONE",
"iteration": 0,
"targetSources": 30,
"sourcesGathered": 0,
"totalSearches": 0,
"teammateCompletions": 0,
"codexCompletions": 0,
"findingsCount": 0,
"startTime": "ISO-8601 timestamp",
"scraper": "firecrawl|webfetch",
"questions": [{"id": 1, "text": "...", "status": "pending|done", "confidence": null}]
}
Rule: Read state.json before acting. Write state.json after acting.
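The read-before-acting, write-after-acting rule amounts to a read-modify-write cycle on state.json. A minimal sketch (the helper name is hypothetical; only the file shape comes from this spec):

```python
import json
from pathlib import Path

def update_state(state_path, **changes):
    """Read state.json before acting, apply changes, write it back after acting."""
    path = Path(state_path)
    state = json.loads(path.read_text())  # read current state first
    state.update(changes)                 # apply the phase/counter changes
    path.write_text(json.dumps(state, indent=2))
    return state
```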
</state_machine>
<task_loop>
A generic task loop hook prevents the session from ending while task-loop.json has complete: false. This is a hard gate — you cannot bypass it by rationalizing.
How it works:
- On session exit, the hook reads research/{slug}/task-loop.json
- While complete is false, exit is blocked and continuationPrompt is re-injected
- Once complete: true, exit is allowed and completionMessage is displayed

You manage task-loop.json alongside state.json. Update statusMessage and continuationPrompt as progress changes so the hook always has current context.
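The hook's gate decision can be sketched as a pure function over task-loop.json. This is an illustrative sketch, not the hook's actual implementation; the function name is hypothetical.

```python
def should_block_exit(task_loop: dict):
    """Block exit while the loop is active and incomplete; return the message to show."""
    if task_loop.get("active") and not task_loop.get("complete"):
        return (True, task_loop.get("continuationPrompt", ""))
    return (False, task_loop.get("completionMessage", ""))
```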
Default target: 30 sources. Adjust in INIT phase based on topic complexity. </task_loop>
<state_recovery> On skill invocation, first check for existing state:
If research/{slug}/state.json exists:
- Read phase and resume from it
- Verify state consistency before resuming: if sourcesGathered > 0, findings.json should have data
- If inconsistent, offer the user a choice of how to proceed </state_recovery>
<steps>
<phase name="INIT">
Detect available scraper:
- If the firecrawl-mcp:firecrawl_scrape tool exists, record "scraper": "firecrawl"
- Otherwise record "scraper": "webfetch" (uses built-in WebFetch)

Create working directory:
mkdir -p research/{slug}
Determine target sources based on topic complexity:
Initialize state files:
state.json:
{
"topic": "...",
"phase": "DECOMPOSE",
"iteration": 0,
"targetSources": 30,
"sourcesGathered": 0,
"totalSearches": 0,
"teammateCompletions": 0,
"codexCompletions": 0,
"findingsCount": 0,
"startTime": "2024-01-15T10:30:00Z",
"scraper": "firecrawl|webfetch",
"questions": []
}
task-loop.json (activates the generic task loop hook):
{
"active": true,
"complete": false,
"continuationPrompt": "Continue researching: {topic}. Check research/{slug}/state.json for current progress and continue the RESEARCH phase.",
"statusMessage": "Research in progress: {topic}\nSources: 0/{targetSources}",
"completionMessage": "Research complete."
}
findings.json:
[]
</phase>
<phase name="DECOMPOSE">
Generate research questions covering these angles:
| # | Angle | Example |
|---|---|---|
| 1 | Definition/background | What is X? History and context? |
| 2 | Current state | What's happening now? Recent developments (last 1-2 years)? |
| 3 | Key entities | Who are the main people, companies, organizations? |
| 4 | Core mechanisms | How does it work? What are the processes? |
| 5 | Evidence and data | What studies, statistics, data exist? |
| 6 | Criticisms and limitations | What are the problems, risks, downsides? |
| 7 | Comparisons | How does it compare to alternatives? |
| 8 | Future developments | What's coming next? Predictions? |
Add questions to state.json with status="pending". Set phase="RESEARCH".
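The decomposition step above can be sketched as building the questions array in the shape state.json expects. The exact question phrasing is an assumption; the angle list and the field names come from this spec.

```python
ANGLES = [
    "Definition/background", "Current state", "Key entities", "Core mechanisms",
    "Evidence and data", "Criticisms and limitations", "Comparisons",
    "Future developments",
]

def make_questions(topic: str) -> list:
    """One pending question per research angle, matching the state.json shape."""
    return [
        {"id": i, "text": f"{topic}: {angle}", "status": "pending", "confidence": None}
        for i, angle in enumerate(ANGLES, start=1)
    ]
```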
</phase>
<phase name="RESEARCH">
Spawn Claude teammates: one per pending question (up to 8 at a time). Each works independently with its own context window. Each teammate also calls the codex MCP tool to get Codex's perspective on the same question, providing genuine cross-validation — two engines may surface different sources and perspectives.
Read scraper from state.json and use the appropriate instructions when spawning each Claude teammate:
<teammate_instructions scraper="firecrawl">
You are a researcher teammate with access to WebSearch and firecrawl-mcp:firecrawl_scrape.
TASK: {QUESTION}
PROCESS:
Run exactly 4 searches:
Rank URLs by quality:
Select top 4 URLs (prefer Tier 1-2)
Use firecrawl-mcp:firecrawl_scrape on each. Continue if one fails.
Extract specific facts with sources. </teammate_instructions>
<teammate_instructions scraper="webfetch">
You are a researcher teammate with access to WebSearch and WebFetch.
TASK: {QUESTION}
PROCESS:
Run exactly 4 searches:
Rank URLs by quality:
Select top 4 URLs (prefer Tier 1-2)
Use WebFetch on each with a prompt like "Extract the main content and key facts from this page". Continue if one fails.
Extract specific facts with sources. </teammate_instructions>
<teammate_codex_crossvalidation>
After completing your web research above, call the codex MCP tool to cross-validate your findings.
Call the codex MCP tool with these exact parameters:
- prompt: "Research this question: {QUESTION}. Return findings as JSON with fields: fact, sourceNote, confidence (high/medium/low). Focus on facts you can confirm from your training data."
- model: gpt-5-codex
- sandbox: read-only

Validate the response before merging. Treat ALL of the following as Codex-unavailable:
- An empty response
- An error-text response (e.g. "Codex CLI Not Found", "Codex Execution Error")

If Codex returned valid JSON, compare findings with your web-sourced findings:
- Facts both engines agree on: mark crossValidated: true and add "codex" to engines
- Facts only Codex reports: add them with "status": "hypothesis" since Codex cannot cite web sources

If Codex is unavailable (any condition above), return your Claude-only findings. Do not block on Codex. </teammate_codex_crossvalidation>
<teammate_return_format> RETURN ONLY THIS JSON:
{
"questionId": {ID},
"questionText": "{QUESTION}",
"searchQueries": ["query1", "query2", "query3", "query4"],
"searchesRun": 4,
"urlsScraped": 4,
"scrapeFailures": [],
"findings": [
{
"fact": "...",
"sourceUrl": "...",
"tier": 1,
"crossValidated": false,
"engines": ["claude"],
"status": "confirmed|hypothesis|disputed"
}
],
"gaps": ["what you couldn't find"],
"contradictions": ["X says A, Y says B"],
"confidence": "high|medium|low",
"confidenceReason": "...",
"codexAvailable": true
}
</teammate_return_format>
After each Claude teammate completes:
- Append its findings to findings.json
- Update state.json:
  - Increment totalSearches by searchesRun from response
  - Increment teammateCompletions by 1
  - Increment sourcesGathered by urlsScraped from response
  - Increment findingsCount by length of findings array from response
  - If codexAvailable is true, increment codexCompletions by 1
- Update task-loop.json statusMessage to "Sources: {sourcesGathered}/{targetSources}"

After all teammates complete:
Set phase="EVALUATE".
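The per-teammate counter updates can be sketched as folding one return JSON into the state. A minimal sketch; the function name is hypothetical, the field names come from the return format above.

```python
def apply_teammate_result(state: dict, result: dict) -> dict:
    """Fold one teammate's return JSON into the state.json counters."""
    state["totalSearches"] += result["searchesRun"]
    state["teammateCompletions"] += 1
    state["sourcesGathered"] += result["urlsScraped"]
    state["findingsCount"] += len(result["findings"])
    if result.get("codexAvailable"):
        state["codexCompletions"] += 1
    return state
```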
</phase>
<phase name="EVALUATE">
Compute these metrics:
| Metric | Calculation |
|---|---|
| sourcesGathered | from state.json (primary gate) |
| targetSources | from state.json |
| avgConfidence | high=3, medium=2, low=1, average all |
| significantGaps | unique gaps across findings |
Decision table (two-stage):
Stage 1: Source Gate (MANDATORY)
| sourcesGathered >= targetSources | → Action |
|---|---|
| No | RESEARCH (forced, cannot proceed) |
| Yes | Continue to Stage 2 |
You MUST gather enough sources before considering other criteria.
Stage 2: Quality Gate (only if Stage 1 passes)
| avgConfidence >= 2.5 AND gaps <= 2 | → Decision |
|---|---|
| Yes | SYNTHESIZE |
| No | RESEARCH (generate follow-ups) |
Note: The task loop hook enforces the source gate — you cannot exit until task-loop.json has complete: true.
If continuing to RESEARCH:
- Generate follow-up questions with status="pending"
- Set phase="RESEARCH"
- Update task-loop.json: set statusMessage to current progress ("Sources: {sourcesGathered}/{targetSources}") and continuationPrompt to reflect remaining work, e.g. "Continuing research: {sourcesGathered}/{targetSources} sources, need more to meet target"
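The two-stage decision above can be sketched as a single function. This is an illustrative sketch; the thresholds and the mandatory ordering of the gates come directly from the decision tables.

```python
def next_phase(state: dict, avg_confidence: float, gap_count: int) -> str:
    """Stage 1: source gate (mandatory). Stage 2: quality gate."""
    if state["sourcesGathered"] < state["targetSources"]:
        return "RESEARCH"  # Stage 1 failed: forced back to research
    if avg_confidence >= 2.5 and gap_count <= 2:
        return "SYNTHESIZE"
    return "RESEARCH"  # Stage 2 failed: generate follow-up questions
```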
</phase>
<phase name="SYNTHESIZE">
Write research/{slug}/report.md using this template:
# {Topic}
## Executive Summary
[300-400 words. Most important finding first. State confidence. Note caveats.]
## Background
[200 words. Key terms. Context.]
## Key Findings
### [Theme 1]
[Grouped findings. Inline citations. Note source strength.]
### [Theme 2]
[3-5 themes total]
## Conflicting Information
[Both sides. Which has better sourcing.]
## Gaps & Limitations
[What's unknown. What needs more research.]
## Source Assessment
- **High confidence:** [claims with 3+ quality sources]
- **Medium confidence:** [claims with 1-2 sources]
- **Low confidence:** [single source or Tier 3 only]
## Sources
### Primary
[Tier 1 sources with URLs]
### Secondary
[Tier 2-3 sources with URLs]
---
*Sources: {sourcesGathered} | Searches: {totalSearches} | Teammates: {teammateCompletions} | Iterations: {iteration} | Duration: {duration} | Date: {date}*
Set phase="DONE".
Update task-loop.json:
{
"active": true,
"complete": true,
"completionMessage": "Research complete: \"{topic}\"\n\nResources used:\n Searches: {totalSearches}\n Sources: {sourcesGathered}/{targetSources}\n Teammates: {teammateCompletions}\n Iterations: {iteration}\n\nReport: research/{slug}/report.md"
}
The task loop hook will display this message when the session exits. </phase>
</steps><error_handling>
| Error | Action |
|---|---|
| Malformed JSON | Retry once, then mark low confidence |
| Scrape fails | Continue with other URLs |
| Rate limit | Wait 60s, reduce batch to 2 |
| No results | Mark low confidence, rephrase as follow-up |
| Tool not found | Fall back to WebFetch, update state.json |
| codex MCP unavailable, empty, or error-text response | Teammate returns Claude-only findings, research continues |
</error_handling>
<limits>
No hard iteration or search limits. The source gate is the primary constraint. Research continues until sourcesGathered >= targetSources.
</limits>
<red_flags> STOP if you catch yourself thinking any of these:
| Thought | Reality |
|---|---|
| "I have high confidence, I can skip the source target" | The task loop hook will block you. Gather the sources — it's non-negotiable. |
| "This topic is too broad for 8 questions" | Narrow the scope first. Don't start research on vague topics. |
| "I'll just synthesize what I have" | Check sourcesGathered >= targetSources in state.json. If not met, you cannot proceed. |
| "I don't need to update state.json" | You will lose track. Always read/write state.json. |
| "All sources are equal" | Weight Tier 1 sources higher in synthesis. |
| "I'm stuck, I'll just finish" | Narrow the scope or generate better follow-up questions. The task loop hook will block you. |
</red_flags>