Conducts AI-powered deep research on any topic via triggers like '/deep-research [topic]' or 'deep research on [topic]'. Uses interactive AskUserQuestion for focus, output, and audience selection.
```shell
npx claudepluginhub fivetaku/deep-research-kit
```
This skill uses the workspace's default tool permissions.
> AI-powered comprehensive research with state management, multi-agent source verification, and structured outputs.
Bundled files:
- assets/templates/bibliography.md
- assets/templates/executive_summary.md
- assets/templates/full_report_section.md
- assets/templates/readme_research.md
- assets/templates/website_template.html
- examples/ai_code_assistants.json
- examples/climate_tech.json
- examples/healthcare_ai.json
- references/agent_prompts.md
- references/citation_rules.md
- references/phase_contracts.md
- references/quality_rubric.md
- references/query_generator.md
- references/query_schema.json
- references/tool_strategy.md
- scripts/orchestrator.py
- scripts/pipelines.py

Executes a multi-agent research pipeline on any topic with Scout, Investigators, Deep Diver, Verifier, Synthesizer, and Critic reviews to produce verified, sourced reports.
Diagnoses research quality, guides systematic query expansion, and runs Tavily AI web searches via Deno CLI. Use when starting research, stuck, or unsure if complete.
Conducts deep web research with parallel agents, multi-wave exploration for gaps, and structured synthesis. Activates for investigating topics, comparing options, best practices, or comprehensive web info.
# Primary triggers
- "/deep-research [topic]"
- "/research [topic]"
- "딥리서치 [주제]" (Korean: "deep research [topic]")
- "심층 연구 [주제]" (Korean: "in-depth research [topic]")
- "[주제]에 대해 리서치해줘" (Korean: "research [topic] for me")
- "[주제] 리서치" (Korean: "[topic] research")
- "deep research on [topic]"
# Resume triggers
- "/deep-research resume [session_id]"
- "/research-resume [session_id]"
# Status triggers
- "/deep-research status"
- "/research-status"
DO NOT just display this documentation. EXECUTE the research flow immediately.
Use the AskUserQuestion tool for interactive selection. CALL THE AskUserQuestion TOOL IMMEDIATELY.
DO NOT output text-based questions. INSTEAD, call the AskUserQuestion tool with JSON parameters.
EXECUTE: Immediately call the AskUserQuestion tool with the JSON below (combine into 1-4 question groups). Translate all labels/descriptions to match the user's language:
English Example:
```json
{
  "questions": [
    {
      "question": "What aspects interest you most?",
      "header": "Focus",
      "options": [
        {"label": "Current state & trends", "description": "Latest developments, market status, key players"},
        {"label": "Technical deep-dive", "description": "Architecture, implementation, tech stack"},
        {"label": "Market analysis", "description": "Market size, growth rate, competition"},
        {"label": "All of the above (Recommended)", "description": "Comprehensive research - all aspects"}
      ],
      "multiSelect": false
    },
    {
      "question": "What type of deliverable do you want?",
      "header": "Output",
      "options": [
        {"label": "Comprehensive report (Recommended)", "description": "20-50+ pages, detailed analysis and insights"},
        {"label": "Executive summary", "description": "3-5 pages, key points only"},
        {"label": "Modular documents", "description": "Multiple documents by topic"}
      ],
      "multiSelect": false
    },
    {
      "question": "Who will read this research?",
      "header": "Audience",
      "options": [
        {"label": "Technical team/Developers", "description": "Include technical details"},
        {"label": "Business executives", "description": "Focus on strategic insights"},
        {"label": "Researchers/Academic", "description": "Academic citations and methodology"},
        {"label": "General audience", "description": "Easy explanations and overview"}
      ],
      "multiSelect": false
    },
    {
      "question": "Any source preferences?",
      "header": "Sources",
      "options": [
        {"label": "Academic/Papers", "description": "Peer-reviewed papers, conferences"},
        {"label": "Industry reports", "description": "Gartner, white papers, analyst reports"},
        {"label": "News/Current", "description": "Media, blogs, latest announcements"},
        {"label": "All sources (Recommended)", "description": "All reliable sources"}
      ],
      "multiSelect": false
    }
  ]
}
```
Korean Example (EXECUTE):
```json
{
  "questions": [
    {
      "question": "어떤 측면에 관심이 있으신가요?",
      "header": "Focus",
      "options": [
        {"label": "현재 상태와 트렌드", "description": "최신 동향, 시장 현황, 주요 플레이어"},
        {"label": "기술 심층 분석", "description": "아키텍처, 구현 방법, 기술 스택"},
        {"label": "시장 분석", "description": "시장 규모, 성장률, 경쟁 구도"},
        {"label": "모두 포함 (Recommended)", "description": "종합 리서치 - 모든 측면 분석"}
      ],
      "multiSelect": false
    }
  ]
}
```
Session state lives in RESEARCH/{topic}_{timestamp}/state.json; final deliverables go in the outputs/ folder.

All search queries MUST include current date context for freshness.
Before generating ANY search query, determine today's date from the system context.
- Always append the year to queries
- Use recency operators
- Add freshness keywords
Example transformations:
| User Query | Generated Search Query |
|---|---|
| AI 코딩 어시스턴트 (Korean: "AI coding assistant") | AI 코딩 어시스턴트 2026 최신 동향 (Korean query + "2026 latest trends") |
| startup trends | startup trends 2026 latest |
| React vs Vue | React vs Vue 2026 comparison |
For academic/historical research:
[topic] [current_year] [freshness_keyword] [specific_aspect]
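The pattern above can be sketched as a small helper (the function name and default freshness keyword are illustrative, not part of the skill's code):

```python
from datetime import date

def build_search_query(topic: str, aspect: str = "", freshness: str = "latest") -> str:
    """Compose a date-aware query: [topic] [current_year] [freshness_keyword] [specific_aspect]."""
    year = date.today().year  # determine today's date from the runtime, never hardcode it
    parts = [topic, str(year), freshness, aspect]
    return " ".join(p for p in parts if p)  # drop empty slots
```

For example, `build_search_query("startup trends")` produces the topic followed by the current year and "latest", matching the transformation table above.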
Try the platform-specific access strategies in tool_strategy.md, or work through the fallback order; tag results retrieved this way with via_fallback and record failures in sources/failed_urls.txt as well.

Deploy 3-5 parallel agents to maximize coverage:
| Agent Type | Count | Focus | Output |
|---|---|---|---|
| Web Research | 2-3 | Current info, trends, news | Structured summaries with source URLs |
| Academic/Technical | 1-2 | Papers, specs, methodology | Technical analysis with citations |
| Cross-Reference | 1 | Fact-checking, verification | Confidence ratings for key findings |
Launch multiple Task calls in a single response for parallel execution with mode: "bypassPermissions". Each agent receives a focused prompt with specific subtopic and citation requirements.
For detailed agent prompt templates and Graph of Thoughts integration:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/agent_prompts.md
Perform research with the default tools (WebSearch, WebFetch, Bash/curl); see tool_strategy.md for the best per-platform approach. If MCP tools (Perplexity, Firecrawl, Exa, etc.) are installed in the environment, prefer them, but the default tools alone are sufficient for thorough research.
Deploy parallel research agents using the Task tool with run_in_background=True and mode: "bypassPermissions" for concurrent subtopic investigation.
For detailed tool strategy and code examples:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/tool_strategy.md
Every factual claim MUST include inline citation.
| Grade | Description | Examples |
|---|---|---|
| A | Peer-reviewed, systematic reviews, meta-analyses | Nature, Lancet, IEEE |
| B | Official docs, clinical guidelines, cohort studies | FDA, W3C, WHO |
| C | Expert opinion, case reports, industry reports | Gartner, conferences |
| D | Preliminary research, preprints, white papers | arXiv, company blogs |
| E | Anecdotal, theoretical, speculative | Social media, forums |
For detailed citation formatting rules, refer to:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/citation_rules.md
For complete source quality assessment rubric:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/quality_rubric.md
- Always ground statements in source material
- Use Chain-of-Verification for critical claims
- Cross-reference multiple sources
- Explicitly state uncertainty
```json
{
  "session_id": "Topic_Name_20260224_143000",
  "topic": "Research Topic",
  "created_at": "2026-02-24T14:30:00Z",
  "updated_at": "2026-02-24T15:45:00Z",
  "status": "PHASE_3_QUERYING",
  "current_phase": 3,
  "requirements": {
    "focus": ["aspect1", "aspect2"],
    "output_format": "comprehensive_report",
    "scope": {"timeframe": {}, "geography": {}},
    "sources": {"required_types": [], "min_quality": "B"},
    "audience": "executive",
    "special_requirements": []
  },
  "plan": {
    "subtopics": [],
    "search_queries": {},
    "agent_assignments": []
  },
  "progress": {
    "phase_1": "completed",
    "phase_2": "completed",
    "phase_3": "in_progress",
    "phase_4": "pending",
    "phase_5": "pending",
    "phase_6": "pending",
    "phase_7": "pending"
  },
  "sources_count": 0,
  "artifacts": {},
  "errors": []
}
```
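As a rough sketch of recording a phase transition against this schema (the helper name is hypothetical; the real logic presumably lives in scripts/orchestrator.py):

```python
import json
from datetime import datetime, timezone

def mark_phase(state_path: str, phase_num: int, status: str) -> dict:
    """Load state.json, update one phase's status, refresh updated_at, write it back."""
    with open(state_path, "r", encoding="utf-8") as f:
        state = json.load(f)
    state["progress"][f"phase_{phase_num}"] = status
    state["current_phase"] = phase_num
    # ISO-8601 UTC timestamp with trailing Z, matching the schema above
    state["updated_at"] = datetime.now(timezone.utc).isoformat().replace("+00:00", "Z")
    with open(state_path, "w", encoding="utf-8") as f:
        json.dump(state, f, ensure_ascii=False, indent=2)
    return state
```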
Each collected source is one JSON line in sources/sources.jsonl:

```json
{"id": "src_001", "url": "https://...", "title": "Article Title", "author": "Author", "date": "2024-06-15", "domain": "nature.com", "type": "academic", "quality_rating": "A", "snippet": "relevant excerpt...", "claims": ["claim1"], "verified": true}
```
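A minimal sketch of working with this JSONL format, assuming the A-E grade ordering from the quality table above (helper names are illustrative):

```python
import json

# Grade ranking from the source-quality table: A (best) through E (worst).
GRADE_RANK = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def append_source(path: str, record: dict) -> None:
    """Append one source record as a single JSON line to sources.jsonl."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

def filter_by_min_quality(path: str, min_quality: str = "B") -> list:
    """Return sources whose quality_rating meets or beats min_quality."""
    with open(path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    return [r for r in records if GRADE_RANK[r["quality_rating"]] <= GRADE_RANK[min_quality]]
```

This mirrors the state schema's `"min_quality": "B"` requirement: with the default threshold, only A- and B-grade sources survive the filter.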
For detailed phase input/output contracts:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/phase_contracts.md
```
RESEARCH/{topic}_{timestamp}/
├── state.json              # Session state (resumable)
├── README.md               # Navigation guide
│
├── artifacts/              # Intermediate outputs
│   ├── research_plan.json
│   ├── agent_results/
│   └── drafts/
│
├── sources/
│   ├── sources.jsonl       # All collected sources
│   ├── bibliography.md     # Formatted citations
│   └── quality_report.md   # Source quality ratings
│
├── outputs/                # FINAL DELIVERABLES
│   ├── 00_executive_summary.md
│   ├── 01_full_report/
│   │   ├── 01_introduction.md
│   │   ├── 02_current_landscape.md
│   │   ├── 03_challenges.md
│   │   ├── 04_future_outlook.md
│   │   └── 05_conclusions.md
│   ├── 02_appendices/
│   └── comparison_data.json
│
└── website/                # (optional) Visual presentation
    ├── index.html
    ├── styles.css
    └── script.js
```
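The layout above could be scaffolded roughly like this (the function name is illustrative; scripts/orchestrator.py presumably handles session creation in practice):

```python
import os
from datetime import datetime

def create_session_dirs(topic: str, base: str = "RESEARCH") -> str:
    """Create the RESEARCH/{topic}_{timestamp}/ skeleton shown above; return its root."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    root = os.path.join(base, f"{topic}_{timestamp}")
    for sub in ("artifacts/agent_results", "artifacts/drafts", "sources",
                "outputs/01_full_report", "outputs/02_appendices"):
        os.makedirs(os.path.join(root, sub), exist_ok=True)
    return root  # state.json, README.md, and the optional website/ are added later
```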
Use the templates at ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/assets/templates/ for consistent formatting:
| Template | Purpose |
|---|---|
| executive_summary.md | Executive summary structure |
| full_report_section.md | Individual report section template |
| bibliography.md | Bibliography with quality distribution |
| readme_research.md | Research session README/navigation |
| website_template.html | Interactive web presentation |
The default 5-section skeleton (introduction/landscape/challenges/future_outlook/conclusions) applies to all research. Only when the user explicitly requests a different type, generate a skeleton on the fly for that research, guided by the example patterns below.
Note: the default 7-Phase flow, 5-section skeleton, and date-aware querying are all core contracts of deep-research and must be preserved. The per-type skeletons below are an advanced option applied only on explicit user request; the table is a set of examples for dynamic generation, not a catalog menu.
| Research Type | Example 5-Section Pattern | Typical Use |
|---|---|---|
| Exploratory (scouting a new area) | introduction / landscape / opportunities / challenges / conclusions | Exploring new markets/technologies |
| Comparative (A vs B) | introduction / criteria / comparison_matrix / recommendation / conclusions | Tool/product comparisons |
| Predictive (future scenarios) | introduction / current_state / trends / scenarios / risks_and_recommendations | Market forecasts / technology roadmaps |
| Analytical (cause and effect) | introduction / problem / causes / effects / conclusions | Incident analysis / causal tracing |
| Default (generic) | introduction / current_landscape / challenges / future_outlook / conclusions | Comprehensive research (default) |
→ The rows above are patterns to learn from. If the user's topic is, say, "differences between the Korean and Japanese X markets", apply the Comparative pattern and rename the sections on the fly, e.g. introduction / market size comparison / user behavior differences / regulatory differences / entry strategy recommendation.
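A minimal sketch of the type-to-skeleton mapping described above (section names taken from the table; the function is illustrative, since the skill renames sections dynamically rather than selecting from a fixed menu):

```python
# Example 5-section patterns from the table above.
SKELETON_PATTERNS = {
    "exploratory": ["introduction", "landscape", "opportunities", "challenges", "conclusions"],
    "comparative": ["introduction", "criteria", "comparison_matrix", "recommendation", "conclusions"],
    "predictive": ["introduction", "current_state", "trends", "scenarios", "risks_and_recommendations"],
    "analytical": ["introduction", "problem", "causes", "effects", "conclusions"],
    "generic": ["introduction", "current_landscape", "challenges", "future_outlook", "conclusions"],
}

def pick_skeleton(research_type: str = "generic") -> list:
    """Fall back to the default 5-section skeleton when the type is unknown."""
    return SKELETON_PATTERNS.get(research_type, SKELETON_PATTERNS["generic"])
```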
Record the final skeleton in the report_skeleton field (so sessions remain resumable).

For precise research control, accept structured JSON queries following the schema at:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/query_schema.json
When a user provides a JSON object as input, parse it according to the schema and skip Phase 1 (Question Scoping) since requirements are already defined.
Example queries are available at:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/examples/
When resume is triggered:
- Scan RESEARCH/*/state.json for saved sessions
- Load the session's state.json
- Check its progress object for the last completed phase

```python
for phase_num in range(1, 8):
    phase_key = f"phase_{phase_num}"
    if state["progress"][phase_key] == "in_progress":
        resume_phase(phase_num)
        break
    elif state["progress"][phase_key] == "pending":
        start_phase(phase_num)
        break
```
On errors, append details to the state.json errors array, mark the phase as failed in progress, and record unreachable URLs in sources/failed_urls.txt.

State management scripts are available at:
${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/scripts/
| Script | Purpose |
|---|---|
| orchestrator.py | Research state machine controller - session creation, phase management, source tracking |
| pipelines.py | Pipeline definitions - agent prompts, clarification templates, synthesis prompts |
These can be executed via Bash to initialize sessions or manage state programmatically.
For detailed documentation on specific aspects:
| Reference | Location |
|---|---|
| Citation formatting rules | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/citation_rules.md |
| Phase input/output contracts | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/phase_contracts.md |
| Source quality rubric | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/quality_rubric.md |
| Agent prompt templates & GoT | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/agent_prompts.md |
| Tool strategy & code examples | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/tool_strategy.md |
| Structured query schema | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/query_schema.json |
| Query generation guide | ${CLAUDE_PLUGIN_ROOT}/skills/deep-research-main/references/query_generator.md |