Multi-agent deep research system for complex investigations. Uses a lead-researcher orchestrator, parallel sub-researchers, and a critical-reviewer to produce evidence-based reports with confidence levels and citations.
npx claudepluginhub nsheaps/ai-mktpl --plugin deep-research

Reviews research findings from the lead-researcher's draft report. Challenges assumptions, identifies weak evidence, finds gaps in research coverage, and suggests additional angles. Read-only — does not modify files. Not meant to be invoked directly — dispatched by the lead-researcher. <example> Context: Lead researcher has collected findings and wants validation user: "Review the research findings in .claude/tmp/research-agent-teams-*.md against the original question: 'How do agent teams handle failure recovery?' Identify weak evidence, gaps, and suggest follow-up angles." assistant: "Reading findings files to evaluate evidence quality and coverage..." <commentary> Critical reviewer validates research quality without modifying the findings — it challenges and identifies gaps for the lead to address. </commentary> </example>
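The reviewer's read-only pass over findings files can be sketched in plain Python. In practice the critical-reviewer is an LLM agent judging evidence quality; the keyword scan, function name, and marker list below are purely illustrative assumptions, not part of any real Claude Code API:

```python
from pathlib import Path

# Hypothetical phrases that would flag weak evidence in a findings file.
WEAK_MARKERS = ("single source", "unverified", "speculation")

def review_findings(pattern: str = ".claude/tmp/research-agent-teams-*.md") -> dict[str, list[str]]:
    """Read-only review: never edits findings, only reports weaknesses per file."""
    issues: dict[str, list[str]] = {}
    for path in Path(".").glob(pattern):
        text = path.read_text().lower()
        flags = [marker for marker in WEAK_MARKERS if marker in text]
        if flags:
            issues[str(path)] = flags
    return issues
```

The key design point is that the reviewer returns its critique to the lead-researcher rather than patching the findings itself, which keeps one agent responsible for all writes.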
Orchestrates deep, multi-source research investigations. Receives a research question, plans multiple search angles, dispatches sub-researchers for each angle, collects findings, runs critical review, and produces a comprehensive report with confidence levels and citations. <example> Context: Team needs to understand how an undocumented feature works internally user: "How does Claude Code spawn teammates? Can the spawn command be customized?" assistant: "I'll use the lead-researcher agent to investigate teammate spawning internals — it will plan search angles, dispatch sub-researchers, and synthesize a report." <commentary> Deep technical investigation requiring multiple sources and angles is the lead researcher's specialty. It will dispatch sub-researchers for parallel exploration. </commentary> </example> <example> Context: Evaluating competing approaches for a technical decision user: "Compare WebSocket vs SSE vs long-polling for our real-time notification system — I need evidence-based recommendations" assistant: "This needs multi-angle research. I'll use the lead-researcher to dispatch sub-researchers for each technology, then synthesize a comparative report." <commentary> Comparative analysis benefits from parallel sub-researchers each focusing on one technology, with synthesis by the lead. </commentary> </example> <example> Context: Simple question that does NOT warrant deep research user: "What flag enables agent teams?" assistant: "I can answer that directly — it's CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1. No need for the lead researcher." <commentary> Simple lookups should NOT be routed to the lead researcher. Only use for multi-source investigations. </commentary> </example>
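The plan, dispatch, review, synthesize loop described above can be sketched as ordinary Python. Every function and field here is a hypothetical stand-in for an agent dispatch; real sub-researchers run in parallel and write findings to files rather than returning objects:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    angle: str
    summary: str
    confidence: str  # "high" | "medium" | "low"

def dispatch_sub_researcher(question: str, angle: str) -> Finding:
    # Placeholder: a real sub-researcher searches web/docs for this angle
    # and saves structured findings under .claude/tmp/.
    return Finding(angle, f"evidence on {angle}", "medium")

def critical_review(question: str, findings: list[Finding]) -> list[str]:
    # Placeholder review: flag low-confidence angles for follow-up.
    return [f"{f.angle} (follow-up)" for f in findings if f.confidence == "low"]

def investigate(question: str, angles: list[str]) -> str:
    findings = [dispatch_sub_researcher(question, a) for a in angles]
    for gap in critical_review(question, findings):          # close coverage gaps
        findings.append(dispatch_sub_researcher(question, gap))
    lines = [f"# Report: {question}"] + [
        f"- [{f.confidence}] {f.angle}: {f.summary}" for f in findings
    ]
    return "\n".join(lines)                                  # report with confidence levels
```

For the WebSocket vs SSE vs long-polling example above, `angles` would be one entry per technology, and the synthesis step would merge the three findings files into a comparative report.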
Worker agent that investigates a specific research angle. Takes a focused query, searches web and documentation for evidence, writes findings to a file, and reports a summary back to the lead researcher. Not meant to be invoked directly — dispatched by the lead-researcher. <example> Context: Lead researcher dispatches investigation of a specific angle user: "Investigate how Claude Code agent teams handle inter-agent communication. Check official docs, GitHub issues, and community examples. Save findings to .claude/tmp/research-agent-teams-communication.md" assistant: "Searching official documentation for agent team communication patterns..." <commentary> Sub-researcher focuses on one specific angle with targeted searches and saves structured findings to a file. </commentary> </example>
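The findings-file convention a sub-researcher follows can be illustrated with a short sketch. The `research-<topic>-<angle>.md` path pattern comes from the examples above; the function name and the markdown layout of the file body are assumptions for illustration:

```python
from pathlib import Path

def write_findings(topic: str, angle: str, evidence: list[tuple[str, str]]) -> Path:
    """Save one angle's findings as markdown; evidence is (claim, source) pairs."""
    path = Path(f".claude/tmp/research-{topic}-{angle}.md")
    path.parent.mkdir(parents=True, exist_ok=True)
    body = [f"# Findings: {topic} / {angle}", "", "## Evidence"]
    body += [f"- {claim} (source: {src})" for claim, src in evidence]
    path.write_text("\n".join(body) + "\n")
    return path
```

Keeping each angle in its own file lets the lead-researcher and critical-reviewer read findings independently without the sub-researchers ever touching each other's output.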
Upstash Context7 MCP server for up-to-date documentation lookup. Pulls version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.