By dbc-oduffy
Multi-agent deep research pipelines — internet research with iterative deepening (Pipeline A v2.2), repo assessment with repomap and atlas generation (Pipeline B), and structured, schema-conforming research (Pipeline C). All pipelines use Agent Teams with Haiku scouts, Sonnet specialists, and an Opus agent for the final sweep or synthesis.
npx claudepluginhub dbc-oduffy/deep-research-claude
Pipeline B (Repo Research) using Agent Teams — optional Opus survey for holistic orientation, 2 Haiku scouts build file inventories, 4 Sonnet specialists analyze and optionally compare, 1 Opus synthesizer produces the final document. In --deepest mode: three-phase pipeline with atlas sketch and refinement.
Run a deep research pipeline on a topic across internet sources (Pipeline A), on a repository (Pipeline B), or as structured research with schema-conforming output (Pipeline C). Use it for studying codebases, building knowledge bases, evaluating libraries, or investigating multi-source technical topics with verified findings. For batch structured research campaigns, use /structured-research instead.
Set up the deep-research plugin — verify Agent Teams, check pipeline availability, configure NotebookLM. Safe to re-run.
Pipeline C (Structured Research) using Agent Teams — schema-conforming research with a Haiku scout, Sonnet verifiers, and an Opus synthesizer, all as teammates. The EM reads the spec, pre-processes it into scout-brief.md, spawns the team, and is freed. The team handles everything autonomously.
Pipeline A v2.2 (Internet Research) using Agent Teams — collaborative research with a Haiku scout, Sonnet specialists (adversarial peers with structured output), and an Opus sweep agent, all as teammates. The EM scopes the research, spawns the team, and is freed. The team works autonomously, with optional iterative deepening: after Team 1 completes, the EM evaluates the gap report and may dispatch a smaller Team 2 for targeted follow-up.
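The iterative-deepening step lends itself to a small sketch. The assumptions here are mine: the gap-report fields, the severity scale, and the follow-up threshold are illustrative, not the plugin's actual format.

```python
# A minimal sketch of the EM's iterative-deepening decision after Team 1
# completes. The gap-report shape and the severity threshold are assumptions
# for illustration, not the plugin's actual format.
from dataclasses import dataclass

@dataclass
class Gap:
    topic: str
    severity: int  # assumed scale: 1 (minor) to 5 (blocking)

def plan_follow_up(gaps: list[Gap], max_team2_topics: int = 2) -> list[str]:
    """Return the topics worth dispatching a smaller Team 2 for, if any."""
    serious = sorted((g for g in gaps if g.severity >= 3),
                     key=lambda g: g.severity, reverse=True)
    # Team 2 is deliberately smaller than Team 1, so the follow-up list is capped.
    return [g.topic for g in serious[:max_team2_topics]]

if __name__ == "__main__":
    report = [Gap("pricing benchmarks", 4), Gap("API rate limits", 2)]
    print(plan_follow_up(report))  # ['pricing benchmarks']
```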
Haiku scout for Agent Teams-based repo research. Spawned as a teammate by the deep-research-repo command. Reads and inventories every file in assigned chunks of a target repository, producing structured file inventories for Sonnet specialists to consume. In comparison mode, also identifies equivalent files in the user's project. Examples: <example> Context: EM has scoped research into 4 chunks and assigned 2 chunks to each scout. user: "Inventory chunks A and B of the target repository" assistant: "I'll read every file in those chunks, catalog structs/functions/constants, and write the inventory." <commentary> Scout reads files mechanically, writes structured inventory to disk. Task completion unblocks specialists. </commentary> </example>
Sonnet topic specialist for Agent Teams-based repo research. Spawned as a teammate by the deep-research-repo command. Starts from a Haiku scout's file inventory, deep-reads repo files for assessment, optionally compares against a project, messages peers with cross-chunk findings, and writes verified analysis to disk. Examples: <example> Context: Scouts have completed file inventories and specialists are unblocked. user: "Analyze chunk A of the target repository" assistant: "I'll read the scout inventory, deep-read the key files, and write my assessment." <commentary> Specialist reads inventory first, then deep-reads files via Read. Produces assessment artifact, optionally comparison artifact. </commentary> </example>
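A rough sketch of what a scout's file inventory and a specialist's first-pass filter over it might look like; the entry fields ("path", "symbols", "loc") and the example file paths are illustrative assumptions, not the plugin's actual schema.

```python
# Hypothetical shape of a scout's file-inventory entries, plus the kind of
# filter a specialist might apply before deep-reading. Field names and paths
# are illustrative assumptions, not the plugin's actual schema.
import json

inventory = [
    {"path": "src/orchestrator.rs", "symbols": ["TeamState", "spawn_team"], "loc": 412},
    {"path": "src/util/ids.rs", "symbols": ["new_task_id"], "loc": 38},
]

def pick_key_files(entries, min_loc=100):
    """Deep-read the larger, symbol-rich files first."""
    return [e["path"] for e in entries if e["loc"] >= min_loc and e["symbols"]]

print(json.dumps(pick_key_files(inventory), indent=2))
```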
Haiku scout for Agent Teams-based deep research. Spawned as a teammate by the deep-research-web-teams command. Executes search queries from scope.md, mechanically vets source accessibility, and writes a shared source corpus for Sonnet specialists to consume. Examples: <example> Context: EM has scoped research and written search queries to scope.md. user: "Execute search queries and build the shared source corpus" assistant: "I'll read the queries from scope.md, run web searches, vet accessibility, and write the corpus." <commentary> Scout reads queries from disk (written by EM during scoping), executes them mechanically, writes results to source-corpus.md. Task completion unblocks specialists. </commentary> </example>
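In sketch form, the scout's mechanical loop could look like this; the scope.md bullet format, the vetting rule, and the corpus layout are assumptions for illustration rather than the plugin's actual file formats.

```python
# Hypothetical sketch of the scout step: read queries from scope.md, keep only
# reachable results, and write them to source-corpus.md. The file formats and
# the vetting rule are assumptions for illustration.
from pathlib import Path

def read_queries(scope_path: str = "scope.md") -> list[str]:
    # Assume one query per "- " bullet in scope.md (written by the EM during scoping).
    lines = Path(scope_path).read_text().splitlines()
    return [line[2:].strip() for line in lines if line.startswith("- ")]

def write_corpus(entries: list[dict], corpus_path: str = "source-corpus.md") -> None:
    # Each entry is assumed to carry a title and a URL that passed the accessibility check.
    rows = "\n".join(f"- [{e['title']}]({e['url']}) (reachable)" for e in entries)
    Path(corpus_path).write_text("# Source corpus\n\n" + rows + "\n")
```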
Sonnet topic specialist for Agent Teams-based deep research. Spawned as a teammate by the deep-research-web command. Starts from a shared source corpus (built by a Haiku scout), deep-reads and verifies sources, challenges peers' claims (adversarial interaction), and writes structured claims JSON + markdown summary to disk. May do supplementary web searches if the corpus is thin for their topic. Examples: <example> Context: Scout has built a shared corpus and specialists are unblocked. user: "Analyze the 'agent orchestration patterns' topic area" assistant: "I'll read the shared corpus, deep-read the most relevant sources, challenge peer claims where warranted, and output structured claims + summary." <commentary> Specialist reads source-corpus.md first, then deep-reads sources via WebFetch. Supplements with own WebSearch if needed. Outputs claims.json (structured) + summary.md (human-readable). </commentary> </example>
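The split between machine-readable claims and a human-readable summary might look roughly like this; the claim fields, file contents, and URL are illustrative assumptions, not the plugin's actual output format.

```python
# Hypothetical shape of one specialist's two outputs: structured claims plus a
# short human-readable summary. Field names and values are illustrative only.
import json
from pathlib import Path

claims = [
    {
        "claim": "Framework X supports streaming tool calls as of v0.9",
        "sources": ["https://example.com/changelog"],
        "confidence": "medium",
        "challenged_by": [],  # peer specialists can record objections here
    }
]

Path("claims.json").write_text(json.dumps(claims, indent=2))
Path("summary.md").write_text(
    "# Agent orchestration patterns\n\n"
    "One claim with one cited source; see claims.json for the structured form.\n"
)
```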
Opus sweep agent for Agent Teams-based deep research. Spawned as a teammate by the deep-research-web command. Blocked until all specialists complete, then reads their structured claims and summaries directly, performs adversarial coverage check, fills gaps with targeted research, and writes the executive summary and conclusion. Preserves specialist content — does not rewrite it. Examples: <example> Context: All specialists have completed and written claims.json + summary.md files. user: "Sweep the specialist findings — check coverage, fill gaps, write framing" assistant: "I'll read all specialist outputs, assess coverage gaps, research to fill them, and write the executive summary and conclusion." <commentary> The sweep agent reads specialist claims.json and summary.md files directly (no consolidator intermediate). Three phases: assess, fill gaps, frame. </commentary> </example>
Opus synthesizer for Agent Teams-based structured research (Pipeline C v2.1). Spawned as a teammate by the deep-research-structured command. Blocked until all verifier tasks complete, then writes skeleton output immediately, cross-reconciles verifier findings, resolves CONTESTED fields, validates schema, and overwrites with final structured data. Examples: <example> Context: All verifiers have completed their research and written schema field tables to disk. user: "Synthesize all verified findings into schema-conforming output" assistant: "I'll read all verifier outputs, write a skeleton to the output path immediately, cross-reference schema fields, resolve CONTESTED fields, reconcile conflicts, and overwrite with validated YAML/JSON output." <commentary> Synthesizer's task is blocked by all verifier tasks. Once unblocked, it reads schema field tables from the scratch directory. Output-first: skeleton written immediately as crash insurance, then refined. Output is structured data, not prose. </commentary> </example>
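The output-first pattern (a skeleton written immediately as crash insurance, then overwritten with reconciled, validated data) can be sketched as below; the schema fields and the CONTESTED rule shown here are assumptions, not the plugin's actual schema or resolution logic.

```python
# Rough sketch of the synthesizer's output-first pattern: write a skeleton to
# the output path immediately, then overwrite it with reconciled data. Schema
# fields and the CONTESTED rule are assumptions for illustration.
import json
from pathlib import Path

OUTPUT = Path("research-output.json")

def write_skeleton(schema_fields):
    # Crash insurance: the output file exists from the first moment of synthesis.
    OUTPUT.write_text(json.dumps({f: None for f in schema_fields}, indent=2))

def reconcile(values):
    if not values:
        return None
    distinct = sorted(set(values))
    # Agreement wins outright; disagreement is flagged rather than guessed.
    return values[0] if len(distinct) == 1 else {"CONTESTED": distinct}

def finalize(schema_fields, verifier_tables):
    result = {f: reconcile([t[f] for t in verifier_tables if f in t])
              for f in schema_fields}
    OUTPUT.write_text(json.dumps(result, indent=2))  # overwrite the skeleton

if __name__ == "__main__":
    fields = ["license", "latest_version"]
    write_skeleton(fields)
    finalize(fields, [{"license": "MIT", "latest_version": "2.3"},
                      {"license": "MIT", "latest_version": "2.4"}])
```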