By oborchers
Structured deep research methodology for Claude Code — query decomposition, parallel web research with source verification, hallucination prevention, and synthesis into well-sourced documents
npx claudepluginhub oborchers/fractional-cto --plugin deep-research

Use this agent to synthesize research findings from multiple research-worker intermediate documents into a single, well-sourced final output document. Runs after all research-worker AND research-verifier agents have completed. Applies verification corrections, handles deduplication, conflict resolution, thematic organization, citation management, and confidence scoring. <example> Context: Four research-worker agents completed and wrote intermediate docs. Time to synthesize. user: "Research how LLM agents handle memory" assistant: "All workers finished. I'll dispatch the research-synthesizer to merge findings into the final document." <commentary> The main conversation dispatches the synthesizer after all workers complete. The synthesizer reads all intermediate docs, deduplicates, resolves conflicts, organizes by theme, and writes the final output with inline citations and a Sources section. </commentary> </example> <example> Context: Additional workers were dispatched to fill gaps. Re-synthesis needed. user: "I want to investigate the pricing gap from the first round" assistant: "Gap-filling worker is done. I'll re-run the synthesizer to merge the new findings into the final document." <commentary> The synthesizer can be re-dispatched after follow-up research rounds to incorporate new findings into the existing output document. </commentary> </example>
Use this agent to verify a research-worker's intermediate document by re-fetching key sources and checking numerical claims, critical facts, and citation accuracy. Spawn one verifier per worker, in parallel, after all workers complete. <example> Context: Five research workers completed their intermediate documents. Time to verify before synthesis. user: "Research alternatives to trigger.dev for job scheduling" assistant: "All 5 workers finished. I'll dispatch parallel verifiers to spot-check their claims before synthesis." <commentary> One verifier per worker, running in parallel. Each verifier re-fetches key sources and checks the worker's most critical claims (numbers, funding, benchmarks, feature assertions) against actual source content. </commentary> </example> <example> Context: A single follow-up worker completed gap-filling research. Verify before re-synthesis. user: "Investigate the pricing gap from the first round" assistant: "Gap-filling worker done. Let me verify its claims before re-synthesizing." <commentary> Verifiers can be spawned individually for follow-up research rounds, not just as part of the initial batch. </commentary> </example>
Use this agent for parallel web research on specific subtopics during deep research sessions. Spawn multiple instances simultaneously, each assigned a different subtopic, to research in parallel and write intermediate findings documents. <example> Context: User initiated /research on LLM agent architectures and the command decomposed into subtopics. user: "Research how LLM-based research agents are built in production" assistant: "I'll dispatch parallel research-worker agents to investigate orchestration patterns, token efficiency, and hallucination prevention simultaneously." <commentary> The /research command decomposed the query into subtopics and dispatched research-worker agents in parallel. Each worker searches the web, evaluates sources, and writes an intermediate document with findings and citations. </commentary> </example> <example> Context: User wants comprehensive comparison of database options. user: "Compare PostgreSQL, CockroachDB, and TiDB for our multi-region SaaS" assistant: "I'll dispatch research workers to investigate each database's multi-region capabilities, consistency models, and operational complexity." <commentary> Each research-worker focuses on a specific subtopic, uses WebSearch and WebFetch to gather real information, and writes findings with sources to an intermediate document. </commentary> </example> <example> Context: Follow-up research to fill a gap identified in initial synthesis. user: "The initial research didn't cover pricing. Can you investigate that?" assistant: "I'll dispatch a research-worker to specifically investigate pricing models." <commentary> Research-worker agents can be spawned individually for targeted follow-up research, not just as part of initial parallel dispatch. </commentary> </example>
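The three agents above form a fixed pipeline: parallel workers, then one verifier per worker, then a single synthesizer. A minimal sketch of that orchestration shape, with hypothetical stand-in functions in place of the real agents (the actual plugin dispatches Claude Code subagents, not Python callables):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the real agents: each takes a subtopic or a
# document and returns text. The real agents read and write files instead.
def research_worker(subtopic: str) -> str:
    return f"findings on {subtopic}"

def research_verifier(doc: str) -> str:
    return f"verified: {doc}"

def research_synthesizer(docs: list[str]) -> str:
    return "FINAL REPORT\n" + "\n".join(docs)

def run_pipeline(subtopics: list[str]) -> str:
    # Phase 1: one worker per subtopic, all in parallel.
    with ThreadPoolExecutor() as pool:
        intermediate = list(pool.map(research_worker, subtopics))
    # Phase 2: one verifier per worker document, also in parallel.
    with ThreadPoolExecutor() as pool:
        verified = list(pool.map(research_verifier, intermediate))
    # Phase 3: a single synthesizer merges all verified documents.
    return research_synthesizer(verified)

report = run_pipeline(["orchestration", "token efficiency",
                       "hallucination prevention"])
print(report)
```

The key structural point is the barrier between phases: no verifier starts until every worker has written its document, and the synthesizer runs exactly once over the verified set.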
This skill should be used when producing any research output, verifying claims from web sources, checking citation accuracy, assessing confidence in findings, preventing hallucinations from cascading across agent boundaries, or reviewing research documents for factual reliability. Covers the hallucination taxonomy (7 types), OWASP ASI08 cascading failures, circuit breaker patterns, citation verification rules, confidence scoring, ground-truth validation, and known limitations of automated verification.
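The circuit breaker pattern mentioned here can be sketched as follows. This is an illustrative toy, not the skill's actual implementation; the threshold value and class name are assumptions:

```python
class VerificationCircuitBreaker:
    """Discard a worker document wholesale when too many of its spot-checked
    claims fail verification, so fabricated facts never reach synthesis."""

    def __init__(self, failure_threshold: float = 0.3):
        self.failure_threshold = failure_threshold

    def should_discard(self, claims_checked: int, claims_failed: int) -> bool:
        if claims_checked == 0:
            return False  # nothing verified yet; no basis to trip
        return claims_failed / claims_checked >= self.failure_threshold

breaker = VerificationCircuitBreaker(failure_threshold=0.3)
print(breaker.should_discard(10, 1))  # 10% failure rate: keep the document
print(breaker.should_discard(10, 4))  # 40% failure rate: discard it
```

Discarding the whole document, rather than patching individual claims, is what stops a cascade: a worker that fabricated several facts has likely fabricated others the verifier did not sample.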
This skill should be used when starting any research task, decomposing a research query, planning research strategy, deciding how many sub-topics to investigate, scaling research effort to query complexity, determining when to stop researching, or dynamically re-planning based on intermediate findings. Covers query analysis, decomposition techniques (Self-Ask, Least-to-Most, DAG-based), effort scaling, plan representations, stopping criteria, and research anti-patterns.
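The effort-scaling idea can be sketched as a decomposition step that clamps the number of subtopics to query complexity. The facet list and bounds here are hypothetical defaults; real decomposition (Self-Ask, Least-to-Most) derives subtopics from the query itself rather than from a fixed template:

```python
def decompose(query: str, complexity: int) -> list[str]:
    """Split a research query into subtopics, scaling worker count to
    complexity. Clamped so effort never drops below 2 or exceeds 6."""
    n = max(2, min(6, complexity))
    facets = ["current landscape", "key approaches", "trade-offs",
              "production case studies", "limitations", "open questions"]
    return [f"{query}: {facet}" for facet in facets[:n]]

subtopics = decompose("LLM agent memory", complexity=4)
print(subtopics)
```

Clamping matters in both directions: a trivial query should not spawn six workers, and a sprawling one should not collapse into a single under-scoped search.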
This skill should be used when evaluating source credibility, deciding which search results to trust, choosing between search providers, detecting SEO spam or content farms, selecting domain-specific sources (academic, medical, legal, technical), evaluating software packages or libraries, comparing tools or technologies, assessing GitHub repo health, checking adoption metrics, or when research quality depends on retrieval quality. Covers the source credibility taxonomy (T1-T6 tiers), CRAAP framework adaptation, multi-provider search strategy, artifact evaluation framework (health/adoption/authority signals for packages, repos, APIs, standards, technologies), and source quality anti-patterns.
This skill should be used when combining research findings from multiple sources or agents, deduplicating overlapping information, resolving conflicts between sources, constructing a narrative from research data, formatting citations and source lists, assessing report quality, or writing the final research document. Covers deduplication strategies, conflict resolution, thematic analysis, narrative construction, citation management, and synthesis anti-patterns.
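Deduplication and conflict resolution during synthesis can be sketched as a merge keyed on normalized claim text, where the better-sourced version wins. The normalization and the confidence-based tiebreak are stand-in assumptions for the skill's richer strategies:

```python
def merge_findings(findings: list[dict]) -> list[dict]:
    """Deduplicate claims gathered by multiple workers: normalize case and
    whitespace, and when two documents state the same claim, keep the
    version with the higher confidence score."""
    best: dict[str, dict] = {}
    for finding in findings:
        key = " ".join(finding["claim"].lower().split())
        if key not in best or finding["confidence"] > best[key]["confidence"]:
            best[key] = finding
    return list(best.values())

merged = merge_findings([
    {"claim": "Model X costs $10/M tokens", "confidence": 0.6,
     "source": "third-party blog"},
    {"claim": "model x costs $10/m tokens", "confidence": 0.9,
     "source": "vendor pricing page"},
    {"claim": "Model X supports 128k context", "confidence": 0.8,
     "source": "official docs"},
])
print(len(merged))  # 2: the duplicate pricing claim collapsed to one entry
```

Real conflict resolution also has to handle claims that normalize differently but contradict each other (e.g. two different prices), which is why the skill treats conflict resolution as a separate step from deduplication.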
This skill should be used when the user asks 'how do I do deep research', 'show me research skills', 'help me research a topic', 'what research methodology should I use', or at the start of any structured web research task. Provides the index of all deep research principle skills and the /research command.
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and a hooks skeleton.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Comprehensive .NET development skills for modern C#, ASP.NET, MAUI, Blazor, Aspire, EF Core, Native AOT, testing, security, performance optimization, CI/CD, and cloud-native applications
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification