Multi-workstream research orchestrator. Decomposes any research question into 2-4 focused workstreams, executes each with systematic web search, then synthesizes into a single coherent document with cross-cutting insights, contradictions, and confidence assessments. Use this skill whenever the user asks to "research", "deep dive", "analyze", "investigate", or "find out everything about" any topic. Also trigger when a question is complex enough that a single search would miss important angles — competitive analysis, market research, investment due diligence, scientific topics, strategic decisions, historical events, or any question where multiple perspectives matter. Activate proactively even if the user just says "help me understand X in depth."
Systematic multi-workstream research for any question. Decompose → Execute → Synthesize.
Before searching anything:

1. **Understand the question.** What exactly needs to be learned? What decision or output will this research inform?
2. **Decompose into 2–4 workstreams.** Each workstream must have a name, a one-sentence scope, and the key questions it will answer.
3. **Present the plan to the user** in this format:
## Research Plan: [Topic]
**Core question:** [What we're actually trying to answer]
### Workstream 1: [Name]
**Scope:** [1 sentence]
**Key questions:**
1. [Question]
2. [Question]
3. [Question]
### Workstream 2: [Name]
...
### Synthesis
After all workstreams: cross-cutting themes, contradictions, confidence assessment, recommendations.
---
Proceed with this plan? Or adjust workstream scope?
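The plan structure above can be modeled as a small data structure, which is useful if the orchestrator tracks state programmatically. This is an illustrative sketch, not part of the skill itself; all class and field names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Workstream:
    name: str
    scope: str                # one-sentence scope
    key_questions: list[str]  # the questions this workstream answers

@dataclass
class ResearchPlan:
    topic: str
    core_question: str
    workstreams: list[Workstream] = field(default_factory=list)

    def validate(self) -> None:
        # Enforce the 2-4 workstream rule from the decomposition step.
        if not 2 <= len(self.workstreams) <= 4:
            raise ValueError("decompose into 2-4 workstreams")

    def to_markdown(self) -> str:
        # Render in the same format the skill presents to the user.
        lines = [f"## Research Plan: {self.topic}",
                 f"**Core question:** {self.core_question}"]
        for i, ws in enumerate(self.workstreams, 1):
            lines.append(f"### Workstream {i}: {ws.name}")
            lines.append(f"**Scope:** {ws.scope}")
            lines.append("**Key questions:**")
            lines.extend(f"{j}. {q}" for j, q in enumerate(ws.key_questions, 1))
        return "\n".join(lines)
```

The `validate()` gate mirrors the approval step: the plan is checked before any search is run.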
Execute workstreams sequentially (a Claude.ai constraint: no parallel agents). For each workstream, work through its sub-questions one at a time: search, record findings after every 1–2 searches, and note source quality as you go.
After completing each workstream, briefly report:
✓ Workstream 1 complete: [2-sentence summary of what was found]
Starting Workstream 2...
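The sequential loop can be sketched as follows. The `web_search` and `log` callables are placeholders for whatever tools are actually available; only the control flow matters here, in particular the write-then-search discipline of recording each result immediately rather than batching searches.

```python
def run_workstreams(plan, web_search, log):
    """Execute workstreams one at a time (no parallel agents).

    `web_search(query) -> str` and `log(note) -> None` are hypothetical
    stand-ins for the real tools; the control flow is the point.
    """
    for i, ws in enumerate(plan, 1):
        findings = []
        for question in ws["key_questions"]:
            result = web_search(question)
            # Write-then-search: record what was found immediately,
            # never queue several searches and synthesize from memory.
            findings.append({"question": question, "result": result})
            log(f"[{ws['name']}] {question}: recorded")
        ws["findings"] = findings
        log(f"Workstream {i} complete: {len(findings)} questions answered.")
    return plan
```

A brief completion note is logged after each workstream, matching the status reports shown above.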
After all workstreams complete, produce a synthesis document with these sections:

**Key takeaways.** 3–5 bullets. The most important things learned. What would change decisions.

**Cross-cutting themes.** Organized by theme, NOT by workstream. This is the value-add: insights that span multiple workstreams, patterns that only emerge when the full picture is assembled.

**Contradictions and uncertainties.** What did different workstreams find that conflicts? Where is the evidence genuinely uncertain? Do not paper over these; flag them explicitly.
**Confidence assessment.**

| Finding | Confidence | Basis |
|---|---|---|
| [Claim] | High / Medium / Low | [Why] |

- **High** = multiple corroborating primary sources
- **Medium** = single good source or multiple secondary sources
- **Low** = limited data, old data, or conflicting sources
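The High/Medium/Low rubric maps cleanly onto a small helper. A hypothetical sketch of how the levels could be assigned, with the caveat that real source assessment is a judgment call, not a count:

```python
def assess_confidence(primary_sources: int, secondary_sources: int,
                      conflicting: bool = False, stale: bool = False) -> str:
    """Apply the High/Medium/Low rubric to a finding's evidence base."""
    if conflicting or stale:
        return "Low"      # conflicting or old data caps confidence
    if primary_sources >= 2:
        return "High"     # multiple corroborating primary sources
    if primary_sources == 1 or secondary_sources >= 2:
        return "Medium"   # single good source or multiple secondary sources
    return "Low"          # limited data
```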
**Recommendations and next steps.** What should the user do with this research? What gaps remain? What would move low-confidence findings to high?
Never start research without user approval of the plan. The decomposition IS the insight — wrong workstreams waste everything that follows.
Write-then-search, not search-then-search. After every 1–2 searches, record findings. Never queue up 5 searches and then try to synthesize from memory.
Contradictions are features, not bugs. If workstreams surface conflicting data, report it. The user hired a researcher, not a PR firm.
Depth matches the question. Don't produce a 3,000-word synthesis for a question that has a clear 200-word answer. Calibrate.
Workstream scope should be exhaustive but non-overlapping. If two workstreams would both naturally search for the same thing, adjust the split before launching.
Flag source quality inline. If you're citing a Reddit post or an SEO farm, say so. The user needs to know where the evidence is solid.
The synthesis is not a summary. It surfaces cross-cutting patterns, not a workstream-by-workstream recap. If the synthesis just repeats what each workstream said, it failed.
Use these as starting points; adapt to the actual question:

- Competitive / Market Analysis
- Investment / Deal Due Diligence
- Technology Evaluation
- Person / Organization Research
- Strategic Decision Support
Ask the user (or infer from context) which output format they need. Default to a structured report unless the question is simple or the user signals otherwise.