Conducts wave-based multi-agent research: prime with Sibyl searches, run broad agent sweeps, analyze gaps, dispatch targeted waves, then synthesize. Suited to technology evaluation, SOTA analysis, and competitive analysis.

```bash
npx claudepluginhub hyperb1iss/hyperskills --plugin hyperskills
```

This skill uses the workspace's default tool permissions.
Wave-based knowledge gathering with deferred synthesis. Mined from 300+ real research dispatches: the pattern that consistently produces actionable intelligence.
Core insight: Research breadth-first, synthesize after. Don't draw conclusions from the first 3 results. Deploy agents in waves, accumulate findings, then synthesize with the full picture.
```dot
digraph research {
  rankdir=TB;
  node [shape=box];
  "1. PRIME" [style=filled, fillcolor="#e8e8ff"];
  "2. WAVE 1: Broad Sweep" [style=filled, fillcolor="#ffe8e8"];
  "3. GAP ANALYSIS" [style=filled, fillcolor="#fff8e0"];
  "4. WAVE 2+: Targeted" [style=filled, fillcolor="#ffe8e8"];
  "5. SYNTHESIZE" [style=filled, fillcolor="#e8ffe8"];
  "6. DECIDE & RECORD" [style=filled, fillcolor="#e8e8ff"];
  "1. PRIME" -> "2. WAVE 1: Broad Sweep";
  "2. WAVE 1: Broad Sweep" -> "3. GAP ANALYSIS";
  "3. GAP ANALYSIS" -> "4. WAVE 2+: Targeted";
  "4. WAVE 2+: Targeted" -> "3. GAP ANALYSIS" [label="still gaps", style=dashed];
  "3. GAP ANALYSIS" -> "5. SYNTHESIZE" [label="coverage sufficient"];
  "5. SYNTHESIZE" -> "6. DECIDE & RECORD";
}
```
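To view the flow as an image, save the graph above as `research.dot` (the filename is an assumption) and render it with Graphviz:

```bash
dot -Tsvg research.dot -o research.svg
```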
## 1. PRIME

Search what you already know before spawning a single agent.
Search Sibyl first:

```bash
sibyl search "<research topic>"
sibyl search "<related technology>"
sibyl search "<prior decision in this area>"
```
- Check for stale knowledge: note the date on each prior finding and distrust anything older than the domain's rate of change.
- Define the research question clearly: one sentence naming the decision the research must enable.
- Set the research budget:
| Depth | Agents | Time | When |
|---|---|---|---|
| Quick scan | 2-3 | 2-5 min | Known domain, just need latest info |
| Standard | 5-10 | 10-15 min | Technology evaluation, architecture options |
| Deep dive | 10-30 | 20-40 min | Greenfield decisions, SOTA analysis |
| Exhaustive | 30-60+ | 40-90 min | New project inception, competitive landscape |
Every factual claim needs a primary source appropriate to its type:

| Claim Type | Required Source |
|---|---|
| Current version | Package registry, release page, or official CLI |
| CLI flags / config keys | Official docs or local `--help` output |
| Security frameworks | OWASP, NIST, SLSA/OpenSSF, CIS, ISO, PCI sources |
| Cloud/provider behavior | Provider docs and current changelog |
| Research papers / SOTA | Paper, benchmark repo, or authors' artifact |
| Community health | Repository activity plus issue/release cadence |
If primary sources disagree with blog posts, trust the primary source and record the discrepancy. If a fact is volatile, date it explicitly and prefer a command/source the next agent can rerun.
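A few examples of rerunnable checks (illustrative; substitute your own packages and repos):

```bash
# Current version straight from the registry:
npm view react version
pip index versions requests                 # requires pip >= 21.2

# Recent releases from the source repository:
gh release list --repo golang/go --limit 3

# CLI flags from the tool itself, not from memory:
rg --help | head -n 20
```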
## 2. WAVE 1: Broad Sweep

Deploy the first wave of agents across the full research surface.
Each agent gets a specific, bounded prompt:

```
Research [SPECIFIC_TOPIC] for [PROJECT/DECISION].
Create a research doc at docs/research/[filename].md covering:

1. Current state (latest version, recent changes)
2. [Specific capability A relevant to our use case]
3. [Specific capability B]
4. [Integration with our stack: list specific technologies]
5. Performance characteristics / benchmarks
6. Known limitations and gotchas
7. Community health (stars, activity, maintenance)
8. Comparison with alternatives (name 2-3 specific alternatives)

Use WebSearch for current information. Include dates on all facts.
Cite sources with URLs.
```
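For example, one Wave 1 dispatch in a hypothetical job-queue evaluation (the topic, stack, and alternatives here are illustrative) might read:

```
Research pg-boss for the API backend's background-job decision.
Create a research doc at docs/research/pg-boss.md covering:

1. Current state (latest version, recent changes)
2. Retry/backoff semantics for failed jobs
3. Scheduled (cron-style) job support
4. Integration with our stack: Node.js, TypeScript, Postgres 16
5. Performance characteristics / benchmarks
6. Known limitations and gotchas
7. Community health (stars, activity, maintenance)
8. Comparison with alternatives (BullMQ, Graphile Worker)

Use WebSearch for current information. Include dates on all facts.
Cite sources with URLs.
```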
For technology evaluations, cover these dimensions:
| Dimension | Question |
|---|---|
| Capability | Does it do what we need? |
| Performance | Is it fast enough? |
| Ecosystem | Does it integrate with our stack? |
| Maturity | Is it production-ready? |
| Community | Will it be maintained in 2 years? |
| Cost | What does it cost at our scale? |
| Migration | How hard is it to adopt/abandon? |
## 3. GAP ANALYSIS

After Wave 1 completes, identify what's missing before synthesizing.
- Read all Wave 1 outputs: skim each research doc for findings, sources, and dates.
- Identify gaps: dimensions no agent covered, questions raised but left unanswered, options named but never evaluated.
- Check for bias: are findings dominated by vendor marketing, a single source, or one ecosystem's perspective?
| Finding | Action |
|---|---|
| Good coverage, minor gaps | Synthesize now, note gaps |
| Significant gaps | Deploy Wave 2 targeted agents |
| Contradictory findings | Deploy verification agents to resolve |
| Entirely new direction emerged | Deploy Wave 2 in new direction |
## 4. WAVE 2+: Targeted

Fill specific gaps identified in the analysis.

Stop deploying waves when new agents mostly return information you already have, every decision criterion has at least one primary source, and the remaining unknowns wouldn't change the recommendation.

Max 3 waves for most research. If 3 waves haven't answered the question, the question needs reframing.
## 5. SYNTHESIZE

Combine all findings into actionable intelligence. This is where the magic happens.
```markdown
## Research: [Topic]

### TL;DR

[2-3 sentences. The answer, not the journey.]

### Recommendation

[Clear choice with justification. Don't hedge — pick one.]

### Options Evaluated

| Option | Fit | Maturity | Perf | Ecosystem | Verdict         |
| ------ | --- | -------- | ---- | --------- | --------------- |
| A      | ... | ...      | ...  | ...       | Best for [X]    |
| B      | ... | ...      | ...  | ...       | Best for [Y]    |
| C      | ... | ...      | ...  | ...       | Avoid: [reason] |

### Key Findings

1. [Most important finding with source]
2. [Second most important]
3. [Third most important]

### Risks & Gotchas

- [Known issue or limitation]
- [Migration complexity]
- [Hidden cost]

### Sources

- [Source 1](url) — [what it contributed]
- [Source 2](url) — [what it contributed]
```
## 6. DECIDE & RECORD

Lock in the decision and capture it for future sessions.
Present the synthesis to the user with a clear recommendation, then record it in Sibyl:

```bash
sibyl add "Research: [topic]" "Evaluated [options]. Chose [X] because [reasons]. Key risk: [Y]. Sources: [primary URLs]. Date: [today]."
```
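A filled-in entry (the topic, choices, and date are hypothetical) might look like:

```bash
sibyl add "Research: background job queue" \
  "Evaluated BullMQ, pg-boss, Graphile Worker. Chose pg-boss because it reuses our existing Postgres and avoids a Redis dependency. Key risk: lower throughput ceiling than BullMQ. Sources: https://github.com/timgit/pg-boss. Date: 2025-06-01."
```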
Archive the research docs: keep the wave outputs for reference under `docs/research/[topic]/`.

Exit to next action:
| Next Step | When |
|---|---|
| `/hyperskills:brainstorm` | Research surfaced multiple viable approaches |
| `/hyperskills:plan` | Decision made, ready to decompose implementation |
| `/hyperskills:orchestrate` | Decision made, work is parallelizable |
| Direct implementation | Research confirmed a simple path |
## Quick Mode

For focused questions that don't need the full wave protocol, run a single small wave (1-3 agents) and answer directly.

Use when: "What's the latest version of X?", "Does Y support Z?", "What's the recommended way to do W?"
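A quick scan often reduces to one or two rerunnable checks, e.g. (illustrative targets):

```bash
gh release list --repo nodejs/node --limit 3   # "What's the latest version of X?"
npm view express engines                       # "Does Y support Z?" (supported Node range)
```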
## Technology Evaluation

- Wave 1: Official docs + GitHub README for each option (parallel)
- Wave 2: Production experience + benchmarks (parallel)
- Synthesize: Comparison matrix + recommendation
## Codebase Research

- Wave 1: Explore agents mapping each subsystem (parallel)
- Wave 2: Grep for specific patterns / usage (parallel)
- Synthesize: Architecture diagram + dependency map
## SOTA Analysis

- Wave 1: WebSearch for latest papers, blog posts, releases (parallel)
- Wave 2: Deep read the most relevant 3-5 sources (parallel)
- Synthesize: What's genuinely novel vs rehashed + recommendation
## Competitive Analysis

- Wave 1: Feature matrix for each competitor (parallel)
- Wave 2: Pricing, community size, trajectory (parallel)
- Synthesize: Positioning matrix + gap analysis
## Anti-Patterns

| Anti-Pattern | Fix |
|---|---|
| Synthesizing after Wave 1 only | Wait for gap analysis; premature conclusions miss nuance |
| 50 agents with "research everything" | Give each agent a specific scope; vague prompts produce vague results |
| Only official documentation | Include community experience; docs show intent, community shows reality |
| No dates on findings | Date everything; research spoils faster than produce |
| No recommendation | Force a decision; "more research needed" is only valid with a specific question |
| Researching what Sibyl already knows | Always prime first; don't burn tokens re-discovering known patterns |