Executes targeted web searches for technologies, libraries, best practices, and competitor solutions. Returns structured findings from 3+ diverse sources like docs, community forums, and blogs.
`npx claudepluginhub rune-kit/rune --plugin @rune/analytics`

This skill uses the workspace's default tool permissions.
Web research utility. Receives a research question, executes targeted searches, deep-dives into top results, and returns structured findings with sources. Stateless — no memory between calls.
## Dependencies

None — pure L3 utility using WebSearch and WebFetch tools directly.
## Consumers

- plan (L2): external knowledge for architecture decisions
- brainstorm (L2): data for informed ideation
- marketing (L2): competitor analysis, SEO data
- hallucination-guard (L3): verify package existence on npm/pypi
- autopsy (L2): research best practices for legacy patterns
- ba (L2): research similar products and integrations
- graft (L2): research source repo patterns before grafting
- mcp-builder (L2): research MCP standards and existing implementations
- scaffold (L1): research project templates and best practices

## Inputs

- research_question: string — what to research
- focus: string (optional) — narrow the scope (e.g., "security", "performance")
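Written as a type, the input contract looks like this (a sketch; the skill is invoked as a tool call, not through a TypeScript interface):

```typescript
// Input contract for one research invocation (field names from the spec above).
// Hypothetical type, shown only to make the contract concrete.
interface ResearchRequest {
  research_question: string; // what to research
  focus?: string;            // optional scope narrowing, e.g. "security"
}

const example: ResearchRequest = {
  research_question: "HTTP client retry strategies in Node.js",
  focus: "performance",
};
```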
## Step 1: Generate Search Queries

Generate 2-3 targeted search queries from the research question. Vary the phrasing so the queries cover different angles rather than restating each other.
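One way to picture the variation step, with illustrative phrasing templates that are not prescribed by this skill:

```typescript
// Sketch of Step 1: derive query variants that approach the question from
// different angles. The templates below are illustrative assumptions; any
// phrasing works as long as the variants don't collapse into near-duplicates.
function buildQueries(question: string, focus?: string): string[] {
  const base = focus ? `${question} ${focus}` : question;
  return [
    base,                       // the question as asked
    `${base} best practices`,   // normative angle
    `${base} common pitfalls`,  // failure-mode angle
  ];
}

// buildQueries("express middleware ordering", "security")
// -> three queries probing the same topic from different directions
```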
## Step 2: Search and Select Sources

Call WebSearch for each query. Collect result titles, URLs, and snippets, then identify the top 3-5 most relevant URLs, prioritizing source diversity:
| Source Type | Examples | Why |
|---|---|---|
| Official docs | Framework docs, API reference, RFC | Authoritative but may lag behind reality |
| Community | Stack Overflow, GitHub Issues, Reddit | Real-world pain points, edge cases |
| Technical blogs | Dev.to, Medium engineering blogs, personal blogs | Practical experience, tutorials |
| Repositories | GitHub repos, npm packages, example code | Working implementations |
Selection rules (see the sketch after this list):

- Pick 3-5 URLs total; the hard cap is 5 WebFetch calls per invocation.
- Cover at least 3 different source types.
- Never take 3 or more URLs from the same domain.
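A minimal sketch of the domain rule, assuming a pre-ranked hit list and a `sourceType` tag per hit (both assumptions; the skill works from raw WebSearch output):

```typescript
// One WebSearch hit; this shape is an assumption for illustration.
interface Hit {
  url: string;
  sourceType: "official" | "community" | "blog" | "repo";
}

// Pick the top 3-5 URLs while never taking 3+ from the same domain.
// Hits are assumed pre-sorted by relevance. The minimum-3-source-types
// gate is applied later, when confidence is assigned.
function selectUrls(hits: Hit[], max = 5): Hit[] {
  const perDomain = new Map<string, number>();
  const picked: Hit[] = [];
  for (const hit of hits) {
    if (picked.length >= max) break;
    const domain = new URL(hit.url).hostname;
    const count = perDomain.get(domain) ?? 0;
    if (count >= 2) continue; // a 3rd URL from this domain would break the rule
    perDomain.set(domain, count + 1);
    picked.push(hit);
  }
  return picked;
}
```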
### Saturation Check

After each WebSearch call, evaluate whether additional searches are productive. Track across search results:

- New entity ratio: `new_entities_in_this_search / total_entities_found_so_far`

| Signal | Meaning | Action |
|---|---|---|
| New entity ratio < 10% | Last search added almost nothing new | Skip remaining queries, proceed to Step 3 with existing results |
| Result overlap > 60% | Most URLs already fetched or seen | Skip this query's results entirely |
| All 3 queries return the same top 3 URLs | Search space is exhausted | Proceed directly to Step 3 — more queries won't help |
Report when triggered:
> Note: Research saturation reached after [N] searches — [M] unique entities found.
> Additional queries showed <10% new information. Proceeding with synthesis.
Why: Research skills commonly waste 2-3 WebFetch calls on pages that repeat information already gathered. Saturation detection saves tool calls and context tokens while preserving research quality — the first 3 sources typically contain 90%+ of available information.
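Under the assumption that entity extraction happens upstream and that seen URLs and entities are tracked in sets, the thresholds above reduce to a small decision function (all names here are hypothetical):

```typescript
// Saturation check after each WebSearch round; thresholds mirror the table above.
type SaturationAction = "continue" | "skip_results" | "stop_searching";

function checkSaturation(
  roundUrls: string[],
  roundEntities: string[],
  seenUrls: Set<string>,
  seenEntities: Set<string>,
): SaturationAction {
  const newEntities = roundEntities.filter((e) => !seenEntities.has(e));
  const totalSoFar = seenEntities.size + newEntities.length;
  // No entities seen at all yet: keep searching rather than divide by zero.
  const newRatio = totalSoFar === 0 ? 1 : newEntities.length / totalSoFar;

  const overlapCount = roundUrls.filter((u) => seenUrls.has(u)).length;
  const overlap = roundUrls.length === 0 ? 1 : overlapCount / roundUrls.length;

  // Record this round before deciding so the caller's bookkeeping stays in one place.
  for (const u of roundUrls) seenUrls.add(u);
  for (const e of newEntities) seenEntities.add(e);

  if (newRatio < 0.10) return "stop_searching"; // <10% new: proceed to Step 3
  if (overlap > 0.60) return "skip_results";    // mostly already-seen URLs
  return "continue";
}
```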
## Step 3: Fetch and Triangulate

Call WebFetch on the top 3-5 URLs identified in Step 2. Hard limit: max 5 WebFetch calls per research invocation. For each fetched page, extract the findings relevant to the research question and note the source URL.
Across all fetched content, triangulate — don't just aggregate:
| Confidence | Criteria |
|---|---|
| high | 3+ sources from different types agree |
| medium | 2 sources agree, or 3+ from same type |
| low | Single source, or sources conflict without resolution |
| unverified | No sources found — report this explicitly, NEVER fabricate |
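Read as code, the table above maps onto a small classifier. Here `agreeing`, `sourceTypes`, and `conflicting` are hypothetical inputs, and treating "different types" as "at least two distinct types" is an interpretation:

```typescript
type Confidence = "high" | "medium" | "low" | "unverified";

// `agreeing` counts sources supporting a finding; `sourceTypes` counts the
// distinct source types (official/community/blog/repo) among them.
function assignConfidence(
  agreeing: number,
  sourceTypes: number,
  conflicting: boolean,
): Confidence {
  if (agreeing === 0) return "unverified";              // report, never fabricate
  if (conflicting) return "low";                        // unresolved conflict
  if (agreeing >= 3 && sourceTypes >= 2) return "high"; // 3+ sources, different types
  if (agreeing >= 2) return "medium";                   // 2 agree, or 3+ same type
  return "low";                                         // single source
}
```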
Return structured findings in the output format below.
## Research Results: [Query]
- **Sources fetched**: [n]
- **Confidence**: high | medium | low | unverified
### Key Findings
- [finding] — [source URL]
- [finding] — [source URL]
### Conflicts / Caveats
- [Source A] says X. [Source B] says Y. Recommend verifying against [authority].
### Code Examples
```[lang]
[relevant snippet]
```
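For illustration, assembling this format mechanically might look like the sketch below; `Finding` and `renderReport` are hypothetical names, and the skill emits the markdown directly rather than via code:

```typescript
// Hypothetical shape of one key finding.
interface Finding {
  text: string;
  sourceUrl: string;
}

// Assemble the Research Results block in the format above.
function renderReport(
  query: string,
  sourcesFetched: number,
  confidence: "high" | "medium" | "low" | "unverified",
  findings: Finding[],
): string {
  return [
    `## Research Results: ${query}`,
    `- **Sources fetched**: ${sourcesFetched}`,
    `- **Confidence**: ${confidence}`,
    "### Key Findings",
    ...findings.map((f) => `- ${f.text} — ${f.sourceUrl}`),
  ].join("\n");
}
```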
## Sharp Edges
Known failure modes for this skill. Check these before declaring done.
| Failure Mode | Severity | Mitigation |
|---|---|---|
| Fabricating findings when no useful results found | CRITICAL | Constraint: report "no useful results found" explicitly — never invent citations |
| Reporting conflicting sources without flagging the conflict | HIGH | Constraint: flag conflicting information explicitly, never silently pick one side |
| Assigning "high" confidence from a single source | MEDIUM | High = 3+ sources from different types agree; 2 sources = medium; single source = low |
| Exceeding 5 WebFetch calls per invocation | MEDIUM | Hard limit: prioritize top 3-5 URLs from search, fetch only the most relevant |
| Single-source conclusions presented as fact | HIGH | HARD-GATE: minimum 3 complementary sources from different source types. Single source = `low` confidence |
| All sources from same domain (e.g., 3 Stack Overflow links) | MEDIUM | Source diversity rule: never 3+ URLs from the same domain. Spread across official/community/blog/repo |
## Done When
- 2-3 search queries formulated and executed
- Top 3-5 URLs identified and fetched (max 5 WebFetch calls)
- Conflicting information between sources explicitly flagged
- Confidence level assigned (high/medium/low/unverified) with rationale
- Research Results emitted with source URLs for every key finding
## Cost Profile
~300-800 tokens input, ~200-500 tokens output. Haiku. Fast and cheap.