Research workflow — use when asked to research a topic, investigate options, compare approaches, or find the best solution to a technical question. NOT for "deep research" or "validate" requests — those go to deep-research. For complex or high-stakes questions where correctness is critical, use deep-research instead.
Complete research lifecycle: clarify → decompose → search → summarize → corroborate → synthesize.
The user wants to research: {input}
Before searching, clarify:
- What specifically are we trying to learn?
- What constraints matter (language, framework, scale)?
- Any sources they already know about?
Restate the research question precisely.
Use AskUserQuestion to ask these questions explicitly. Don't proceed until the question is clear.
Break the research question into 3-5 sub-questions, each with an explicit retrieval goal.
Format:
- Query: <search query>
Goal: <what this search should find — e.g., "official docs on X", "benchmarks comparing X vs Y", "known problems with X">
Rules:
- Each query must target a DIFFERENT angle (definition, evidence, criticism, alternatives, recency)
- No two queries should return the same results
- Include at least one query seeking disconfirming evidence or criticism
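A decomposition that satisfies the rules above can be sketched as data. This is a minimal illustration only; the sample question, queries, and goals are hypothetical, not part of the workflow:

```python
# Hypothetical decomposition for the question:
# "Should we adopt SQLite for a high-write-volume service?"
decomposition = [
    {"query": "SQLite write concurrency WAL mode documentation",
     "goal": "official docs on SQLite's write model"},           # definition
    {"query": "SQLite vs PostgreSQL write throughput benchmark",
     "goal": "benchmarks comparing SQLite vs PostgreSQL"},       # evidence
    {"query": "SQLite production problems high write volume",
     "goal": "known problems with SQLite under heavy writes"},   # criticism / disconfirming
    {"query": "embedded database alternatives to SQLite",
     "goal": "alternatives and recent options"},                 # alternatives / recency
]

# Every entry targets a distinct angle, and one seeks disconfirming evidence.
distinct_goals = len({d["goal"] for d in decomposition})
```

Keeping queries and goals as structured pairs makes it easy to check the no-overlap rule before launching any searches.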
[PARALLEL] Launch all sub-question searches concurrently using the researcher agent (max 3 agents):
Task: Execute web search for a specific sub-question.
Agent: researcher
Input: Query + Goal from decomposition step
Collect: titles, URLs, key snippets
All searches run in parallel. Collect results before proceeding.
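The fan-out pattern above can be sketched in Python with asyncio. This is a non-authoritative sketch: `run_search` is a placeholder for the real researcher-agent call, and the semaphore enforces the 3-agent cap:

```python
import asyncio

async def run_search(query: str, goal: str) -> dict:
    # Placeholder for the real researcher-agent call; it just echoes
    # its inputs so the concurrency pattern itself is runnable.
    await asyncio.sleep(0)
    return {"query": query, "goal": goal, "results": []}

async def search_all(subquestions: list[dict], max_agents: int = 3) -> list[dict]:
    sem = asyncio.Semaphore(max_agents)  # cap concurrent agents at 3

    async def bounded(sq: dict) -> dict:
        async with sem:
            return await run_search(sq["query"], sq["goal"])

    # Launch every sub-question concurrently; gather collects all
    # results (in input order) before the workflow proceeds.
    return await asyncio.gather(*(bounded(sq) for sq in subquestions))

results = asyncio.run(search_all([
    {"query": "q1", "goal": "g1"},
    {"query": "q2", "goal": "g2"},
]))
```

`asyncio.gather` returns results in the same order as the inputs, so each result stays paired with its originating sub-question.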
For the 3-5 most promising URLs, fetch clean content:
WebFetch https://r.jina.ai/{url} with header Accept: text/markdown
CRITICAL: Immediately summarize each page into 3-5 key claims with source attribution.
Do NOT carry raw page content forward — summarize first, then discard the raw text.
This prevents context overflow on large pages.
If Jina fails for a URL, fall back to raw WebFetch on the original URL.
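The fetch-with-fallback step can be sketched with the standard library. The `r.jina.ai/{url}` prefix and the `Accept: text/markdown` header come from the workflow above; the function names are illustrative:

```python
from urllib import request, error

def reader_url_for(url: str) -> str:
    # Jina Reader proxies a page by prefixing its full URL.
    return "https://r.jina.ai/" + url

def fetch_clean(url: str, timeout: int = 30) -> str:
    """Fetch page content as markdown via the Jina Reader proxy,
    falling back to a raw fetch of the original URL if the proxy fails."""
    req = request.Request(reader_url_for(url),
                          headers={"Accept": "text/markdown"})
    try:
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except error.URLError:
        # Fallback: raw fetch of the original URL
        with request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
```

Whatever this returns should be summarized immediately and discarded, per the context-overflow rule above.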
For each key claim from the summarization step:
- Count how many independent sources support it
- Flag any claims supported by only 1 source as "uncorroborated"
- Flag any claims where sources contradict each other
Mark claims as:
- CONFIRMED (2+ independent sources agree)
- UNCORROBORATED (only 1 source)
- CONTESTED (sources disagree)
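The classification rules above reduce to a small counting routine. A minimal sketch, assuming evidence is recorded as (claim, source, stance) records — the field names are illustrative, not part of the workflow:

```python
from collections import defaultdict

def classify_claims(evidence: list[dict]) -> dict:
    """Classify each claim by corroboration status.
    Each evidence entry looks like:
      {"claim": ..., "source": ..., "stance": "support" | "contradict"}"""
    by_claim = defaultdict(list)
    for e in evidence:
        by_claim[e["claim"]].append(e)

    status = {}
    for claim, items in by_claim.items():
        supporters = {e["source"] for e in items if e["stance"] == "support"}
        contradictors = {e["source"] for e in items if e["stance"] == "contradict"}
        if contradictors:
            status[claim] = "CONTESTED"       # sources disagree
        elif len(supporters) >= 2:
            status[claim] = "CONFIRMED"       # 2+ independent sources agree
        else:
            status[claim] = "UNCORROBORATED"  # only 1 source
    return status
```

Counting distinct sources (a set, not a list) is what makes the "independent sources" requirement hold when one source supports a claim in several snippets.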
After corroboration, evaluate whether this question needs deep research. Ask the user (via AskUserQuestion) to upgrade to /devkit:deep-research if ANY of these are true:
- Key claims are CONTESTED (sources conflict)
- Confidence is low (most key claims are UNCORROBORATED)
- The question has high-stakes implications where correctness is critical
Phrasing: "I'm finding [conflicting sources / low confidence / high-stakes implications] on this. Want me to switch to deep research with competing hypothesis analysis? It costs more tokens but produces higher-confidence results."
If the user says no, continue with the standard synthesis. Do NOT auto-escalate — always ask first.
Review claims marked UNCORROBORATED or CONTESTED.
For each, run one targeted search to either confirm or resolve the conflict.
Use Jina Reader for any new sources and summarize immediately.
If all key claims are confirmed or the question is answered, say "RESEARCH_COMPLETE".
Loop up to 2 times.
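The bounded gap-filling loop above can be sketched as follows. `resolve` stands in for one targeted follow-up search plus re-classification of a single claim; everything here is illustrative:

```python
def gap_fill(statuses: dict, resolve, max_rounds: int = 2) -> dict:
    """Run up to `max_rounds` of targeted follow-ups on claims that
    are not yet CONFIRMED, stopping early once nothing is unresolved."""
    for _ in range(max_rounds):
        unresolved = [c for c, s in statuses.items() if s != "CONFIRMED"]
        if not unresolved:
            break  # all key claims confirmed -> RESEARCH_COMPLETE
        for claim in unresolved:
            statuses[claim] = resolve(claim)
    return statuses
```

Capping the loop at two rounds keeps the workflow from chasing claims that a couple of targeted searches cannot settle; anything still UNCORROBORATED or CONTESTED after that is reported as-is in the synthesis.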
## Research: {question}
### Direct Answer
{clear answer to the research question}
### Key Findings
{findings with source URLs and corroboration status}
- CONFIRMED: {claim} — [source1], [source2]
- UNCORROBORATED: {claim} — [source] (single source only)
- CONTESTED: {claim} — [source1] says X, [source2] says Y
### Tradeoffs
{comparison between approaches}
### Open Questions
{what couldn't be resolved, any CONTESTED claims without resolution}
### Recommendation
{what to do and why, noting confidence level}