DEPRECATED — use flux-drive with mode=research instead. This skill is kept for backward compatibility.
This skill has been merged into flux-drive. Use `interflux:flux-drive` with `mode=research` instead. The `/interflux:flux-research` command automatically routes to flux-drive in research mode. This file is retained for reference only; it will be removed in a future release.
You are executing the flux-research skill. This skill answers research questions by dispatching only relevant research agents from the roster, collecting their findings in parallel, and synthesizing a unified answer with source attribution. Follow each phase in order. Do NOT skip phases.
The user provides a research question as an argument. If no question is provided, ask for one using AskUserQuestion.
RESEARCH_QUESTION = <the question the user provided>
PROJECT_ROOT = <git root of the current working directory>
OUTPUT_DIR = {PROJECT_ROOT}/docs/research/flux-research/{query-slug}
Where {query-slug} is the research question converted to kebab-case (max 50 chars, alphanumeric + hyphens only).
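A minimal sketch of the slug conversion, assuming plain regex normalization (the function name is illustrative, not part of the skill contract):

```python
import re

def query_slug(question: str, max_len: int = 50) -> str:
    """Kebab-case a research question: lowercase, alphanumeric and
    hyphens only, truncated to max_len characters."""
    # Collapse every run of non-alphanumeric characters into one hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", question.lower()).strip("-")
    # Truncate, then drop any hyphen the cut left dangling.
    return slug[:max_len].rstrip("-")

print(query_slug("Why is the cache invalidated on every deploy?"))
# → why-is-the-cache-invalidated-on-every-deploy
```

The trailing `rstrip("-")` matters: a naive truncation at 50 characters can land mid-word and leave a hanging hyphen.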
Check for domain context that can sharpen research queries:
Classify the project's domains using flux-drive's LLM classification (Step 1.0.1 in flux-drive SKILL.md): read README, build files, and key source files, then classify into known domains. For each detected domain, load the domain profile from ${CLAUDE_PLUGIN_ROOT}/config/flux-drive/domains/{domain-name}.md and extract the ## Research Directives section (if present).
Fallback: If no domains detected or no Research Directives sections exist, skip domain injection — agents run with the raw query only.
Analyze the research question to determine:
query_profile:
type: <one of: onboarding, how-to, why-is-it, what-changed, best-practice, debug-context, exploratory>
keywords: [list of key terms extracted from the question]
scope: <narrow | medium | broad>
project_domains: [from Step 1.0, if any]
estimated_depth: <quick | standard | deep>
Type detection heuristics: classify the question as one of `how-to`, `why-is-it`, `what-changed`, `best-practice`, `onboarding`, `debug-context`, or `exploratory`.

Depth estimation:
- quick (30s per agent): simple factual lookups, single-source answers
- standard (2min per agent): multi-source synthesis, pattern matching
- deep (5min per agent): comprehensive survey, cross-referencing, analysis

Score each research agent on a 3-point scale using the query-type → agent affinity table:
| Query Type | Primary (score=3) | Secondary (score=2) | Skip (score=0) |
|---|---|---|---|
| onboarding | repo-research-analyst | learnings-researcher, framework-docs-researcher | best-practices-researcher, git-history-analyzer |
| how-to | best-practices-researcher, framework-docs-researcher | learnings-researcher | repo-research-analyst, git-history-analyzer |
| why-is-it | git-history-analyzer, repo-research-analyst | learnings-researcher | best-practices-researcher, framework-docs-researcher |
| what-changed | git-history-analyzer | repo-research-analyst | best-practices-researcher, framework-docs-researcher, learnings-researcher |
| best-practice | best-practices-researcher | framework-docs-researcher, learnings-researcher | repo-research-analyst, git-history-analyzer |
| debug-context | learnings-researcher, git-history-analyzer | repo-research-analyst, framework-docs-researcher | best-practices-researcher |
| exploratory | repo-research-analyst, best-practices-researcher | git-history-analyzer, framework-docs-researcher, learnings-researcher | — |
Domain bonus: If a detected domain has Research Directives for best-practices-researcher or framework-docs-researcher, add +1 to their score (these agents benefit most from domain-specific search terms).
Selection: Launch all agents with score >= 2. Agents with score 0 are skipped entirely.
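The scoring and selection rule above can be sketched as follows (the affinity table is abridged to a single query type, and the dictionary layout is an assumption for illustration):

```python
# Query-type -> agent affinity scores, transcribed from the table above.
# Only the "how-to" row is shown; the other rows follow the same shape.
AFFINITY = {
    "how-to": {
        "best-practices-researcher": 3,
        "framework-docs-researcher": 3,
        "learnings-researcher": 2,
        "repo-research-analyst": 0,
        "git-history-analyzer": 0,
    },
}

# Agents that earn +1 when a detected domain has Research Directives for them.
DOMAIN_BONUS_AGENTS = {"best-practices-researcher", "framework-docs-researcher"}

def select_agents(query_type, directive_agents=frozenset()):
    """Return the agents to launch: base affinity plus domain bonus,
    keeping everything that scores >= 2."""
    scores = dict(AFFINITY[query_type])
    for agent in set(directive_agents) & DOMAIN_BONUS_AGENTS:
        scores[agent] += 1
    return sorted(a for a, s in scores.items() if s >= 2)
```

Note that the domain bonus can only promote an agent from 1 to 2; a score-0 agent stays skipped even with directives present.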
Present the triage result via AskUserQuestion:
AskUserQuestion:
question: "Research plan for: '{RESEARCH_QUESTION}'. Query type: {type}. Launching {N} agents ({agent_names}). Estimated depth: {estimated_depth}. Proceed?"
header: "Research"
options:
- label: "Launch (Recommended)"
description: "Dispatch {N} agents in parallel for {estimated_depth} research"
- label: "Edit agents"
description: "Add or remove specific agents before launch"
- label: "Cancel"
description: "Abort research"
If user selects "Edit agents", present a multi-select AskUserQuestion with all 5 agents and let them toggle.
If user selects "Cancel", stop immediately.
mkdir -p {OUTPUT_DIR}
find {OUTPUT_DIR} -maxdepth 1 -type f \( -name "*.md" -o -name "*.md.partial" \) -delete
For each selected agent, construct a research prompt:
## Research Task
Question: {RESEARCH_QUESTION}
Query profile:
- Type: {type}
- Keywords: {keywords}
- Scope: {scope}
- Depth: {estimated_depth}
## Project Context
Project root: {PROJECT_ROOT}
[If domains detected AND Research Directives exist for this agent:]
## Domain Research Directives
This project is classified as: {domain1} ({confidence1}), {domain2} ({confidence2}), ...
Search directives for your focus area in these project types:
### {domain1-name}
{bullet points from domain profile's ### {agent-name} section under ## Research Directives}
### {domain2-name}
{bullet points from domain profile's ### {agent-name} section under ## Research Directives}
Use these directives to guide your search queries and prioritize relevant sources.
[End domain section]
## Output
Write your findings to `{OUTPUT_DIR}/{agent-name}.md.partial`. Rename to `.md` when done.
Add `<!-- flux-research:complete -->` as the last line before renaming.
Structure your output as:
### Sources
- [numbered list of sources with type: internal/external, authority level]
### Findings
[Your research findings, organized by relevance]
### Confidence
- High confidence: [findings well-supported by multiple sources]
- Medium confidence: [findings from single source or indirect evidence]
- Low confidence: [inferences, gaps in available information]
### Gaps
[What you couldn't find or areas needing deeper investigation]
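As a sketch, the per-agent prompt assembly might look like this (function and parameter names are illustrative; the authoritative template is the one above, and the output-structure sections are elided here):

```python
def build_agent_prompt(question, profile, project_root, output_dir, agent_name,
                       domain_sections=None):
    """Assemble a per-agent research prompt from the template above.
    domain_sections is an optional pre-rendered '## Domain Research
    Directives' block, included only when directives exist for this agent."""
    parts = [
        "## Research Task",
        f"Question: {question}",
        "Query profile:",
        f"- Type: {profile['type']}",
        f"- Keywords: {profile['keywords']}",
        f"- Scope: {profile['scope']}",
        f"- Depth: {profile['estimated_depth']}",
        "## Project Context",
        f"Project root: {project_root}",
    ]
    if domain_sections:
        parts.append(domain_sections)
    parts += [
        "## Output",
        f"Write your findings to `{output_dir}/{agent_name}.md.partial`. "
        "Rename to `.md` when done.",
    ]
    return "\n".join(parts)
```

Keeping the domain block optional matches the fallback rule in Step 1.0: when no directives exist, agents receive the raw query and project context only.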
Launch all selected agents via Task tool with run_in_background: true:
Task(interflux:research:{agent-name}):
prompt: {constructed prompt from Step 2.1}
run_in_background: true
Agent invocation:
| Agent | subagent_type |
|---|---|
| best-practices-researcher | interflux:research:best-practices-researcher |
| framework-docs-researcher | interflux:research:framework-docs-researcher |
| git-history-analyzer | interflux:research:git-history-analyzer |
| learnings-researcher | interflux:research:learnings-researcher |
| repo-research-analyst | interflux:research:repo-research-analyst |
Timeouts by depth:
| Depth | Per-agent timeout |
|---|---|
| quick | 30 seconds |
| standard | 2 minutes |
| deep | 5 minutes |
Polling loop (every 15 seconds):
- Check {OUTPUT_DIR}/ for `.md` files (not `.md.partial`)
- Display progress as agents finish:

  ✅ learnings-researcher (12s)
  ⏳ best-practices-researcher
  ⏳ framework-docs-researcher
  [1/3 agents complete]

- When all expected `.md` files exist, stop polling.

Completion verification: if an agent's timeout expires without a final `.md` file, write a placeholder `{OUTPUT_DIR}/{agent-name}.md` so synthesis can account for the missing coverage:
### Sources
(none — agent failed)
### Findings
Agent {name} did not complete within timeout.
### Confidence
No findings available.
### Gaps
This agent's entire domain is a gap in the research.
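The polling and timeout handling can be sketched like this (helper names are assumptions; note that globbing `*.md` deliberately excludes the in-progress `.md.partial` files):

```python
import glob
import os
import time

def poll_for_results(output_dir, expected_agents, timeout_s, interval_s=15):
    """Poll output_dir until every agent has a final .md file or the
    per-depth timeout elapses. Returns (completed, failed) agent lists."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        # "*.md" does not match "*.md.partial", so only finished files count.
        done = {os.path.splitext(os.path.basename(p))[0]
                for p in glob.glob(os.path.join(output_dir, "*.md"))}
        if set(expected_agents) <= done:
            break
        time.sleep(interval_s)
    completed = [a for a in expected_agents
                 if os.path.exists(os.path.join(output_dir, a + ".md"))]
    failed = [a for a in expected_agents if a not in completed]
    return completed, failed
```

Agents in the `failed` list are the ones that get the placeholder `.md` file shown above, so the synthesis step sees every expected file.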
Clean up any leftover `.md.partial` files. Do NOT read agent output files yourself. Delegate ALL collection, merging, ranking, and verdict writing to a synthesis subagent. This keeps agent prose entirely out of the host context.
Launch the intersynth research synthesis agent (foreground, not background — you need its result):
Task(intersynth:synthesize-research):
prompt: |
OUTPUT_DIR={OUTPUT_DIR}
VERDICT_LIB=auto
RESEARCH_QUESTION={RESEARCH_QUESTION}
QUERY_TYPE={type}
ESTIMATED_DEPTH={estimated_depth}
The intersynth agent reads all research agent output files, merges findings with source attribution, ranks sources, writes verdicts, and returns a compact answer. It writes {OUTPUT_DIR}/synthesis.md.
After the synthesis subagent returns, read {OUTPUT_DIR}/synthesis.md for the full report to present to the user. Source ranking, merging, conflict resolution, and report generation are all performed by the intersynth synthesis agent.
Read {OUTPUT_DIR}/synthesis.md and present it to the user. This file was written by the intersynth agent.
When complete, display:
Research complete!
Output: {OUTPUT_DIR}/synthesis.md
Agents: {N} dispatched, {M} completed, {K} failed
Sources: {total} ({internal} internal, {external} external)
Key answer: [1-2 sentence summary]