Use when reviewing documents or codebases with multi-agent analysis, or researching topics with multi-agent research — triages relevant agents from roster, launches only what matters in background mode
Part of the interflux plugin. Install: `npx claudepluginhub mistakeknot/interagency-marketplace --plugin interflux`. This skill uses the workspace's default tool permissions.
Skill files:
- SKILL-compact.md
- phases/cross-ai.md
- phases/expansion.md
- phases/launch-codex.md
- phases/launch.md
- phases/reaction.md
- phases/shared-contracts.md
- phases/slicing.md
- phases/synthesize.md
- references/agent-roster.md
- references/progressive-enhancements.md
- references/prompt-template.md
- references/scoring-examples.md
You are executing the flux-drive skill. This skill operates in two modes:
Follow each phase in order. Do NOT skip phases.
File organization: This skill is split across phase files. Read each phase file as you reach it — do NOT pre-load all files upfront. Key references:
- phases/shared-contracts.md — output format, completion signals (read before Phase 2 dispatch)
- phases/slicing.md — content routing patterns and algorithms (read only when slicing activates: diff >= 1000 lines or document >= 200 lines)
- phases/launch.md — agent dispatch protocol (read at Phase 2)
- phases/expansion.md — AgentDropout + staged expansion (read only when Stage 2 candidates exist)
- phases/reaction.md — reaction round (read only when reaction is enabled in config)

Determine the mode from the user's invocation:
- /interflux:flux-drive → MODE = review
- /interflux:flux-research → MODE = research
- --mode=research or --mode=review → use that
- Otherwise → MODE = review

The mode gates behavior throughout all phases. Look for [review only] and [research only] markers below.
- --interactive: Restore confirmation gates before agent dispatch. Without this flag, the orchestrator auto-proceeds after displaying the triage result. Use --interactive when you want to review and edit the agent selection before launch.
- --output-dir <path>: Override the default timestamped OUTPUT_DIR with a fixed path (enables iterative reviews of the same document).

Set: INTERACTIVE = true if --interactive is present, false otherwise.
[review mode]: The user provides a file path, directory path, or inline text/topic as an argument. If no argument is provided, ask for one using AskUserQuestion.
[research mode]: The user provides a research question as an argument. If no question is provided, ask for one using AskUserQuestion.
Detect the input type and derive paths for use throughout all phases:
INPUT_PATH = <the path or text the user provided>
Then detect:
- INPUT_PATH is a file AND content starts with diff --git or --- a/: INPUT_FILE = INPUT_PATH, INPUT_DIR = <directory containing file>, INPUT_TYPE = diff
- INPUT_PATH is a file (non-diff): INPUT_FILE = INPUT_PATH, INPUT_DIR = <directory containing file>, INPUT_TYPE = file
- INPUT_PATH is a directory: INPUT_FILE = none (repo review mode), INPUT_DIR = INPUT_PATH, INPUT_TYPE = directory
- INPUT_PATH is not a valid path on disk: INPUT_TYPE = text, INPUT_DIR = CWD, treat as inline text

Text input handling: When INPUT_TYPE = text, write the user's text to {OUTPUT_DIR}/input.md so agents can read it. Text inputs are treated like .md documents for triage — all agents (including cognitive agents) are eligible.
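The detection rules above can be sketched in shell. Variable names follow the skill's conventions; the function itself is illustrative, not part of the skill contract:

```shell
# Illustrative sketch of the input-type detection rules.
detect_input() {
  INPUT_PATH=$1
  if [ -f "$INPUT_PATH" ]; then
    INPUT_FILE=$INPUT_PATH
    INPUT_DIR=$(dirname "$INPUT_PATH")
    case "$(head -n 1 "$INPUT_PATH")" in
      'diff --git'*|'--- a/'*) INPUT_TYPE=diff ;;  # unified diff header
      *)                       INPUT_TYPE=file ;;
    esac
  elif [ -d "$INPUT_PATH" ]; then
    INPUT_FILE=''          # repo review mode
    INPUT_DIR=$INPUT_PATH
    INPUT_TYPE=directory
  else
    INPUT_DIR=$PWD         # inline text: later written to {OUTPUT_DIR}/input.md
    INPUT_TYPE=text
  fi
}
```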
Derive:
INPUT_TYPE = file | directory | diff | text
INPUT_STEM = <filename without extension, directory basename, or topic as kebab-case (max 50 chars)>
PROJECT_ROOT = <nearest ancestor directory containing .git, or INPUT_DIR>
OUTPUT_DIR = {PROJECT_ROOT}/docs/research/flux-drive/{INPUT_STEM}
RESEARCH_QUESTION = <the question the user provided>
PROJECT_ROOT = <git root of the current working directory>
INPUT_STEM = <question converted to kebab-case, max 50 chars, alphanumeric + hyphens>
OUTPUT_DIR = {PROJECT_ROOT}/docs/research/flux-research/{INPUT_STEM}
INPUT_TYPE = research
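The kebab-case conversion for INPUT_STEM is not specified beyond the constraints above (lowercase, alphanumeric + hyphens, max 50 chars); one minimal shell sketch:

```shell
# Lowercase, collapse non-alphanumeric runs to single hyphens,
# trim edge hyphens, cap at 50 characters.
to_kebab() {
  printf '%s\n' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | tr -cs 'a-z0-9\n' '-' \
    | sed 's/^-//; s/-$//' \
    | cut -c1-50
}
```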
Run isolation: Append a timestamp to OUTPUT_DIR to prevent cross-run contamination:
RUN_TS = $(date +%Y%m%dT%H%M)
OUTPUT_DIR = {OUTPUT_DIR}-{RUN_TS}
This is the default because find -delete on a shared OUTPUT_DIR races with slow agents from previous runs (e.g., Oracle with a 10-minute timeout). A still-writing agent's .partial gets deleted, but when it renames to .md, the file reappears — contaminating the new run's synthesis with stale findings.
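Concretely, the default isolation plus the absolute-path requirement from below look like this (the OUTPUT_DIR default here is illustrative):

```shell
# Append a run timestamp, then resolve to an absolute path before
# embedding OUTPUT_DIR in agent prompts.
OUTPUT_DIR=${OUTPUT_DIR:-$(mktemp -d)/example-stem}  # illustrative default
RUN_TS=$(date +%Y%m%dT%H%M)
OUTPUT_DIR="${OUTPUT_DIR}-${RUN_TS}"
mkdir -p "$OUTPUT_DIR"
OUTPUT_DIR=$(cd "$OUTPUT_DIR" && pwd)                # absolute path
```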
To reuse a fixed OUTPUT_DIR (e.g., for iterative reviews of the same document), pass --output-dir <path> explicitly. In that case, enforce run isolation with the clean approach:
- Delete stale .md, .md.partial, and peer-findings.jsonl files before dispatch.

Critical: Resolve OUTPUT_DIR to an absolute path before using it in agent prompts. Agents inherit the main session's CWD, so relative paths write to the wrong project during cross-project reviews.
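A sketch of that cleanup (the file names come from the phase contracts; the depth limit is an assumption — drop it if agents write into subdirectories):

```shell
# Remove stale agent outputs from a reused OUTPUT_DIR before dispatch.
clean_stale() {
  find "$1" -maxdepth 1 \
    \( -name '*.md' -o -name '*.md.partial' -o -name 'peer-findings.jsonl' \) \
    -delete
}
```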
Check PROJECT_ROOT for build files (Cargo.toml, go.mod, package.json, etc.), read CLAUDE.md/AGENTS.md if present. For file inputs, compare document vs actual codebase. For directory inputs, read README + key source files. If qmd MCP available, search for project conventions. Note any document-codebase divergence as divergence: [description] — this is a P0 finding. Use the actual tech stack for triage.
Output: Project domains: [comma-separated from: game-simulation, web-api, ml-pipeline, cli-tool, mobile-app, embedded-systems, library-sdk, data-pipeline, claude-code-plugin, tui-app, desktop-tauri] (or none). Multiple domains allowed. Feeds into scoring (domain_boost), criteria injection (Step 2.1a).
/interflux:flux-gen "Review of {INPUT}: {1-line summary}" — skip-existing mode (fast when agents already exist). If it fails, proceed with core agents only.
[research mode]: Build a query profile: type (onboarding/how-to/why-is-it/what-changed/best-practice/debug-context/exploratory), keywords, scope (narrow/medium/broad), project_domains, estimated_depth (quick=30s, standard=2min, deep=5min). Then skip to Step 1.2.
[review mode]: Read the input and extract a structured profile:
Document Profile:
- Type: [plan|brainstorm|spec|prd|README|repo-review|other]
- Summary: [1-2 sentences]
- Languages/Frameworks: [from codebase, not just document]
- Domains touched: [architecture, security, performance, UX, data, API, etc.]
- Project domains: [from Step 1.0.1]
- Divergence: [none | description]
- Key codebase files: [3-5 files]
- Section analysis: [section: thin/adequate/deep — 1-line summary]
- Estimated complexity: [small|medium|large]
- Review goal: [1 sentence — adapts to type: plan→gaps/risks, brainstorm→feasibility, PRD→assumptions/scope, spec→ambiguities]
Diff Profile (when INPUT_TYPE = diff): File count, stats (+/-), languages, domains touched, project domains, key files (top 5 by size), commit message, complexity (small <200 lines, medium 200-999, large 1000+), slicing eligible (>= 1000 lines).
Do this analysis yourself (no subagents). The profile drives triage in Step 1.2.
[research mode]: Skip the review agent scoring below. Instead, use the research agent affinity table:
Score each research agent on a 3-point scale using the query-type → agent affinity table:
| Query Type | Primary (score=3) | Secondary (score=2) | Skip (score=0) |
|---|---|---|---|
| onboarding | repo-research-analyst | learnings-researcher, framework-docs-researcher | best-practices-researcher, git-history-analyzer |
| how-to | best-practices-researcher, framework-docs-researcher | learnings-researcher | repo-research-analyst, git-history-analyzer |
| why-is-it | git-history-analyzer, repo-research-analyst | learnings-researcher | best-practices-researcher, framework-docs-researcher |
| what-changed | git-history-analyzer | repo-research-analyst | best-practices-researcher, framework-docs-researcher, learnings-researcher |
| best-practice | best-practices-researcher | framework-docs-researcher, learnings-researcher | repo-research-analyst, git-history-analyzer |
| debug-context | learnings-researcher, git-history-analyzer | repo-research-analyst, framework-docs-researcher | best-practices-researcher |
| exploratory | repo-research-analyst, best-practices-researcher | git-history-analyzer, framework-docs-researcher, learnings-researcher | — |
Domain bonus: If a detected domain has Research Directives for best-practices-researcher or framework-docs-researcher, add +1 to their score (these agents benefit most from domain-specific search terms).
Selection: Launch all agents with score >= 2. Agents with score 0 are skipped entirely. No staged dispatch — all selected agents launch in a single stage.
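A hypothetical sketch of the selection step, with two of the table's rows filled in (the remaining query types follow the same pattern):

```shell
# Emit agents with score >= 2 (primary = 3, secondary = 2) for a query type.
select_research_agents() {
  case "$1" in
    how-to)
      printf '%s\n' best-practices-researcher framework-docs-researcher \
                    learnings-researcher ;;
    what-changed)
      printf '%s\n' git-history-analyzer repo-research-analyst ;;
    # ... remaining query types follow the affinity table above
    *)
      return 1 ;;
  esac
}
```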
Then skip to Step 1.3 (user confirmation).
[review mode]: Use the review agent scoring below.
Read .claude/routing-overrides.json if it exists. For each entry with "action":"exclude":
- Apply the scope check: domains AND/OR file_patterns (AND logic if both are set); reject .. or /-prefixed patterns.
- Remove matching agents from the candidate pool.
- Warn if the excluded agent is cross-cutting (fd-architecture, fd-quality, fd-safety, fd-correctness).

Entries with "action":"propose" are informational only. Show canary/confidence metadata in the triage notes. Discovery nudge: if an agent has been overridden 3+ times this session, suggest /interspect:correction.
Eliminate agents that cannot score >= 1:
File/directory inputs:
Diff inputs (use routing patterns from phases/slicing.md):
Cognitive agents (fd-systems, fd-decisions, fd-people, fd-resilience, fd-perception): skip unless .md/.txt document or text input with document type PRD/brainstorm/plan/strategy/vision/roadmap/options analysis. NEVER for code/diff. Base scores: 3 (systems/strategy content), 2 (PRD/brainstorm/plan), 1 (technical reference).
final_score = base_score(0-3) + domain_boost(0-2) + project_bonus(0-1) + domain_agent(0-1)
Dynamic slot ceiling: 4(base) + scope(file:0, small-diff:1, large-diff:2, repo:3) + domain(0:0, 1:1, 2+:2), hard max 10.
Stage assignment: Stage 1 = top 40% of slots (min 2, max 5). Stage 2 = rest. Expansion pool = scored >= 2 but no slot.
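The score and ceiling formulas above, as shell arithmetic (inputs are illustrative):

```shell
# final_score = base(0-3) + domain_boost(0-2) + project_bonus(0-1) + domain_agent(0-1)
final_score() {
  echo $(( $1 + $2 + $3 + $4 ))
}

# Slot ceiling = 4 (base) + scope bonus + domain bonus, hard max 10.
slot_ceiling() {
  c=$(( 4 + $1 + $2 ))
  if [ "$c" -gt 10 ]; then c=10; fi
  echo "$c"
}
```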
Present triage table: Agent | Category | Score | Stage | Est. Tokens | Source | Reason | Action
Read references/scoring-examples.md for worked examples and thin-section thresholds.
Apply budget constraints from config/flux-drive/budget.yaml. See SKILL-compact.md Step 1.2c for the complete algorithm. Key: budget by INPUT_TYPE, per-agent costs from interstat (>= 3 runs) or defaults, slicing multiplier 0.5x, min 2 agents always selected, exempt agents (fd-safety, fd-correctness) never deferred.
Trigger: INPUT_TYPE = file AND document > 200 lines. Read phases/slicing.md → Document Slicing. Output: section_map per agent for Step 2.1c. Documents < 200 lines → all agents get full document.
[research mode]: Display the triage result as a one-line summary: Research: {N} agents ({agent_names}), depth: {estimated_depth}.
[review mode]: Display the triage table showing all agents, tiers, scores, stages, reasons, and Launch/Skip actions. Then display: Stage 1: [agent names]. Stage 2 (on-demand): [agent names].
Auto-proceed (default): Proceed directly to Phase 2. No confirmation needed — the triage algorithm is deterministic and the user can inspect the table output.
Interactive mode (INTERACTIVE = true): Use AskUserQuestion to get approval before proceeding:
AskUserQuestion:
question: "[research] Launching {N} agents. Proceed?" / "[review] Stage 1: [names]. Launch?"
options:
- label: "Launch (Recommended)"
- label: "Edit selection"
- label: "Cancel"
If user selects "Edit selection", adjust and re-present. If "Cancel", stop here.
[review mode]: Read references/agent-roster.md for the full review agent roster, including the agent definitions (.claude/agents/fd-*.md).

[research mode]: Use the research agent roster:
| Agent | subagent_type |
|---|---|
| best-practices-researcher | interflux:best-practices-researcher |
| framework-docs-researcher | interflux:framework-docs-researcher |
| git-history-analyzer | interflux:git-history-analyzer |
| learnings-researcher | interflux:learnings-researcher |
| repo-research-analyst | interflux:repo-research-analyst |
Read the launch phase file now:
- phases/launch.md (in the flux-drive skill directory)
- MODE parameter — research mode uses single-stage dispatch without AgentDropout, expansion, or peer findings
- phases/launch-codex.md [review mode only] — skip entirely in research mode
Read phases/reaction.md now.
Read the synthesis phase file now:
- phases/synthesize.md (in the flux-drive skill directory)
- MODE parameter — research mode delegates to intersynth:synthesize-research and skips bead creation and knowledge compounding
- Bead creation and knowledge compounding are [review mode only] — skip entirely in research mode
Skip this phase if Oracle was not in the review roster. For cross-AI options without Oracle, mention /clavain:interpeer in the Phase 3 report.
If Oracle participated, read phases/cross-ai.md now.
Chains to (user-initiated, after Phase 4 consent gate) [review mode]:
- interpeer — when user wants to investigate cross-AI disagreements

Suggests (when Oracle absent, in Phase 3 report) [review mode]:
- interpeer — lightweight cross-AI second opinion

Called by:
- /interflux:flux-drive command (mode=review)
- /interflux:flux-research command (mode=research)

See also:
- interpeer/references/oracle-reference.md — Oracle CLI reference
- interpeer/references/oracle-troubleshooting.md — Oracle troubleshooting
- clavain:interserve for Codex dispatch details