Multi-agent review and research with scored triage, domain detection, content slicing, intermediate finding sharing, and knowledge injection. 17 agents (12 review + 5 research), 6 commands, 1 skill (unified flux-drive with review/research modes), 1 MCP server. Companion plugin for Clavain.
```bash
npx claudepluginhub mistakeknot/interagency-marketplace --plugin interflux
```

Fetch peer findings from a flux-drive review session. Use it to inspect the findings agents have shared during a parallel review.
Intelligent document review — triages relevant agents, launches only what matters in background mode. Also supports research mode (`--mode=research`).
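A hypothetical invocation of research mode — the `--mode=research` flag comes from the description above, but the command name and argument shape here are assumptions, not documented syntax:

```
/flux-drive --mode=research "How do similar projects structure their plugin manifests?"
```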
Autonomous multi-round semantic space exploration — generates agents from progressively more distant knowledge domains and synthesizes cross-domain structural isomorphisms
Generate review agents from task prompts via LLM design
Multi-agent research — triages agents, dispatches in parallel, synthesizes answer with source attribution
Multi-track deep review — generates agents across a spectrum of semantic distance (adjacent → orthogonal → esoteric), runs parallel flux-drive reviews, then synthesizes across all tracks with cross-track convergence analysis
- 12 agents: 7 technical (auto-detect language) + 5 cognitive (documents only)
interflux supports both standalone (Claude Code marketplace) and integrated (Interverse ecosystem) operation via the interbase SDK.
During parallel flux-drive reviews, agents can share high-severity findings via `{OUTPUT_DIR}/peer-findings.jsonl`.
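To make the sharing mechanism concrete, here is a minimal Python sketch of appending to and reading that file. The record fields (`agent`, `severity`, `file`, `summary`) and the severity scale are illustrative assumptions, not interflux's documented schema:

```python
import json
import os
from pathlib import Path

# Hypothetical record shape -- field names and severity levels are
# assumptions for illustration, not the documented schema.
OUTPUT_DIR = Path(os.environ.get("OUTPUT_DIR", "."))
PEER_FILE = OUTPUT_DIR / "peer-findings.jsonl"

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def share_finding(agent: str, severity: str, file: str, summary: str) -> None:
    """Append one finding as a single JSON line."""
    record = {"agent": agent, "severity": severity, "file": file, "summary": summary}
    with PEER_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def read_peer_findings(min_severity: str = "high") -> list[dict]:
    """Return peer findings at or above min_severity."""
    if not PEER_FILE.exists():
        return []
    lines = PEER_FILE.read_text(encoding="utf-8").splitlines()
    records = [json.loads(line) for line in lines if line.strip()]
    floor = SEVERITY_RANK[min_severity]
    return [r for r in records if SEVERITY_RANK.get(r["severity"], 0) >= floor]

share_finding("fd-safety", "high", "upload.go", "Unvalidated path in upload handler")
peers = read_peer_findings("high")
```

One JSON record per line is what makes this workable for parallel agents: small whole-line appends are effectively atomic on most local filesystems, so concurrent writers do not interleave partial records.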
Standard definitions for token metrics used across interflux and companion plugins.
Researches and synthesizes external best practices, documentation, and examples for any technology or framework. Use when you need industry standards, community conventions, or implementation guidance.
Gathers comprehensive documentation and best practices for frameworks, libraries, or dependencies. Use when you need official docs, version-specific constraints, or implementation patterns.
Performs archaeological analysis of git history to trace code evolution, identify contributors, and understand why code patterns exist. Use when you need historical context for code changes.
Searches `docs/solutions/` for relevant past solutions by frontmatter metadata. Use before implementing features or fixing problems to surface institutional knowledge and prevent repeated mistakes.
Conducts thorough research on repository structure, documentation, conventions, and implementation patterns. Use when onboarding to a new codebase or understanding project conventions.
Flux-drive Architecture & Design reviewer — evaluates module boundaries, coupling, design patterns, anti-patterns, code duplication, and unnecessary complexity. Examples: <example>user: "I've split the data layer into three packages — review the module boundaries" assistant: "I'll use the fd-architecture agent to evaluate module boundaries and coupling." <commentary>Module restructuring involves architecture boundaries and coupling.</commentary></example> <example>user: "We're adding Redis as a caching layer — review the integration plan" assistant: "I'll use the fd-architecture agent to evaluate how Redis integrates with existing architecture." <commentary>New dependency evaluation requires design pattern and coupling assessment.</commentary></example>
Flux-drive Correctness reviewer — evaluates data consistency, transaction safety, race conditions, async bugs, and concurrency patterns across all languages. Examples: <example>user: "Review this migration — it renames user_id to account_id and backfills" assistant: "I'll use the fd-correctness agent to evaluate data consistency and transaction safety." <commentary>Migrations with renames and backfills need atomicity, NULL handling, and referential integrity review.</commentary></example> <example>user: "Check this worker pool for race conditions" assistant: "I'll use the fd-correctness agent to analyze concurrency patterns and race conditions." <commentary>Worker pools involve shared mutable state, lifecycle management, and synchronization.</commentary></example>
Flux-drive Decision Quality reviewer — evaluates decision traps, cognitive biases, uncertainty handling, strategic paradoxes, and option framing in strategy documents, PRDs, and plans. Examples: <example>user: "Review this migration plan for decision quality blind spots" assistant: "I'll use the fd-decisions agent to evaluate reversibility, premature commitment, and option value." <commentary>Migration plans involve irreversibility, optionality loss, and sunk cost traps.</commentary></example> <example>user: "Check if our tech choice has decision bias issues" assistant: "I'll use the fd-decisions agent to check for anchoring bias, explore/exploit imbalance, and missing trade-offs." <commentary>Tech selection without trade-off analysis risks anchoring bias and premature lock-in.</commentary></example>
Flux-drive Game Design reviewer — evaluates balance, pacing, player psychology, feedback loops, emergent behavior, and procedural content quality. Examples: <example>user: "Review the utility AI system for agent behavior" assistant: "I'll use the fd-game-design agent to evaluate needs curves, action scoring, and emergent behavior." <commentary>Utility AI tuning involves game design balance, not just code correctness.</commentary></example> <example>user: "Check if the storyteller pacing feels right" assistant: "I'll use the fd-game-design agent to review drama curve, event cooldowns, and death spiral prevention." <commentary>Drama pacing is a game design concern about player experience.</commentary></example>
Flux-drive Human Systems reviewer — evaluates trust dynamics, power structures, communication patterns, team culture, and leadership gaps in strategy documents, PRDs, and plans. Examples: <example>user: "Review this reorg plan for people-related blind spots" assistant: "I'll use the fd-people agent to evaluate trust erosion, Conway's Law violations, and authority gradient blind spots." <commentary>Reorganization without stakeholder involvement risks trust erosion and cultural fragmentation.</commentary></example> <example>user: "Check if this approval process has team dynamics issues" assistant: "I'll use the fd-people agent to evaluate authority gradients, bottleneck risks, and psychological safety." <commentary>Mandatory approval gates involve power dynamics, incentive misalignment, and learned helplessness risks.</commentary></example>
Flux-drive Sensemaking reviewer — evaluates mental models, information quality, temporal reasoning, and perceptual blind spots in strategy documents, PRDs, and plans. Examples: <example>user: "Review this competitive analysis for sensemaking blind spots" assistant: "I'll use the fd-perception agent to evaluate map/territory confusion, information source diversity, and signal/noise separation." <commentary>Single-source analysis risks map/territory confusion and signal/noise conflation.</commentary></example> <example>user: "Check if our transformation roadmap has perceptual blind spots" assistant: "I'll use the fd-perception agent to evaluate temporal discounting, paradigm shift exposure, and change blindness." <commentary>Long-range plans risk temporal discounting and illusion of control over future states.</commentary></example>
Flux-drive Performance reviewer — evaluates rendering bottlenecks, data access patterns, algorithmic complexity, memory usage, and resource consumption. Examples: <example>user: "The dashboard endpoint is slow — review the data access patterns" assistant: "I'll use the fd-performance agent to evaluate query patterns and identify bottlenecks." <commentary>Slow endpoints need data access review: repeated scans, missing indexes, inefficient lookups.</commentary></example> <example>user: "The TUI flickers on every update — review the rendering approach" assistant: "I'll use the fd-performance agent to check for unnecessary redraws and rendering bottlenecks." <commentary>TUI rendering issues involve batching, debouncing, and event loop blocking.</commentary></example>
Flux-drive Quality & Style reviewer — evaluates naming, conventions, test approach, error handling, and language-specific idioms. Auto-detects language. Examples: <example>user: "Review this Go handler for style and conventions" assistant: "I'll use the fd-quality agent to evaluate naming, error handling, and Go idioms." <commentary>Go code needs explicit error handling with %w, accept-interfaces-return-structs, table-driven tests.</commentary></example> <example>user: "I've converted the utils to TypeScript — check type safety" assistant: "I'll use the fd-quality agent to review type safety and idiomatic patterns." <commentary>Cross-language refactoring needs proper type narrowing, avoiding 'any', consistent naming.</commentary></example>
Flux-drive Adaptive Capacity reviewer — evaluates antifragility, creative constraints, resource allocation, innovation dynamics, and failure recovery in strategy documents, PRDs, and plans. Examples: <example>user: "Review this architecture for resilience blind spots" assistant: "I'll use the fd-resilience agent to evaluate single points of failure, degradation paths, and antifragility gaps." <commentary>Single-database architectures need redundancy, degradation strategy, and recovery time analysis.</commentary></example> <example>user: "Check if our investment strategy has adaptability issues" assistant: "I'll use the fd-resilience agent to evaluate staging opportunities, diminishing returns, and creative destruction blindness." <commentary>All-in commitment risks constraint violation and missed MVP opportunities.</commentary></example>
Flux-drive Safety reviewer — evaluates security threats, credential handling, trust boundaries, deployment risk, rollback procedures, and migration safety. Examples: <example>user: "I've updated the login to use OAuth2 — review security implications" assistant: "I'll use the fd-safety agent to evaluate auth flow changes and credential handling." <commentary>Auth flow changes involve trust boundaries and credential handling.</commentary></example> <example>user: "Review the new file upload endpoint for security issues" assistant: "I'll use the fd-safety agent to check for security threats in the upload endpoint." <commentary>File uploads need trust boundary analysis, input validation, and deployment risk assessment.</commentary></example>
Flux-drive Systems Thinking reviewer — evaluates feedback loops, emergence, causal reasoning, unintended consequences, and systems dynamics in strategy documents, PRDs, and plans. Examples: <example>user: "Review this PRD for systems thinking blind spots" assistant: "I'll use the fd-systems agent to evaluate feedback loops, second-order effects, and emergence patterns." <commentary>Caching introduces feedback loops, emergence (thundering herd), and systems dynamics.</commentary></example> <example>user: "Check if I'm missing systems-level risks in this reorg plan" assistant: "I'll use the fd-systems agent to analyze causal chains, pace layer mismatches, and Schelling traps." <commentary>Organizational changes involve feedback loops in communication and emergence in team behavior.</commentary></example>
Flux-drive User & Product reviewer — evaluates user flows, UX friction, value proposition, problem validation, scope creep, and missing edge cases. Examples: <example>user: "Review the new CLI command hierarchy — is it intuitive?" assistant: "I'll use the fd-user-product agent to evaluate CLI UX, discoverability, and user flow." <commentary>CLI redesigns need UX review for hierarchy, progressive disclosure, and error experience.</commentary></example> <example>user: "Review this PRD — does the problem statement hold up?" assistant: "I'll use the fd-user-product agent to validate the problem definition and check for scope creep." <commentary>PRDs need product validation: who has this problem, what evidence, whether solution fits.</commentary></example>
This file contains extracted concurrency code patterns used by `agents/review/fd-correctness.md`.
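The extracted snippets themselves are not reproduced here. As a stand-in, here is a minimal Python sketch of the kind of pattern such a reference covers — an unsynchronized read-modify-write race and its locked fix. This example is illustrative only, not taken from the file:

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n: int) -> None:
    # Racy: read-modify-write on shared state without synchronization.
    global counter
    for _ in range(n):
        counter += 1  # not atomic: load, add, store

def safe_increment(n: int) -> None:
    # Fixed: the lock makes each read-modify-write atomic.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40000 with the locked version; the unsafe version can lose updates.
```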
Use for multi-agent review of documents or codebases, or for multi-agent research on a topic — triages the relevant agents from the roster and launches only what matters in background mode
DEPRECATED — use flux-drive with `mode=research` instead. This skill is kept for backward compatibility.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
OpenAPI specification generation, Mermaid diagram creation, tutorial writing, API reference documentation
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification