By izmailovilya
Launch coordinated AI agent teams for parallel, deep-dive codebase research, building causal models of architecture, dependencies, and failure modes via structured investigation, adversarial challenging, and specialized domain analysis.
npx claudepluginhub izmailovilya/ilia-izmailov-plugins --plugin team-research

The plugin ships five research-team agents:

Critic: spawned on-demand for failure mode analysis, and only when the Challenger recommends it. Enumerates specific failure modes, fragile assumptions, and weak evidence areas. When the Challenger flags that auth findings lack failure analysis, the Critic spells out what could go wrong with token refresh, session invalidation, and concurrent logins; when migration-safety evidence is circumstantial, it names the concrete scenarios: data loss, constraint violations, rollback failures. It focuses on what could go wrong, not what works correctly, turning vague concerns into specific failure scenarios. It does not re-investigate the codebase; it analyzes failure modes from findings that already exist.
Investigator: performs deep investigation using the Depth Protocol (WHAT/WHY/FRAGILITY/CONTEXT/SURPRISE), tags every claim with a Source (Observed, seen in code; Inferred, logical conclusion; Hypothesized, best guess), and applies the Feynman Test and Fact Registry for quality control. Assigned an angle such as "Investigate the auth system, depth tier: deep, starting from src/middleware/auth.ts", it builds causal understanding rather than mere coverage: WHAT with file:line, WHY with a causal chain, FRAGILITY analysis, and a Feynman Test at the end. Investigators cross-pollinate, passing relevant discoveries to teammates on related angles (for example, that the auth middleware uses a shared Redis connection, which matters to the database investigator). Findings reported without Source tags are not acceptable.
Challenger: stress-tests findings through three lenses: Evidence Quality (verify that Source tags are real), Pre-Mortem (imagine the research turned out to be wrong), and Frame Gap Detection (find missing perspectives). It is actively adversarial, not a passive checklist. Once all investigators have reported, it focuses on the weakest claims across every angle, and it must always find something to challenge; rubber-stamping ("Everything looks great!") defeats its purpose. When evidence quality is good but FRAGILITY analysis is thin, it recommends spawning a Critic for a failure mode deep-dive, and only then.
Scout: quick landscape scanning. Identifies 3-7 distinct areas, suggests investigation angles with depth tiers, and estimates complexity. It is fast (max 5 minutes): it maps terrain, it does not investigate. Before the lead spawns investigators, the Scout skims the file tree, entry points, and relevant modules; for a broad question such as a full architecture review it identifies distinct areas (routing, data layer, auth, UI, config) and suggests shallow vs. deep tiers so the lead can allocate investigators efficiently. Reading files in depth is out of scope; that is the investigators' job.
Specialist: spawned on-demand for domain-specific deep dives (security, database, external-api) when an investigator flags ESCALATE for a specific domain, and uses the Depth Protocol for its findings. A security escalation on src/middleware/auth.ts means checking for timing attacks, weak crypto, and auth bypasses; a database escalation on src/services/order.ts means checking for race conditions, N+1 queries, missing indexes, and transaction isolation issues that investigators may miss. The Specialist stays focused on the flagged area and does not expand beyond the ESCALATE scope.
A collection of plugins for Claude Code.
Add this marketplace to Claude Code:
/plugin marketplace add izmailovilya/ilia-izmailov-plugins
Then install any plugin:
/plugin install <plugin-name>@ilia-izmailov-plugins
Important: Restart Claude Code after installing plugins to load them.
Launch a team of AI agents to implement features with built-in code review gates.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or in the environment (see setup below).
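A minimal setup sketch, assuming Claude Code's standard "env" block in settings.json (the exact value the flag expects may differ; the setup guide is authoritative):

{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}

Or export it in your shell before launching Claude Code:

export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1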
/plugin install agent-teams@ilia-izmailov-plugins
Usage:
/interviewed-team-feature "Add user settings page"
/team-feature docs/plan.md --coders=2
/conventions
The main workflow is /interviewed-team-feature: a short adaptive interview (2-6 questions) to understand your intent, followed by automatic launch of the full implementation pipeline. Spawns researchers, coders, and specialized reviewers (security, logic, quality), scaling the team to complexity (SIMPLE/MEDIUM/COMPLEX).
Expert evaluation arena — real experts independently assess options with cross-enrichment for any domain.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or in the environment (see the setup example above).
/plugin install expert-arena@ilia-izmailov-plugins
Usage:
/expert-arena "Should we use microservices or monolith?"
/expert-arena "Best pricing strategy for a developer tool?"
Selects 3-5 real experts with opposing viewpoints, gathers context via researchers, launches independent evaluations with cross-enrichment, and produces an action-oriented report: verdict first, action plan second, detailed analysis for those who want to dig deeper.
Deep parallel codebase research — causal understanding, not just coverage.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or in the environment (see the setup example above).
/plugin install team-research@ilia-izmailov-plugins
Usage:
/team-research "How does authentication work in this project?"
/team-research "Full architecture review"
Spawns a scout to map the landscape, then 2-7 investigators who explore independent angles in parallel, and finally an adversarial challenger that stress-tests the findings. Produces a research report with causal understanding, source confidence tags, and cross-cutting insights.
Scout open-source repos for patterns and ideas to improve your own product.
/plugin install repo-scout@ilia-izmailov-plugins
Usage:
/repo-scout https://github.com/anomalyco/opencode
/repo-scout https://github.com/vercel/ai "how they handle streaming"
Two-phase approach: first it understands YOUR project (2 scouts), then it explores the external repo with your context in mind (2 scouts); an adversarial challenge follows (2 challengers verify that the patterns are real and worth adopting). Only recommendations that survive the challenge make it into the final report.
Interactive feature audit for vibe-coded projects. Finds dead code, unused features, and experiments through conversation.
/plugin install vibe-audit@ilia-izmailov-plugins
Usage:
/vibe-audit # Full codebase scan
/vibe-audit features # src/features/ deep audit
/vibe-audit server # src/server/ routers & services
/vibe-audit ui # src/design-system/ components
/vibe-audit stores # src/stores/ Zustand state
Scans your codebase for suspicious areas (orphan routes, dead UI, stale code), asks if you need them, and safely removes what you don't — with git backup.
License: MIT