By izmailovilya
Assemble AI-simulated teams of domain experts to independently evaluate options, such as code variants or competing approaches, across dev, product, or business domains. The experts generate pros/cons maps, 1-5 scores, and rankings, cross-enrich one another's evaluations via private messages, and produce a final decision map after reconnaissance of the code context and best practices.
npx claudepluginhub izmailovilya/ilia-izmailov-plugins --plugin expert-arena
Expert evaluator for Expert Arena — embodies a real expert, independently evaluates the options, and enriches the other experts' evaluations.
Reconnaissance agent for Expert Arena — a one-shot agent that gathers context (code, data, practices) before the expert debate. It runs BEFORE the team is created and is not a team member.
A collection of plugins for Claude Code.
Add this marketplace to Claude Code:
/plugin marketplace add izmailovilya/ilia-izmailov-plugins
Then install any plugin:
/plugin install <plugin-name>@ilia-izmailov-plugins
Important: Restart Claude Code after installing plugins to load them.
Launch a team of AI agents to implement features with built-in code review gates.
Requires: Enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or environment. See setup →
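Both ways of enabling the flag, as a minimal sketch (a POSIX shell and the "env" key of settings.json are assumptions; check the linked setup guide for specifics):
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1   # option 1: shell environment
# option 2: settings.json, under its "env" key:
# { "env": { "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1" } }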
/plugin install agent-teams@ilia-izmailov-plugins
Usage:
/interviewed-team-feature "Add user settings page"
/team-feature docs/plan.md --coders=2
/conventions
The main workflow is /interviewed-team-feature — a short adaptive interview (2-6 questions) to understand your intent, followed by automatic launch of the full implementation pipeline. It spawns researchers, coders, and specialized reviewers (security, logic, quality), scaling the team automatically with complexity (SIMPLE/MEDIUM/COMPLEX).
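As a sketch, a run walks through these stages (stage names paraphrase the description above; exact agent counts are chosen by the plugin):
/interviewed-team-feature "Add user settings page"
# 1. Interview:      2-6 adaptive questions to pin down intent
# 2. Scaling:        complexity classified as SIMPLE / MEDIUM / COMPLEX
# 3. Implementation: researchers and coders build the feature
# 4. Review gates:   security, logic, and quality reviewers sign off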
Expert evaluation arena — real experts independently assess options, cross-enriching their evaluations, in any domain.
Requires: Enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or environment. See setup →
/plugin install expert-arena@ilia-izmailov-plugins
Usage:
/expert-arena "Should we use microservices or monolith?"
/expert-arena "Best pricing strategy for a developer tool?"
Selects 3-5 real experts with opposing viewpoints, gathers context via researchers, launches independent evaluations with cross-enrichment, and produces an action-oriented report: verdict first, action plan second, and detailed analysis last for those who want to dig deeper.
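A sketch of the flow on one of the example questions (stages paraphrase the description above):
/expert-arena "Should we use microservices or monolith?"
# 1. Casting:    3-5 real experts with opposing viewpoints are selected
# 2. Recon:      researchers gather context before the debate
# 3. Evaluation: each expert scores independently, then cross-enriches
#                via private messages
# 4. Report:     verdict first, action plan second, detailed analysis last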
Deep parallel codebase research — causal understanding, not just coverage.
Requires: Enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or environment.
/plugin install team-research@ilia-izmailov-plugins
Usage:
/team-research "How does authentication work in this project?"
/team-research "Full architecture review"
Spawns a scout to map the landscape, then 2-7 investigators explore independent angles in parallel, followed by an adversarial challenger who stress-tests the findings. Produces a research report with causal understanding, source confidence tags, and cross-cutting insights.
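Sketched as stages (names paraphrase the description above; investigator count scales with the question):
/team-research "How does authentication work in this project?"
# 1. Scout:       maps the landscape of the codebase
# 2. Investigate: 2-7 investigators explore independent angles in parallel
# 3. Challenge:   an adversarial challenger stress-tests the findings
# 4. Report:      causal explanation with source confidence tags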
Scout open-source repos for patterns and ideas to improve your own product.
/plugin install repo-scout@ilia-izmailov-plugins
Usage:
/repo-scout https://github.com/anomalyco/opencode
/repo-scout https://github.com/vercel/ai "how they handle streaming"
Two-phase approach: first it understands YOUR project (2 scouts), then it explores the external repo with your context (2 scouts); an adversarial challenge follows (2 challengers verify the patterns are real and worth adopting). Only recommendations that survive the challenge make it into the final report.
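A sketch of one run (phases paraphrase the description above):
/repo-scout https://github.com/vercel/ai "how they handle streaming"
# Phase 1: 2 scouts map YOUR project first
# Phase 2: 2 scouts explore the external repo with that context
# Gate:    2 challengers verify each pattern is real and worth adopting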
Interactive feature audit for vibe-coded projects. Finds dead code, unused features, and experiments through conversation.
/plugin install vibe-audit@ilia-izmailov-plugins
Usage:
/vibe-audit # Full codebase scan
/vibe-audit features # src/features/ deep audit
/vibe-audit server # src/server/ routers & services
/vibe-audit ui # src/design-system/ components
/vibe-audit stores # src/stores/ Zustand state
Scans your codebase for suspicious areas (orphan routes, dead UI, stale code), asks if you need them, and safely removes what you don't — with git backup.
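A sketch of a scoped run (steps paraphrase the description above):
/vibe-audit features
# 1. Scan:     flags suspicious areas (orphan routes, dead UI, stale code)
# 2. Converse: asks whether you still need each flagged item
# 3. Remove:   deletes what you confirm is unused, with a git backup first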
MIT
Share bugs, ideas, or general feedback.
Use this agent when evaluating new development tools, frameworks, or services for the studio. It specializes in rapid tool assessment, comparative analysis, and recommendations that align with the 6-day development cycle philosophy.
Collaborative technical discussion with proactive requirements gathering
Code review plugin with a standalone reviewer agent and two skill strategies: disposable subagents for one-shot reviews and persistent team members for iterative reviews
Helps Claude read a planning document and explore related files to get familiar with a topic. Asking Claude to prepare to discuss seems to work better than asking it to prepare to do specific work. This is followed by Plan, then Execute.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.