By izmailovilya
Scout open-source GitHub repos to uncover patterns, features, and ideas for improving your project. Clone target repositories, analyze their architecture and code, compare against your codebase, and get actionable recommendations to adopt best practices.
npx claudepluginhub izmailovilya/ilia-izmailov-plugins --plugin repo-scout
A collection of plugins for Claude Code.
Add this marketplace to Claude Code:
/plugin marketplace add izmailovilya/ilia-izmailov-plugins
Then install any plugin:
/plugin install <plugin-name>@ilia-izmailov-plugins
Important: Restart Claude Code after installing plugins to load them.
Launch a team of AI agents to implement features with built-in code review gates.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or your environment. See setup →
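The flag above can be set either as an environment variable or persisted in Claude Code's settings.json. A minimal sketch, assuming a truthy value of "1" and an "env" map in settings.json (both are assumptions; check the linked setup guide for the exact format your Claude Code version expects):

```shell
# Enable experimental agent teams for the current shell session.
# The value "1" is an assumption; see the setup docs for the expected value.
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1

# Alternatively, persist it in Claude Code's settings.json (the path and the
# "env" key are assumptions; verify against your installation):
#   ~/.claude/settings.json
#   {
#     "env": { "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1" }
#   }
```

Either way, restart Claude Code afterwards so the flag is picked up.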
/plugin install agent-teams@ilia-izmailov-plugins
Usage:
/interviewed-team-feature "Add user settings page"
/team-feature docs/plan.md --coders=2
/conventions
The main workflow is /interviewed-team-feature — a short adaptive interview (2-6 questions) to understand your intent, followed by automatic launch of the full implementation pipeline. It spawns researchers, coders, and specialized reviewers (security, logic, quality), scaling the team automatically based on complexity (SIMPLE/MEDIUM/COMPLEX).
Expert evaluation arena — real experts independently assess options with cross-enrichment for any domain.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or your environment. See setup →
/plugin install expert-arena@ilia-izmailov-plugins
Usage:
/expert-arena "Should we use microservices or monolith?"
/expert-arena "Best pricing strategy for a developer tool?"
Selects 3-5 real experts with opposing viewpoints, gathers context via researchers, launches independent evaluations with cross-enrichment, and produces an action-oriented report: verdict first, action plan second, detailed analysis for those who want to dig deeper.
Deep parallel codebase research — causal understanding, not just coverage.
Requires: enable CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS in settings.json or your environment.
/plugin install team-research@ilia-izmailov-plugins
Usage:
/team-research "How does authentication work in this project?"
/team-research "Full architecture review"
Spawns a scout to map the landscape, then 2-7 investigators explore independent angles in parallel, followed by an adversarial challenger who stress-tests the findings. Produces a research report with causal understanding, source confidence tags, and cross-cutting insights.
Scout open-source repos for patterns and ideas to improve your own product.
/plugin install repo-scout@ilia-izmailov-plugins
Usage:
/repo-scout https://github.com/anomalyco/opencode
/repo-scout https://github.com/vercel/ai "how they handle streaming"
Phased approach: first it understands YOUR project (2 scouts), then it explores the external repo with your context in mind (2 scouts), and finally it runs an adversarial challenge (2 challengers verify that patterns are real and worth adopting). Only recommendations that survive the challenge make it into the final report.
Interactive feature audit for vibe-coded projects. Finds dead code, unused features, and experiments through conversation.
/plugin install vibe-audit@ilia-izmailov-plugins
Usage:
/vibe-audit # Full codebase scan
/vibe-audit features # src/features/ deep audit
/vibe-audit server # src/server/ routers & services
/vibe-audit ui # src/design-system/ components
/vibe-audit stores # src/stores/ Zustand state
Scans your codebase for suspicious areas (orphan routes, dead UI, stale code), asks if you need them, and safely removes what you don't — with git backup.
MIT