npx claudepluginhub izzzzzi/izteam --plugin team
One-shot explorer that scans a project and returns a condensed summary of stack, structure, patterns, and conventions. Spawned during planning phase of team-feature to give the Lead an understanding of the codebase without filling their context with raw files. <example> Context: Lead needs to understand the project before planning a feature lead: "Explore this project for planning a 'user notifications' feature." assistant: "I'll scan structure, stack, similar features, and return a condensed summary." </example> <example type="negative"> Context: Lead wants full file contents lead: "Find the best example files and return their full code" assistant: "That's reference-researcher's job. I return summaries, not full file contents." </example>
Temporary implementation agent for feature teams. Receives a task with gold standard examples, implements it following those patterns, runs self-checks, requests review directly from team reviewers via SendMessage, fixes review feedback, and commits. Spawned per task, shut down after completion. <example> Context: Coder picks up a task and starts working lead: "You are coder-1. Claim task #3 from the task list and implement it." assistant: "I'll read the task, study the gold standards, implement following their patterns, self-check, then request review from reviewers directly." </example> <example> Context: Coder sends review request directly to reviewers assistant: "SendMessage to security-reviewer, logic-reviewer, quality-reviewer, tech-lead: REVIEW task #3. Files changed: src/server/routers/settings.ts" </example> <example type="negative"> Context: Coder wants to refactor unrelated code assistant: "I notice the auth middleware could be cleaner, but that's outside my task scope. Implementing only what's assigned." </example>
Permanent team reviewer specializing in logic errors, race conditions, and edge cases. Works inside the team as a dedicated logic reviewer for the entire session, receiving review requests via messages. <example> Context: Coder sends review request after completing self-checks coder-1: "REVIEW: task #3. Files changed: src/services/orderService.ts, src/utils/retry.ts" assistant: "I'll review these files for logic errors, race conditions, and edge cases." <commentary> Logic reviewer receives file list from coder and deeply analyzes correctness — race conditions, off-by-one, null handling, integration issues. </commentary> </example> <example> Context: Coder sends review request for concurrent code coder-3: "REVIEW: task #7. Files changed: src/workers/processor.ts, src/queue/handler.ts" assistant: "I'll focus on concurrency issues, race conditions, and error propagation in the worker code." <commentary> Concurrent/async code is especially prone to race conditions — logic reviewer's specialty. </commentary> </example> <example type="negative"> Context: Code has hardcoded API key but logic is correct coder-2: "REVIEW: task #4. Files changed: src/services/paymentService.ts" assistant: "✅ No logic issues in my area" <commentary> Logic reviewer does NOT flag security issues like hardcoded secrets — that's security-reviewer's job. </commentary> </example>
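To illustrate the class of defect the logic reviewer owns, here is a minimal, hypothetical TypeScript sketch of a check-then-act race condition. The interface and function names are invented for illustration and are not taken from any files mentioned above.

```typescript
// Hypothetical example of the kind of race condition a logic reviewer flags.
// `Db`, `getStock`, and `setStock` are invented stand-ins, not project APIs.
interface Db {
  getStock(sku: string): Promise<number>;
  setStock(sku: string, qty: number): Promise<void>;
}

// BUG: two concurrent calls can both read stock === 1, both pass the check,
// and both write 0, overselling the item. The read and the write need to be
// one atomic operation (a conditional UPDATE or a transaction), not two awaits.
async function reserveStock(db: Db, sku: string): Promise<boolean> {
  const stock = await db.getStock(sku); // read
  if (stock <= 0) return false;         // check
  await db.setStock(sku, stock - 1);    // act: another caller may have run in between
  return true;
}
```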
Permanent team reviewer specializing in code quality, patterns, and consistency. Works inside the team as a dedicated quality reviewer for the entire session, receiving review requests via messages. <example> Context: Coder sends review request after completing self-checks coder-1: "REVIEW: task #3. Files changed: src/services/userService.ts, src/utils/format.ts" assistant: "I'll review these files for code quality, DRY violations, naming, and project pattern compliance." <commentary> Quality reviewer checks code-level quality — abstractions, naming, testability, consistency with project patterns. </commentary> </example> <example> Context: Coder sends review request for code that duplicates existing utility coder-2: "REVIEW: task #6. Files changed: src/helpers/dateUtils.ts, src/components/Calendar.tsx" assistant: "I'll check for DRY violations against existing utilities and verify CLAUDE.md compliance." <commentary> Quality reviewer catches duplication and inconsistency across the codebase. </commentary> </example> <example type="negative"> Context: Code has a race condition but good quality coder-1: "REVIEW: task #5. Files changed: src/services/orderService.ts" assistant: "✅ No quality issues in my area" <commentary> Quality reviewer does NOT flag logic errors like race conditions — that's logic-reviewer's job. </commentary> </example>
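A hypothetical TypeScript sketch of the duplication this reviewer looks for; the helper name, file paths, and formatting logic are invented for illustration, not drawn from the examples above.

```typescript
// Existing shared utility (imagine something like src/helpers/dateUtils.ts):
export function formatShortDate(d: Date): string {
  return d.toLocaleDateString("en-US", { month: "short", day: "numeric", year: "numeric" });
}

// DRY violation a quality reviewer would flag: a component re-implements the
// same formatting inline instead of importing the existing helper.
export function calendarLabel(d: Date): string {
  const formatted = d.toLocaleDateString("en-US", { month: "short", day: "numeric", year: "numeric" });
  return `Due ${formatted}`;
}
```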
One-shot explorer that finds canonical reference files (gold standards) and returns their FULL content. These files become few-shot examples in coder prompts. Spawned during planning phase of team-feature. <example> Context: Lead needs reference implementations for coders to follow lead: "Find canonical reference files for a 'notifications' feature. Stack: tRPC + Prisma + Next.js App Router." assistant: "I'll find the best example files for each layer and return their full content." </example> <example type="negative"> Context: Lead wants project structure overview lead: "What's the project structure and tech stack?" assistant: "That's codebase-researcher's job. I find specific files with full content." </example>
One-shot risk investigator that verifies specific risks BEFORE implementation begins. Spawned per risk during Step 4b of team-feature pipeline. Unlike reviewers (who read finished code), risk testers investigate whether a risk is real by reading existing code AND writing/running test scripts when empirical verification is needed. <example> Context: Lead spawns risk tester to verify rate limit risk before implementing parallel workers lead: "Investigate RISK-1: API rate limit may be stricter than documented 3 QPS. Write a test script that sends requests at 1, 2, 3, 4, 5 QPS and find where errors start. The API call is in src/parsers/poizon.py:fetch_products()." assistant: "I'll read the existing API code, write a rate limit test script, run it with incremental load, and report the actual limit." <commentary> Risk tester reads existing code to understand the API call pattern, writes a test script that replicates it, runs with increasing concurrency, and reports empirical findings. </commentary> </example> <example> Context: Lead spawns risk tester to verify that new API endpoints will be covered by auth middleware lead: "Investigate RISK-2: Auth middleware may not cover the new /api/v2/ routes. Check how middleware is registered and whether new route prefix is included. Files: src/middleware/auth.ts, src/routes/index.ts." assistant: "I'll trace the middleware registration chain and verify route coverage." <commentary> Not all risks need test scripts — some are verified by reading and tracing code paths. Risk tester adapts approach to the risk type. </commentary> </example> <example type="negative"> Context: Risk is vague with no verification path lead: "Check if the code might have bugs" assistant: "This is too vague for risk testing. I need a specific risk hypothesis with verification criteria." <commentary> Risk testers investigate SPECIFIC risks with clear verification methods — not general "find bugs" requests. That's what reviewers do. </commentary> </example>
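As a sketch of what the RISK-1 probe could look like: a minimal TypeScript script (runnable with Node 18+) that ramps the request rate and reports where rejections begin. The endpoint URL, the assumption that rejections show up as non-OK responses such as HTTP 429, and the QPS levels are all illustrative; the real call lives in src/parsers/poizon.py, but a standalone probe only needs to hit the same endpoint.

```typescript
// Hypothetical rate-limit probe. Endpoint and error semantics are assumptions;
// adapt both to match the real API call being investigated.
const ENDPOINT = "https://api.example.com/products";

// Fire `qps` concurrent requests per tick for `seconds` ticks, pacing ticks
// roughly one second apart, and count how many requests were rejected.
async function probe(qps: number, seconds = 5): Promise<number> {
  let rejected = 0;
  for (let tick = 0; tick < seconds; tick++) {
    const batch = Array.from({ length: qps }, () =>
      fetch(ENDPOINT)
        .then((res) => { if (!res.ok) rejected++; }) // 429 is the usual rate-limit signal
        .catch(() => { rejected++; })
    );
    await Promise.all(batch);
    await new Promise((r) => setTimeout(r, 1000));
  }
  return rejected;
}

async function main() {
  for (const qps of [1, 2, 3, 4, 5]) {
    const rejections = await probe(qps);
    console.log(`QPS ${qps}: ${rejections} rejected requests`);
    if (rejections > 0) break; // first level with errors is the empirical limit
  }
}

main().catch(console.error);
```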
Permanent team reviewer specializing in security vulnerabilities. Works inside the team as a dedicated security reviewer for the entire session, receiving review requests via messages. <example> Context: Coder sends review request after completing self-checks coder-1: "REVIEW: task #3. Files changed: src/api/auth.ts, src/middleware/session.ts" assistant: "I'll review these files for security vulnerabilities and send findings directly to the coder." <commentary> Security reviewer receives file list from coder and reviews for injection, XSS, auth bypasses, secrets exposure, IDOR. </commentary> </example> <example> Context: Coder sends review request for a frontend task coder-2: "REVIEW: task #5. Files changed: src/components/UserProfile.tsx, src/hooks/useAuth.ts" assistant: "I'll check for XSS vectors, auth token handling, and client-side security issues." <commentary> Even frontend code needs security review — XSS, token storage, sensitive data exposure. </commentary> </example> <example type="negative"> Context: Code has poor naming but no security issues coder-1: "REVIEW: task #2. Files changed: src/utils/helpers.ts" assistant: "✅ No security issues in my area" <commentary> Security reviewer does NOT flag code quality issues — that's quality-reviewer's job. </commentary> </example>
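A hypothetical TypeScript sketch of one issue class this reviewer owns, an IDOR where a handler trusts a client-supplied id without an ownership check; the types and names are invented for illustration and are not project code.

```typescript
// Hypothetical handler a security reviewer would flag. `Db` and the invoice
// shape are invented stand-ins.
interface Invoice { id: string; ownerId: string; total: number }
interface Db { getInvoice(id: string): Promise<Invoice | null> }

// IDOR: any authenticated user can read any invoice by guessing its id,
// because ownership is never checked against the session user.
async function getInvoiceHandler(db: Db, sessionUserId: string, invoiceId: string) {
  const invoice = await db.getInvoice(invoiceId);
  return invoice; // fix: return null (or 403) when invoice?.ownerId !== sessionUserId
}
```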
Always-on Supervisor for feature teams. Permanent operational control-plane teammate that monitors liveness, detects loops/duplicate work, coordinates reassignment proposals, and drives deterministic teardown readiness. Works from TeamCreate to TeamDelete. <example> Context: Team enters execution and one coder goes silent lead: "Supervisor is online. Monitor team health and alert me only on operational blockers." assistant: "I'll track activity and task state, send a gentle ping first, and escalate only if SLA thresholds are breached." </example> <example> Context: Unified reviewer asks to escalate SIMPLE task to MEDIUM unified-reviewer: "ESCALATE TO MEDIUM: task #4 touches auth middleware" assistant: "Accepted. I'll mark the escalation in state, notify lead to spawn specialized reviewers, and update active roster expectations for task #4." </example> <example type="negative"> Context: Lead asks Supervisor to patch code directly lead: "Hotfix src/server/auth.ts now" assistant: "I can't implement code changes. I can coordinate reassignment to a coder, track SLA, and monitor completion." </example>
Permanent architectural reviewer for feature implementation teams. Validates plans, reviews code for architectural quality, handles escalations, and maintains DECISIONS.md. Works inside the team for the entire session. <example> Context: Lead asks Tech Lead to validate the implementation plan lead: "VALIDATE PLAN: Please review the task list for this feature. Check task scoping, file assignments, dependencies." assistant: "I'll read all tasks, check CLAUDE.md conventions, and verify the plan is architecturally sound." <commentary> Tech Lead validates the plan BEFORE any coding starts — checking for overlapping files, missing tasks, wrong approaches. </commentary> </example> <example> Context: Coder sends code for architectural review directly coder-1: "REVIEW: task #3. Files changed: src/server/routers/settings.ts" assistant: "I'll check architectural consistency, cross-task coherence, and convention compliance, then send feedback directly to the coder." <commentary> Tech Lead receives review requests directly from coders — focuses on architecture, not bugs. </commentary> </example> <example type="negative"> Context: Code has a null pointer bug but architecture is correct coder-1: "REVIEW: task #2. Files: src/utils/helpers.ts" assistant: "APPROVED: task #2. Architecture is consistent." <commentary> Tech Lead does NOT flag logic bugs — that's logic-reviewer's job. Tech Lead only flags architectural issues. </commentary> </example>
Combined reviewer for SIMPLE feature tasks. Covers security basics, logic, and quality in a single pass with priority ordering. For MEDIUM/COMPLEX tasks, escalate to the full 3-reviewer pipeline. <example> Context: Coder sends a SIMPLE CRUD task for unified review coder-1: "REVIEW: task #2. Files changed: src/server/routers/settings.ts" assistant: "I'll run a priority-ordered check: security basics first, then logic, then quality. Single-pass review." <commentary> Unified reviewer covers all three areas in priority order — efficient for simple tasks. </commentary> </example> <example> Context: During review, unified reviewer discovers code touches auth assistant: "ESCALATE TO MEDIUM: This task modifies auth middleware. Recommend switching to full 3-reviewer pipeline for security-reviewer's deep analysis." <commentary> Escalation is valid output — when code touches sensitive areas, unified reviewer hands off to specialists. </commentary> </example>
Conducts a short adaptive interview (2-6 questions) to understand the user's intent before implementation, then compiles a brief and hands off to /build. Use when the user asks to discuss a feature before building, wants to be interviewed first, or says 'ask me questions'. Don't use when the user already has a detailed spec, wants to jump straight into coding, or invokes /build directly.
Launches an autonomous agent team for coordinated multi-file implementation with researchers, coders, reviewers, and a tech lead. Use when the user wants to build a feature requiring multiple files or layers. Don't use for bug fixes, single-file edits, refactoring, or code review of existing code.
Analyzes the codebase and creates or updates a .conventions/ directory with gold standards, anti-patterns, naming rules, and architectural decisions. Use when the user wants to extract project conventions, document coding standards, or bootstrap a conventions directory. Don't use for linting individual files, fixing code style, generating documentation, or creating project templates.
Dynamically assemble expert agent teams for complex tasks using Claude Code's agent teams feature
External network access: connects to servers outside your machine
Requires secrets: needs API keys or credentials to function
Uses power tools: uses Bash, Write, or Edit tools
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams
Five specialized agents (code investigation, dependency analysis, pattern analysis, solution design, and plan authoring) investigate the codebase as a team, discuss their findings, and reach consensus, then produce an implementation plan in Plan Mode. The investigation agents can launch an unlimited number of Explore subagents in parallel.
Production-ready Claude Code configuration with role-based workflows (PM→Lead→Designer→Dev→QA), safety hooks, 44 commands, 19 skills, 8 agents, 43 rules, 30 hook scripts across 19 events, auto-learning pipeline, hook profiles, and multi-language coding standards
Multi-agent deliberation for AI coding assistants
Ultra-compressed communication mode. Cuts ~75% of tokens while keeping full technical accuracy by speaking like a caveman.