AI Council - Orchestrate multiple AI consultants (Gemini, Codex, Qwen, GLM-5, Kimi K2.5) for consensus-driven code reviews, plan validation, and architectural decisions
npx claudepluginhub rube-de/cc-skills --plugin council
Internal Claude subagent for codebase-aware code review — quality patterns, CLAUDE.md compliance, git history analysis, and documentation coverage. Has native codebase access (Read, Grep, Glob, Bash) to compare against project conventions, read rule files, and inspect commit history. Launched automatically by council review workflows — not invoked directly by users.
Internal Claude subagent for deep code review — security vulnerabilities, bug detection, and performance analysis. Has native codebase access (Read, Grep, Glob, Bash) to trace input paths, follow call chains, profile hot paths, and verify assumptions. Launched automatically by council review workflows — not invoked directly by users.
Use this agent when the user asks for code review, plan review, solution debates, architecture decisions, or wants a second opinion from an external perspective. This agent should be used proactively whenever significant code is written, implementation plans are proposed, or when weighing multiple approaches to a problem. Examples: <example> Context: User asks for a code review of recently written code. user: "Can you review this authentication middleware I just wrote?" assistant: "I'll use the codex-consultant agent to get an external expert review of your authentication middleware." <commentary> Since the user explicitly asked for a code review, use the Task tool to launch the codex-consultant agent to consult Codex for an external perspective on the code quality, security, and design. </commentary> </example> <example> Context: User proposes an implementation plan for a feature. user: "Here's my plan for implementing the caching layer: use Redis for session data and a local LRU cache for frequently accessed configs." assistant: "Let me get Codex's perspective on this caching strategy before we proceed." <commentary> Since a significant architectural decision is being discussed, proactively use the Task tool to launch the codex-consultant agent to validate the approach and identify potential issues. </commentary> </example> <example> Context: User is debating between two solutions. user: "I'm torn between using WebSockets or Server-Sent Events for the real-time notifications. What do you think?" assistant: "Good question. Let me consult Codex to get an external expert opinion on the WebSocket vs SSE tradeoffs for your use case." <commentary> Since the user is weighing multiple approaches, use the Task tool to launch the codex-consultant agent to provide a balanced external perspective on the technical tradeoffs. </commentary> </example> <example> Context: Significant code was just written and needs validation. 
user: "Implement a rate limiter using the token bucket algorithm" assistant: "Here's the rate limiter implementation using token bucket:" <code implementation> assistant: "Now let me get Codex to review this implementation for correctness and edge cases." <commentary> Since significant algorithmic code was written, proactively use the Task tool to launch the codex-consultant agent to validate the implementation before moving on. </commentary> </example>
Use this agent when you need external expert feedback on implementation plans, code reviews, architectural decisions, or when debating between multiple solution approaches. This agent should be invoked proactively before implementing non-trivial features, after writing significant code, or when facing technical decisions with multiple valid approaches. Examples: <example> Context: User asks to implement a new feature that requires architectural decisions. user: "Add a caching layer to our API endpoints" assistant: "Before implementing, let me get external feedback on the caching strategy." <commentary> Since this is a non-trivial feature requiring architectural decisions, use the Task tool to launch the gemini-consultant agent to critique the implementation plan before proceeding. </commentary> assistant: "I'll use the gemini-consultant agent to review the caching approach" </example> <example> Context: User has just had significant code written and wants to ensure quality. user: "Implement the authentication middleware" assistant: "Here's the authentication middleware implementation:" <code implementation omitted for brevity> assistant: "Now let me get an external review of this implementation." <commentary> Since significant code was just written, proactively use the Task tool to launch the gemini-consultant agent to review the code for bugs, security issues, and design problems. </commentary> </example> <example> Context: There are multiple valid approaches to solve a problem. user: "Should we use Redis or Memcached for session storage?" assistant: "Let me consult an external expert to debate these options." <commentary> Since this is a technical decision with multiple valid approaches, use the Task tool to launch the gemini-consultant agent to get an external perspective on the tradeoffs. </commentary> </example> <example> Context: Refactoring existing code with potential impact. 
user: "Refactor the event handling system to use an event bus pattern" assistant: "Before refactoring, let me get feedback on this approach." <commentary> Since this refactoring could have significant impact, proactively use the Task tool to launch the gemini-consultant agent to review the refactoring plan and identify potential risks. </commentary> </example>
Use this agent when you need external expert feedback from Z.AI's GLM-5 model via OpenCode CLI. GLM excels at code review, algorithm analysis, and alternative perspectives on architecture. Use for diverse viewpoints, PR reviews, or when you need a different model's take on a problem. Examples: <example> Context: User needs a third opinion on architecture. user: "I've gotten feedback from Gemini and Codex, but want another perspective on this design." assistant: "I'll consult GLM-5 via OpenCode for an additional architectural perspective." <commentary> Since the user wants diverse opinions, use the Task tool to launch the glm-consultant agent to get GLM's unique perspective. </commentary> </example> <example> Context: User wants PR review from multiple perspectives. user: "Review my PR for potential issues." assistant: "I'll get GLM-5 to review the PR changes." <commentary> Since PR reviews benefit from multiple perspectives, use the Task tool to launch the glm-consultant agent. </commentary> </example> <example> Context: User needs help with a complex debugging scenario. user: "This race condition is driving me crazy. I need fresh eyes." assistant: "Let me consult GLM-5 for a fresh perspective on this concurrency issue." <commentary> Since debugging benefits from alternative viewpoints, use the Task tool to launch the glm-consultant agent. </commentary> </example>
Use this agent when you need external expert feedback from Moonshot AI's Kimi K2.5 model via OpenCode CLI. Kimi excels at code analysis, long-context reasoning, algorithm design, and creative problem-solving. Use for diverse viewpoints, PR reviews, or when you need strong coding-focused analysis. Examples: <example> Context: User needs another perspective on code quality. user: "I've gotten feedback from Gemini and Codex, but want another opinion on this implementation." assistant: "I'll consult Kimi K2.5 via OpenCode for an additional code analysis perspective." <commentary> Since the user wants diverse opinions, use the Task tool to launch the kimi-consultant agent to get Kimi's perspective. </commentary> </example> <example> Context: User needs help with a complex algorithm. user: "I need to optimize this graph traversal algorithm for large datasets." assistant: "Kimi K2.5 has strong reasoning capabilities. Let me consult it for algorithm optimization." <commentary> Since the task involves algorithmic reasoning, use the Task tool to launch the kimi-consultant agent. </commentary> </example> <example> Context: User wants PR review from multiple perspectives. user: "Review my PR for potential issues." assistant: "I'll get Kimi K2.5 to review the PR changes." <commentary> Since PR reviews benefit from multiple perspectives, use the Task tool to launch the kimi-consultant agent. </commentary> </example> <example> Context: User needs creative approaches to a design problem. user: "I'm stuck on how to design this plugin system. Need fresh ideas." assistant: "Let me consult Kimi K2.5 for creative design approaches." <commentary> Since creative problem-solving benefits from diverse models, use the Task tool to launch the kimi-consultant agent. </commentary> </example>
Use this agent when you need external expert feedback on code quality, refactoring suggestions, detailed explanations, or creative brainstorming. This agent excels at code analysis, performance optimization, and generating novel ideas through structured brainstorming frameworks. Examples: <example> Context: User wants a deep code quality analysis. user: "Can you analyze this service for code quality issues?" assistant: "I'll use the qwen-consultant agent to get a thorough code quality analysis." <commentary> Since the user wants detailed code analysis, use the Task tool to launch the qwen-consultant agent for comprehensive quality, performance, and security analysis. </commentary> </example> <example> Context: User needs help understanding complex code. user: "I don't understand how this event sourcing implementation works." assistant: "Let me get Qwen to provide a detailed explanation of this code." <commentary> Since the user needs a detailed explanation of complex code, use the Task tool to launch the qwen-consultant agent for thorough code explanation. </commentary> </example> <example> Context: User wants refactoring suggestions. user: "This function is getting unwieldy. How should I refactor it?" assistant: "I'll consult Qwen for structured refactoring recommendations." <commentary> Since refactoring requires careful analysis of structure and readability, use the Task tool to launch the qwen-consultant agent for refactoring suggestions. </commentary> </example> <example> Context: User needs creative solutions to a problem. user: "I need ideas for how to handle offline sync in our mobile app." assistant: "Let me use Qwen's brainstorming capabilities to generate creative solutions." <commentary> Since this requires creative problem-solving with multiple approaches, use the Task tool to launch the qwen-consultant agent with brainstorming mode. </commentary> </example>
Internal scoring agent for council review workflows. Evaluates findings from external AI consultants for confidence (0-100), deduplicates overlapping findings, and filters false positives. Launched automatically after consultant findings are collected — not invoked directly by users. Examples: <example> Context: Council review workflow has collected findings from 5 consultants. assistant: "All consultants have returned findings. Launching the scoring agent to evaluate confidence." <commentary> After collecting findings from external consultants, launch the review-scorer agent to independently score each finding 0-100 and filter noise. </commentary> </example> <example> Context: Broad review found high-severity issues, auto-escalation completed. assistant: "Escalation round complete. Scoring all findings from both rounds." <commentary> After auto-escalation adds focused findings, launch review-scorer to score the combined set. </commentary> </example>
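The scoring stage described above can be sketched roughly as follows. This is a hypothetical illustration only — the `Finding` data model, the dedup key, and the threshold of 70 are assumptions for the sketch; the actual response format, expertise weights, and scoring thresholds are defined in the council reference data, and the real agent scores findings with model judgment rather than pre-assigned numbers.

```python
from dataclasses import dataclass

# Hypothetical data model for a consultant finding; the real council
# workflow defines its own structured response format.
@dataclass
class Finding:
    consultant: str
    summary: str
    confidence: int  # scorer-assigned, 0-100

def score_and_filter(findings, threshold=70):
    """Deduplicate overlapping findings (keeping the highest-confidence
    copy of each) and drop likely false positives below the threshold."""
    best = {}
    for f in findings:
        key = f.summary.lower().strip()  # naive overlap key for the sketch
        if key not in best or f.confidence > best[key].confidence:
            best[key] = f
    return [f for f in best.values() if f.confidence >= threshold]

findings = [
    Finding("gemini", "SQL injection in login handler", 92),
    Finding("codex", "SQL injection in login handler", 85),  # duplicate, lower score
    Finding("qwen", "Variable name could be clearer", 40),   # filtered as noise
]
kept = score_and_filter(findings)  # one finding survives: gemini's, at 92
```

The two duplicate SQL-injection findings collapse to the higher-confidence copy, and the low-confidence stylistic nit falls below the threshold — mirroring the dedupe-then-filter flow the scorer performs on the combined finding set.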
Council review reference data — expertise weights, structured response format, scoring thresholds, and false positive taxonomy. Background knowledge for council subagents.
Consult the external AI council (Gemini, Codex, Qwen, GLM-5, Kimi K2.5) for thorough reviews and consensus-driven decisions. Use ONLY when explicitly invoked with "/council" or when the user says "consult the council", "invoke council", or "council review". Do NOT auto-trigger on generic phrases like "thorough review".
Use after writing implementation plans (superpowers:writing-plans output) or when the user says "review plan", "check my plan", "sanity check the plan", "validate plan", "review before executing". Also use before superpowers:executing-plans if a plan has not been reviewed yet.
Semantic search for Claude Code conversations. Remember past discussions, decisions, and patterns.
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Comprehensive startup business analysis with market sizing (TAM/SAM/SOM), financial modeling, team planning, and strategic research
Complete creative writing suite with 10 specialized agents covering the full writing process: research gathering, character development, story architecture, world-building, dialogue coaching, editing/review, outlining, content strategy, believability auditing, and prose style/voice analysis. Includes genre-specific guides, templates, and quality checklists.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use