Multi-MCP: Multi-Model Code Review and Analysis MCP Server for Claude Code

A multi-model AI orchestration MCP server for automated code review and LLM-powered analysis. Multi-MCP integrates with Claude Code CLI and OpenCode to orchestrate multiple AI models (OpenAI GPT, Anthropic Claude, Google Gemini) for code quality checks, security analysis (OWASP Top 10), and multi-agent consensus. Built on the Model Context Protocol (MCP), this tool enables Python developers and DevOps teams to automate code reviews with AI-powered insights directly in their development workflow.

Features
- 🔍 Code Review - Systematic workflow with OWASP Top 10 security checks and performance analysis
- 💬 Chat - Interactive development assistance with repository context awareness
- 🔄 Compare - Parallel multi-model analysis for architectural decisions
- 🎭 Debate - Multi-agent consensus workflow (independent answers + critique)
- 🤖 Multi-Model Support - OpenAI GPT, Anthropic Claude, Google Gemini, and OpenRouter
- 🖥️ CLI & API Models - Mix CLI-based (Gemini CLI, Codex CLI) and API models
- 🏷️ Model Aliases - Use short names like mini, sonnet, gemini
- 🧵 Threading - Maintain context across multi-step reviews
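Alias resolution can be pictured as a simple lookup table. A minimal sketch, assuming a hypothetical mapping (the names below mirror the README's examples; Multi-MCP's actual table may differ):

```python
# Hypothetical alias table -- the real mapping lives in Multi-MCP's config.
ALIASES = {
    "mini": "gpt-5-mini",
    "sonnet": "claude-sonnet",
    "gemini": "gemini-3-flash",
}

def resolve_model(name: str) -> str:
    """Return the full model id for a known alias, or the name unchanged."""
    return ALIASES.get(name, name)
```

Unknown names pass through untouched, so full model ids keep working alongside aliases.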
How It Works
Multi-MCP acts as an MCP server that Claude Code or OpenCode connects to, providing AI-powered code analysis tools:
- Install the MCP server and configure your AI model API keys
- Integrate with Claude Code or OpenCode automatically via make install
- Invoke tools using natural language (e.g., "multi codereview this file")
- Get results from multiple AI models orchestrated in parallel
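Under the hood, MCP clients talk to the server over stdio using JSON-RPC. As an illustrative sketch (the tool name and argument schema here are assumptions, not Multi-MCP's exact interface), a tool invocation might be serialized like this:

```python
import json

# A JSON-RPC 2.0 tools/call request, as an MCP client might send it over
# stdio. The tool name and arguments are illustrative only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "codereview",
        "arguments": {"path": "src/app.py"},
    },
}
print(json.dumps(request))
```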
Performance
Fast Multi-Model Analysis:
- ⚡ Parallel Execution - 3 models in ~10s (vs ~30s sequential)
- 🔄 Async Architecture - Non-blocking Python asyncio
- 💾 Conversation Threading - Maintains context across multi-step reviews
- 📊 Low Latency - Response time = slowest model, not sum of all models
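The latency claim follows from concurrent awaiting: with asyncio.gather, total wall time is bounded by the slowest call, not the sum of all calls. A minimal sketch with stand-in model calls (the real server would call provider SDKs, not sleep):

```python
import asyncio
import time

async def query_model(name: str, latency: float) -> str:
    """Stand-in for an API call; sleeps instead of hitting a provider."""
    await asyncio.sleep(latency)
    return f"{name}: ok"

async def main() -> tuple[list[str], float]:
    start = time.monotonic()
    # Three simulated models with different latencies, awaited concurrently.
    results = await asyncio.gather(
        query_model("gpt", 0.3),
        query_model("claude", 0.2),
        query_model("gemini", 0.1),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # elapsed is ~0.3s, not 0.6s
```

Sequential awaits would take roughly the sum (0.6s here); gather finishes in roughly the maximum (0.3s).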
Quick Start
Prerequisites:
- Python 3.11+
- API key for at least one provider (OpenAI, Anthropic, Google, or OpenRouter)
Installation
Option 1: From Source
# Clone and install
git clone https://github.com/religa/multi_mcp.git
cd multi_mcp
# Run the installer (invokes ./scripts/install.sh)
make install
# The installer will:
# 1. Install dependencies (uv sync)
# 2. Generate your .env file
# 3. Automatically add to Claude Code / OpenCode config (requires jq)
# 4. Test the installation
Option 2: Manual Configuration
If you prefer not to run make install:
# Install dependencies
uv sync
# Copy and configure .env
cp .env.example .env
# Edit .env with your API keys
Add to Claude Code (~/.claude.json) or OpenCode (~/.opencode/opencode.json), replacing /path/to/multi_mcp with your actual clone path:
Claude Code:
{
"mcpServers": {
"multi": {
"type": "stdio",
"command": "/path/to/multi_mcp/.venv/bin/python",
"args": ["-m", "multi_mcp.server"]
}
}
}
OpenCode:
{
"mcp": {
"multi": {
"type": "local",
"command": ["/path/to/multi_mcp/.venv/bin/python", "-m", "multi_mcp.server"],
"enabled": true
}
}
}
Configuration
Environment Configuration (API Keys & Settings)
Multi-MCP loads settings from .env files in this order (highest priority first):
- Environment variables (already set in shell)
- Project .env (current directory or project root)
- User .env (~/.multi_mcp/.env) - fallback for pip installs
Edit .env with your API keys:
# API Keys (configure at least one)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-...
# Azure OpenAI (optional)
AZURE_API_KEY=...
AZURE_API_BASE=https://your-resource.openai.azure.com/
# AWS Bedrock (optional)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1
# Model Configuration
DEFAULT_MODEL=gpt-5-mini
DEFAULT_MODEL_LIST=gpt-5-mini,gemini-3-flash
Model Configuration (Adding Custom Models)