LLM Delegator
Multi-provider LLM expert subagents for Claude Code, a fork of claude-delegator that supports any LLM backend.
Five specialists that can analyze AND implement: architecture, plan review, scope analysis, code review, and security.
Supports: Anthropic Claude, OpenAI GPT, GLM-4.7, GLM-5, Ollama, Groq, DeepInfra, and any OpenAI/Anthropic-compatible API.

What is LLM Delegator?
Claude gains a team of LLM specialists via MCP. Each expert has a distinct specialty and can advise OR implement.
| What You Get | Why It Matters |
|---|---|
| 5 domain experts | Right specialist for each problem type |
| Dual mode | Experts can analyze (read-only) or implement (write) |
| Multi-provider | Use Claude, GPT-4, GLM (4.7/5), Ollama, or any compatible API |
| Auto-routing | Claude detects when to delegate based on your request |
| Prompt Enhancement | LLM-based prompt improvement before expert delegation |
| Synthesized responses | Claude interprets LLM output, never raw passthrough |
| Multilingual | Code Review supports EN/FR/CN (中文) |
The Experts
| Expert | What They Do | Example Triggers |
|---|---|---|
| Architect | System design, tradeoffs, complex debugging | "How should I structure this?" / "What are the tradeoffs?" |
| Plan Reviewer | Validate plans before you start | "Review this migration plan" / "Is this approach sound?" |
| Scope Analyst | Catch ambiguities early | "What am I missing?" / "Clarify the scope" |
| Code Reviewer | Find bugs, improve quality (EN/FR/CN) | "Review this PR" / "What's wrong with this?" |
| Security Analyst | Vulnerabilities, threat modeling | "Is this secure?" / "Harden this endpoint" |
Differences from claude-delegator
| Feature | claude-delegator | llm-delegator |
|---|---|---|
| Backend | Codex GPT-5.2 only | Any LLM provider |
| Configuration | CLI args only | CLI args (provider, base URL, API key, model) |
| Providers | Single | Multi-provider (OpenAI, Anthropic, Ollama, etc.) |
| Code Review | English only | EN/FR/CN multilingual |
| Security | OWASP | OWASP + Chinese MLPS standards |
| Prompt Enhancement | None | LLM-based auto-enhancement |
| Routing | None | Intelligent task routing |
| Workflow | None | State machine for automation |
| License | MIT | MIT |
Install
Prerequisites
- Python 3.8+ for the MCP server
- API Key for your chosen provider (Anthropic, OpenAI, Z.AI, etc.)
- The `httpx` library, installed via `pip install -r requirements.txt` (see Step 1)
Step 1: Install Dependencies
```bash
cd glm-delegator
pip install -r requirements.txt
```
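Before installing, it can help to confirm your interpreter meets the Python 3.8+ floor from the prerequisites. A minimal check (not part of the project; just a local sanity test):

```shell
# Confirm python3 satisfies the 3.8+ requirement before running pip install
python3 - <<'EOF'
import sys
assert sys.version_info >= (3, 8), "LLM Delegator needs Python 3.8+"
print("Python OK:", ".".join(map(str, sys.version_info[:3])))
EOF
```

If the assertion fails, install a newer Python (or point a virtual environment at one) before continuing.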
Step 2: Configure API Key
Set your API key as an environment variable:
```bash
# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# GLM via Z.AI
export GLM_API_KEY="your_z_ai_api_key_here"

# Groq
export GROQ_API_KEY="gsk_..."
```
For persistent configuration, add the export to your ~/.bashrc or ~/.zshrc:
```bash
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.bashrc
source ~/.bashrc
```
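A common failure mode is exporting the key in one shell but launching Claude Code from another. This quick loop (a local sketch, not part of the project) reports which of the keys above are visible in the current shell; only the one for your chosen provider needs to be set:

```shell
# Report which provider API keys are visible in this shell's environment
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GLM_API_KEY GROQ_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var is set"
  else
    echo "$var is missing"
  fi
done
```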
Step 3: Register MCP Server
Add to ~/.claude.json (or ~/.claude/settings.json):
Using Anthropic Claude
```json
{
  "mcpServers": {
    "claude-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "anthropic-compatible",
        "--base-url", "https://api.anthropic.com/v1",
        "--api-key", "$ANTHROPIC_API_KEY",
        "--model", "claude-sonnet-4-20250514"
      ]
    }
  }
}
```
Using OpenAI GPT
```json
{
  "mcpServers": {
    "openai-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "openai-compatible",
        "--base-url", "https://api.openai.com/v1",
        "--api-key", "$OPENAI_API_KEY",
        "--model", "gpt-4o"
      ]
    }
  }
}
```
Using Ollama (Local)
```json
{
  "mcpServers": {
    "ollama-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "openai-compatible",
        "--base-url", "http://localhost:11434/v1",
        "--model", "llama3.1"
      ]
    }
  }
}
```
Using GLM via Z.AI
```json
{
  "mcpServers": {
    "glm-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "anthropic-compatible",
        "--base-url", "https://api.z.ai/api/anthropic",
        "--api-key", "$GLM_API_KEY",
        "--model", "glm-5"
      ]
    }
  }
}
```
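A stray comma or brace in ~/.claude.json is easy to introduce when pasting one of these entries. Before restarting Claude Code, you can confirm the JSON still parses using Python's built-in json.tool; the sample entry below is illustrative, substitute your own:

```shell
# Sanity-check that a server entry is well-formed JSON before merging it
# into ~/.claude.json (sample entry shown; substitute your own)
cat <<'EOF' | python3 -m json.tool > /dev/null && echo "config OK"
{
  "mcpServers": {
    "glm-experts": {
      "type": "stdio",
      "command": "python3",
      "args": ["/path/to/glm-delegator/glm_mcp_server.py", "--provider", "anthropic-compatible"]
    }
  }
}
EOF
```

The same one-liner works on the full file after editing: `python3 -m json.tool ~/.claude.json > /dev/null && echo "config OK"`.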