Multi-provider LLM expert subagents for Claude Code — A fork of claude-delegator supporting any LLM backend
LLM expert subagents for Claude Code. Five specialists that can analyze AND implement—architecture, security, code review, and more.
Supports: Anthropic Claude, OpenAI GPT, GLM-4.7, GLM-5, Ollama, Groq, DeepInfra, and any OpenAI/Anthropic-compatible API.
Claude gains a team of LLM specialists via MCP. Each expert has a distinct specialty and can advise OR implement.
| What You Get | Why It Matters |
|---|---|
| 5 domain experts | Right specialist for each problem type |
| Dual mode | Experts can analyze (read-only) or implement (write) |
| Multi-provider | Use Claude, GPT-4, GLM (4.7/5), Ollama, or any compatible API |
| Auto-routing | Claude detects when to delegate based on your request |
| Prompt Enhancement | LLM-based prompt improvement before expert delegation |
| Synthesized responses | Claude interprets LLM output, never raw passthrough |
| Multilingual | Code Review supports EN/FR/CN (中文) |
| Expert | What They Do | Example Triggers |
|---|---|---|
| Architect | System design, tradeoffs, complex debugging | "How should I structure this?" / "What are the tradeoffs?" |
| Plan Reviewer | Validate plans before you start | "Review this migration plan" / "Is this approach sound?" |
| Scope Analyst | Catch ambiguities early | "What am I missing?" / "Clarify the scope" |
| Code Reviewer | Find bugs, improve quality (EN/FR/CN) | "Review this PR" / "What's wrong with this?" |
| Security Analyst | Vulnerabilities, threat modeling | "Is this secure?" / "Harden this endpoint" |
| Feature | claude-delegator | llm-delegator |
|---|---|---|
| Backend | Codex GPT-5.2 only | Any LLM provider |
| Configuration | CLI args only | CLI args (per-server provider, base URL, and model) |
| Providers | Single | Multi-provider (OpenAI, Anthropic, Ollama, etc.) |
| Code Review | English only | EN/FR/CN multilingual |
| Security | OWASP | OWASP + Chinese MLPS standards |
| Prompt Enhancement | None | LLM-based auto-enhancement |
| Routing | None | Intelligent task routing |
| Workflow | None | State machine for automation |
| License | MIT | MIT |
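The multi-provider support in the table above comes down to speaking two wire formats: OpenAI-style chat completions and Anthropic-style messages. The following is a minimal sketch of how a request can be shaped per provider style; `build_request` is a hypothetical helper for illustration, not the actual code in `glm_mcp_server.py`.

```python
# Sketch of multi-provider request shaping. build_request is a
# hypothetical helper, not part of glm_mcp_server.py.
def build_request(provider: str, base_url: str, api_key: str,
                  model: str, prompt: str) -> dict:
    """Return the URL, headers, and JSON body for one provider style."""
    if provider == "openai-compatible":
        return {
            "url": f"{base_url}/chat/completions",
            "headers": {"Authorization": f"Bearer {api_key}"},
            "body": {
                "model": model,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
    if provider == "anthropic-compatible":
        return {
            "url": f"{base_url}/messages",
            "headers": {
                "x-api-key": api_key,
                "anthropic-version": "2023-06-01",
            },
            "body": {
                "model": model,
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            },
        }
    raise ValueError(f"unknown provider style: {provider}")

req = build_request("openai-compatible", "http://localhost:11434/v1",
                    "", "llama3.1", "Review this diff")
print(req["url"])  # http://localhost:11434/v1/chat/completions
```

Because Ollama, Groq, and DeepInfra all expose the OpenAI-compatible shape, one code path covers them; only the base URL and model change.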
```bash
cd glm-delegator
pip install -r requirements.txt
```
Set your API key as an environment variable:
```bash
# Anthropic Claude
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI
export OPENAI_API_KEY="sk-..."

# GLM via Z.AI
export GLM_API_KEY="your_z_ai_api_key_here"

# Groq
export GROQ_API_KEY="gsk_..."
```
For persistent configuration, add to your ~/.bashrc or ~/.zshrc:
```bash
echo 'export ANTHROPIC_API_KEY="sk-ant-..."' >> ~/.bashrc
source ~/.bashrc
```
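If you script the setup, a small helper can pick whichever key happens to be configured. This is an illustrative sketch using the variable names above; `first_configured_key` and its fallback order are not part of the project.

```python
import os

# Provider key names from the setup above; the order here is an
# arbitrary example fallback, not something the project defines.
PROVIDER_KEYS = ["ANTHROPIC_API_KEY", "OPENAI_API_KEY",
                 "GLM_API_KEY", "GROQ_API_KEY"]

def first_configured_key():
    """Return (env var name, value) for the first key that is set,
    or None if no provider key is configured."""
    for name in PROVIDER_KEYS:
        value = os.environ.get(name)
        if value:
            return name, value
    return None
```

A setup script could call this once and fail fast with a clear message when it returns `None`, instead of letting the MCP server start with an empty `--api-key`.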
Add to ~/.claude.json (or ~/.claude/settings.json):
Anthropic Claude:

```json
{
  "mcpServers": {
    "claude-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "anthropic-compatible",
        "--base-url", "https://api.anthropic.com/v1",
        "--api-key", "$ANTHROPIC_API_KEY",
        "--model", "claude-sonnet-4-20250514"
      ]
    }
  }
}
```
OpenAI:

```json
{
  "mcpServers": {
    "openai-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "openai-compatible",
        "--base-url", "https://api.openai.com/v1",
        "--api-key", "$OPENAI_API_KEY",
        "--model", "gpt-4o"
      ]
    }
  }
}
```
Ollama (local, no API key required):

```json
{
  "mcpServers": {
    "ollama-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "openai-compatible",
        "--base-url", "http://localhost:11434/v1",
        "--model", "llama3.1"
      ]
    }
  }
}
```
GLM via Z.AI:

```json
{
  "mcpServers": {
    "glm-experts": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "anthropic-compatible",
        "--base-url", "https://api.z.ai/api/anthropic",
        "--api-key", "$GLM_API_KEY",
        "--model", "glm-5"
      ]
    }
  }
}
```
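Rather than hand-editing the settings file, an entry can also be merged in programmatically. A minimal sketch, assuming a standard JSON settings file; `add_mcp_server` is a hypothetical helper, and the paths and server name in the usage are illustrative.

```python
import json
from pathlib import Path

def add_mcp_server(config_path: Path, name: str, args: list) -> None:
    """Merge one stdio MCP server entry into a Claude settings file,
    preserving any servers that are already configured."""
    if config_path.exists():
        config = json.loads(config_path.read_text())
    else:
        config = {}
    servers = config.setdefault("mcpServers", {})
    servers[name] = {"type": "stdio", "command": "python3", "args": args}
    config_path.write_text(json.dumps(config, indent=2))

# Illustrative usage; adjust the path to your checkout.
add_mcp_server(
    Path.home() / ".claude.json",
    "glm-experts",
    [
        "/path/to/glm-delegator/glm_mcp_server.py",
        "--provider", "anthropic-compatible",
        "--base-url", "https://api.z.ai/api/anthropic",
        "--api-key", "$GLM_API_KEY",
        "--model", "glm-5",
    ],
)
```

Because the helper merges rather than overwrites, you can register several of the configurations above side by side, one server entry per provider.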