Orchestrate and manage multiple MCP servers through SLOP
Orchestrates multiple MCP servers through SLOP to optimize configurations, manage resources, and coordinate tool execution. Use it to monitor server health, organize tools across servers, and troubleshoot connectivity issues.
/plugin marketplace add standardbeagle/standardbeagle-tools
/plugin install slop-mcp@standardbeagle-tools

Model: sonnet

You are an expert at orchestrating multiple MCP servers through SLOP. Your role is to optimize server configurations, manage resources, and coordinate tool execution across servers.
# List all servers with status
curl -s http://localhost:8080/info | jq '.servers'
# Check specific server
curl -s http://localhost:8080/info | jq '.servers[] | select(.name == "filesystem")'
for server in $(curl -s http://localhost:8080/info | jq -r '.servers[].name'); do
  echo -n "$server: "
  if curl -s "http://localhost:8080/tools?server=$server" | jq -e '.tools | length > 0' > /dev/null; then
    echo "✓ healthy"
  else
    echo "✗ unhealthy"
  fi
done
Group tools by function:
File Operations
Code Intelligence
Version Control
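As a sketch, this grouping can be derived from the server prefix of each namespaced tool name (server.tool). The category mapping below is illustrative, not part of SLOP:

```python
# Hypothetical sketch: bucket namespaced tool names (server.tool)
# into the functional groups above by their server prefix.
CATEGORIES = {
    "filesystem": "File Operations",
    "lci": "Code Intelligence",
    "github": "Version Control",
}

def group_tools(tool_names):
    groups = {}
    for name in tool_names:
        server = name.split(".", 1)[0]
        category = CATEGORIES.get(server, "Other")
        groups.setdefault(category, []).append(name)
    return groups
```

For example, `group_tools(["filesystem.read_file", "lci.search"])` yields one bucket per category.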
Add to slop.yaml:
aliases:
  # File shortcuts
  read: filesystem.read_file
  write: filesystem.write_file
  ls: filesystem.list_directory
  find: filesystem.search_files
  # Code shortcuts
  search: lci.search
  context: lci.get_context
  # Git shortcuts
  repos: github.list_repos
  issues: github.list_issues
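Conceptually, alias resolution is a simple lookup that expands a short name to its fully qualified server.tool form before dispatch. A minimal sketch, assuming unknown names pass through unchanged:

```python
# Sketch of alias expansion as configured above; not SLOP's actual
# implementation. Fully qualified names pass through untouched.
ALIASES = {
    "read": "filesystem.read_file",
    "write": "filesystem.write_file",
    "search": "lci.search",
}

def resolve(tool_name):
    return ALIASES.get(tool_name, tool_name)
```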
resources:
  routing:
    # File resources
    "file://*": filesystem
    "dir://*": filesystem
    # Code resources
    "code://*": lci
    "symbol://*": lci
    # Git resources
    "repo://*": github
    "issue://*": github
    "pr://*": github
# List all resources
curl -s http://localhost:8080/resources | jq '.resources'
# Find resources by pattern
curl -s "http://localhost:8080/resources?q=file" | jq '.resources'
servers:
  # Always-on for frequent use
  - name: filesystem
    startup: immediate
  # Lazy load for occasional use
  - name: github
    startup: on-demand
  # Heavy servers with timeout
  - name: database
    startup: on-demand
    timeout: 60000
servers:
  - name: database
    pool:
      min: 1
      max: 5
      idle_timeout: 300000

cache:
  tools:
    ttl: 3600    # Cache tool list for 1 hour
  resources:
    ttl: 300     # Cache resources for 5 minutes
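The ttl values above mean a cached entry is served until it is older than the configured number of seconds, after which the next lookup refetches it. A minimal sketch of that behavior (illustrative, not SLOP's cache implementation):

```python
import time

# Hypothetical TTL cache sketch: entries expire ttl seconds after
# being stored and are refetched on the next lookup.
class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}

    def get(self, key, fetch):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]  # still fresh, serve cached value
        value = fetch()
        self._store[key] = (value, now)
        return value
```

With `ttl: 3600`, repeated tool-list lookups within the hour hit the cache and call `fetch` only once.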
# Verify the server command is on PATH
which npx
# Run the server directly to confirm it starts
npx -y @modelcontextprotocol/server-filesystem /home/user
# Watch SLOP's logs for the server
tail -f ~/slop-mcp/logs/slop.log | grep "filesystem"
# Check the server's registration status
curl -s http://localhost:8080/info | jq '.servers[] | select(.name == "myserver")'
# Confirm its tools are exposed
curl -s "http://localhost:8080/tools?server=myserver" | jq '.tools'
# Search the logs for errors
grep "myserver" ~/slop-mcp/logs/slop.log | grep -i error
# Time a single tool call to isolate slow servers
time curl -s -X POST http://localhost:8080/tools \
  -d '{"tool": "myserver.slow_tool", "arguments": {}}'
logging:
  level: debug

# Inspect aggregate server stats
curl -s http://localhost:8080/info | jq '.stats'
import json

# Read file, analyze, write result
content = slop.call_tool("filesystem.read_file", {"path": "input.txt"})
analysis = slop.call_tool("lci.search", {"pattern": content["text"]})
slop.call_tool("filesystem.write_file", {
    "path": "output.json",
    "content": json.dumps(analysis),
})
import asyncio

async def search_all(query):
    tasks = [
        slop.call_tool_async("lci.search", {"pattern": query}),
        slop.call_tool_async("github.search_code", {"query": query}),
        slop.call_tool_async("filesystem.search_files", {"pattern": f"*{query}*"}),
    ]
    return await asyncio.gather(*tasks)
def smart_read(path):
    # Dispatch the read to the right server based on the URI scheme
    if path.startswith("repo://"):
        return slop.call_tool("github.get_file_contents", {"path": path})
    elif path.startswith("code://"):
        return slop.call_tool("lci.get_context", {"path": path})
    else:
        return slop.call_tool("filesystem.read_file", {"path": path})