# lab
lab is a Rust workspace for running a homelab control plane from one codebase: a reusable SDK in lab-apis, plus a lab binary that exposes the same services through a CLI, an MCP server, an HTTP API, and a TUI plugin manager.
One binary. 22 services (21 feature-gated + always-on extract). Three runtime surfaces (CLI, MCP, HTTP API) that all dispatch through the same shared action catalog. 571 callable actions across all services. One MCP tool per service, not hundreds.
## Current state
In the current --all-features build, the catalog registers 22 services across 10 categories with 571 total callable actions:
| Category | Services |
|---|---|
| Servarr | Radarr, Sonarr |
| Indexer | Prowlarr |
| Media | Plex, Tautulli, Overseerr |
| Download | SABnzbd, qBittorrent |
| Notes / Bookmarks | Linkding, Memos, ByteStash |
| Documents | Paperless-ngx |
| Network / Infrastructure | Tailscale, UniFi, Unraid, Arcane |
| Notifications | Gotify, Apprise |
| AI / Inference | OpenAI, Qdrant, TEI |
| Bootstrap | Extract (always-on) |
## Workspace layout
| Path | Role |
|---|---|
| crates/lab-apis | Pure Rust SDK: typed clients, request/response models, auth, shared HTTP behavior, health contracts, plugin metadata |
| crates/lab | Product binary: CLI, MCP server, HTTP API, TUI, config loading, output rendering, discovery catalog |
| docs/ | Topic-based source-of-truth documentation |
The architectural rule: shared service logic belongs in lab-apis; the lab crate adapts that logic for user-facing surfaces.
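To make the rule concrete, here is a deliberately simplified Rust sketch. All names are hypothetical, not the actual lab-apis types: the SDK crate exposes a typed, surface-agnostic result, and the product crate adapts it for one user-facing surface.

```rust
// Hypothetical sketch of the split, not the real lab-apis API:
// lab-apis owns surface-agnostic service logic; lab adapts it per surface.
mod lab_apis {
    // Shared, typed result that any surface (CLI, MCP, HTTP) can consume.
    pub struct HealthReport {
        pub service: &'static str,
        pub reachable: bool,
    }

    pub fn check_health(service: &'static str, reachable: bool) -> HealthReport {
        HealthReport { service, reachable }
    }
}

// lab side: adapt the shared result for one surface (plain CLI text here).
fn render_cli(report: &lab_apis::HealthReport) -> String {
    let status = if report.reachable { "ok" } else { "unreachable" };
    format!("{}: {}", report.service, status)
}

fn main() {
    let report = lab_apis::check_health("radarr", true);
    assert_eq!(render_cli(&report), "radarr: ok");
}
```

The point of the split is that an MCP or HTTP adapter would consume the same `HealthReport` and only swap the rendering step.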
## Quick start
### 1. Build from source
The workspace requires Rust 1.90+ and uses edition 2024.
```sh
git clone git@github.com:jmagar/lab.git
cd lab
cargo build --workspace --all-features
```
To install the binary locally:
```sh
cargo install --path crates/lab --all-features
```
### 2. Add secrets and preferences
lab splits secrets from preferences:
- Secrets: ~/.lab/.env
- Preferences: config.toml (searched ./ → ~/.lab/ → ~/.config/lab/)
Config loading order (highest priority wins):
- Process environment variables
- ~/.lab/.env (via dotenvy)
- config.toml (first found: ./config.toml → ~/.lab/config.toml → ~/.config/lab/config.toml)
- .env in the current working directory (non-fatal if missing)
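The precedence above amounts to "first set layer wins." A minimal Rust sketch of that resolution, with illustrative values rather than lab's actual config code:

```rust
// Hedged sketch of "highest priority wins" layering; names and values
// are illustrative, not the actual lab implementation.
fn resolve<'a>(layers: &[Option<&'a str>]) -> Option<&'a str> {
    // Layers are ordered highest-priority first; take the first one that is set.
    layers.iter().flatten().next().copied()
}

fn main() {
    let process_env = Some("json");        // process environment variable
    let lab_dotenv: Option<&str> = None;   // ~/.lab/.env
    let config_toml = Some("table");       // first config.toml found
    let cwd_dotenv: Option<&str> = None;   // ./.env (non-fatal if missing)

    let value = resolve(&[process_env, lab_dotenv, config_toml, cwd_dotenv]);
    assert_eq!(value, Some("json")); // the process env var beats config.toml
}
```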
Example ~/.lab/.env:
```
RADARR_URL=http://localhost:7878
RADARR_API_KEY=abc123
SONARR_URL=http://localhost:8989
SONARR_API_KEY=abc123
SABNZBD_URL=http://localhost:8080
SABNZBD_API_KEY=abc123
UNRAID_URL=https://tower.local/graphql
UNRAID_API_KEY=abc123
UNIFI_URL=https://unifi.local
UNIFI_API_KEY=abc123
PLEX_URL=http://localhost:32400
PLEX_TOKEN=abc123
GOTIFY_URL=http://localhost:80
GOTIFY_TOKEN=abc123
```
Multi-instance services follow the same pattern with a label:
```
UNRAID_URL=https://tower.local/graphql
UNRAID_API_KEY=abc123
UNRAID_NODE2_URL=https://tower-2.local/graphql
UNRAID_NODE2_API_KEY=abc123
```
MCP callers select instances via params.instance; CLI via --instance.
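Conceptually, the label is spliced between the service prefix and the field suffix when building the variable name. A hedged Rust sketch of that naming scheme (the real lookup inside lab may differ):

```rust
// Illustrative mapping from (service, instance, field) to an env var name;
// this mirrors the documented pattern, not lab's actual resolver.
fn env_var_name(service: &str, instance: Option<&str>, field: &str) -> String {
    match instance {
        // Default instance: SERVICE_FIELD, e.g. UNRAID_URL
        None => format!("{}_{}", service.to_uppercase(), field),
        // Labeled instance: SERVICE_LABEL_FIELD, e.g. UNRAID_NODE2_URL
        Some(label) => format!(
            "{}_{}_{}",
            service.to_uppercase(),
            label.to_uppercase(),
            field
        ),
    }
}

fn main() {
    assert_eq!(env_var_name("unraid", None, "URL"), "UNRAID_URL");
    assert_eq!(
        env_var_name("unraid", Some("node2"), "API_KEY"),
        "UNRAID_NODE2_API_KEY"
    );
}
```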
Example ~/.lab/config.toml (or ./config.toml for per-repo overrides):
```toml
[output]
format = "json"

[log]
filter = "lab=debug,lab_apis=info"

[mcp]
transport = "http"
host = "0.0.0.0"
port = 9000

[api]
cors_origins = ["https://lab.example.com"]
```
See config.example.toml for all available settings with defaults.
### 3. Inspect the catalog
```sh
lab help
lab help --json
```
### 4. Start the MCP server
```sh
lab serve
LAB_MCP_HTTP_TOKEN=... lab serve
LAB_AUTH_MODE=oauth LAB_PUBLIC_URL=https://lab.example.com LAB_GOOGLE_CLIENT_ID=... LAB_GOOGLE_CLIENT_SECRET=... lab serve
lab serve --host 127.0.0.1 --port 8765
lab serve mcp --stdio
lab serve --services radarr,sonarr,plex
```
lab serve is the hosted runtime path. It always starts the HTTP server for:
- the product API
- the Labby web UI (when exported assets exist)
- OAuth metadata and token endpoints
- the hosted HTTP MCP surface at /mcp
When the exported Labby bundle exists at apps/gateway-admin/out, lab serve also serves the web UI from the same origin. In that mode:
- Labby UI is available at http://127.0.0.1:8765/
- product API stays at http://127.0.0.1:8765/v1/...
- MCP over HTTP stays at http://127.0.0.1:8765/mcp
The separate Next.js dev server on port 3000 is now used only for frontend development.
### 5. Use the operator commands
```sh
lab doctor   # Comprehensive health audit for all configured services
lab health   # Quick reachability check
lab plugins  # Launch the TUI plugin manager
lab oauth relay-local --machine dookie --port 38935
```
When a browser machine needs to catch a localhost OAuth redirect and forward it to a remote MCP
client, lab oauth relay-local can proxy the callback to a named target or an explicit Tailscale
URL without reimplementing the OAuth flow.
Minimal named-machine setup: