Local-first LLM agent observability powered by GreptimeDB.
"Your agent runs. TMA1 remembers."
Local-first observability for AI agents, with a built-in dashboard. See what your agents cost, how long they take, and whether they're doing anything weird. All data stays on your machine. One binary, no cloud account, no Docker, no Grafana.
Named after TMA-1 (Tycho Magnetic Anomaly-1) from 2001: A Space Odyssey: the monolith buried on the moon, silently recording everything until you dig it out.

Five dashboard views, picked automatically from whatever data shows up:
| View | Tabs | Data Source |
|---|---|---|
| Claude Code | Overview, Tools, Cost, Anomalies, Sessions→ | OTel metrics + logs |
| Codex | Overview, Tools, Cost, Anomalies, Sessions→ | OTel logs + metrics |
| OpenClaw | Overview, Sessions, Traces, Cost, Security | OTel traces + metrics |
| OTel GenAI | Overview, Traces, Cost, Security, Search | OTel traces (gen_ai semantic conventions) |
| Sessions | Sessions, Search | Hooks + JSONL transcripts (Claude Code, Codex) |
Sessions→ links in Claude Code and Codex views navigate to the unified Sessions view.
Each view covers the same basics: spend, latency, and tool activity for whatever its data source reports. The OpenClaw and OTel GenAI views additionally have a Security tab (shell commands, prompt injection, webhook errors).


```sh
# macOS / Linux
curl -fsSL https://tma1.ai/install.sh | bash
```

```powershell
# Windows (PowerShell)
irm https://tma1.ai/install.ps1 | iex
```
Or build from source:
```sh
git clone https://github.com/tma1-ai/tma1.git
cd tma1
make build
```
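Either way, you should end up with a single `tma1-server` binary. A quick sketch to confirm it is reachable before wiring up any agents (this assumes the install script put it on your PATH, or that you copied the built binary there yourself):

```sh
# Verify the binary is on PATH
command -v tma1-server
```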
Ask your agent:
```
Read https://tma1.ai/SKILL.md and follow the instructions to install and configure TMA1 for your AI agent
```
```sh
# Start TMA1
tma1-server

# Configure your agent to send OTel data (protobuf required):

# Claude Code — add to ~/.claude/settings.json:
#   "env": {
#     "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:14318/v1/otlp",
#     "OTEL_EXPORTER_OTLP_PROTOCOL": "http/protobuf",
#     "OTEL_METRICS_EXPORTER": "otlp",
#     "OTEL_LOGS_EXPORTER": "otlp"
#   }

# OpenClaw (sends traces)
openclaw config set diagnostics.otel.endpoint http://localhost:14318/v1/otlp

# Codex — add to ~/.codex/config.toml:
#   [otel]
#   log_user_prompt = true
#
#   [otel.exporter.otlp-http]
#   endpoint = "http://localhost:14318/v1/logs"
#   protocol = "binary"
#
#   [otel.trace_exporter.otlp-http]
#   endpoint = "http://localhost:14318/v1/traces"
#   protocol = "binary"
#
#   [otel.metrics_exporter.otlp-http]
#   endpoint = "http://localhost:14318/v1/metrics"
#   protocol = "binary"
#
# Then restart Codex.
```
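If you'd rather script the Codex edit than paste it by hand, the same block can be appended from a shell. This is only a sketch — merge manually instead if your config.toml already contains an `[otel]` table:

```sh
# Append the exporter settings to Codex's config
# (skip this if an [otel] table already exists in the file)
cat >> ~/.codex/config.toml <<'EOF'
[otel]
log_user_prompt = true

[otel.exporter.otlp-http]
endpoint = "http://localhost:14318/v1/logs"
protocol = "binary"

[otel.trace_exporter.otlp-http]
endpoint = "http://localhost:14318/v1/traces"
protocol = "binary"

[otel.metrics_exporter.otlp-http]
endpoint = "http://localhost:14318/v1/metrics"
protocol = "binary"
EOF
```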
```sh
# Any OTel SDK
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:14318/v1/otlp \
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \
your-agent

# Open the dashboard
open http://localhost:14318
```
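Once the server is up you can also sanity-check it from a terminal; the `/health` and `/status` paths are described in the HTTP API table below:

```sh
# Liveness check and backend reachability
curl -s http://localhost:14318/health
curl -s http://localhost:14318/status
```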
```
Agent (Claude Code / Codex / OpenClaw / any GenAI app)
        │  OTLP/HTTP
        ▼
tma1-server (port 14318)
        │  receives + stores OTel data
        │  derives per-minute aggregations
        │  serves dashboard UI
        ▼
Browser dashboard (embedded in the binary)
```
One process, one binary. First start creates ~/.tma1/ and you're good to go. Nothing leaves your machine.
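Everything TMA1 stores lives under that one directory, so inspecting or backing it up is just ordinary file handling. A minimal sketch, assuming the default `~/.tma1/` location:

```sh
# Look at what the server has written locally
ls -la ~/.tma1

# Back it up (or move it to another machine) with plain file tools
tar czf tma1-backup.tar.gz -C ~ .tma1
```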
Agents send OTLP data to tma1-server:
```
http://localhost:14318/v1/otlp      # Wildcard OTLP (recommended)
http://localhost:14318/v1/traces    # Direct signal: traces
http://localhost:14318/v1/metrics   # Direct signal: metrics
http://localhost:14318/v1/logs      # Direct signal: logs
```
Codex requires separate per-signal endpoints; other agents can use the single /v1/otlp base.
tma1-server also exposes a small HTTP API:

| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Liveness check |
| `/status` | GET | Backend reachability |
| `/api/query` | POST | SQL proxy (`{"sql": "SELECT ..."}`) |
| `/api/prom/*` | GET/POST | Prometheus API proxy (PromQL) |
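Both proxies can be exercised with curl. The example below is only a sketch: `SHOW TABLES` is standard GreptimeDB syntax, but the tables you see depend on what your agents have sent, and the `/api/prom/api/v1/query` path assumes the proxy exposes the usual Prometheus HTTP API underneath `/api/prom/`.

```sh
# SQL proxy: list the tables GreptimeDB has created from incoming OTel data
curl -s -X POST http://localhost:14318/api/query \
  -H 'Content-Type: application/json' \
  -d '{"sql": "SHOW TABLES"}'

# Prometheus proxy: an instant PromQL query
# (path assumed to mirror the standard Prometheus HTTP API)
curl -s 'http://localhost:14318/api/prom/api/v1/query?query=up'
```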