From posthog
Debugs local session replay pipeline failures, such as missing recordings despite /s calls or posthog-cli issues. Troubleshoots SDK capture, the Caddy proxy, the Rust capture-replay service, Kafka, Node services, and SeaweedFS.
npx claudepluginhub anthropics/claude-plugins-official --plugin posthog

This skill uses the workspace's default tool permissions.
When a developer says "local replay isn't working" or "recordings aren't showing up", work through these layers in order. The local replay pipeline has several moving parts and failures are usually silent.
| Symptom | Likely cause |
|---|---|
| No /s calls in Network tab | SDK not recording — triggers, settings, or recorder script issue (Step 1) |
| /s calls return 200 but no recordings in list | Ingestion pipeline broken — capture-replay, Kafka, or ingestion-sessionreplay (Steps 2-3) |
| Recordings listed but playback stuck on "Buffering..." | recording-api (port 6741) not running (Step 2) |
| Recorder script MIME type or CORS error in console | Frontend build stale — need pnpm build + pnpm copy-scripts (Step 1) |
Browser SDK → /s endpoint (Caddy proxy :8000)
→ capture-replay (Rust, :3306)
→ Kafka (session_recording_snapshot_item_events topic)
→ ingestion-sessionreplay (Node, :6740, PLUGIN_SERVER_MODE=recordings-blob-ingestion-v2)
→ SeaweedFS (blob storage, :8333)
→ recording-api (Node, :6741, PLUGIN_SERVER_MODE=recording-api)
→ Frontend
A break at any point in this chain means no recordings in the UI. The diagnostic approach is to find where the chain breaks.
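Before working through the steps, the whole chain can be walked with a quick port probe. A minimal sketch, assuming the default local ports from the diagram above (9092 for Kafka is the usual local listener):

```shell
#!/usr/bin/env bash
# Probe each hop of the local replay pipeline in order. The first DOWN
# line is where the chain likely breaks. Uses bash's /dev/tcp, so a
# closed port fails fast on localhost.
probe_pipeline() {
  for entry in \
    "caddy-proxy:8000" \
    "capture-replay:3306" \
    "kafka:9092" \
    "ingestion-sessionreplay:6740" \
    "seaweedfs:8333" \
    "recording-api:6741"
  do
    name=${entry%:*}
    port=${entry#*:}
    if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
      echo "OK   $name (:$port)"
    else
      echo "DOWN $name (:$port)"
    fi
  done
}
probe_pipeline
```

This is a coarse check — a listening port can still belong to a stuck process (see Step 2), but a DOWN line tells you where to start.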
Ask the developer to open browser DevTools Network tab and filter for /s.
If no /s calls at all:
The SDK isn't attempting to send recording data. Investigate client-side causes:
- If $replay_sample_rate is < 1.0, sessions may be sampled out.
- session_recording must not be explicitly disabled in the SDK config.
- The SDK must point at http://localhost:8000 (or wherever the local Caddy proxy is running) so recording data reaches the local /s endpoint.
- Look for a console error such as MIME type ('text/html') is not executable for posthog-recorder.js, or a CORS error for lazy-recorder.js. This means Django is serving an HTML page (usually the login redirect) instead of the JS file: the static recorder scripts are stale or missing.
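The stale-script failure can also be confirmed outside the browser. A sketch using a placeholder URL — substitute the exact posthog-recorder.js URL shown in the Network tab:

```shell
# Fetch the recorder script and report the Content-Type the server sends:
# a JavaScript MIME type is healthy; text/html means Django is serving a
# page (the stale/missing static scripts failure described above).
check_recorder() {
  ctype=$(curl -s -o /dev/null -w '%{content_type}' "$1")
  case "$ctype" in
    *javascript*) echo "OK: $ctype" ;;
    *)            echo "BAD: got '${ctype:-no response}'" ;;
  esac
}
# Placeholder URL: paste the real posthog-recorder.js URL from DevTools.
check_recorder 'http://localhost:8000/static/posthog-recorder.js'
```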
See recorder script build failure.

If /s calls are happening with 200 responses:
The SDK is recording and capture is receiving data. The break is downstream — proceed to Step 2.
If /s calls are returning errors (4xx/5xx):
The capture service may be down or misconfigured. Check capture-replay in phrocs.
Check that these phrocs processes are running and healthy.
A "running" process that never produced output after tsx watch src/index.ts is effectively dead.
| Process | Port | What it does |
|---|---|---|
| capture-replay | 3306 | Rust service receiving /s, writes to Kafka |
| ingestion-sessionreplay | 6740 | Node consumer processing recordings from Kafka |
| recording-api | 6741 | Node service serving replay data to the frontend |
Verify with:
lsof -nP -i :3306 -i :6740 -i :6741
If ports are not listening: The processes haven't started or are stuck. See common failures.
If ports are listening: The pipeline processes are running. Proceed to Step 3.
These Docker containers must be running and healthy:
| Container | Purpose |
|---|---|
| posthog-kafka-1 | Message bus for recording events |
| posthog-db-1 | Postgres for metadata |
| posthog-redis7-1 | Redis for state |
| posthog-clickhouse-1 | ClickHouse for session data |
| seaweedfs-main | Blob storage for recording data |
Check with:
docker ps --format "table {{.Names}}\t{{.Status}}" | grep -E "kafka|db|redis7|clickhouse|seaweed"
All should show (healthy) except seaweedfs-main, which has no health check.
If seaweedfs-main is missing, the replay Docker profile may not be active —
check the docker-compose phrocs process output for --profile replay.
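To reduce that docker ps output to just the problems, a small filter (a sketch; the name patterns come from the container table above):

```shell
# Flag expected replay containers that are missing or not (healthy).
# seaweedfs-main has no health check, so it is only required to be up.
check_containers() {
  awk '
    /kafka|db-1|redis7|clickhouse/ && !/\(healthy\)/ { print "UNHEALTHY:", $1 }
    /seaweed/ { seen = 1 }
    END { if (!seen) print "MISSING: seaweedfs-main (is --profile replay active?)" }
  '
}
if command -v docker >/dev/null 2>&1; then
  docker ps --format '{{.Names}} {{.Status}}' | check_containers
fi
```

No output means the expected containers are present and healthy.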
If capture-replay is running and receiving /s calls, data should land on the
session_recording_snapshot_item_events Kafka topic. Check the Kafka UI at
http://localhost:8080 (if the debug_tools intent is enabled) or use kcat:
kcat -b localhost:9092 -t session_recording_snapshot_item_events -C -c 5 -e
If the topic is empty or doesn't exist: capture-replay isn't writing to Kafka. Check its phrocs logs for Kafka connection errors.
If data is on the topic but recordings don't appear: ingestion-sessionreplay isn't consuming. Check if it's stuck, crashed, or if an orphaned process is holding the consumer group (see common failures).
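Since the Node services are launched via tsx watch (see the note in Step 2), listing those processes can surface an orphan. A sketch:

```shell
# The Node services run via `tsx watch src/index.ts`; listing them shows
# whether an orphan from a previous run is still alive and potentially
# holding the Kafka consumer group.
list_watchers() {
  pgrep -af 'tsx watch' || echo "no tsx watch processes found"
}
list_watchers
```

More matches than running phrocs processes suggests an orphan worth killing before restarting ingestion.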
Ingestion writes recording blobs to SeaweedFS. Verify it's accessible:
curl -s http://localhost:8333/ | head -5
The SESSION_RECORDING_V2_S3_ENDPOINT env var must be set correctly.
In bin/start, this defaults to http://seaweedfs:8333 (the Docker hostname).
Host processes resolve this via Docker networking.
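To see which host the ingestion process will actually try to reach, a sketch that mirrors the bin/start default above:

```shell
# Extract the hostname from SESSION_RECORDING_V2_S3_ENDPOINT, falling
# back to the bin/start default, then check whether it resolves from
# the current shell.
s3_host() {
  endpoint="${SESSION_RECORDING_V2_S3_ENDPOINT:-http://seaweedfs:8333}"
  printf '%s\n' "$endpoint" | sed -E 's|^[a-z]+://([^:/]+).*|\1|'
}

host=$(s3_host)
if getent hosts "$host" >/dev/null 2>&1; then
  echo "resolves: $host"
else
  echo "does not resolve from this shell: $host"
fi
```

If the hostname does not resolve where ingestion runs, recording blobs never reach SeaweedFS even though everything upstream looks healthy.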
See common failures for detailed diagnosis of:
- bin/wait-for-docker