From cockroachdb
Guide for using the CockroachDB replicator to continuously replicate changes from PostgreSQL, MySQL, or Oracle to CockroachDB after an initial molt fetch data load. Use when setting up CDC replication, configuring pglogical/mylogical/oraclelogminer, or managing the fetch → replicator cutover workflow.
```shell
npx claudepluginhub cockroachdb/claude-plugin --plugin cockroachdb
```

This skill uses the workspace's default tool permissions.
Continuous change-data-capture (CDC) replication from source databases to CockroachDB. Run **after** `molt fetch` completes the initial bulk load.
**Important:** `replicator` is a separate binary from `molt`. It is not invoked by `molt fetch`. The `data-load-and-replication` mode in `molt fetch` is deprecated; use `replicator` directly instead.
```
Source DB ──► [replicator] ──► Staging DB (_replicator schema) ──► Target CockroachDB
                   ▲
                   │
             Publication/
             Slot/BinLog/
             LogMiner
```
Replicator reads changes from the source, buffers them in a staging schema on the target CRDB cluster, and applies them to the target tables.
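The buffer-and-apply behavior can be sketched conceptually. This is not replicator's actual code; it is a minimal Python model of the stage-then-flush pattern that the `--flushSize` and `--flushPeriod` flags control, with the staging store reduced to an in-memory list:

```python
import time

class StagingApplier:
    """Toy model of stage-then-apply: mutations accumulate in a staging
    buffer and are flushed to the target in batches."""

    def __init__(self, flush_size=1000, flush_period=1.0, apply_batch=None):
        self.flush_size = flush_size        # rows per batch (--flushSize)
        self.flush_period = flush_period    # seconds between flushes (--flushPeriod)
        self.apply_batch = apply_batch or (lambda batch: None)
        self.staged = []                    # stands in for the _replicator staging schema
        self.last_flush = time.monotonic()
        self.flushes = 0

    def stage(self, mutation):
        self.staged.append(mutation)
        if (len(self.staged) >= self.flush_size
                or time.monotonic() - self.last_flush >= self.flush_period):
            self.flush()

    def flush(self):
        if self.staged:
            self.apply_batch(self.staged)   # apply batch to target tables
            self.staged = []
            self.flushes += 1
        self.last_flush = time.monotonic()

# 2500 source mutations with a batch size of 1000 -> 3 batches reach the target
applied = []
rep = StagingApplier(flush_size=1000, flush_period=60.0, apply_batch=applied.extend)
for i in range(2500):
    rep.stage({"id": i})
rep.flush()  # final drain of the remaining 500 rows
```

The point of the sketch: larger `--flushSize` values trade per-row latency for fewer, bigger target transactions, and `--flushPeriod` bounds how stale the buffer can get under low write volume.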
| Source | Command |
|---|---|
| PostgreSQL | `replicator pglogical` |
| MySQL | `replicator mylogical` |
| Oracle | `replicator oraclelogminer` |
| Kafka | `replicator kafka` |
| Cloud storage | `replicator objstore` |
| CockroachDB CDC | `replicator start` |
```shell
molt fetch \
  --source "postgresql://user:pass@source:5432/db" \
  --target "postgresql://root@crdb:26257/db" \
  --bucket-path "s3://mybucket/migration" \
  --table-handling drop-on-target-and-recreate
```
```sql
-- Run on source PostgreSQL:
CREATE PUBLICATION molt_fetch FOR ALL TABLES;
-- (molt fetch may have already created this; check first)

-- Run on target CockroachDB:
CREATE DATABASE _replicator;
```
```shell
replicator preflight \
  --sourceConn "postgresql://user:pass@source:5432/db" \
  --targetConn "postgresql://root@crdb:26257/db"
```
```shell
replicator pglogical \
  --publicationName "molt_fetch" \
  --sourceConn "postgresql://user:pass@source:5432/db" \
  --stagingConn "postgresql://root@crdb:26257/_replicator" \
  --stagingSchema "_replicator.public" \
  --targetConn "postgresql://root@crdb:26257/db" \
  --targetSchema "public" \
  --metricsAddr "0.0.0.0:8080"
```
```shell
curl http://localhost:8080/metrics | grep replicator_
# Watch for: mutations applied, unapplied mutations, lag
```
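Beyond eyeballing `curl` output, the scrape can be parsed programmatically. A minimal sketch of a Prometheus text-format parser — the metric names in the sample are placeholders, not replicator's real metric names; substitute whatever your own `grep replicator_` run shows:

```python
def parse_prom(text):
    """Parse Prometheus text exposition format into {metric_name: value},
    ignoring labels and keeping the last sample seen for each name."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue                          # skip blanks and HELP/TYPE comments
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]     # strip any {label="..."} suffix
        try:
            metrics[name] = float(value)
        except ValueError:
            pass
    return metrics

# Placeholder scrape output; real names come from your own /metrics endpoint.
sample = """\
# HELP replicator_example_applied_total placeholder counter
replicator_example_applied_total 12345
replicator_example_unapplied{schema="public"} 17
"""
m = parse_prom(sample)
```

A loop over `parse_prom` output makes it easy to alert when an unapplied-mutations gauge stays above a threshold across several scrapes.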
Source prerequisites:

- A user with the `REPLICATION` privilege
- Logical decoding enabled (`wal_level = logical`)
- A publication and replication slot (created by `molt fetch` or manually)

```shell
replicator pglogical \
  --publicationName "molt_fetch" \
  --slotName "replicator" \
  --sourceConn "postgresql://..." \
  --stagingConn "postgresql://root@crdb:26257/_replicator" \
  --stagingSchema "_replicator.public" \
  --targetConn "postgresql://root@crdb:26257/db" \
  --targetSchema "public"
```
Source prerequisites:

- Row-based binary logging (`binlog_format = ROW`)
- GTIDs enabled (`gtid_mode = ON`, `enforce_gtid_consistency = ON`)
- A user with the `REPLICATION CLIENT` privilege

```shell
replicator mylogical \
  --sourceConn "mysql://root:pass@source:3306/db" \
  --stagingConn "postgresql://root@crdb:26257/_replicator" \
  --stagingSchema "_replicator.public" \
  --targetConn "postgresql://root@crdb:26257/db" \
  --targetSchema "public"
```
Source prerequisites:
```shell
replicator oraclelogminer \
  --sourceConn "oracle://app_user:pass@oracle:1521/db" \
  --stagingConn "postgresql://root@crdb:26257/_replicator" \
  --stagingSchema "_replicator.public" \
  --targetConn "postgresql://root@crdb:26257/db" \
  --targetSchema "public"
```
```shell
# Performance
--parallelism 16          # concurrent DB transactions (default: 16)
--flushSize 1000          # rows per batch (default: 1000)
--flushPeriod 1s          # flush interval (default: 1s)

# Staging connection pool
--stagingMaxPoolSize 128
--stagingIdleTime 1m
--stagingMaxLifetime 5m

# Target connection pool
--targetMaxPoolSize 128
--targetStatementCacheSize 128

# Retry
--maxRetries 10
--retryInitialBackoff 25ms
--retryMaxBackoff 2s

# Monitoring
--metricsAddr "0.0.0.0:8080"  # Prometheus metrics endpoint
--schemaRefresh 1m            # refresh schema cache (0 = disabled)

# Dead letter queue (record failed rows instead of stopping)
--dlqTableName "replicator_dlq"

# Logging
-v                            # debug
-vv                           # trace
--logFormat fluent            # for log aggregators
--logDestination "/var/log/replicator.log"
```
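The retry flags above imply a backoff that grows from the initial wait up to a cap. A sketch of the resulting wait schedule — the doubling curve is an assumption for illustration; replicator's exact strategy is not specified here:

```python
def backoff_schedule(initial=0.025, maximum=2.0, retries=10):
    """Exponential backoff capped at a maximum, i.e. the shape implied by
    --retryInitialBackoff 25ms / --retryMaxBackoff 2s / --maxRetries 10.
    (Doubling between attempts is an assumption, not replicator's documented curve.)"""
    return [min(initial * 2 ** n, maximum) for n in range(retries)]

waits = backoff_schedule()
# waits grows 25ms, 50ms, 100ms, ... and plateaus at the 2s cap
```

With these defaults a failing batch is retried for only a handful of seconds in total, so persistent errors surface quickly rather than stalling silently.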
Notes:

- The staging schema (`_replicator.public`) is auto-created by replicator, but the database (`_replicator`) must exist first
- `--publicationName` and `--slotName` must match what `molt fetch` created (defaults: `molt_fetch` / `molt_slot`)
- Allow graceful shutdown within `--gracePeriod` (default: 30s); don't SIGKILL without it

See the flags reference for the full flag list.
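The grace-period note can be made concrete. A minimal supervision sketch, assuming a POSIX system and using `sleep` as a stand-in child process (not replicator itself): send SIGTERM, wait out the grace period so the process can drain, and only then SIGKILL.

```python
import signal
import subprocess

def stop_gracefully(proc, grace_period=30.0):
    """SIGTERM first, give the process its grace period to drain,
    and only SIGKILL if it still has not exited."""
    proc.terminate()                      # SIGTERM: process starts draining
    try:
        proc.wait(timeout=grace_period)   # corresponds to --gracePeriod
    except subprocess.TimeoutExpired:
        proc.kill()                       # SIGKILL as a last resort only
        proc.wait()
    return proc.returncode

# Demo against a stand-in long-running process:
p = subprocess.Popen(["sleep", "60"])
rc = stop_gracefully(p, grace_period=5.0)
```

Process supervisors (systemd's `TimeoutStopSec`, Kubernetes' `terminationGracePeriodSeconds`) implement the same pattern; set their timeout at least as high as `--gracePeriod`.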