Set up bidirectional sync between local SQLite databases and a PostgreSQL backend using PowerSync, Drizzle ORM, and BetterAuth JWT authentication. Use when the user asks to "set up PowerSync", "add local-first sync", "implement offline-first app sync", "sync SQLite with PostgreSQL", "set up PowerSync with Drizzle", or wants cloud sync for Electron or Expo React Native apps.
Implement bidirectional sync between local SQLite databases (on Electron desktop and Expo mobile apps) and a PostgreSQL backend using PowerSync. This recipe captures the integration glue between PowerSync SDK, Drizzle ORM, BetterAuth JWT authentication, and the PowerSync Service -- the parts that require careful coordination across multiple platforms and are not obvious from any single library's documentation.
The core value: users can work fully offline with immediate read/write performance against a local SQLite database, and their data syncs automatically to PostgreSQL (and to their other devices) when a network connection is available.
| Layer | Technology | Version |
|---|---|---|
| Sync Service | PowerSync (self-hosted or cloud) | 1.3.8+ |
| Source Database | PostgreSQL | 15+ |
| Local Database | SQLite (via PowerSync SDK) | - |
| ORM | Drizzle ORM (sqlite + pg dialects) | 0.45+ |
| Drizzle Driver | @powersync/drizzle-driver | 1.x |
| Auth | BetterAuth + JWT plugin | 1.4+ |
| Desktop SDK | @powersync/node | 1.x |
| Mobile SDK | @powersync/react-native | 1.x |
| Desktop Runtime | Electron (better-sqlite3 backing) | - |
| Mobile Runtime | Expo / React Native (op-sqlite) | - |
PowerSync acts as a transparent sync layer between PostgreSQL (the source of truth) and local SQLite databases on each client device. Clients read and write exclusively against their local SQLite -- PowerSync handles replication in both directions.
Client Devices                          Cloud Infrastructure
==================                      ====================

Desktop (Electron)                      PowerSync Service
  SQLite (local)   ────WebSocket────      Sync Protocol Router
  PowerSync SDK                           Bucket Storage (PG schema)
  Drizzle ORM                                     |
                                          Logical Replication
Mobile (Expo)                                     |
  SQLite (local)   ────WebSocket────    PostgreSQL (source of truth)
  PowerSync SDK                                   |
  Drizzle ORM                           API Backend (Elysia)
                                          BetterAuth + JWT
                                          JWKS endpoint
                                          Upload handler
PostgreSQL is the source of truth. PowerSync bucket storage is derived from PostgreSQL via logical replication and is fully rebuildable. If bucket storage is lost, PowerSync re-replicates from PostgreSQL. Clients re-download via full resync. Prioritize PostgreSQL backups; bucket storage backups are optional.
Two Drizzle schemas, one conceptual model. Clients use
drizzle-orm/sqlite-core and the API uses drizzle-orm/pg-core. The schemas
must be manually kept in alignment. Different dialects prevent a single shared
schema definition. Validate alignment with a CI script.
No foreign key constraints in SQLite. PowerSync does not support FK constraints. Referential integrity is enforced in application code (service layer) instead. This is a deliberate trade-off for sync compatibility.
Denormalized ownerId on every synced table. PowerSync sync rules filter
rows by owner_id. Rather than joining through a parent table, every synced
table carries its own ownerId column. This enables simple, efficient sync rule
definitions.
UUID primary keys everywhere. PowerSync requires UUID-style string primary keys. No auto-incrementing integers. All IDs are generated client-side (nanoid or UUID v4).
Last-Write-Wins conflict resolution. PowerSync uses LWW for concurrent edits. This is acceptable for single-user multi-device scenarios. For collaborative editing, you would need an additional CRDT or OT layer.
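The effective row-level semantics of last-write-wins can be illustrated with a small sketch (this is not PowerSync's internal implementation, just the observable behavior, assuming ISO 8601 `updatedAt` strings, which compare correctly as plain strings):

```typescript
// Illustration of LWW semantics: given two concurrent versions of the
// same row, the one with the later updatedAt wins wholesale -- there is
// no field-level merging.
interface VersionedRow {
  id: string;
  content: string;
  updatedAt: string; // ISO 8601
}

function lastWriteWins(a: VersionedRow, b: VersionedRow): VersionedRow {
  return a.updatedAt >= b.updatedAt ? a : b;
}
```

Note that the losing write is discarded entirely, which is why this model is only acceptable when concurrent edits to the same row are rare (single user, multiple devices).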
Every synced table must follow these rules:

- A single `id` column as primary key: `text('id').primaryKey()` -- UUID v4, generated client-side
- `ownerId: text('owner_id')` -- denormalized for sync rule filtering
- Timestamps as ISO 8601 text (`text('created_at')`)
- Booleans as `integer({ mode: 'boolean' })`

Example synced table (SQLite schema):
// packages/database/src/schema.ts
import { sqliteTable, text, integer, index } from "drizzle-orm/sqlite-core";
export const documents = sqliteTable(
"documents",
{
id: text("id").primaryKey(), // UUID v4, client-generated
title: text("title"),
content: text("content").notNull(),
ownerId: text("owner_id"), // Denormalized for sync rules
projectId: text("project_id"), // App-enforced FK (no constraint)
deviceId: text("device_id").notNull(), // Tracks originating device
createdAt: text("created_at").notNull(),
updatedAt: text("updated_at").notNull(),
deletedAt: text("deleted_at"), // Soft delete
},
(table) => [
index("idx_documents_owner").on(table.ownerId),
index("idx_documents_deleted").on(table.deletedAt),
]
);
Equivalent PostgreSQL schema (API side):
// apps/api/src/features/core/db.ts
import { pgTable, text, timestamp } from "drizzle-orm/pg-core";
export const documents = pgTable("documents", {
id: text("id").primaryKey(),
title: text("title"),
content: text("content").notNull(),
ownerId: text("owner_id").notNull(), // NOT NULL on server side
projectId: text("project_id"),
deviceId: text("device_id").notNull(),
createdAt: timestamp("created_at").notNull(),
updatedAt: timestamp("updated_at").notNull(),
deletedAt: timestamp("deleted_at"),
});
Critical difference: ownerId is nullable in SQLite but NOT NULL in
PostgreSQL. Clients may create records before sync populates the owner. The API
upload handler sets ownerId from the JWT before inserting into PostgreSQL.
Tables that should NOT sync to the backend must be marked localOnly: true in
the PowerSync schema. Use local-only for device-specific state such as user
preferences and local metadata (the `preferences` and `localMetadata` tables
shown in section 2.1).
Synced tables are everything else -- the user's actual content data.
The shared schema package is the foundation. Both Desktop and Mobile apps import table definitions from this package.
1.1 Define the Drizzle SQLite schema
Create the shared schema in a monorepo package. Every synced table must have
id, ownerId, and timestamp columns:
// packages/database/src/schema.ts
import {
sqliteTable,
text,
integer,
index,
uniqueIndex,
} from "drizzle-orm/sqlite-core";
export const projects = sqliteTable(
"projects",
{
id: text("id").primaryKey(),
name: text("name").notNull(),
ownerId: text("owner_id"),
isDefault: integer("is_default", { mode: "boolean" })
.notNull()
.default(false),
createdAt: text("created_at").notNull(),
updatedAt: text("updated_at").notNull(),
deletedAt: text("deleted_at"),
},
(table) => [index("idx_projects_owner").on(table.ownerId)]
);
Export all tables from the package entry point:
// packages/database/src/index.ts
export * as tables from "./schema";
1.2 Configure Drizzle Kit for migrations
// packages/database/drizzle.config.ts
import type { Config } from "drizzle-kit";
export default {
schema: "./src/schema.ts",
out: "./drizzle",
dialect: "sqlite",
// Use sqlite dialect (not expo) -- generate bundle script
// handles React Native compatibility separately
} satisfies Config;
Run drizzle-kit generate to produce migration SQL files. These are used by the
API's PostgreSQL schema (adapted to pg dialect) and optionally for migration
bundles on mobile.
Validate: Import the schema package in both Desktop and Mobile apps. Verify TypeScript compilation succeeds.
Both Desktop and Mobile follow the same pattern: create a PowerSyncDatabase,
define which tables are local-only, wrap it with Drizzle, and connect via a
backend connector.
2.1 Initialize PowerSync with DrizzleAppSchema
The DrizzleAppSchema constructor takes your Drizzle table definitions. By
default, all tables are synced. Override specific tables with localOnly: true:
import { PowerSyncDatabase } from "@powersync/node"; // Desktop
// OR: import { PowerSyncDatabase } from '@powersync/react-native'; // Mobile
import {
wrapPowerSyncWithDrizzle,
DrizzleAppSchema,
} from "@powersync/drizzle-driver";
import { tables } from "@my/database";
const appSchema = new DrizzleAppSchema({
...tables,
// Override local-only tables
preferences: {
tableDefinition: tables.preferences,
options: { localOnly: true },
},
localMetadata: {
tableDefinition: tables.localMetadata,
options: { localOnly: true },
},
});
const powerSync = new PowerSyncDatabase({
schema: appSchema,
database: {
dbFilename: "myapp.db",
// Desktop only: provide a worker for background sync
// openWorker: openPowerSyncWorker,
},
});
// Wrap with Drizzle for type-safe query builder
const db = wrapPowerSyncWithDrizzle(powerSync, { schema: tables });
CRITICAL: PowerSync automatically creates tables from the schema definition. No client-side SQL migrations are needed for synced tables. PowerSync manages the SQLite schema lifecycle.
2.2 Implement the backend connector
The connector has two methods: fetchCredentials() and uploadData().
PowerSync calls these automatically.
import type {
AbstractPowerSyncDatabase,
PowerSyncBackendConnector,
PowerSyncCredentials,
} from "@powersync/node"; // or @powersync/react-native
class MyPowerSyncConnector implements PowerSyncBackendConnector {
async fetchCredentials(): Promise<PowerSyncCredentials | null> {
// 1. Check if user is authenticated
// 2. Fetch PowerSync URL from API
const configResponse = await authenticatedFetch("/api/powersync/config");
const config = await configResponse.json();
// 3. Fetch JWT token for PowerSync
const tokenResponse = await authenticatedFetch("/api/auth/token");
const { token } = await tokenResponse.json();
return {
endpoint: config.powersyncUrl,
token: token,
};
}
async uploadData(database: AbstractPowerSyncDatabase): Promise<void> {
const transaction = await database.getNextCrudTransaction();
if (!transaction) return;
// Map CRUD entries to upload format
const ops = transaction.crud.map((entry) => ({
op: entry.op,
table: entry.table,
data: entry.opData,
id: entry.id,
}));
const opId = String(
transaction.transactionId ?? transaction.crud[0]?.clientId ?? Date.now()
);
try {
const response = await authenticatedFetch("/api/powersync/upload", {
method: "PUT",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
transactions: [{ op_id: opId, ops }],
}),
});
if (!response.ok) {
// Server reached but rejected the batch: log it, but do NOT throw.
// Retrying the same payload won't succeed, and throwing here would
// cause an infinite retry loop.
const message = await response.text().catch(() => null);
console.error(`Upload rejected (${response.status}): ${message ?? ""}`);
}
} catch (error) {
// Network errors (fetch threw): DON'T complete transaction -- let
// PowerSync retry when connectivity returns
throw error;
}
// ALWAYS complete the transaction once the server has responded, even
// with an error status, to prevent infinite retry loops
await transaction.complete();
}
}
CRITICAL ordering in uploadData: Network errors must throw BEFORE
transaction.complete() so PowerSync retries when connectivity returns.
Server-side validation errors should still complete the transaction (the server
has processed the request; retrying won't help).
2.3 Connect to PowerSync
const connector = new MyPowerSyncConnector();
await powerSync.connect(connector);
// Optional: wait for initial sync before showing data
await powerSync.waitForFirstSync();
Validate: Start the app, sign in, verify that synced tables populate from the server. Create a record locally, verify it appears in PostgreSQL.
The API provides three integration points for PowerSync: JWT issuance with a JWKS endpoint (3.1), a config endpoint (3.2), and an upload endpoint (3.3).
3.1 Configure BetterAuth with JWT plugin
PowerSync requires RS256 JWTs verified via JWKS. BetterAuth's JWT plugin generates these:
import { betterAuth } from "better-auth";
import { jwt } from "better-auth/plugins";
export const auth = betterAuth({
// ... other config ...
plugins: [
jwt({
jwks: {
keyPairConfig: {
alg: "RS256", // PowerSync requires RS256
},
},
}),
],
});
This automatically exposes:
- `GET /api/auth/jwks` -- JWKS public keys (PowerSync fetches these)
- `GET /api/auth/token` -- JWT with `sub` claim = user ID

3.2 Create the PowerSync config endpoint
// GET /api/powersync/config
app.get("/config", async ({ user }) => {
return {
powersyncUrl: env.POWERSYNC_URL,
userId: user.id,
};
});
3.3 Create the upload endpoint
The upload endpoint receives CRUD operations from PowerSync clients and applies them to PostgreSQL. Key responsibilities:
- Verify ownership: each row's `ownerId` matches the authenticated user
- Inject `ownerId` from the JWT (clients may send null)
- Use `onConflictDoUpdate` for PUT operations (upsert pattern)

app.put("/upload", async ({ user, body }) => {
const userId = user.id;
for (const transaction of body.transactions) {
await db.transaction(async (tx) => {
for (const op of transaction.ops) {
// Resolve tableRef for op.table via a lookup map (omitted here), then
// sanitize the payload and inject ownerId from the JWT -- never trust
// client-provided ownerId
const sanitizedData = op.data ? { ...op.data, ownerId: userId } : undefined;
switch (op.op) {
case "PUT":
await tx
.insert(tableRef)
.values({ id: op.id, ...sanitizedData })
.onConflictDoUpdate({ target: tableRef.id, set: sanitizedData });
break;
case "PATCH":
await tx
.update(tableRef)
.set(sanitizedData)
.where(and(eq(tableRef.id, op.id), eq(tableRef.ownerId, userId)));
break;
case "DELETE":
// Soft delete for content tables
await tx
.update(tableRef)
.set({ deletedAt: new Date() })
.where(and(eq(tableRef.id, op.id), eq(tableRef.ownerId, userId)));
break;
}
}
});
}
});
CRITICAL: Always filter by ownerId = userId on PATCH and DELETE
operations. This is defense-in-depth -- sync rules also filter by owner, but the
upload endpoint must independently verify ownership.
Validate: Create a record on a client, verify it appears in PostgreSQL with
the correct ownerId. Modify the record, verify the update propagates.
The PowerSync Service runs as a Docker container and connects to PostgreSQL via logical replication.
4.1 Configure PostgreSQL for logical replication
ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET max_replication_slots = 10;
-- Limit WAL retention to prevent disk exhaustion
ALTER SYSTEM SET max_slot_wal_keep_size = '1GB';
-- Create publication for PowerSync
CREATE PUBLICATION powersync FOR ALL TABLES;
Restart PostgreSQL after changing wal_level.
4.2 Configure PowerSync service
# powersync.yaml
replication:
connections:
- type: postgresql
uri: !env PS_DATABASE_URL
sslmode: disable
# Use PostgreSQL for bucket storage (consolidated infrastructure)
storage:
type: postgresql
uri: !env PS_STORAGE_URL
sslmode: disable
# JWT verification via JWKS
client_auth:
jwks_uri: !env PS_JWKS_URI
audience:
- http://localhost:3011 # Must match BetterAuth base URL
port: 8080
sync_rules:
path: /config/sync_rules.yaml
4.3 Define sync rules
Sync rules determine which rows each user receives. Every query filters by
owner_id:
# sync_rules.yaml
bucket_definitions:
user_data:
parameters: SELECT request.user_id() as user_id
data:
- SELECT * FROM projects WHERE owner_id = bucket.user_id
- SELECT * FROM documents WHERE owner_id = bucket.user_id
- SELECT * FROM document_versions WHERE owner_id = bucket.user_id
- SELECT * FROM groups WHERE owner_id = bucket.user_id
- SELECT * FROM group_hierarchy WHERE owner_id = bucket.user_id
- SELECT * FROM document_groups WHERE owner_id = bucket.user_id
request.user_id() extracts the sub claim from the JWT. The bucket.user_id
parameter is then used to filter each table.
4.4 Docker Compose setup
services:
powersync:
image: journeyapps/powersync-service:latest
ports:
- "8080:8080"
environment:
PS_DATABASE_URL: postgresql://user:pass@postgres:5432/mydb
PS_STORAGE_URL: postgresql://user:pass@postgres:5432/mydb
PS_JWKS_URI: http://host.docker.internal:3011/api/auth/jwks
volumes:
- ./powersync-config/powersync.yaml:/config/powersync.yaml
- ./powersync-config/sync_rules.yaml:/config/sync_rules.yaml
depends_on:
- postgres
CRITICAL: The PS_JWKS_URI must be reachable from inside the Docker
container. Use host.docker.internal for macOS/Windows development. For Linux,
use the host's IP or Docker network.
Validate: Start the Docker stack. Check PowerSync logs for successful connection to PostgreSQL and JWKS endpoint. Verify sync works end-to-end.
5.1 Electron Desktop
- SDK: `@powersync/node` (uses better-sqlite3 under the hood)
- Credential storage: `electron-store` (or `safeStorage` for production)
- Sync status forwarded to the renderer via `webContents.send()`

// Worker setup for Electron
import { Worker } from "worker_threads";
const powerSyncWorkerPath = path.join(__dirname, "powersync-worker.js");
const powerSync = new PowerSyncDatabase({
schema: appSchema,
database: {
dbFilename: getDatabaseFilePath(),
openWorker: (workerPath, options) =>
new Worker(powerSyncWorkerPath, options),
},
});
5.2 Expo / React Native Mobile
- SDK: `@powersync/react-native` (uses op-sqlite driver)
- Network detection via `@react-native-community/netinfo`
- On account switch: `disconnectAndClear()` then reinitialize

// React context for sync state management
function SyncProvider({ children }) {
const [status, setStatus] = useState('idle');
useEffect(() => {
const db = getRawDatabase();
const dispose = db.registerListener({
statusChanged: (nextStatus) => {
// Map PowerSync status to app-level status
setStatus(mapStatus(nextStatus));
},
});
return dispose;
}, []);
return <SyncContext.Provider value={{ status }}>{children}</SyncContext.Provider>;
}
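The `SyncProvider` above calls a `mapStatus` helper that is not shown. One possible implementation follows; the input shape mirrors the fields exposed by PowerSync's `SyncStatus` (`connected`, `hasSynced`, `dataFlowStatus`) -- treat the exact property names as an assumption to verify against your SDK version.

```typescript
// Map PowerSync's sync status object to a small app-level enum.
type AppSyncStatus = "offline" | "syncing" | "synced" | "pending";

function mapStatus(s: {
  connected: boolean;
  hasSynced?: boolean;
  dataFlowStatus?: { downloading?: boolean; uploading?: boolean };
}): AppSyncStatus {
  if (!s.connected) return "offline"; // no connection to PowerSync Service
  if (s.dataFlowStatus?.downloading || s.dataFlowStatus?.uploading) {
    return "syncing"; // data actively moving in either direction
  }
  return s.hasSynced ? "synced" : "pending"; // connected, idle
}
```

Keeping the mapping in a pure function like this makes the status logic unit-testable independently of the SDK.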
When a user signs out or switches accounts, the local database must be handled carefully to prevent data leakage between accounts.
6.1 Account switch detection
Track the ownerUserId in persistent storage. On sign-in, compare with stored
value:
const shouldClear = Boolean(
storedOwnerUserId && storedOwnerUserId !== newUserId
);
if (shouldClear) {
await powerSync.disconnectAndClear(); // Wipes all synced data
// Local-only tables are preserved
}
6.2 Database version tracking (Mobile)
After clearing and reconnecting, increment a databaseVersion counter.
Components that cache query results watch this value to know when to refetch:
const [databaseVersion, setDatabaseVersion] = useState(0);
if (shouldClear) {
await disconnectAndClearDatabase();
setDatabaseVersion((v) => v + 1);
}
Validate: Sign in as User A, create data. Sign out, sign in as User B. Verify User A's data is not visible. Sign back in as User A, verify data resyncs from server.
The end-to-end auth and sync flow:
- The client calls `GET /api/auth/token` to obtain a JWT
- The JWT carries `sub` (user ID) and `aud` (audience) claims
- PowerSync derives `user_id` from the JWT `sub` claim
- Local writes queue in the `ps_crud` table
- `uploadData()` sends the batch to the API

PowerSync provides reactive query subscriptions. Use the Drizzle query builder for type safety:
const docs = await db
.select()
.from(documents)
.where(eq(documents.projectId, projectId));
For React Native, usePowerSyncQuery provides automatic re-rendering on data
changes.
| Variable | Purpose | Example |
|---|---|---|
| `POWERSYNC_URL` | PowerSync Service WebSocket URL | `http://localhost:8080` |
| `PS_DATABASE_URL` | PostgreSQL connection for replication | `postgresql://user:pass@host:5432/db` |
| `PS_STORAGE_URL` | PostgreSQL connection for bucket storage | Same as `PS_DATABASE_URL` (same DB, different schema) |
| `PS_JWKS_URI` | API JWKS endpoint for JWT verification | `http://host.docker.internal:3011/api/auth/jwks` |
| `BETTER_AUTH_SECRET` | BetterAuth secret for JWT signing | (random string) |
| Setting | Purpose | Value |
|---|---|---|
| `client_auth.jwks_uri` | Where to fetch JWT public keys | API `/api/auth/jwks` endpoint |
| `client_auth.audience` | Expected JWT audience claim | Must match BetterAuth base URL |
| `storage.type` | Bucket storage backend | `postgresql` (recommended) |
| `replication.connections` | Source database for logical replication | PostgreSQL connection URI |
SQLite and PostgreSQL schemas use different Drizzle dialects and must be kept in sync manually. Create a validation script that compares table names, column names, and types across both schemas. Run it in CI. Drift between schemas causes silent sync failures.
Clients create records with ownerId: null (the client may not know the user
ID). The API upload handler MUST set ownerId from the JWT sub claim before
inserting into PostgreSQL. If you forget this, sync rules filter out the record
(no owner_id match) and the data "disappears."
PowerSync sends column names in snake_case (matching SQLite column names), but Drizzle's TypeScript API uses camelCase. The upload handler must handle both formats. Use a sanitization function that checks for both:
const deviceId = "device_id" in data ? data.device_id : data.deviceId;
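Rather than checking both spellings column by column, the upload handler can normalize every key in one pass. The helper below is an illustrative sketch (the name and shape are not from any library): it converts snake_case keys to camelCase and overrides `ownerId` with the authenticated user, combining this pitfall's fix with the owner-injection rule from section 3.3.

```typescript
// Normalize a PowerSync CRUD payload for the Drizzle pg schema:
// snake_case keys -> camelCase, and ownerId forced from the JWT.
function normalizeRow(
  data: Record<string, unknown>,
  userId: string
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(data)) {
    // device_id -> deviceId, owner_id -> ownerId, etc.
    const camel = key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
    out[camel] = value;
  }
  out.ownerId = userId; // never trust client-provided ownerId
  return out;
}
```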
SQLite stores timestamps as ISO 8601 text strings. PostgreSQL uses native
timestamp columns. The upload handler must convert between formats. Pay
attention to Unix timestamps vs ISO strings, and seconds vs milliseconds:
// Timestamps < 10 billion are likely in seconds, not milliseconds
const timestamp = Math.abs(value) < 10_000_000_000 ? value * 1000 : value;
return new Date(timestamp);
SQLite stores booleans as integers (0/1). PostgreSQL uses native booleans. The
upload handler must convert: Boolean(value).
PowerSync validates the JWT aud (audience) claim against its configured
client_auth.audience. This MUST match the BetterAuth base URL exactly
(including protocol and port). A mismatch causes all sync connections to be
rejected with cryptic "unauthorized" errors.
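When debugging an audience mismatch, it helps to decode the actual JWT the client is sending and compare its `aud` claim against `client_auth.audience` character for character. A minimal inspection helper (no signature verification -- debugging only, never use this for auth decisions):

```typescript
// Decode a JWT payload WITHOUT verifying the signature, to inspect
// the sub and aud claims during debugging.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split(".");
  if (parts.length !== 3) throw new Error("not a JWT");
  const json = Buffer.from(parts[1], "base64url").toString("utf8");
  return JSON.parse(json);
}
```

Check that `decodeJwtPayload(token).aud` exactly matches the audience value in powersync.yaml, including protocol and port.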
In the connector's uploadData, always call transaction.complete() after a
successful HTTP response, even if the response body contains an error. If you
only call complete() on full success, server-side validation errors cause
infinite retry loops (the transaction is never marked as completed, so PowerSync
keeps calling uploadData with the same transaction).
Only skip complete() for actual network failures (fetch throws), where a retry
might succeed.
PostgreSQL replication slots grow indefinitely if PowerSync disconnects. Set
max_slot_wal_keep_size to prevent disk exhaustion. Monitor
pg_replication_slots for slot status.
When a user switches accounts, you MUST call disconnectAndClear() to wipe
synced data before reconnecting with the new user's credentials. Otherwise, the
previous user's data remains in the local SQLite database and may be visible to
the new user.
When using reactive queries (usePowerSyncQuery) with async post-processing,
beware of stale data. PowerSync returns cached results immediately when query
parameters change, then updates asynchronously. If your async processing
(additional DB calls, API calls) takes longer than the query refresh, stale
results can overwrite correct results. Use a generation counter or abort
controller to discard stale async work.
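The generation-counter approach can be isolated into a tiny guard. Each new query invocation bumps the generation; an async completion only applies its result if no newer invocation has started since:

```typescript
// Generation counter for discarding stale async work.
function createGenerationGuard() {
  let current = 0;
  return {
    next(): number {
      return ++current; // call at the start of each async operation
    },
    isCurrent(generation: number): boolean {
      return generation === current; // check before applying results
    },
  };
}

// Usage sketch inside an effect or query callback:
//   const gen = guard.next();
//   const rows = await postProcess(queryResults);
//   if (guard.isCurrent(gen)) setState(rows); // stale work is dropped
```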
PowerSync requires every table to have a single text('id').primaryKey()
column. If your data model naturally uses composite keys (e.g., a many-to-many
join table), add a synthetic UUID id column as the primary key and enforce
uniqueness via a separate unique index on the composite columns.
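Because the unique index only guards the database, the service layer should also reject duplicate pairs before inserting. A sketch with illustrative names (a `document_groups` join table keyed by a synthetic UUID):

```typescript
import { randomUUID } from "crypto";

// A many-to-many join row with a synthetic UUID primary key, as
// PowerSync requires. Composite uniqueness on (documentId, groupId)
// is enforced here in application code, mirroring the unique index.
interface DocumentGroup {
  id: string;
  documentId: string;
  groupId: string;
}

function makeDocumentGroup(
  existing: DocumentGroup[],
  documentId: string,
  groupId: string
): DocumentGroup {
  const duplicate = existing.some(
    (r) => r.documentId === documentId && r.groupId === groupId
  );
  if (duplicate) throw new Error("duplicate (documentId, groupId) pair");
  return { id: randomUUID(), documentId, groupId };
}
```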