Use when deploying a Bknd application to production hosting. Covers Cloudflare Workers/Pages, Node.js/Bun servers, Docker, Vercel, AWS Lambda, and other platforms.
Deploy your Bknd application to various hosting platforms.
For database setup and environment configuration, see the bknd-database-provision and bknd-env-config skills.

| Platform | Best For | Database Options | Cold Start |
|---|---|---|---|
| Cloudflare Workers | Edge, global low-latency | D1, Turso | ~0ms |
| Cloudflare Pages | Static + API | D1, Turso | ~0ms |
| Vercel | Next.js apps | Turso, Neon | ~200ms |
| Node.js/Bun VPS | Full control, dedicated | Any | N/A |
| Docker | Containerized, portable | Any | N/A |
| AWS Lambda | Serverless, pay-per-use | Turso, RDS | ~500ms |
Cloudflare Workers

Step 1: Install Wrangler

```shell
npm install -D wrangler
```
Step 2: Create wrangler.toml

```toml
name = "my-bknd-app"
main = "src/index.ts"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "your-d1-database-id"

# Optional: R2 for media storage
[[r2_buckets]]
binding = "R2_BUCKET"
bucket_name = "my-bucket"

[vars]
ENVIRONMENT = "production"
```
Step 3: Configure Adapter

```ts
// src/index.ts
import { hybrid, d1Sqlite, type CloudflareBkndConfig } from "bknd/adapter/cloudflare";
import { em, entity, text } from "bknd";

const schema = em({
  posts: entity("posts", {
    title: text().required(),
  }),
});

export default hybrid<CloudflareBkndConfig>({
  app: (env) => ({
    connection: d1Sqlite({ binding: env.DB }),
    schema,
    isProduction: true,
    auth: {
      jwt: {
        secret: env.JWT_SECRET,
      },
    },
    config: {
      media: {
        enabled: true,
        adapter: {
          type: "r2",
          config: { bucket: env.R2_BUCKET },
        },
      },
    },
  }),
});
```
Step 4: Create D1 Database

```shell
# Create database
wrangler d1 create my-database

# Copy the database_id to wrangler.toml
```
Step 5: Set Secrets

```shell
wrangler secret put JWT_SECRET
# Enter your secret (min. 32 chars)
```
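If you don't already have a strong secret, one way to generate one (an illustration; any random string of 32+ characters works):

```shell
# Generate a random base64 secret well above the 32-character minimum.
JWT_SECRET="$(openssl rand -base64 48)"

# Sanity-check the length before storing it.
[ "${#JWT_SECRET}" -ge 32 ] && echo "secret length ok: ${#JWT_SECRET} chars"
```

Piping the value into `wrangler secret put JWT_SECRET` avoids leaving it in your shell history.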
Step 6: Deploy

```shell
wrangler deploy
```
Cloudflare Pages

Step 1: Create functions/api/[[bknd]].ts

```ts
import { hybrid, d1Sqlite, type CloudflareBkndConfig } from "bknd/adapter/cloudflare";
import schema from "../../bknd.config";

export const onRequest = hybrid<CloudflareBkndConfig>({
  app: (env) => ({
    connection: d1Sqlite({ binding: env.DB }),
    schema,
    isProduction: true,
    auth: {
      jwt: { secret: env.JWT_SECRET },
    },
  }),
});
```
Step 2: Configure Pages

In the Cloudflare dashboard, bind your D1 database as `DB` and add `JWT_SECRET` as a secret so they match the bindings used in the function above.
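Newer Wrangler versions also let Pages projects declare bindings in wrangler.toml instead of the dashboard; a sketch, assuming Wrangler v3.45+ and a `dist/` output directory (both assumptions — adjust to your project):

```toml
name = "my-bknd-pages-app"
pages_build_output_dir = "dist"
compatibility_date = "2024-01-01"

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "your-d1-database-id"
```

Secrets such as `JWT_SECRET` are still set separately (dashboard or `wrangler pages secret put`).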
Node.js / Bun Server

Step 1: Create Production Entry

```ts
// index.ts
import { serve, type BunBkndConfig } from "bknd/adapter/bun";
// or for Node.js:
// import { serve } from "bknd/adapter/node";

const config: BunBkndConfig = {
  connection: {
    url: process.env.DB_URL!,
    authToken: process.env.DB_TOKEN,
  },
  isProduction: true,
  auth: {
    jwt: {
      secret: process.env.JWT_SECRET!,
      expires: "7d",
    },
  },
  config: {
    media: {
      enabled: true,
      adapter: {
        type: "s3",
        config: {
          bucket: process.env.S3_BUCKET!,
          region: process.env.S3_REGION!,
          accessKeyId: process.env.S3_ACCESS_KEY!,
          secretAccessKey: process.env.S3_SECRET_KEY!,
        },
      },
    },
    guard: {
      enabled: true,
    },
  },
};

serve(config);
```
Step 2: Set Environment Variables

```shell
export DB_URL="libsql://your-db.turso.io"
export DB_TOKEN="your-turso-token"
export JWT_SECRET="your-32-char-minimum-secret"
export PORT=3000
```
Step 3: Run with Process Manager

```shell
# Using PM2
npm install -g pm2
pm2 start "bun run index.ts" --name bknd-app

# Or systemd (create /etc/systemd/system/bknd.service)
```
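For the systemd route, a minimal unit file along these lines works; the install path, working directory, user, and env file location are all assumptions to adapt to your server:

```ini
# /etc/systemd/system/bknd.service (hypothetical paths and user)
[Unit]
Description=Bknd application
After=network.target

[Service]
WorkingDirectory=/opt/bknd-app
EnvironmentFile=/opt/bknd-app/.env.production
ExecStart=/usr/local/bin/bun run index.ts
Restart=always
User=bknd

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now bknd`. Keeping secrets in an `EnvironmentFile` readable only by the service user avoids baking them into the unit file.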
Docker

Step 1: Create Dockerfile

```dockerfile
FROM oven/bun:1.0-alpine

WORKDIR /app

COPY package.json bun.lockb ./
RUN bun install --frozen-lockfile --production

COPY . .

# Create data directory for SQLite (if using file-based)
RUN mkdir -p /app/data

ENV PORT=3000
EXPOSE 3000

CMD ["bun", "run", "index.ts"]
```
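To keep the build context and image lean, a .dockerignore along these lines is worth adding (a suggestion, not part of the original steps):

```
node_modules
.env*
data
*.log
.git
```

Excluding `.env*` also reduces the risk of copying real secrets into the image via `COPY . .`.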
Step 2: Create docker-compose.yml

```yaml
version: "3.8"
services:
  bknd:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - bknd-data:/app/data
    environment:
      - DB_URL=file:/app/data/bknd.db
      - JWT_SECRET=${JWT_SECRET}
      - NODE_ENV=production
    restart: unless-stopped
volumes:
  bknd-data:
```
Step 3: Deploy

```shell
# Build and run
docker compose up -d

# View logs
docker compose logs -f bknd
```
Vercel (Next.js)

Step 1: Create API Route

```ts
// app/api/bknd/[[...bknd]]/route.ts
export { GET, POST, PUT, DELETE, PATCH } from "bknd/adapter/nextjs";
```
Step 2: Create bknd.config.ts

```ts
import type { NextjsBkndConfig } from "bknd/adapter/nextjs";
import { em, entity, text } from "bknd";

const schema = em({
  posts: entity("posts", {
    title: text().required(),
  }),
});

type Database = (typeof schema)["DB"];
declare module "bknd" {
  interface DB extends Database {}
}

export default {
  app: (env) => ({
    connection: {
      url: env.DB_URL,
      authToken: env.DB_TOKEN,
    },
    schema,
    isProduction: env.NODE_ENV === "production",
    auth: {
      jwt: { secret: env.JWT_SECRET },
    },
  }),
} satisfies NextjsBkndConfig;
```
Step 3: Set Vercel Environment Variables

In the Vercel dashboard or via the CLI:

```shell
vercel env add DB_URL
vercel env add DB_TOKEN
vercel env add JWT_SECRET
```
Step 4: Deploy

```shell
vercel deploy --prod
```
AWS Lambda

Step 1: Install Dependencies

```shell
npm install -D serverless serverless-esbuild
```
Step 2: Create handler.ts

```ts
import { createHandler } from "bknd/adapter/aws";

export const handler = createHandler({
  connection: {
    url: process.env.DB_URL!,
    authToken: process.env.DB_TOKEN,
  },
  isProduction: true,
  auth: {
    jwt: { secret: process.env.JWT_SECRET! },
  },
});
```
Step 3: Create serverless.yml

```yaml
service: bknd-api

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  environment:
    DB_URL: ${env:DB_URL}
    DB_TOKEN: ${env:DB_TOKEN}
    JWT_SECRET: ${env:JWT_SECRET}

plugins:
  - serverless-esbuild

functions:
  api:
    handler: handler.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
      - http:
          path: /
          method: ANY
```
Step 4: Deploy

```shell
serverless deploy --stage prod
```
Pre-Deployment Checklist

```shell
# 1. Generate types
npx bknd types

# 2. Test locally with a production-like config
DB_URL="your-prod-db" JWT_SECRET="your-secret" npx bknd run

# 3. Verify schema sync
# Schema auto-syncs on the first request in production
```
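Part of this checklist can be automated; a minimal sketch that fails fast when required variables are missing or too weak (the function name and messages are illustrative, not part of Bknd):

```shell
# Fail fast if required deployment variables are missing or too weak.
check_env() {
  [ -n "$DB_URL" ] || { echo "DB_URL is not set" >&2; return 1; }
  [ "${#JWT_SECRET}" -ge 32 ] || { echo "JWT_SECRET must be at least 32 characters" >&2; return 1; }
  echo "env ok"
}
```

Run it in your deploy script before the platform deploy command, e.g. `check_env && wrangler deploy`.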
Environment Variables

| Variable | Required | Description |
|---|---|---|
| `DB_URL` | Yes | Database connection URL |
| `DB_TOKEN` | Depends | Auth token (Turso/LibSQL) |
| `JWT_SECRET` | Yes | Min. 32 characters for security |
| `PORT` | No | Server port (default: 3000) |
Troubleshooting

Problem: better-sqlite3 is not available in serverless environments
Fix: Use LibSQL/Turso instead of file-based SQLite:

```ts
connection: {
  url: "libsql://your-db.turso.io",
  authToken: process.env.DB_TOKEN,
}
```
Problem: Auth fails in production
Fix: Set JWT_SECRET environment variable:
```shell
# Cloudflare
wrangler secret put JWT_SECRET

# Vercel
vercel env add JWT_SECRET

# Docker
docker run -e JWT_SECRET="your-secret" ...
```
Problem: First request times out
Fix: The schema auto-syncs on the first request in production, so that request can be slow. Send a warm-up request right after deploying, and raise the function timeout if your platform allows it.
Problem: env.DB is undefined
Fix: Check wrangler.toml D1 binding:
```toml
[[d1_databases]]
binding = "DB" # Must match env.DB in code
database_name = "my-database"
database_id = "actual-id-from-wrangler-d1-create"
```
Problem: Local storage doesn't work in serverless
Fix: Use cloud storage adapter:
```ts
config: {
  media: {
    adapter: {
      type: "s3", // or "r2", "cloudinary"
      config: { /* credentials */ },
    },
  },
}
```
Problem: Frontend can't access the API (CORS)
Fix: Most adapters handle CORS automatically; for custom needs, check your platform's documentation.
Command Reference

```shell
# Cloudflare Workers
wrangler deploy
wrangler tail        # View logs

# Vercel
vercel deploy --prod
vercel logs

# Docker
docker compose up -d
docker compose logs -f

# AWS Lambda
serverless deploy --stage prod
serverless logs -f api
```
DO:
- Set `isProduction: true` in your production config

DON'T:
- Commit `.env` files with real secrets