Use when configuring storage backends for file uploads. Covers S3-compatible storage (AWS S3, Cloudflare R2, DigitalOcean Spaces), Cloudinary media storage, local filesystem adapter for development, adapter configuration options, environment variables, and production storage setup.
Configure storage backends for Bknd's media module.
**Prerequisites:** the `bknd` package installed.

| Adapter | Type | Use Case |
|---|---|---|
| `s3` | S3-compatible | AWS S3, Cloudflare R2 (external), DigitalOcean Spaces, MinIO |
| `cloudinary` | Media-optimized | Image/video transformations, CDN delivery |
| `local` | Filesystem | Development only (Node.js/Bun runtime) |
| `r2` | Cloudflare R2 | Cloudflare Workers with R2 binding |
Create a bucket in the AWS console or via the CLI:

```bash
aws s3 mb s3://my-app-uploads --region us-east-1
```
Set CORS rules on the bucket so browser uploads from your app's origin are allowed:

```json
{
  "CORSRules": [{
    "AllowedOrigins": ["https://yourapp.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"]
  }]
}
```
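Assuming the rules above are saved as `cors.json`, they can be applied with the AWS CLI (`my-app-uploads` is the example bucket from earlier):

```shell
# Apply the CORS rules in cors.json to the bucket
aws s3api put-bucket-cors \
  --bucket my-app-uploads \
  --cors-configuration file://cors.json
```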
Create an IAM user with S3 access and note its access key ID and secret access key:
```typescript
import { defineConfig } from "bknd";

export default defineConfig({
  media: {
    enabled: true,
    adapter: {
      type: "s3",
      config: {
        access_key: process.env.S3_ACCESS_KEY,
        secret_access_key: process.env.S3_SECRET_KEY,
        url: "https://my-bucket.s3.us-east-1.amazonaws.com",
      },
    },
  },
});
```

```bash
# .env
S3_ACCESS_KEY=AKIA...
S3_SECRET_KEY=wJalr...
```
Different S3-compatible services use different URL formats:

```typescript
// AWS S3
url: "https://{bucket}.s3.{region}.amazonaws.com"
// Example: "https://my-bucket.s3.us-east-1.amazonaws.com"

// Cloudflare R2 (external access via S3 API)
url: "https://{account_id}.r2.cloudflarestorage.com/{bucket}"
// Example: "https://abc123.r2.cloudflarestorage.com/my-bucket"

// DigitalOcean Spaces
url: "https://{bucket}.{region}.digitaloceanspaces.com"
// Example: "https://my-bucket.nyc3.digitaloceanspaces.com"

// MinIO (self-hosted)
url: "http://localhost:9000/{bucket}"
```
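For reference, the formats above can be captured in a small helper. `storageUrl` and its option names are illustrative, not part of bknd — the adapter only ever sees the final `url` string:

```typescript
// Hypothetical helper that assembles the storage URL for each provider.
type Provider = "aws" | "r2" | "spaces" | "minio";

function storageUrl(provider: Provider, opts: {
  bucket: string;
  region?: string;     // AWS / Spaces
  accountId?: string;  // R2
  endpoint?: string;   // MinIO
}): string {
  switch (provider) {
    case "aws":
      return `https://${opts.bucket}.s3.${opts.region}.amazonaws.com`;
    case "r2":
      return `https://${opts.accountId}.r2.cloudflarestorage.com/${opts.bucket}`;
    case "spaces":
      return `https://${opts.bucket}.${opts.region}.digitaloceanspaces.com`;
    case "minio":
      return `${opts.endpoint}/${opts.bucket}`;
  }
}

storageUrl("aws", { bucket: "my-bucket", region: "us-east-1" });
// → "https://my-bucket.s3.us-east-1.amazonaws.com"
```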
From the Cloudinary dashboard, copy your cloud name, API key, and API secret:
```typescript
import { defineConfig } from "bknd";

export default defineConfig({
  media: {
    enabled: true,
    adapter: {
      type: "cloudinary",
      config: {
        cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
        api_key: process.env.CLOUDINARY_API_KEY,
        api_secret: process.env.CLOUDINARY_API_SECRET,
      },
    },
  },
});
```

```bash
# .env
CLOUDINARY_CLOUD_NAME=my-cloud
CLOUDINARY_API_KEY=123456789
CLOUDINARY_API_SECRET=abcdef...
```
For unsigned uploads or custom transformations, add an upload preset:

```typescript
adapter: {
  type: "cloudinary",
  config: {
    cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
    api_key: process.env.CLOUDINARY_API_KEY,
    api_secret: process.env.CLOUDINARY_API_SECRET,
    upload_preset: "my-preset", // Optional
  },
},
```
Create the upload directory:

```bash
mkdir -p ./uploads
```

```typescript
import { defineConfig } from "bknd";
import { registerLocalMediaAdapter } from "bknd/adapter/node";

// Register the local adapter
const local = registerLocalMediaAdapter();

export default defineConfig({
  media: {
    enabled: true,
    adapter: local({ path: "./uploads" }),
  },
});
```
Files are served under `/api/media/file/`.
The local adapter requires a Node.js or Bun runtime (it needs filesystem access). It won't work in serverless environments such as Cloudflare Workers, Vercel, or AWS Lambda.
Create the bucket and bind it in `wrangler.toml`:

```bash
wrangler r2 bucket create my-bucket
```

```toml
[[r2_buckets]]
binding = "MY_BUCKET"
bucket_name = "my-bucket"
```
```typescript
import { serve, type CloudflareBkndConfig } from "bknd/adapter/cloudflare";

const config: CloudflareBkndConfig = {
  app: (env) => ({
    connection: { url: env.DB },
    config: {
      media: {
        enabled: true,
        adapter: {
          type: "r2",
          config: {
            binding: "MY_BUCKET",
          },
        },
      },
    },
  }),
};

export default serve(config);
```
The R2 adapter uses the Cloudflare Workers binding directly; no external credentials are needed.
```typescript
export default defineConfig({
  media: {
    enabled: true,
    body_max_size: 10 * 1024 * 1024, // 10MB max upload
    adapter: { /* ... */ },
  },
});
```
If `body_max_size` is not set, uploads have no size limit. Always set a reasonable limit in production.
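A client-side guard can mirror the server limit so oversized files fail fast before a request is even sent. `checkUploadSize` is an illustrative sketch, not a bknd API; keep its constant in sync with `body_max_size`:

```typescript
// Mirror the server-side limit; must match media.body_max_size above.
const BODY_MAX_SIZE = 10 * 1024 * 1024; // 10MB

// Returns an error message for oversized files, or null if the file fits.
function checkUploadSize(file: { name: string; size: number }): string | null {
  if (file.size > BODY_MAX_SIZE) {
    const mb = (file.size / (1024 * 1024)).toFixed(1);
    return `${file.name} is ${mb} MB, exceeds the 10 MB limit`;
  }
  return null;
}
```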
Use different adapters for development vs. production:

```typescript
import { defineConfig } from "bknd";
import { registerLocalMediaAdapter } from "bknd/adapter/node";

const local = registerLocalMediaAdapter();
const isDev = process.env.NODE_ENV !== "production";

export default defineConfig({
  media: {
    enabled: true,
    body_max_size: 25 * 1024 * 1024, // 25MB
    adapter: isDev
      ? local({ path: "./uploads" })
      : {
          type: "s3",
          config: {
            access_key: process.env.S3_ACCESS_KEY,
            secret_access_key: process.env.S3_SECRET_KEY,
            url: process.env.S3_BUCKET_URL,
          },
        },
  },
});
```
Verify the media module responds:

```typescript
import { Api } from "bknd";

const api = new Api({ host: "http://localhost:7654" });

// List files (empty if no uploads yet)
const { ok, data, error } = await api.media.listFiles();
if (ok) {
  console.log("Media module working, files:", data.length);
} else {
  console.error("Media error:", error);
}
```
```typescript
async function testStorage() {
  const testFile = new File(["test content"], "test.txt", {
    type: "text/plain",
  });

  const { ok, data, error } = await api.media.upload(testFile);
  if (ok) {
    console.log("Upload succeeded:", data.name);
    // Clean up
    await api.media.deleteFile(data.name);
    console.log("Cleanup complete");
  } else {
    console.error("Upload failed:", error);
  }
}
```
```bash
# List files
curl http://localhost:7654/api/media/files

# Upload test file
echo "test" | curl -X POST \
  -H "Content-Type: text/plain" \
  --data-binary @- \
  http://localhost:7654/api/media/upload/test.txt
```
```typescript
import { defineConfig } from "bknd";

export default defineConfig({
  connection: {
    url: process.env.DATABASE_URL,
  },
  config: {
    media: {
      enabled: true,
      body_max_size: 50 * 1024 * 1024, // 50MB
      adapter: {
        type: "s3",
        config: {
          access_key: process.env.AWS_ACCESS_KEY_ID,
          secret_access_key: process.env.AWS_SECRET_ACCESS_KEY,
          url: `https://${process.env.S3_BUCKET}.s3.${process.env.AWS_REGION}.amazonaws.com`,
        },
      },
    },
  },
});
```
```typescript
import { serve, type CloudflareBkndConfig } from "bknd/adapter/cloudflare";

const config: CloudflareBkndConfig = {
  app: (env) => ({
    connection: { url: env.DB }, // D1 binding
    config: {
      media: {
        enabled: true,
        body_max_size: 25 * 1024 * 1024,
        adapter: {
          type: "r2",
          config: { binding: "UPLOADS" },
        },
      },
    },
  }),
};

export default serve(config);
```
```typescript
import { defineConfig } from "bknd";
import { registerLocalMediaAdapter } from "bknd/adapter/node";

const local = registerLocalMediaAdapter();

export default defineConfig({
  connection: {
    url: "file:data.db",
  },
  config: {
    media: {
      enabled: true,
      adapter: local({ path: "./public/uploads" }),
    },
  },
});
```
Problem: Upload fails with a 403 error.

Causes: wrong or expired credentials, a malformed bucket URL (e.g. a trailing slash), or missing IAM permissions.

Fix:

```typescript
// Check URL format - must NOT have a trailing slash
url: "https://bucket.s3.region.amazonaws.com"  // CORRECT
url: "https://bucket.s3.region.amazonaws.com/" // WRONG

// Verify credentials are loaded
console.log("Key:", process.env.S3_ACCESS_KEY?.substring(0, 8) + "...");
```
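A defensive normalizer (a hypothetical helper, not a bknd API) can strip the trailing slash before the URL reaches the adapter config:

```typescript
// Remove trailing slashes so the request path matches what S3 signs against.
function normalizeBucketUrl(url: string): string {
  return url.replace(/\/+$/, "");
}

normalizeBucketUrl("https://bucket.s3.region.amazonaws.com/");
// → "https://bucket.s3.region.amazonaws.com"
```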
Problem: Files not found after upload.

Causes: the upload directory doesn't exist, or the configured path is absolute instead of relative to the project root.

Fix:

```bash
# Create the directory first
mkdir -p ./uploads
```

```typescript
// Use a relative path from the project root
adapter: local({ path: "./uploads" }) // CORRECT
adapter: local({ path: "/uploads" })  // WRONG (absolute)
```
Problem: "No R2Bucket found with key" error.

Fix: Ensure `wrangler.toml` has the correct binding:

```toml
[[r2_buckets]]
binding = "MY_BUCKET"         # This name goes in config
bucket_name = "actual-bucket"
```
Problem: `putObject` returns undefined.
Problem: Local adapter fails in serverless.

Fix: Use S3/R2/Cloudinary for serverless runtimes:

```typescript
// Cloudflare Workers - use r2
// Vercel - use s3
// AWS Lambda - use s3
// Node.js server - local is OK for dev
```
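The runtime-to-adapter rules above can be encoded in a small chooser. The function and runtime names here are illustrative, not bknd APIs:

```typescript
// Map a deployment runtime to the adapter type that works there.
type Runtime = "cloudflare-workers" | "vercel" | "aws-lambda" | "node";

function adapterTypeFor(runtime: Runtime, isDev = false): "r2" | "s3" | "local" {
  switch (runtime) {
    case "cloudflare-workers":
      return "r2";                      // R2 binding, no external credentials
    case "vercel":
    case "aws-lambda":
      return "s3";                      // no writable local filesystem
    case "node":
      return isDev ? "local" : "s3";    // local is fine for dev only
  }
}
```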
Problem: Credentials undefined at runtime.

Fix:

```typescript
// Add validation
if (!process.env.S3_ACCESS_KEY) {
  throw new Error("S3_ACCESS_KEY not set");
}

// Or provide defaults for dev
const config = {
  access_key: process.env.S3_ACCESS_KEY ?? "dev-key",
  // ...
};
```
Problem: Browser uploads blocked.
Fix: Configure CORS on the bucket itself (AWS/R2 console), not in Bknd.
DO: set a `body_max_size` limit in production.
DON'T: leave uploads unlimited (the default when `body_max_size` is unset).