From atum-workflows
Cloudflare deployment pattern library — deploy frontend apps to Cloudflare Pages, backend logic to Workers (V8 isolate edge runtime), static assets and media to R2 (S3-compatible object storage), SQL databases to D1 (SQLite at the edge), key-value data to KV, message queues via Queues, scheduled jobs via Cron Triggers, and real-time data via Durable Objects. Covers wrangler.toml config, wrangler CLI deployment, environment variables and secrets (including rotation), custom domains via DNS, Pages Functions for hybrid static + serverless, Worker bindings (KV, R2, D1, Durable Objects, Queues, AI, Vectorize), Workers AI for on-edge inference, and Workers for Platforms for multi-tenant isolates. Use when building an edge-first application, migrating from Cloudflare Pages to Workers, wiring up storage and compute primitives, or architecting a low-latency global app. Differentiates from Vercel/Railway/Fly by being exclusively edge-native (no regions, V8 isolates, 0ms cold start) and offering the broadest suite of storage primitives.
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-workflows

This skill uses the workspace's default tool permissions.
Cloudflare offers a complete edge-native suite: Pages (static + hybrid), Workers (compute), R2 (S3-compatible object storage), D1 (SQLite at the edge), KV, Durable Objects, Queues, and Workers AI. Everything runs in **V8 isolates** → 0ms cold starts, instant global deployment.
# Install
npm install -g wrangler
# Login
wrangler login
# Initialize a Worker
wrangler init my-worker
# Deploy
wrangler deploy
# Local build
pnpm build
# Direct deploy
wrangler pages deploy ./dist --project-name=my-app
wrangler.toml for Pages

name = "my-app"
compatibility_date = "2025-01-01"
pages_build_output_dir = "./dist"
[vars]
API_URL = "https://api.example.com"
[[d1_databases]]
binding = "DB"
database_name = "my-app-db"
database_id = "<d1-id>"
[[kv_namespaces]]
binding = "SESSIONS"
id = "<kv-id>"
[[r2_buckets]]
binding = "MEDIA"
bucket_name = "my-app-media"
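With these bindings declared, Pages Functions (files under `functions/`) receive them on the context's `env`. A minimal sketch, assuming the `DB` binding above and a hypothetical `/api/posts` route — the `Env` interface is narrowed to just what this handler touches:

```typescript
// functions/api/posts.ts — hypothetical Pages Function reading the DB binding.
interface Env {
  DB: { prepare(query: string): { all(): Promise<{ results: unknown[] }> } }
}

// Pages Functions route by file path: this file answers GET /api/posts.
export const onRequestGet = async (ctx: { env: Env }): Promise<Response> => {
  const { results } = await ctx.env.DB.prepare('SELECT * FROM posts LIMIT 10').all()
  return Response.json(results)
}
```

The narrow interface also makes the handler easy to unit-test with a mocked binding, without the Workers runtime.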
wrangler.toml for a Worker

name = "my-api"
main = "src/index.ts"
compatibility_date = "2025-01-01"
compatibility_flags = ["nodejs_compat"]
[vars]
ENV = "production"
[[d1_databases]]
binding = "DB"
database_name = "my-db"
database_id = "<d1-id>"
[[r2_buckets]]
binding = "UPLOADS"
bucket_name = "my-uploads"
// src/index.ts
export interface Env {
DB: D1Database
UPLOADS: R2Bucket
SESSIONS: KVNamespace
ENV: string
}
export default {
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
const url = new URL(request.url)
if (url.pathname === '/api/posts') {
const { results } = await env.DB.prepare('SELECT * FROM posts LIMIT 10').all()
return Response.json(results)
}
if (url.pathname.startsWith('/upload/')) {
const key = url.pathname.replace('/upload/', '')
const file = await env.UPLOADS.get(key)
if (!file) return new Response('Not found', { status: 404 })
return new Response(file.body, {
headers: { 'Content-Type': file.httpMetadata?.contentType ?? 'application/octet-stream' },
})
}
return new Response('Hello from edge')
},
}
// Upload
await env.UPLOADS.put('avatars/user-123.jpg', request.body, {
httpMetadata: { contentType: 'image/jpeg' },
customMetadata: { userId: '123' },
})
// Read
const object = await env.UPLOADS.get('avatars/user-123.jpg')
if (!object) return new Response('Not found', { status: 404 })
// List
const list = await env.UPLOADS.list({ prefix: 'avatars/', limit: 100 })
R2 vs S3: compatible API, but R2 has zero egress fees (a big saving vs AWS S3 for download-heavy apps).
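`list` caps each page (at most 1000 keys); when the result is `truncated`, pass its `cursor` back to fetch the next page. A small helper sketch, typed against only the subset of the R2 list API it uses:

```typescript
// Walk every key under a prefix by following the pagination cursor.
type ListPage = { objects: { key: string }[]; truncated: boolean; cursor?: string }

async function listAllKeys(
  bucket: { list(opts: { prefix: string; cursor?: string }): Promise<ListPage> },
  prefix: string,
): Promise<string[]> {
  const keys: string[] = []
  let cursor: string | undefined
  do {
    const page = await bucket.list({ prefix, cursor })
    keys.push(...page.objects.map((o) => o.key))
    cursor = page.truncated ? page.cursor : undefined // stop when not truncated
  } while (cursor)
  return keys
}
```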
# Create a DB
wrangler d1 create my-app-db
# Migrate
wrangler d1 migrations create my-app-db init
wrangler d1 migrations apply my-app-db
// Query depuis un Worker
const { results } = await env.DB
.prepare('SELECT * FROM users WHERE email = ?')
.bind('user@example.com')
.all()
// Write
await env.DB
.prepare('INSERT INTO users (email, name) VALUES (?, ?)')
.bind('new@example.com', 'Jane')
.run()
// Batch
await env.DB.batch([
env.DB.prepare('INSERT INTO logs VALUES (?)').bind('event1'),
env.DB.prepare('INSERT INTO logs VALUES (?)').bind('event2'),
])
Current D1 limits: 10 GB per database, 1000 writes/s to the same row key. Beyond that, use Hyperdrive to an external Postgres.
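`wrangler d1 migrations create` drops a numbered SQL file under `migrations/`; an illustrative schema (assumed from the queries above, not the library's actual schema):

```sql
-- migrations/0001_init.sql
CREATE TABLE IF NOT EXISTS users (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  email TEXT NOT NULL UNIQUE,
  name TEXT
);

CREATE TABLE IF NOT EXISTS posts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  title TEXT NOT NULL
);
```

`wrangler d1 migrations apply` runs every file not yet recorded as applied, in order.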
// Set avec TTL
await env.SESSIONS.put('session:abc123', JSON.stringify(data), {
expirationTtl: 3600, // 1 hour
})
// Get
const raw = await env.SESSIONS.get('session:abc123')
const data = raw ? JSON.parse(raw) : null
// Delete
await env.SESSIONS.delete('session:abc123')
// List
const { keys } = await env.SESSIONS.list({ prefix: 'session:' })
KV is eventually consistent (global propagation takes up to ~60s). For strict consistency, use Durable Objects.
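`get` also accepts an options object: `type: 'json'` parses for you, and `cacheTtl` (seconds, minimum 60) keeps the value hot at the reading edge location longer, trading freshness for fewer trips to central storage. A typed wrapper sketch — the `Session` shape is a placeholder:

```typescript
// Hypothetical typed session lookup over a KV-like binding.
type Session = { userId: string }

interface KvLike {
  get(key: string, opts: { type: 'json'; cacheTtl?: number }): Promise<unknown>
}

async function getSession(kv: KvLike, id: string): Promise<Session | null> {
  // cacheTtl: 300 pins the parsed value at this edge location for 5 minutes
  return (await kv.get(`session:${id}`, { type: 'json', cacheTtl: 300 })) as Session | null
}
```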
// src/chat-room.ts
export class ChatRoom {
state: DurableObjectState
sessions: WebSocket[] = []
constructor(state: DurableObjectState) {
this.state = state
}
async fetch(request: Request) {
if (request.headers.get('Upgrade') === 'websocket') {
const [client, server] = Object.values(new WebSocketPair())
this.handleSession(server)
return new Response(null, { status: 101, webSocket: client })
}
return new Response('WebSocket only', { status: 426 })
}
handleSession(ws: WebSocket) {
ws.accept()
this.sessions.push(ws)
ws.addEventListener('message', (event) => {
this.sessions.forEach((s) => s.send(event.data))
})
}
}
// wrangler.toml
[[durable_objects.bindings]]
name = "CHAT_ROOM"
class_name = "ChatRoom"
Durable Objects = single-instance stateful compute for chat rooms, game lobbies, collaborative editors, rate limiters.
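To reach the `ChatRoom` from an ordinary Worker, derive a stable id from the room name — the same name always resolves to the same single instance worldwide. A sketch typed against just the namespace methods it uses (the `/room/` path convention is an assumption):

```typescript
// Route an incoming request to the Durable Object instance for its room.
interface RoomNamespace {
  idFromName(name: string): { toString(): string }
  get(id: { toString(): string }): { fetch(req: Request): Promise<Response> }
}

async function routeToRoom(ns: RoomNamespace, request: Request): Promise<Response> {
  const name = new URL(request.url).pathname.replace('/room/', '')
  const stub = ns.get(ns.idFromName(name)) // same name → same instance, globally
  return stub.fetch(request) // forward the request (incl. WebSocket upgrade)
}
```

In a real Worker, `ns` would be the `env.CHAT_ROOM` binding declared above.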
// Producer
await env.MY_QUEUE.send({ type: 'send_email', userId: 123 })
// Consumer (a separate Worker)
export default {
async queue(batch: MessageBatch, env: Env) {
for (const msg of batch.messages) {
try {
await processMessage(msg.body)
msg.ack()
} catch (err) {
msg.retry()
}
}
},
}
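Both sides of the queue are wired up in wrangler.toml; the queue name `email-jobs` and the tuning values here are examples:

```toml
# In the producing Worker's wrangler.toml — exposes env.MY_QUEUE
[[queues.producers]]
binding = "MY_QUEUE"
queue = "email-jobs"

# In the consuming Worker's wrangler.toml — delivers batches to queue()
[[queues.consumers]]
queue = "email-jobs"
max_batch_size = 10
max_retries = 3
```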
[triggers]
crons = ["0 9 * * *", "*/15 * * * *"]
export default {
async scheduled(event: ScheduledEvent, env: Env, ctx: ExecutionContext) {
if (event.cron === '0 9 * * *') {
await sendDailyDigest(env)
}
},
}
const response = await env.AI.run('@cf/meta/llama-3.1-8b-instruct', {
messages: [
{ role: 'user', content: 'Summarize: ...' },
],
})
return Response.json(response)
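`env.AI` comes from an `[ai]` binding in the Worker's wrangler.toml:

```toml
[ai]
binding = "AI"
```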
Model catalog: https://developers.cloudflare.com/workers-ai/models/
wrangler secret put STRIPE_SECRET_KEY
# Paste value
wrangler secret list
wrangler secret delete OLD_KEY
Secrets are encrypted and never appear in plaintext in logs.
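For rotation without downtime, one common pattern is to bind both the current and the previous key, accept either on verification, and sign only with the current one. A sketch — the secret names and the injected `sign` helper (whatever HMAC routine you use) are hypothetical:

```typescript
// Dual-key verification: tokens signed with the old key stay valid during the
// rotation window; once traffic drains, delete SIGNING_KEY_OLD.
interface SecretEnv {
  SIGNING_KEY: string
  SIGNING_KEY_OLD?: string
}

function verifyToken(
  token: string,
  payload: string,
  sign: (payload: string, key: string) => string,
  env: SecretEnv,
): boolean {
  if (sign(payload, env.SIGNING_KEY) === token) return true // current key
  return env.SIGNING_KEY_OLD !== undefined && sign(payload, env.SIGNING_KEY_OLD) === token
}
```

Rotation then becomes: `wrangler secret put SIGNING_KEY_OLD` (copy of the old value), `wrangler secret put SIGNING_KEY` (new value), and later `wrangler secret delete SIGNING_KEY_OLD`.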
Pitfall: Node's fs, path, and crypto without nodejs_compat → runtime errors. Store secrets with wrangler secret put, never in plain vars.

Related: deploy-vercel, deploy-fly, deploy-railway, deploy-eas, deploy-app-store, deploy-play-store