Design industry-grade Cloudflare architectures with wrangler.toml generation, Mermaid diagrams, AND Edge-Native Constraint validation. Use this skill when designing new systems, planning migrations, evaluating architecture options, or when the user mentions Node.js libraries that may not work on Workers.
Installation:

```
/plugin marketplace add littlebearapps/cloudflare-engineer
/plugin install cloudflare-engineer@littlebearapps-cloudflare
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Design production-ready Cloudflare architectures with proper service selection, wrangler configuration generation, visual diagrams, and Edge-Native Constraint enforcement.
Before designing or deploying, verify the local wrangler installation is current.
Check Command:

```shell
npx wrangler --version
```
Version Advisory Table (as of 2026-01):
| Installed Version | Status | Recommendation |
|---|---|---|
| 3.100+ | Current | Good to go |
| 3.80-3.99 | Acceptable | Update when convenient |
| 3.50-3.79 | Outdated | Update recommended: `npm install -g wrangler@latest` |
| <3.50 | Critical | Update required - missing important features |
Key Version Features:
- `wrangler dev`

Wrangler Health Check Output:
```
## Wrangler Health Check

**Installed**: wrangler 3.95.0
**Status**: ✅ Acceptable (current is 3.102.0)
**Recommendation**: Update when convenient for latest features

To update: `npm install -g wrangler@latest`
```
When to Run:
- `/cf-design` architecture sessions

Storage:

| Need | Service | Limits | Cost |
|---|---|---|---|
| Relational queries | D1 | 10GB, 128MB memory | $0.25/B reads, $1/M writes |
| Key-value lookups | KV | 25MB/value, 1 write/sec/key | $0.50/M reads, $5/M writes |
| Large files/blobs | R2 | 5TB/object | $0.36/M reads, $4.50/M writes |
| Coordination/locks | Durable Objects | Per-object isolation | CPU time based |
| Time-series metrics | Analytics Engine | Adaptive sampling | FREE |
| Vector similarity | Vectorize | 1536 dims, 5M vectors | $0.01/M queries |
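The per-operation rates above can be turned into a rough monthly estimate. The sketch below uses the table's listed rates as-is (treat them as illustrative; confirm against the current Cloudflare pricing pages before budgeting):

```typescript
// Rough monthly cost sketch using the rates from the table above.
// Rates are illustrative assumptions, not authoritative pricing.
interface Usage {
  d1ReadsBillions: number;   // D1 reads, billed per billion rows read
  d1WritesMillions: number;  // D1 writes, billed per million rows written
  kvReadsMillions: number;   // KV reads, per million
  kvWritesMillions: number;  // KV writes, per million
}

function estimateMonthlyCost(u: Usage): number {
  const d1 = u.d1ReadsBillions * 0.25 + u.d1WritesMillions * 1.0;
  const kv = u.kvReadsMillions * 0.5 + u.kvWritesMillions * 5.0;
  return Math.round((d1 + kv) * 100) / 100;
}

// Example: 2B D1 reads, 5M D1 writes, 50M KV reads, 1M KV writes
console.log(estimateMonthlyCost({
  d1ReadsBillions: 2, d1WritesMillions: 5,
  kvReadsMillions: 50, kvWritesMillions: 1,
})); // → 35.5
```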
Compute:

| Need | Service | Limits | Best For |
|---|---|---|---|
| HTTP handlers | Workers | 128MB, 30s/req | API endpoints |
| Background jobs | Queues | 128KB/msg, batches ≤100 | Async processing |
| Long-running tasks | Workflows | 1024 steps, 1GB state | Multi-step pipelines |
| Stateful coordination | Durable Objects | Per-object | Sessions, locks |
| Scheduled jobs | Cron Triggers | 1-minute minimum | Periodic tasks |
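The Cron Triggers row above maps to a `triggers` block in the wrangler config. A minimal sketch (the name and schedule are illustrative):

```jsonc
{
  "name": "nightly-cleanup",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  "triggers": {
    "crons": ["0 3 * * *"] // daily at 03:00 UTC
  }
}
```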
AI:

| Need | Service | Cost | Best For |
|---|---|---|---|
| LLM inference | Workers AI | $0.011/1K neurons | Serverless AI |
| LLM caching/logging | AI Gateway | Free tier + $0.10/M | Production AI |
| Embeddings + search | Vectorize | Per-dimension | RAG, semantic search |
IMPORTANT: Cloudflare Workers use a V8 isolate runtime, NOT Node.js. Always validate proposed code against these constraints.
Workers supports many Node.js APIs via `node:`-prefixed imports when the `nodejs_compat` or `nodejs_compat_v2` flag is enabled.
| Node.js Module | Workers Support | Required Flag | Notes |
|---|---|---|---|
| `node:assert` | Partial | `nodejs_compat` | Basic assertions |
| `node:async_hooks` | Yes | `nodejs_compat` | AsyncLocalStorage supported |
| `node:buffer` | Yes | `nodejs_compat` | Full Buffer API |
| `node:crypto` | Yes | `nodejs_compat` | Prefer Web Crypto when possible |
| `node:events` | Yes | `nodejs_compat` | EventEmitter |
| `node:path` | Yes | `nodejs_compat` | Path manipulation |
| `node:stream` | Partial | `nodejs_compat` | Web Streams preferred |
| `node:url` | Yes | `nodejs_compat` | URL parsing |
| `node:util` | Partial | `nodejs_compat` | Common utilities |
| `node:zlib` | Yes | `nodejs_compat` | Compression |
| `node:fs` | NO | - | Use R2 instead |
| `node:net` | NO | - | Use TCP sockets (beta) |
| `node:child_process` | NO | - | Not available |
| `node:cluster` | NO | - | Edge inherently distributed |
| `node:http` | NO | - | Use fetch() |
| `node:https` | NO | - | Use fetch() |
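As a sketch of the supported rows above: with `nodejs_compat` enabled, a Worker can import these modules via the `node:` prefix, and the same code runs unchanged under plain Node.js:

```typescript
import { Buffer } from "node:buffer";

// Base64 round-trip using the Node Buffer API that
// nodejs_compat exposes inside Workers.
function toBase64(text: string): string {
  return Buffer.from(text, "utf8").toString("base64");
}

function fromBase64(encoded: string): string {
  return Buffer.from(encoded, "base64").toString("utf8");
}

console.log(toBase64("hello")); // → aGVsbG8=
```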
| Library | Works? | Alternative | Notes |
|---|---|---|---|
| `express` | NO | Hono, itty-router | Hono is recommended |
| `axios` | Partial | `fetch()` | Native fetch preferred |
| `lodash` | Yes | - | Works, but increases bundle size |
| `moment` | Yes | dayjs, date-fns | Moment is heavy |
| `uuid` | Yes | `crypto.randomUUID()` | Native is better |
| `bcrypt` | NO | bcryptjs | Pure-JS version works |
| `sharp` | NO | Cloudflare Images | Use the Images API |
| `puppeteer` | NO | Browser Rendering API | Cloudflare has a native option |
| `pg` | NO | Hyperdrive | Use Hyperdrive for Postgres |
| `mysql2` | NO | Hyperdrive | Use Hyperdrive for MySQL |
| `mongodb` | Partial | - | Use fetch to the Atlas API |
| `redis` | NO | KV, Durable Objects | Use native services |
| `aws-sdk` | Partial | R2 S3 API | R2 is S3-compatible |
| `@prisma/client` | Yes | - | D1 adapter available |
| `drizzle-orm` | Yes | - | Recommended for D1 |
When using Node.js APIs, add to wrangler config:
```jsonc
{
  "compatibility_flags": ["nodejs_compat_v2"] // Recommended: latest Node.js compat
  // For legacy projects: "compatibility_flags": ["nodejs_compat"]
}
```
When reviewing proposed architecture or dependencies:
1. Scan package.json for known incompatible libraries
2. Flag any `require('fs')` or `require('net')` patterns
3. Check for `node:` imports without compat flags
4. Suggest Cloudflare-native alternatives:
- fs → R2
- net/http → fetch()
- express → Hono
- redis → KV/DO
- postgres → Hyperdrive
- image processing → Images API
5. Verify wrangler.toml has appropriate compat flags
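Step 2's pattern scan can be sketched as a small helper. The regexes below are illustrative, not a complete linter; the module list mirrors the unsupported rows in the table above:

```typescript
// Flag obviously incompatible import patterns in source text.
const UNSUPPORTED = ["fs", "net", "child_process", "cluster", "http", "https"];

function findIncompatibleImports(source: string): string[] {
  const hits: string[] = [];
  for (const mod of UNSUPPORTED) {
    // Match require('fs'), require('node:fs'), and ESM `from 'node:fs'` forms.
    const pattern = new RegExp(
      `require\\(['"](?:node:)?${mod}['"]\\)|from ['"](?:node:)?${mod}['"]`,
    );
    if (pattern.test(source)) hits.push(mod);
  }
  return hits;
}

console.log(findIncompatibleImports(`const fs = require('fs');`)); // → [ 'fs' ]
```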
| Limit | Free | Standard | Unbound |
|---|---|---|---|
| Request timeout | 10ms CPU | 30s wall | 30s wall |
| Memory | 128MB | 128MB | 128MB |
| Bundle size | 1MB | 10MB | 10MB |
| Subrequests | 50 | 1000 | 1000 |
| Environment vars | 64 | 128 | 128 |
| Cron triggers | 3 | 5 | 5 |
Before (Node.js pattern):
```javascript
const express = require('express');
const fs = require('fs');
const Redis = require('ioredis');

const app = express();
const redis = new Redis();

app.get('/file/:id', async (req, res) => {
  const content = fs.readFileSync(`./uploads/${req.params.id}`);
  await redis.set(`cache:${req.params.id}`, content);
  res.send(content);
});
```
After (Edge-Native):
```typescript
import { Hono } from 'hono';

const app = new Hono<{ Bindings: Env }>();

app.get('/file/:id', async (c) => {
  const obj = await c.env.R2_BUCKET.get(c.req.param('id'));
  if (!obj) return c.notFound();
  // Read the body once: R2 object bodies are streams and can only be consumed once
  const content = await obj.text();
  // Cache in KV for fast reads
  await c.env.KV_CACHE.put(`file:${c.req.param('id')}`, content, { expirationTtl: 3600 });
  return c.text(content);
});

export default app;
```
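The write path above pairs with a read path that checks KV before touching R2. A sketch with in-memory stand-ins for the bindings (interfaces deliberately simplified from the real KV/R2 types):

```typescript
// Simplified stand-ins for the KV and R2 bindings, just enough
// to show the cache-aside read path.
interface KVLike { get(k: string): Promise<string | null>; put(k: string, v: string): Promise<void>; }
interface R2Like { get(k: string): Promise<{ text(): Promise<string> } | null>; }

async function readFile(kv: KVLike, r2: R2Like, id: string): Promise<string | null> {
  const cached = await kv.get(`file:${id}`);
  if (cached !== null) return cached;      // KV hit: skip R2 entirely
  const obj = await r2.get(id);
  if (!obj) return null;
  const content = await obj.text();
  await kv.put(`file:${id}`, content);     // populate cache for the next read
  return content;
}

// In-memory demo
const kvStore = new Map<string, string>();
const kv: KVLike = {
  get: async (k) => kvStore.get(k) ?? null,
  put: async (k, v) => { kvStore.set(k, v); },
};
const r2: R2Like = {
  get: async (k) => (k === "a" ? { text: async () => "contents" } : null),
};
readFile(kv, r2, "a").then(console.log); // → contents
```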
Use Case: REST/GraphQL API with database backend
```mermaid
graph LR
    subgraph "Edge"
        W[Worker<br/>Hono Router]
    end
    subgraph "Storage"
        D1[(D1<br/>Primary DB)]
        KV[(KV<br/>Cache)]
    end
    subgraph "Auth"
        Access[CF Access]
    end
    Client --> Access --> W
    W --> KV
    KV -.->|miss| D1
    W --> D1
```
Wrangler Config:
```jsonc
{
  "name": "api-gateway",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  "placement": { "mode": "smart" },
  "observability": { "logs": { "enabled": true } },
  "d1_databases": [
    { "binding": "DB", "database_name": "api-db", "database_id": "..." }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "..." }
  ],
  "routes": [
    { "pattern": "api.example.com/*", "zone_name": "example.com" }
  ]
}
```
Use Case: Ingest events, process async, store results
```mermaid
graph LR
    subgraph "Ingest"
        I[Ingest Worker]
    end
    subgraph "Processing"
        Q1[Queue]
        P[Processor]
        DLQ[Dead Letter]
    end
    subgraph "Storage"
        D1[(D1)]
        R2[(R2<br/>Raw Data)]
        AE[Analytics Engine]
    end
    Client --> I
    I --> Q1 --> P
    P --> D1
    P --> R2
    P --> AE
    P -.->|failed| DLQ
```
Wrangler Config:
```jsonc
{
  "name": "event-pipeline",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  "observability": { "logs": { "enabled": true } },
  "d1_databases": [
    { "binding": "DB", "database_name": "events-db", "database_id": "..." }
  ],
  "r2_buckets": [
    { "binding": "RAW_DATA", "bucket_name": "events-raw" }
  ],
  "analytics_engine_datasets": [
    { "binding": "METRICS", "dataset": "event_metrics" }
  ],
  "queues": {
    "producers": [
      { "binding": "EVENTS_QUEUE", "queue": "events" }
    ],
    "consumers": [
      {
        "queue": "events",
        "max_batch_size": 100,
        "max_retries": 1,
        "dead_letter_queue": "events-dlq",
        "max_concurrency": 10
      }
    ]
  }
}
```
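A consumer for this pipeline acks successful messages and retries failures, so only failed messages are redelivered (and, after `max_retries`, land in the DLQ). A sketch with a deliberately simplified message shape (the real `MessageBatch` types come from `@cloudflare/workers-types`):

```typescript
// Simplified message shape; the real one comes from workers-types.
interface Msg<T> { body: T; ack(): void; retry(): void; }

// Process a batch, acking successes and retrying failures so only
// failed messages go back onto the queue.
async function processBatch(
  batch: Msg<{ id: string }>[],
  handle: (body: { id: string }) => Promise<void>,
): Promise<{ ok: number; retried: number }> {
  let ok = 0, retried = 0;
  for (const msg of batch) {
    try {
      await handle(msg.body);
      msg.ack();
      ok++;
    } catch {
      msg.retry();   // redeliver; DLQ after max_retries
      retried++;
    }
  }
  return { ok, retried };
}
```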
Use Case: LLM-powered application with RAG
```mermaid
graph LR
    subgraph "Edge"
        W[Worker]
    end
    subgraph "AI"
        GW[AI Gateway]
        WAI[Workers AI]
    end
    subgraph "Storage"
        V[(Vectorize)]
        KV[(KV<br/>Prompt Cache)]
        D1[(D1<br/>Conversations)]
    end
    Client --> W
    W --> KV
    W --> V
    W --> GW --> WAI
    W --> D1
```
Wrangler Config:
```jsonc
{
  "name": "ai-app",
  "main": "src/index.ts",
  "compatibility_date": "2025-01-01",
  "placement": { "mode": "smart" },
  "observability": { "logs": { "enabled": true } },
  "ai": { "binding": "AI" },
  "vectorize": [
    { "binding": "VECTORS", "index_name": "knowledge-base" }
  ],
  "kv_namespaces": [
    { "binding": "PROMPT_CACHE", "id": "..." }
  ],
  "d1_databases": [
    { "binding": "DB", "database_name": "conversations", "database_id": "..." }
  ],
  "vars": {
    "AI_GATEWAY_SLUG": "ai-app-gateway"
  }
}
```
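Caching LLM responses in the `PROMPT_CACHE` binding needs a stable key. One sketch hashes model + prompt with SHA-256 via `node:crypto`, which works in Workers with `nodejs_compat` and in plain Node.js (the key prefix is an arbitrary convention):

```typescript
import { createHash } from "node:crypto";

// Deterministic KV key for a cached LLM response: identical
// model + prompt pairs always map to the same key.
function promptCacheKey(model: string, prompt: string): string {
  const digest = createHash("sha256")
    .update(`${model}\u0000${prompt}`)   // NUL separator avoids ambiguity
    .digest("hex");
  return `prompt:${digest}`;
}

console.log(promptCacheKey("llama-3", "hi") === promptCacheKey("llama-3", "hi")); // → true
```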
Use Case: Marketing site with API endpoints
```mermaid
graph LR
    subgraph "Static"
        Assets[R2<br/>Static Assets]
    end
    subgraph "Dynamic"
        W[Worker<br/>API Routes]
        D1[(D1)]
    end
    Client --> Assets
    Client --> W --> D1
```
Wrangler Config:
```jsonc
{
  "name": "marketing-site",
  "main": "src/worker.ts",
  "compatibility_date": "2025-01-01",
  "assets": {
    "directory": "./dist",
    "binding": "ASSETS"
  },
  "d1_databases": [
    { "binding": "DB", "database_name": "site-db", "database_id": "..." }
  ],
  "routes": [
    { "pattern": "example.com/*", "zone_name": "example.com" }
  ]
}
```
Design Workflow:
1. Gather requirements (ask about workload, data, and latency needs)
2. Select the appropriate service for each requirement
3. Create an initial Mermaid diagram of services and data flow
4. Generate a wrangler.jsonc with the required bindings
5. Calculate monthly costs using the pricing tables above
6. Verify Edge-Native Constraints and service limits before finalizing
Basic Worker with storage:

```mermaid
graph LR
    A[Client] --> B[Worker]
    B --> C[(D1)]
    B --> D[(KV)]
```

Queue pipeline with DLQ:

```mermaid
graph LR
    A[Producer] --> B[Queue]
    B --> C[Consumer]
    C --> D[(Storage)]
    C -.->|failed| E[DLQ]
```

Service Bindings (RPC):

```mermaid
graph LR
    A[Gateway Worker] -->|RPC| B[Auth Worker]
    A -->|RPC| C[Data Worker]
    B --> D[(KV)]
    C --> E[(D1)]
```

Multi-region with read replication:

```mermaid
graph TB
    subgraph "Region A"
        WA[Worker A]
        DA[(D1 Primary)]
    end
    subgraph "Region B"
        WB[Worker B]
        DB[(D1 Replica)]
    end
    WA --> DA
    WB --> DB
    DA -.->|sync| DB
```
When designing an architecture, provide:
1. A Mermaid diagram of the selected services
2. The generated wrangler config
3. A monthly cost estimate
4. Edge-Native Constraint validation results
| Anti-Pattern | Problem | Solution |
|---|---|---|
| HTTP between Workers | 1000 subrequest limit | Service Bindings RPC |
| D1 as queue | Expensive, no guarantees | Use Queues |
| KV for large files | 25MB limit, expensive | Use R2 |
| Polling for events | Wasteful, slow | Queues or WebSocket |
| Per-request AI calls | Expensive, slow | Cache with KV |
| No DLQ | Lost messages | Always add DLQ |