Interactive Cloudflare architecture design wizard with Mermaid diagrams and wrangler.toml generation
Design production-ready Cloudflare architectures with proper service selection, configuration generation, and visual documentation.

Install:

- `/plugin marketplace add littlebearapps/cloudflare-engineer`
- `/plugin install cloudflare-engineer@littlebearapps-cloudflare`

Usage: `/cf-design [use-case] [--template=api|pipeline|ai|static]`
Arguments: "$ARGUMENTS"
Examples:

- `/cf-design` - Interactive mode
- `/cf-design api gateway with auth` - Describe your use case
- `/cf-design --template=pipeline` - Start from a template
- `/cf-design migrate express to workers` - Migration assistance
Templates:

- `--template=api` - API Gateway: REST/GraphQL API with D1 database, KV caching, and optional auth.
- `--template=pipeline` - Event Pipeline: Ingest → Queue → Process → Store pattern with DLQ.
- `--template=ai` - AI Application: LLM-powered app with RAG and conversation history.
- `--template=static` - Static Site + Functions: Marketing/docs site with API endpoints.
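To make the API Gateway template concrete, here is a tiny sketch of the kind of bindings it implies; the names are hypothetical illustrations, not the template's actual scaffold (types assume `@cloudflare/workers-types`):

```typescript
// Hypothetical bindings for the API Gateway template (illustrative names only,
// not the template's generated config): D1 as the primary datastore, KV as a
// cache, and a secret for the optional auth layer.
interface ApiGatewayEnv {
  DB: D1Database;      // REST/GraphQL data layer
  CACHE: KVNamespace;  // response / session cache
  API_TOKEN?: string;  // bearer-token secret when auth is enabled
}
```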
For the use case provided, gather:

- Traffic Profile
- Data Requirements
- Processing Needs
- Budget

Based on requirements:

- Select Services
- Design Data Flow
- Security Boundaries
Produce:

Architecture diagram (Mermaid):

```mermaid
graph LR
    Client --> Worker
    Worker --> Storage
```

Wrangler configuration:

```jsonc
{
  "name": "project-name",
  "main": "src/index.ts",
  // ... complete config
}
```

Cost estimate:

| Service | Usage | Monthly Cost |
|---------|-------|--------------|
| Workers | X req/mo | $X.XX |
| D1 | X reads/writes | $X.XX |
| Total | | $X.XX |

Implementation roadmap.
Example output:

# Architecture Design: Event Analytics Platform
## Requirements Summary
- 1M events/day ingestion
- 7-day retention
- Real-time dashboards
- Geographic: Global
## Architecture
```mermaid
graph LR
subgraph "Ingest"
I[Ingest Worker]
end
subgraph "Process"
Q[Queue]
P[Processor]
end
subgraph "Store"
AE[Analytics Engine]
D1[(D1 Aggregates)]
end
subgraph "Query"
API[API Worker]
end
Client --> I --> Q --> P
P --> AE
P --> D1
Dashboard --> API --> D1
Dashboard --> API --> AE
```

## Service Selection

| Component | Service | Justification |
|---|---|---|
| Ingestion | Worker | Low latency edge processing |
| Buffering | Queue | Decouple ingest from processing |
| Raw events | Analytics Engine | Free, handles sampling |
| Aggregates | D1 | Queryable for dashboards |
## Configuration

```json
{
"name": "event-analytics",
"main": "src/index.ts",
"compatibility_date": "2025-01-01",
"placement": { "mode": "smart" },
"observability": { "logs": { "enabled": true } },
"d1_databases": [
{ "binding": "DB", "database_name": "analytics", "database_id": "..." }
],
"analytics_engine_datasets": [
{ "binding": "EVENTS", "dataset": "raw_events" }
],
"queues": {
"producers": [
{ "binding": "EVENT_QUEUE", "queue": "events" }
],
"consumers": [
{
"queue": "events",
"max_batch_size": 100,
"max_retries": 1,
"dead_letter_queue": "events-dlq",
"max_concurrency": 10
}
]
}
}
```
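For orientation, here is a minimal TypeScript sketch of the two Workers this configuration wires together, assuming the `EVENT_QUEUE`, `EVENTS`, and `DB` bindings above; the event shape and the `daily_counts` table are illustrative assumptions, not part of the generated design.

```typescript
// Minimal sketch, not the plugin's generated code: an ingest Worker that enqueues
// events, plus a queue consumer that writes raw events to Analytics Engine and
// bumps aggregates in D1. Binding names (EVENT_QUEUE, EVENTS, DB) come from the
// config above; the event shape and the `daily_counts` table are assumptions.
// Types assume @cloudflare/workers-types.
interface AnalyticsEvent {
  name: string;
  userId: string;
  value: number;
  ts: number; // epoch milliseconds
}

export interface Env {
  EVENT_QUEUE: Queue<AnalyticsEvent>; // producer binding
  EVENTS: AnalyticsEngineDataset;     // raw events
  DB: D1Database;                     // queryable aggregates
}

export default {
  // Ingest Worker: accept an event and hand it to the queue without blocking.
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST only", { status: 405 });
    }
    const event = (await request.json()) as AnalyticsEvent;
    await env.EVENT_QUEUE.send(event);
    return new Response(null, { status: 202 });
  },

  // Processor: consume batches, store raw events, and upsert daily aggregates.
  async queue(batch: MessageBatch<AnalyticsEvent>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      const e = msg.body;
      env.EVENTS.writeDataPoint({
        blobs: [e.name, e.userId],
        doubles: [e.value],
        indexes: [e.name],
      });
      await env.DB
        .prepare(
          "INSERT INTO daily_counts (day, event, count) " +
            "VALUES (date(?, 'unixepoch'), ?, 1) " +
            "ON CONFLICT (day, event) DO UPDATE SET count = count + 1"
        )
        .bind(Math.floor(e.ts / 1000), e.name)
        .run();
      msg.ack(); // unacked messages retry and eventually land in events-dlq
    }
  },
} satisfies ExportedHandler<Env, AnalyticsEvent>;
```

Returning 202 from the ingest path and doing the D1 work in the consumer is what the Queue row in the service table is buying: the edge handler stays fast while the consumer absorbs retries and bursts.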
## Cost Estimate

| Service | Calculation | Monthly |
|---|---|---|
| Workers | 30M req × $0.30/M | $9.00 |
| Queues | 30M msg × $0.40/M | $12.00 |
| D1 Writes | 1M × $1.00/M | $1.00 |
| D1 Reads | 10M × $0.25/B | $0.00 |
| Analytics Engine | Free | $0.00 |
| Total | | $22.00 |
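As a back-of-envelope check on the arithmetic above, here is an illustrative helper using the unit prices quoted in the table; free tiers, included quotas, D1 read costs, and Analytics Engine (free) are not modeled.

```typescript
// Illustrative cost check only; unit prices are the ones quoted in the table above.
const PRICE_PER_MILLION = {
  workerRequests: 0.3,
  queueMessages: 0.4,
  d1RowWrites: 1.0,
};

function estimateMonthlyUSD(eventsPerDay: number, aggregateWritesPerMonth: number): number {
  const requestsPerMonth = eventsPerDay * 30; // 1M/day -> 30M/mo
  const workers = (requestsPerMonth / 1e6) * PRICE_PER_MILLION.workerRequests; // $9.00
  const queues = (requestsPerMonth / 1e6) * PRICE_PER_MILLION.queueMessages;   // $12.00
  const d1 = (aggregateWritesPerMonth / 1e6) * PRICE_PER_MILLION.d1RowWrites;  // $1.00
  return workers + queues + d1;
}

// estimateMonthlyUSD(1_000_000, 1_000_000) ≈ 22, matching the $22.00 total above.
```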
## Tips
- Start with a template for common patterns
- Always include DLQ for production queues
- Use Analytics Engine for metrics (it's free)
- Enable Smart Placement for global latency
- Plan indexes before implementing D1 queries
## MCP Tools Used
- `mcp__cloudflare-docs__search_cloudflare_documentation` - Best practices
- `mcp__cloudflare-bindings__workers_list` - Existing workers
- `mcp__cloudflare-bindings__d1_databases_list` - Existing D1 databases
- `mcp__cloudflare-bindings__kv_namespaces_list` - Existing KV namespaces
- `mcp__cloudflare-bindings__r2_buckets_list` - Existing R2 buckets