Migrate applications from other platforms to Cloudflare Workers with comprehensive analysis and validation
Migrate applications from platforms like Heroku, AWS Lambda, or Vercel to Cloudflare Workers. This command analyzes your codebase, identifies compatibility issues, transforms code for the Workers runtime, and guides you through a safe, step-by-step migration with validation.
/plugin marketplace add hirefrank/hirefrank-marketplace
/plugin install edge-stack@hirefrank-marketplace

<command_purpose> Migrate applications from other platforms (Heroku, AWS Lambda, Vercel Functions, etc.) to Cloudflare Workers with comprehensive analysis, code transformation, and multi-agent validation. </command_purpose>
<role>Platform Migration Specialist with expertise in Cloudflare Workers, runtime compatibility, and multi-cloud architecture patterns</role>
This command analyzes your existing application, identifies migration challenges, transforms code for Workers compatibility, and guides you through a safe migration to Cloudflare's edge network.
<migration_source> #$ARGUMENTS </migration_source>
Supported platforms:
Target: Cloudflare Workers (V8 runtime)
Automatic detection via files:
```bash
# Check for platform-specific files
if [ -f "Procfile" ]; then
  echo "Detected: Heroku"
  PLATFORM="heroku"
elif [ -f "vercel.json" ]; then
  echo "Detected: Vercel"
  PLATFORM="vercel"
elif [ -f "netlify.toml" ]; then
  echo "Detected: Netlify"
  PLATFORM="netlify"
elif [ -d ".aws-sam" ] || grep -q "AWS::Serverless" template.yaml 2>/dev/null; then
  echo "Detected: AWS Lambda"
  PLATFORM="aws-lambda"
elif [ -f "function.json" ]; then
  echo "Detected: Azure Functions"
  PLATFORM="azure"
elif [ -f "cloudbuild.yaml" ]; then
  echo "Detected: Google Cloud Functions"
  PLATFORM="gcp"
else
  echo "Platform: Generic Node.js/Python/Go application"
  PLATFORM="generic"
fi
```
Discovery tasks (run in parallel):

**1. List all endpoints/routes**

```bash
# For Express apps
grep -r "app\.\(get\|post\|put\|delete\|patch\)" src/

# For serverless functions
find . -name "*.handler.js" -o -path "*/api/*.ts"
```

**2. Identify runtime dependencies**

```bash
# Node.js
jq '.dependencies + .devDependencies' package.json

# Python
cat requirements.txt

# Go
cat go.mod
```

**3. Find environment variables**

```bash
# Check for .env files
cat .env.example .env 2>/dev/null | grep -v '^#' | cut -d= -f1

# Check code for process.env usage
grep -r "process\.env\." src/ --include="*.js" --include="*.ts"
```

**4. Detect database/storage usage**

```bash
# Database clients
grep -r "new.*Client\|createConnection\|mongoose\.connect" src/

# Storage SDKs
grep -r "S3Client\|Storage\|GridFS" src/
```
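Every `process.env` usage found by the discovery step above will map to the `env` parameter that Workers pass to each handler. A minimal illustrative sketch, invoked directly for demonstration (`SERVICE_NAME` is a hypothetical variable, not part of any Workers API):

```javascript
// Hedged sketch: Workers receive configuration through the handler's `env`
// argument instead of process.env. In a real Worker this object is the
// module's default export.
const worker = {
  async fetch(request, env, ctx) {
    // In Node this would have been process.env.SERVICE_NAME
    return new Response(`service=${env.SERVICE_NAME}`);
  },
};
```

In a deployed Worker, `env` is populated from `wrangler.toml` vars, bindings, and secrets stored with `wrangler secret put`.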
## Migration Assessment Report
**Source Platform**: [Platform detected]
**Application Type**: [Web app / API / Background jobs]
**Primary Language**: [Node.js / Python / Go]
### Application Inventory
**Endpoints Discovered**: [X] routes
- GET: [count]
- POST: [count]
- PUT/PATCH: [count]
- DELETE: [count]
**Dependencies**: [Y] packages
- Compatible with Workers: [count] ✅
- Require replacement: [count] ⚠️
- Incompatible: [count] ❌
**Environment Variables**: [Z] variables
- Public config: [count]
- Secrets: [count]
**Data Storage**:
- Databases: [PostgreSQL / MySQL / MongoDB / etc.]
- File storage: [S3 / Local files / etc.]
- Caching: [Redis / Memcached / etc.]
### Migration Complexity
**Estimated Effort**: [Small / Medium / Large]
**Risk Level**: [Low / Medium / High]
**Complexity Factors**:
- [ ] Node.js-specific APIs (fs, process, Buffer) - [count] instances
- [ ] Long-running operations (> 30 seconds)
- [ ] Stateful operations (in-memory sessions)
- [ ] Large dependencies (> 50KB bundles)
- [ ] WebSocket connections (need Durable Objects)
- [ ] Database schema changes required
- [ ] Custom middleware/plugins
### Migration Strategy Recommendation
[Detailed strategy based on analysis]
Task binding-context-analyzer(migration source)
Task cloudflare-architecture-strategist(migration source)
Task repo-research-analyst(migration source)
Task workers-runtime-guardian(current codebase)
Task cloudflare-pattern-specialist(current codebase)
Task edge-performance-oracle(current codebase)
Task cloudflare-data-guardian(current codebase)
Task cloudflare-security-sentinel(current codebase)
Task durable-objects-architect(current codebase)
Task workers-ai-specialist(current codebase) (if AI features detected)
<critical_requirement> Present complete migration plan for user approval before starting any code changes. </critical_requirement>
## Cloudflare Workers Migration Plan
**Estimated Timeline**: [X weeks/days]
**Risk Level**: [Low/Medium/High]
### Phase 1: Infrastructure Setup (Day 1-2)
**Tasks**:
1. Create wrangler.toml configuration
- Worker name: [name]
- Account ID: [from wrangler whoami]
- Compatibility date: [latest]
2. Set up Cloudflare bindings:
- [ ] KV namespaces: [list]
- [ ] R2 buckets: [list]
- [ ] D1 databases: [list]
- [ ] Durable Objects: [list]
3. Configure secrets:
```bash
wrangler secret put DATABASE_URL
wrangler secret put API_KEY
# [etc.]
```

**Validation**: `wrangler whoami` and `wrangler dev` start successfully.
### Phase 2: Code Transformation

#### 2.1 Runtime Compatibility

**Critical transformations (MUST DO)**:
| Current Code | Workers Replacement | Effort |
|---|---|---|
| `fs.readFileSync()` | Store in KV/R2, fetch at runtime | Medium |
| `process.env.VAR` | `env.VAR` (from handler) | Small |
| `Buffer.from()` | `TextEncoder`/`TextDecoder` or native `Uint8Array` | Small |
| `crypto` (Node.js) | Web Crypto API | Medium |
| `setTimeout` (long) | Durable Objects Alarms | Large |
| Express middleware | Hono framework | Medium |
Example transformations:
```typescript
// ❌ OLD (Node.js / Express)
import express from 'express';
import fs from 'fs';

const app = express();

app.get('/data', (req, res) => {
  const data = fs.readFileSync('./data.json', 'utf-8');
  res.json(JSON.parse(data));
});

app.listen(3000);
```

```typescript
// ✅ NEW (Cloudflare Workers + Hono)
import { Hono } from 'hono';

const app = new Hono();

app.get('/data', async (c) => {
  // Data stored in KV at build time or fetched from R2
  const data = await c.env.DATA_KV.get('data.json', 'json');
  return c.json(data);
});

export default app;
```
#### 2.2 Dependency Replacement
| Heavy Dependency | Workers Alternative | Bundle Size Savings |
|---|---|---|
| axios | `fetch()` (native) | ~12KB → 0KB |
| moment | `Date` or Temporal | ~70KB → 0KB |
| lodash | Native methods or lodash-es (tree-shake) | ~70KB → ~5KB |
| bcrypt | Web Crypto `crypto.subtle` | ~25KB → 0KB |
| jsonwebtoken | jose library (Workers-compatible) | ~15KB → ~8KB |
#### 2.3 Database Migration

**If PostgreSQL/MySQL → D1**:

```bash
# Export existing schema
pg_dump --schema-only mydb > schema.sql

# Create D1 database
wrangler d1 create my-database

# Convert to SQLite-compatible SQL
# (remove PostgreSQL-specific syntax)

# Apply schema to D1
wrangler d1 execute my-database --file=schema.sql

# Migrate data (iterative batches)
# Export: pg_dump --data-only --table=users mydb > users.sql
# Import: wrangler d1 execute my-database --file=users.sql
```

**If MongoDB → KV or D1**:

**If Redis → KV or Durable Objects**:
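The Redis → KV path usually becomes a cache-aside pattern against the KV binding's `get`/`put`, with `expirationTtl` standing in for Redis `SETEX`. A hedged sketch, with an in-memory `MemoryKV` stub (not a real binding) standing in for the `KVNamespace`, and hypothetical `getUserCached`/`loadUser` helpers:

```javascript
// Test stub mimicking the KV API shape used below; a real Worker would
// receive the actual namespace as env.CACHE or similar.
class MemoryKV {
  constructor() { this.store = new Map(); }
  async get(key, type) {
    const value = this.store.get(key);
    if (value === undefined) return null;
    return type === 'json' ? JSON.parse(value) : value;
  }
  async put(key, value, opts = {}) {
    this.store.set(key, value); // real KV would honor opts.expirationTtl
  }
}

// Cache-aside: return the cached copy if present, otherwise load and cache.
async function getUserCached(kv, id, loadUser) {
  const cached = await kv.get(`user:${id}`, 'json');
  if (cached !== null) return cached;
  const user = await loadUser(id);
  await kv.put(`user:${id}`, JSON.stringify(user), { expirationTtl: 300 });
  return user;
}
```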
#### 2.4 Storage Migration

**If S3 → R2**:

```typescript
// ❌ OLD (AWS S3)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'file.txt',
  Body: buffer
}));
```

```typescript
// ✅ NEW (Cloudflare R2)
export default {
  async fetch(request, env) {
    const buffer = await request.arrayBuffer();
    await env.MY_BUCKET.put('file.txt', buffer, {
      httpMetadata: { contentType: 'text/plain' }
    });
    return new Response('Uploaded');
  }
};
```
**Validation Tasks**:
- `wrangler dev` runs locally

#### 3.1 Local Testing

```bash
# Start Workers dev server
wrangler dev

# Test all endpoints
curl http://localhost:8787/api/users
curl -X POST http://localhost:8787/api/users -d '{"name":"test"}'

# Load testing
wrk -t4 -c100 -d30s http://localhost:8787/
```
#### 3.2 Agent Validation
Run all agents on migrated code:
Task workers-runtime-guardian(migrated code)
Task cloudflare-security-sentinel(migrated code)
Task edge-performance-oracle(migrated code)
Task binding-context-analyzer(migrated code)
#### 3.3 Integration Testing

**Validation**: All critical paths tested, no P1 issues

### Phase 4: Staging Deployment

```bash
# Deploy to staging environment
wrangler deploy --env staging

# Smoke tests on staging
curl https://my-worker-staging.workers.dev/health

# Monitor logs
wrangler tail --env staging
```
Validation:
5.1 Read-Only Migration (safe approach):
5.2 Cutover Migration (when confident):
Rollback Plan:
Pre-deployment checklist:
Deployment:
Use the `/es-deploy` command:

```bash
/es-deploy
# Runs all pre-flight checks
# Deploys to production
# Validates deployment
```
Post-deployment:
- Monitor logs with `wrangler tail`

#### 7.1 Performance Tuning
Task edge-caching-optimizer(production metrics)
Task edge-performance-oracle(production metrics)
#### 7.2 Cost Optimization
Task kv-optimization-specialist(KV usage)
Task r2-storage-architect(R2 usage)
#### 7.3 Decommission Old Platform
After 2 weeks of stable production:
Track these KPIs:
| Metric | Old Platform | Target (Workers) | Actual |
|---|---|---|---|
| P95 Response Time | [X]ms | < 50ms | __ |
| P99 Response Time | [Y]ms | < 100ms | __ |
| Error Rate | [Z]% | < 0.1% | __ |
| Monthly Cost | $[A] | < $[A/2] | __ |
| Global Availability | [B] regions | 300+ locations | __ |
| Cold Start | N/A | < 10ms | __ |
✅ Migration considered successful when:
### 4. User Approval & Confirmation
<critical_requirement> MUST get explicit user approval before proceeding with any code changes or deployments. </critical_requirement>
**Present the migration plan and ask**:
🚀 Migration Plan Complete
Summary:
Key transformations required:
Do you want to proceed with this migration plan?
Options:
### 5. Automated Migration Execution
<thinking>
Only execute if user approves. Work through phases systematically.
</thinking>
**If user says "yes"**:
1. **Create migration branch**
```bash
git checkout -b cloudflare-migration
```

**Phase 1: Infrastructure Setup**

Create `wrangler.toml`:

```toml
name = "my-app"
main = "src/index.ts"
compatibility_date = "2025-09-15" # Always 2025-09-15 or later

[[kv_namespaces]]
binding = "CACHE"
id = "..." # User must fill in after creating
remote = true # Connect to real KV during development

[[d1_databases]]
binding = "DB"
database_name = "my-database"
database_id = "..." # From wrangler d1 create
remote = true # Connect to real D1 during development

[[r2_buckets]]
binding = "STORAGE"
bucket_name = "my-bucket"
remote = true # Connect to real R2 during development
```

**Phase 2: Code Transformation**
For each identified incompatibility, present the fix:

⚠️ **Incompatibility #1: `fs.readFileSync`**

Location: `src/utils/config.ts:12`

Current:

```typescript
const config = JSON.parse(fs.readFileSync('./config.json', 'utf-8'));
```

Recommended fix:

```typescript
// Option 1: Store in KV (if dynamic config)
const config = await env.CONFIG_KV.get('config', 'json');

// Option 2: Import at build time (if static config)
import config from './config.json';
```

Apply fix? (yes/skip/custom)
**Phase 3: Testing**

Run automated tests:

```bash
npm run typecheck
npm test
wrangler dev # Uses remote bindings configured in wrangler.toml
# Test all endpoints at http://localhost:8787
```

**Phase 4: Deploy to Staging**

```bash
wrangler deploy --env staging
```
## 🎉 Migration to Cloudflare Workers Complete
**Migration Date**: [timestamp]
**Total Duration**: [X] days
**Status**: ✅ SUCCESS / ⚠️ PARTIAL / ❌ FAILED
### Changes Summary
**Files Modified**: [count]
**Dependencies Replaced**: [count]
- [old-package] → [new-package]
- ...
**Bindings Created**:
- KV: [count] namespaces
- D1: [count] databases
- R2: [count] buckets
- DO: [count] classes
**Code Transformations**:
- Node.js APIs replaced: [count]
- Express → Hono: ✅
- Bundle size: [X]KB → [Y]KB ([-Z]% reduction)
### Performance Comparison
| Metric | Old Platform | Workers | Improvement |
|--------|-------------|---------|-------------|
| P95 Latency | [X]ms | [Y]ms | [Z]% faster |
| Cold Start | N/A | [A]ms | N/A |
| Global Locations | [B] | 300+ | [C]x increase |
### Deployment URLs
**Staging**: https://my-app-staging.workers.dev
**Production**: https://my-app.workers.dev
**Custom Domain**: (configure in Cloudflare dashboard)
### Post-Migration Tasks
**Immediate** (next 24 hours):
- [ ] Monitor error rates (target < 0.1%)
- [ ] Verify all critical endpoints
- [ ] Check database data integrity
- [ ] Validate secret access
**Short-term** (next week):
- [ ] Add Cache API for performance
- [ ] Implement edge caching strategy
- [ ] Configure custom domain
- [ ] Set up Cloudflare Analytics
**Long-term** (next month):
- [ ] Optimize bundle size further
- [ ] Add Durable Objects (if needed)
- [ ] Implement Workers AI features
- [ ] Decommission old platform
### Monitoring
**Logs**:
```bash
wrangler tail --format pretty
```

**Analytics**: https://dash.cloudflare.com → Workers & Pages → [your-worker] → Metrics
Alerts (configure):
If issues detected:

```bash
# List deployments
wrangler deployments list

# Rollback to previous
wrangler rollback [previous-deployment-id]

# Or revert DNS to old platform
# (if DNS already switched)
```
Old Platform: $[X]/month
Cloudflare Workers: $[Y]/month
Savings: $[Z]/month ([P]% reduction)
Breakdown:
Migration Status: [✅ COMPLETE / ⚠️ NEEDS ATTENTION]
Recommended Next Steps:
## Platform-Specific Migration Guides
### Heroku → Workers
**Common patterns**:
- `Procfile` → `wrangler.toml`
- `process.env.PORT` → Not needed (Workers handle HTTP automatically)
- Postgres addon → D1 or external Postgres via Hyperdrive
- Redis addon → KV or Durable Objects
- Heroku Scheduler → Cron Triggers
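The Heroku Scheduler → Cron Triggers mapping above can be sketched as a `scheduled()` handler. The job logic and the `env.JOBS` store are hypothetical names for illustration; in a real Worker this object is the module's default export and the schedule lives in `wrangler.toml` under `[triggers] crons = ["0 * * * *"]`:

```javascript
// Hedged sketch: a scheduled job that prunes expired entries. In a real
// Worker, env.JOBS would more likely be a D1 or KV binding than an array.
async function cleanupExpired(store, now) {
  const keep = store.filter((job) => job.expiresAt >= now);
  const removed = store.length - keep.length;
  store.length = 0;
  store.push(...keep);
  return removed;
}

const worker = {
  // Invoked by the Workers runtime on each cron tick
  async scheduled(event, env, ctx) {
    ctx.waitUntil(cleanupExpired(env.JOBS, event.scheduledTime));
  },
};
```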
**Example**:
```bash
# Heroku Procfile
web: node server.js

# Workers (no equivalent needed)
# HTTP handled by Workers runtime
```
### AWS Lambda → Workers

**Common patterns**:
- `handler(event, context)` → `fetch(request, env, ctx)`

**Example**:
```typescript
// AWS Lambda handler
export const handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello' })
  };
};
```

```typescript
// Workers handler
export default {
  async fetch(request, env, ctx) {
    return new Response(JSON.stringify({ message: 'Hello' }), {
      headers: { 'content-type': 'application/json' }
    });
  }
};
```
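For incremental migrations, an adapter can wrap an existing Lambda-style handler in a `fetch()` handler as a transitional shim, so route logic moves over before being rewritten. A hedged sketch; `lambdaToFetch` is a hypothetical helper and the `event` shape is simplified relative to real API Gateway events:

```javascript
// Hedged sketch: adapt a (event, context) => { statusCode, body } handler
// to the Workers fetch(request, env, ctx) signature.
function lambdaToFetch(handler) {
  return async (request, env, ctx) => {
    const url = new URL(request.url);
    const event = {
      httpMethod: request.method,
      path: url.pathname,
      queryStringParameters: Object.fromEntries(url.searchParams),
      body: request.method === 'GET' ? null : await request.text(),
    };
    const result = await handler(event, { env });
    return new Response(result.body, {
      status: result.statusCode,
      headers: result.headers ?? { 'content-type': 'application/json' },
    });
  };
}
```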
### Vercel → Workers

**Common patterns**:
- `api/*.ts` → Single Worker with Hono routing

**Example**:
```typescript
// Vercel Function (api/hello.ts)
export default function handler(req, res) {
  res.status(200).json({ message: 'Hello' });
}
```

```typescript
// Workers + Hono
import { Hono } from 'hono';

const app = new Hono();

app.get('/api/hello', (c) => {
  return c.json({ message: 'Hello' });
});

export default app;
```
## Troubleshooting

**Issue**: "Error: Cannot find module 'fs'"
**Solution**: Replace with KV/R2 or bundle the file at build time

```typescript
// ❌ Runtime file read
const data = fs.readFileSync('./data.json');

// ✅ Build-time import
import data from './data.json';

// ✅ Runtime from KV
const data = await env.DATA_KV.get('data', 'json');
```
**Issue**: "Error: Buffer is not defined"
**Solution**: Use `TextEncoder`/`TextDecoder` or `Uint8Array`

```typescript
// ❌ Node.js Buffer
const buf = Buffer.from('hello', 'utf-8');

// ✅ Web APIs
const encoder = new TextEncoder();
const buf = encoder.encode('hello');
```
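Base64 conversion is another common `Buffer` use that needs replacing; `btoa`/`atob` plus `TextEncoder`/`TextDecoder` cover it. A hedged sketch (helper names are illustrative; for large binary payloads you would chunk the `String.fromCharCode` step):

```javascript
// Hedged sketch: Buffer.from(s).toString('base64') and back, using only
// web-platform APIs available in the Workers runtime.
function base64Encode(text) {
  const bytes = new TextEncoder().encode(text);
  let binary = '';
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

function base64Decode(b64) {
  const binary = atob(b64);
  const bytes = Uint8Array.from(binary, (ch) => ch.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}
```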
**Issue**: "Error: Worker exceeded CPU time limit"
**Solution**: Optimize heavy operations or use Durable Objects

**Issue**: "Error: D1 database not found"
**Solution**: Verify binding name and database ID in wrangler.toml
```bash
# Create D1 database
wrangler d1 create my-database
```

Add to `wrangler.toml` with the exact ID from the output:

```toml
[[d1_databases]]
binding = "DB" # Must match env.DB in code
database_id = "..." # From create command
```
**Issue**: Bundle size too large (> 1MB)
**Solution**: Replace heavy dependencies with Workers-native alternatives (see 2.2 Dependency Replacement)

Best practices:

- Test with `wrangler dev` extensively before deploying
- Store secrets with `wrangler secret put`

Track these to measure migration success:
Performance:
Reliability:
Cost:
Developer Experience:
Remember: Cloudflare Workers is a different runtime. Take time to learn the platform, use the specialized agents, and don't hesitate to ask for help with complex migrations.