From cloudflare-workers
Optimizes Cloudflare Workers performance via CPU profiling, memory reduction, caching, streaming, and bundle minimization. Use it when facing high latency, cold starts, CPU/memory limits, or timeouts.

```bash
npx claudepluginhub secondsky/claude-skills --plugin cloudflare-workers
```

This skill uses the workspace's default tool permissions.
Techniques for maximizing Worker performance and minimizing latency.
```ts
// 1. Avoid unnecessary cloning
// ❌ Bad: clones the entire request body
const body = await request.clone().json();

// ✅ Good: parse directly when the body is not reused
const body = await request.json();

// 2. Use streaming instead of buffering
// ❌ Bad: buffers the entire response in memory
const text = await response.text();
return new Response(transform(text));

// ✅ Good: transform the body as it streams through
return new Response(response.body.pipeThrough(new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(process(chunk));
  }
})));

// 3. Cache expensive operations
const cache = caches.default;
const cached = await cache.match(request);
if (cached) return cached;
```
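The snippet above only covers the read side of the Cache API. A minimal sketch of the write side, assuming a handler with access to the `ctx` execution context (`computeExpensiveResponse` is a hypothetical placeholder); `ctx.waitUntil` lets the cache write complete after the response has been returned:

```ts
async function handleWithCache(request: Request, ctx: ExecutionContext): Promise<Response> {
  const cache = caches.default;
  const cached = await cache.match(request);
  if (cached) return cached;

  const response = await computeExpensiveResponse(request); // hypothetical

  // Response bodies are one-shot streams, so store a clone and return
  // the original to the client.
  ctx.waitUntil(cache.put(request, response.clone()));
  return response;
}
```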
| Error | Symptom | Fix |
|---|---|---|
| CPU limit exceeded | Worker terminated | Optimize hot paths, use streaming |
| Cold start latency | First request slow | Reduce bundle size, avoid top-level await |
| Memory pressure | Slow GC, timeouts | Stream data, avoid large arrays |
| KV latency | Slow reads | Use Cache API, batch reads |
| D1 slow queries | High latency | Add indexes, optimize SQL |
| Large bundles | Slow cold starts | Tree-shake, code split |
| Blocking operations | Request timeouts | Use Promise.all (sketch below), streaming |
| Unnecessary cloning | Memory spike | Only clone when needed |
| Missing cache | Repeated computation | Implement caching layer |
| Sync operations | CPU spikes | Use async alternatives |
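For the "Blocking operations" row, the usual fix is to start independent awaits in parallel rather than one after another. A minimal sketch (the URLs are placeholders):

```ts
// ❌ Bad: sequential awaits serialize two independent requests
const users = await fetch('https://api.example.com/users');
const posts = await fetch('https://api.example.com/posts');

// ✅ Good: run independent I/O concurrently
const [usersRes, postsRes] = await Promise.all([
  fetch('https://api.example.com/users'),
  fetch('https://api.example.com/posts'),
]);
```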
```ts
async function profiledHandler(request: Request): Promise<Response> {
  const timing: Record<string, number> = {};

  const time = async <T>(name: string, fn: () => Promise<T>): Promise<T> => {
    const start = Date.now();
    const result = await fn();
    timing[name] = Date.now() - start;
    return result;
  };

  const data = await time('fetch', () => fetchData());
  const processed = await time('process', () => processData(data));
  const response = await time('serialize', () => serialize(processed));

  console.log('Timing:', timing);
  return new Response(response);
}
```
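Note that Workers freeze `Date.now()` during CPU-bound execution (time only advances across I/O), so these measurements mostly capture awaited I/O rather than raw compute. To see the numbers without tailing logs, the final lines of `profiledHandler` could instead expose them via the standard `Server-Timing` response header, a sketch:

```ts
// Build a Server-Timing header from the collected measurements so they
// show up in browser devtools, e.g. "fetch;dur=12, process;dur=3".
const serverTiming = Object.entries(timing)
  .map(([name, ms]) => `${name};dur=${ms}`)
  .join(', ');

return new Response(response, {
  headers: { 'Server-Timing': serverTiming },
});
```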
```ts
// For large JSON, use a streaming parser instead of buffering with .json()
import { JSONParser } from '@streamparser/json';

async function parseStreamingJSON(stream: ReadableStream<Uint8Array>): Promise<unknown[]> {
  const parser = new JSONParser();
  const results: unknown[] = [];

  // onValue fires for every parsed value, nested ones included; keep only
  // direct children of the root (stack depth 1) to avoid duplicates.
  parser.onValue = ({ value, stack }) => {
    if (stack.length === 1) results.push(value);
  };

  for await (const chunk of stream) {
    parser.write(chunk);
  }
  return results;
}
```
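Usage sketch, feeding a fetched body straight into the parser (the URL is a placeholder):

```ts
const upstream = await fetch('https://api.example.com/large.json');
if (!upstream.body) throw new Error('empty body');
const items = await parseStreamingJSON(upstream.body);
```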
```ts
// ❌ Bad: loads every row into memory at once
const items = await db.prepare('SELECT * FROM items').all();
const processed = items.results.map(transform);

// ✅ Good: process in batches
async function* batchProcess(db: D1Database, batchSize = 100) {
  let offset = 0;
  while (true) {
    const { results } = await db
      .prepare('SELECT * FROM items LIMIT ? OFFSET ?')
      .bind(batchSize, offset)
      .all();
    if (results.length === 0) break;
    for (const item of results) {
      yield transform(item);
    }
    offset += batchSize;
  }
}
```
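One way to consume the generator is to stream each transformed row out as NDJSON, so memory stays flat regardless of table size. A sketch, assuming the D1 database is bound as `env.DB` (binding name is an assumption):

```ts
async function streamItems(env: { DB: D1Database }): Promise<Response> {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      // Pull batches from D1 and emit one JSON line per transformed row
      for await (const item of batchProcess(env.DB)) {
        controller.enqueue(encoder.encode(JSON.stringify(item) + '\n'));
      }
      controller.close();
    }
  });
  return new Response(stream, {
    headers: { 'Content-Type': 'application/x-ndjson' }
  });
}
```

Note that `LIMIT ? OFFSET ?` rescans the skipped rows on every page; for very large tables, keyset pagination (`WHERE id > ?` ordered by `id`) avoids that cost.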
```ts
interface CacheLayer {
  get(key: string): Promise<unknown | null>;
  set(key: string, value: unknown, ttl?: number): Promise<void>;
}

// Layer 1: in-memory (isolate-scoped: a module-level Map survives across
// requests served by the same isolate and is lost on eviction)
const memoryCache = new Map<string, unknown>();

// Layer 2: Cache API (edge-local)
const edgeCache: CacheLayer = {
  async get(key) {
    const response = await caches.default.match(new Request(`https://cache/${key}`));
    return response ? response.json() : null;
  },
  async set(key, value, ttl = 60) {
    await caches.default.put(
      new Request(`https://cache/${key}`),
      new Response(JSON.stringify(value), {
        headers: { 'Cache-Control': `max-age=${ttl}` }
      })
    );
  }
};

// Layer 3: KV (global) — use env.KV.get/put, see the sketch below
```
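A minimal sketch of the KV layer, assuming a KV namespace bound as `env.KV` (the binding name and the `kvCache` helper name are ours):

```ts
// Layer 3 as a CacheLayer backed by Workers KV: globally replicated and
// eventually consistent, so suited to read-heavy, slowly changing data.
function kvCache(kv: KVNamespace): CacheLayer {
  return {
    async get(key) {
      // cacheTtl keeps hot keys cached near this location between reads
      return kv.get(key, { type: 'json', cacheTtl: 60 });
    },
    async set(key, value, ttl = 60) {
      // KV requires expirationTtl to be at least 60 seconds
      await kv.put(key, JSON.stringify(value), { expirationTtl: Math.max(ttl, 60) });
    }
  };
}
```

A read-through lookup would check `memoryCache`, then `edgeCache`, then the KV layer, writing hits back into the faster layers on the way out.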
```ts
// 1. Tree-shake imports
// ❌ Bad: pulls the whole library into the bundle
import * as lodash from 'lodash';

// ✅ Good: import only what is used, from an ESM build
import { debounce } from 'lodash-es';

// 2. Lazy-load heavy dependencies
let heavyLib: typeof import('heavy-lib') | undefined;

async function getHeavyLib() {
  if (!heavyLib) {
    heavyLib = await import('heavy-lib');
  }
  return heavyLib;
}
```
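Usage sketch: only the code path that actually needs the dependency pays its load cost (`heavy-lib`, `lib.render`, `wantsPdf`, and `data` are placeholders; the module must still ship with the deployed bundle, since Workers cannot fetch code at runtime):

```ts
if (wantsPdf) {
  // Loaded and parsed only on this branch; other requests skip it entirely
  const lib = await getHeavyLib();
  return new Response(lib.render(data), {
    headers: { 'Content-Type': 'application/pdf' }
  });
}
```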
Load specific references based on the task:
- references/cpu-optimization.md
- references/memory-optimization.md
- references/caching-strategies.md
- references/bundle-optimization.md
- references/cold-starts.md

| Template | Purpose | Use When |
|---|---|---|
| templates/performance-middleware.ts | Performance monitoring | Adding timing/profiling |
| templates/caching-layer.ts | Multi-layer caching | Implementing cache |
| templates/optimized-worker.ts | Performance patterns | Starting optimized worker |

| Script | Purpose | Command |
|---|---|---|
| scripts/benchmark.sh | Load testing | ./benchmark.sh <url> |
| scripts/profile-worker.sh | CPU profiling | ./profile-worker.sh |