Automatically optimizes Cloudflare KV storage patterns, suggesting parallel operations, caching strategies, and appropriate storage choices
Automatically optimizes Cloudflare KV storage patterns by detecting sequential operations, missing caching, and wrong storage choices. Activates when KV operations (get/put/delete/list) are found in code, suggesting parallelization with Promise.all() and request-scoped caching to reduce latency by 3x or more.
/plugin marketplace add hirefrank/hirefrank-marketplace
/plugin install edge-stack@hirefrank-marketplace
This skill inherits all available tools. When active, it can use any tool Claude has access to.
This SKILL automatically activates when:
KV get, put, delete, or list operations are detected in code
// These patterns trigger immediate alerts:
// Sequential KV operations (multiple network round-trips)
const user = await env.USERS.get(id); // 10-30ms
const settings = await env.SETTINGS.get(id); // 10-30ms
const prefs = await env.PREFS.get(id); // 10-30ms
// Total: 30-90ms just for storage!
// Repeated KV calls in same request
const user1 = await env.USERS.get(id);
const user2 = await env.USERS.get(id); // Same data fetched twice!
// These patterns are validated as correct:
// Parallel KV operations (single network round-trip)
const [user, settings, prefs] = await Promise.all([
env.USERS.get(id),
env.SETTINGS.get(id),
env.PREFS.get(id),
]);
// Total: 10-30ms (single round-trip)
// Request-scoped caching (create the Map inside the fetch handler so it lives for one request only)
const cache = new Map();
async function getCached(key: string, env: Env) {
if (cache.has(key)) return cache.get(key);
const value = await env.USERS.get(key);
cache.set(key, value);
return value;
}
cloudflare-architecture-strategist agent
edge-performance-oracle agent
// ❌ Critical: Sequential KV operations (3x network round-trips)
export default {
async fetch(request: Request, env: Env) {
const userId = getUserId(request);
const user = await env.USERS.get(userId); // 10-30ms
const settings = await env.SETTINGS.get(userId); // 10-30ms
const prefs = await env.PREFS.get(userId); // 10-30ms
// Total: 30-90ms just for storage!
return new Response(JSON.stringify({ user, settings, prefs }));
}
}
// ✅ Correct: Parallel operations (single round-trip)
export default {
async fetch(request: Request, env: Env) {
const userId = getUserId(request);
// Fetch in parallel - single network round-trip time
const [user, settings, prefs] = await Promise.all([
env.USERS.get(userId),
env.SETTINGS.get(userId),
env.PREFS.get(userId),
]);
// Total: 10-30ms (single round-trip)
return new Response(JSON.stringify({ user, settings, prefs }));
}
}
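If the stored values are JSON, get() can also deserialize them directly by passing a type argument. A minimal variant of the same parallel fetch, assuming the three namespaces hold JSON-serialized values:
// Same parallel pattern, letting KV parse JSON (each call resolves to the parsed object, or null if the key is missing)
const [user, settings, prefs] = await Promise.all([
  env.USERS.get(userId, 'json'),
  env.SETTINGS.get(userId, 'json'),
  env.PREFS.get(userId, 'json'),
]);
return Response.json({ user, settings, prefs });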
// ❌ High: Same KV data fetched multiple times
export default {
async fetch(request: Request, env: Env) {
const userId = getUserId(request);
// Fetch user data multiple times unnecessarily
const user1 = await env.USERS.get(userId);
const user2 = await env.USERS.get(userId); // Duplicate call!
const user3 = await env.USERS.get(userId); // Duplicate call!
// Process user data...
return new Response('Processed');
}
}
// ✅ Correct: Request-scoped caching
export default {
async fetch(request: Request, env: Env) {
const userId = getUserId(request);
// Request-scoped cache to avoid duplicate KV calls
const cache = new Map();
async function getCachedUser(id: string) {
if (cache.has(id)) return cache.get(id);
const user = await env.USERS.get(id);
cache.set(id, user);
return user;
}
const user1 = await getCachedUser(userId); // KV call
const user2 = await getCachedUser(userId); // From cache
const user3 = await getCachedUser(userId); // From cache
// Process user data...
return new Response('Processed');
}
}
// ❌ High: Using KV for large files (wrong storage choice)
export default {
async fetch(request: Request, env: Env) {
const fileId = new URL(request.url).searchParams.get('id');
// KV is for small key-value data, not large files!
const fileData = await env.FILES.get(fileId); // Could be 10MB+
return new Response(fileData);
}
}
// ✅ Correct: Use R2 for large files
export default {
async fetch(request: Request, env: Env) {
const fileId = new URL(request.url).searchParams.get('id');
// R2 is designed for large objects/files
const object = await env.FILES_BUCKET.get(fileId);
if (!object) {
return new Response('Not found', { status: 404 });
}
return new Response(object.body);
}
}
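As a follow-up, the object returned by R2's get() carries the HTTP metadata recorded at upload. A small sketch of forwarding it to the client, assuming a content type was set when the object was written:
// Forward stored headers so the client sees the original content type and an ETag
const headers = new Headers();
object.writeHttpMetadata(headers); // copies contentType, cacheControl, etc., if present
headers.set('etag', object.httpEtag);
return new Response(object.body, { headers });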
// ❌ Medium: No TTL strategy (data never expires)
export default {
async fetch(request: Request, env: Env) {
const cacheKey = `data:${Date.now()}`;
// Data cached forever - may become stale
await env.CACHE.put(cacheKey, data);
}
}
// ✅ Correct: Appropriate TTL strategy
export default {
async fetch(request: Request, env: Env) {
const cacheKey = 'user:profile:123';
// Cache user profile for 1 hour (reasonable for user data)
await env.CACHE.put(cacheKey, data, {
expirationTtl: 3600 // 1 hour
});
// Cache API response for 5 minutes (frequently changing)
await env.API_CACHE.put(apiKey, response, {
expirationTtl: 300 // 5 minutes
});
// Cache static data for 24 hours (rarely changes)
await env.STATIC_CACHE.put(staticKey, data, {
expirationTtl: 86400 // 24 hours
});
}
}
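On the read side, get() accepts a complementary cacheTtl option that holds recently read values in the edge cache. A minimal sketch, reusing the illustrative 'user:profile:123' key from above:
// Serve hot keys from the local cache tier; cacheTtl must be at least 60 seconds
const profile = await env.CACHE.get('user:profile:123', { type: 'json', cacheTtl: 300 });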
// ❌ High: Large values approaching KV limits
export default {
async fetch(request: Request, env: Env) {
const reportId = new URL(request.url).searchParams.get('id');
// Large report (20MB) - close to KV 25MB limit!
const report = await env.REPORTS.get(reportId);
return new Response(report);
}
}
// ✅ Correct: Compress large values or use R2
export default {
async fetch(request: Request, env: Env) {
const reportId = new URL(request.url).searchParams.get('id');
// Option 1: Compress before storing in KV, then decompress on read
const compressed = await env.REPORTS.get(reportId, 'arrayBuffer');
const decompressed = decompress(compressed); // decompress() is an application-provided helper (see sketch below)
// Option 2: Use R2 for large objects
const object = await env.REPORTS_BUCKET.get(reportId);
return new Response(object.body);
}
}
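The write path for Option 1 is not shown above. A minimal sketch of both directions, assuming the Workers runtime's built-in CompressionStream/DecompressionStream and the same illustrative REPORTS namespace; putCompressedReport and getCompressedReport are hypothetical helper names:
// Gzip values before writing to KV and inflate them on read
async function putCompressedReport(env: Env, reportId: string, report: string): Promise<void> {
  const gzipped = new Blob([report]).stream().pipeThrough(new CompressionStream('gzip'));
  const body = await new Response(gzipped).arrayBuffer();
  await env.REPORTS.put(reportId, body, { expirationTtl: 86400 }); // keep for 24 hours
}
async function getCompressedReport(env: Env, reportId: string): Promise<string | null> {
  const body = await env.REPORTS.get(reportId, 'arrayBuffer');
  if (!body) return null;
  const plain = new Blob([body]).stream().pipeThrough(new DecompressionStream('gzip'));
  return await new Response(plain).text();
}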
When the Cloudflare MCP server is available:
// Developer types: sequential KV gets
// SKILL immediately activates: "⚠️ HIGH: Sequential KV operations detected. Use Promise.all() to parallelize and reduce latency by 3x."
// Developer types: storing large files in KV
// SKILL immediately activates: "⚠️ HIGH: Large file storage in KV detected. Use R2 for objects > 1MB to avoid performance issues."
// Developer types: repeated KV calls in same request
// SKILL immediately activates: "⚠️ HIGH: Duplicate KV calls detected. Add request-scoped caching to avoid redundant network calls."
This SKILL keeps KV storage fast by immediately and autonomously optimizing storage patterns, preventing common performance issues and keeping data access efficient.