**Status**: Production Ready ✅ | **Last Verified**: 2025-12-27
Manages Cloudflare Queues for asynchronous message processing between Workers.
Install: `npx claudepluginhub secondsky/claude-skills`
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Dependencies: cloudflare-worker-base (for Worker setup)
Contents: Quick Start • Critical Rules • Top Errors • Use Cases • When to Load References • Limits
Create the queue:
```shell
bunx wrangler queues create my-queue
bunx wrangler queues list
```
wrangler.jsonc:
```jsonc
{
  "name": "my-producer",
  "main": "src/index.ts",
  "queues": {
    "producers": [
      {
        "binding": "MY_QUEUE",
        "queue": "my-queue"
      }
    ]
  }
}
```
src/index.ts:
```typescript
import { Hono } from 'hono';

type Bindings = {
  MY_QUEUE: Queue;
};

const app = new Hono<{ Bindings: Bindings }>();

app.post('/send', async (c) => {
  await c.env.MY_QUEUE.send({
    userId: '123',
    action: 'process-order',
    timestamp: Date.now(),
  });
  return c.json({ status: 'queued' });
});

export default app;
```
wrangler.jsonc:
```jsonc
{
  "name": "my-consumer",
  "main": "src/consumer.ts",
  "queues": {
    "consumers": [
      {
        "queue": "my-queue",
        "max_batch_size": 10,
        "max_retries": 3,
        "dead_letter_queue": "my-dlq"
      }
    ]
  }
}
```
src/consumer.ts:
```typescript
import type { MessageBatch } from '@cloudflare/workers-types';

export default {
  async queue(batch: MessageBatch): Promise<void> {
    for (const message of batch.messages) {
      console.log('Processing:', message.body);
      // Your logic here
    }
    // Implicit ack: returning successfully acknowledges all messages
  },
};
```
Deploy:
```shell
bunx wrangler deploy
```
Load: references/setup-guide.md for complete 6-step setup with DLQ configuration
Problem: Message exceeds 128 KB limit
Solution: Store large data in R2, send reference
```typescript
// ❌ Wrong
await env.MY_QUEUE.send({ data: largeArray }); // >128 KB fails

// ✅ Correct
const message = { data: largeArray };
const size = new TextEncoder().encode(JSON.stringify(message)).length;

if (size > 128000) {
  const key = `messages/${crypto.randomUUID()}.json`;
  await env.MY_BUCKET.put(key, JSON.stringify(message));
  await env.MY_QUEUE.send({ type: 'large-message', r2Key: key });
} else {
  await env.MY_QUEUE.send(message);
}
```
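The size check can be extracted into a small helper. Note that the limit applies to the serialized bytes, not the JavaScript string length, so multi-byte characters count more than one each. A minimal sketch:

```typescript
// Byte size of a message body after JSON serialization -
// this is what counts against the 128 KB limit.
function byteSize(value: unknown): number {
  return new TextEncoder().encode(JSON.stringify(value)).length;
}
```

For example, `byteSize({ a: 1 })` is 7 bytes, while `byteSize({ a: 'é' })` is 10, because `'é'` serializes to two UTF-8 bytes.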
Problem: Exceeding 5000 messages/second per queue
Solution: Use sendBatch() and rate limiting
```typescript
// ❌ Wrong
for (let i = 0; i < 10000; i++) {
  await env.MY_QUEUE.send({ id: i }); // One network round trip per message - slow, and bursts can hit the 5,000 msg/s limit
}

// ✅ Correct
const messages = Array.from({ length: 10000 }, (_, i) => ({
  body: { id: i },
}));

// Send in batches of 100 (sendBatch's per-call maximum)
for (let i = 0; i < messages.length; i += 100) {
  await env.MY_QUEUE.sendBatch(messages.slice(i, i + 100));
}
```
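The slicing loop above generalizes into a reusable helper (sendBatch() accepts at most 100 messages per call):

```typescript
// Split an array into chunks of at most `size` elements,
// e.g. to respect sendBatch()'s 100-message-per-call limit.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

For example, chunking 250 items with size 100 yields three chunks of lengths 100, 100, and 50.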
Problem: Single message failure causes all messages to retry
Solution: Use explicit acknowledgement
```typescript
// ❌ Wrong - implicit ack
export default {
  async queue(batch: MessageBatch<{ id: string; amount: number }>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      await env.DB.prepare('INSERT INTO orders VALUES (?, ?)')
        .bind(message.body.id, message.body.amount)
        .run();
    }
    // If any insert throws, the whole batch fails and ALL messages retry!
  },
};

// ✅ Correct - explicit ack
export default {
  async queue(batch: MessageBatch<{ id: string; amount: number }>, env: Env): Promise<void> {
    for (const message of batch.messages) {
      try {
        await env.DB.prepare('INSERT INTO orders VALUES (?, ?)')
          .bind(message.body.id, message.body.amount)
          .run();
        message.ack(); // Only ack on success
      } catch (error) {
        console.error(`Failed: ${message.id}`, error);
        // Don't ack - only this message will be retried
      }
    }
  },
};
```
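When a failure is likely transient (rate limits, flaky upstreams), the Message API also supports delayed retries via message.retry({ delaySeconds }). A sketch of an exponential backoff calculator you could call from the catch block; the 12-hour cap matches Queues' maximum message delay:

```typescript
// Exponential backoff: 10 s base, doubled per delivery attempt,
// capped at 12 hours (Queues' maximum delay).
function backoffSeconds(attempts: number): number {
  return Math.min(12 * 60 * 60, 10 * 2 ** attempts);
}

// In the catch block:
// message.retry({ delaySeconds: backoffSeconds(message.attempts) });
```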
Load references/error-catalog.md for all 10 errors including DLQ configuration, auto-scaling issues, message deletion prevention, and detailed solutions.
When: Simple async job processing (emails, notifications)
Quick Pattern:
```typescript
// Producer
await env.MY_QUEUE.send({ type: 'email', to: 'user@example.com', content: 'Welcome!' });

// Consumer (implicit ack - for idempotent operations)
export default {
  async queue(batch: MessageBatch<{ to: string; content: string }>): Promise<void> {
    for (const message of batch.messages) {
      await sendEmail(message.body.to, message.body.content);
    }
  },
};
```
Load: templates/queues-producer.ts + templates/queues-consumer-basic.ts
When: Writing to database, must avoid duplicates
Load: templates/queues-consumer-explicit-ack.ts + references/consumer-api.md
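Even with explicit acks, at-least-once delivery means the same message can arrive twice, so writes should be idempotent. A minimal in-memory sketch of deduplicating by message id (illustrative only - production deduplication belongs in durable storage, e.g. a unique key with INSERT OR IGNORE):

```typescript
// Illustrative only: in-memory tracking of processed message ids,
// so a redelivered message is skipped rather than written twice.
const seen = new Set<string>();

function firstDelivery(messageId: string): boolean {
  if (seen.has(messageId)) return false;
  seen.add(messageId);
  return true;
}
```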
When: Calling rate-limited APIs, temporary failures
Load: templates/queues-retry-with-delay.ts + references/error-catalog.md (Error #2, #3)
When: Production systems, need to capture permanently failed messages
Load: templates/queues-dlq-pattern.ts + references/setup-guide.md (Step 4)
When: Processing thousands of messages per second
Quick Pattern:
```jsonc
{
  "queues": {
    "consumers": [{
      "queue": "my-queue",
      "max_batch_size": 100,    // Large batches
      "max_batch_timeout": 5,   // Fast processing
      "max_concurrency": null   // Auto-scale
    }]
  }
}
```
Load: references/best-practices.md → Optimizing Throughput
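A back-of-envelope way to reason about these settings (illustrative, not a Cloudflare formula): throughput ≈ batch size ÷ seconds per batch × concurrent consumer invocations.

```typescript
// Rough messages-per-second estimate for a consumer configuration.
// All three inputs are assumptions about your workload, not fixed limits.
function estimatedThroughput(batchSize: number, secsPerBatch: number, concurrency: number): number {
  return (batchSize / secsPerBatch) * concurrency;
}
```

For example, 100-message batches processed in 0.5 s across 20 concurrent invocations is roughly 4,000 msg/s, safely under the 5,000 msg/s per-queue limit.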
Load the matching reference for your task:
- references/setup-guide.md
- references/error-catalog.md
- references/producer-api.md
- references/consumer-api.md
- references/best-practices.md
- references/wrangler-commands.md
- references/typescript-types.md
- references/production-checklist.md
- references/pull-consumers.md
- references/http-publishing.md
- references/r2-event-integration.md
Critical limits: 128 KB max message size • 5,000 messages/second per queue • max_batch_size 1-100 • max_batch_timeout 0-60 s • 30,000 ms default CPU time per invocation.
Load references/best-practices.md for handling limits and optimization strategies.
Producer: Add queue binding to wrangler.jsonc queues.producers array with binding and queue fields.
Consumer: Configure in wrangler.jsonc queues.consumers array with queue, max_batch_size (1-100), max_batch_timeout (0-60s), max_retries, dead_letter_queue, and optionally max_concurrency (default: auto-scale).
CPU Limits: Increase limits.cpu_ms from default 30,000ms if processing takes longer.
Load references/setup-guide.md for complete configuration examples and templates/wrangler-queues-config.jsonc for production-ready config.
Use @cloudflare/workers-types package for complete type definitions: Queue, MessageBatch<Body>, Message<Body>, QueueSendOptions.
Load references/typescript-types.md for complete type reference with interfaces, generics, type guards, and usage examples.
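Because message bodies arrive untyped at runtime, a runtime type guard pairs well with the generic types. A sketch using a hypothetical OrderMessage shape (adjust the fields to your queue's payload):

```typescript
// Hypothetical message shape for illustration.
type OrderMessage = { id: string; amount: number };

// Narrow an unknown body (e.g. from Message<unknown>) to OrderMessage.
function isOrderMessage(body: unknown): body is OrderMessage {
  return (
    typeof body === 'object' &&
    body !== null &&
    typeof (body as Record<string, unknown>).id === 'string' &&
    typeof (body as Record<string, unknown>).amount === 'number'
  );
}
```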
Key Commands: wrangler queues info (status), wrangler tail (logs), wrangler queues pause-delivery/resume-delivery (control).
Load references/wrangler-commands.md for complete CLI reference with real-time monitoring, debugging workflows, and performance analysis commands.
12-Point Pre-Deployment Checklist: DLQ configuration, message acknowledgment strategy, size validation, batch optimization, concurrency settings, CPU limits, error handling, monitoring, rate limiting, idempotency, load testing, and security review.
Load references/production-checklist.md for complete checklist with detailed explanations, code examples, and deployment workflow.
Questions? Issues? See references/error-catalog.md for all 10 errors and solutions, references/setup-guide.md for the complete setup walkthrough, and references/best-practices.md for production patterns.