Event-driven architecture skill. Covers event sourcing, CQRS, message brokers (Kafka, RabbitMQ, SQS, NATS), schema versioning, DLQ, retry policies, idempotency. Triggers on: /godmode:event, "event sourcing", "CQRS", "Kafka", "dead letter queue", "idempotency".
references/event-patterns.md
/godmode:event

# Detect message broker infrastructure
ls kafka/ docker-compose*.yml 2>/dev/null | head -5
grep -rl "kafkajs\|amqplib\|@aws-sdk/client-sqs" \
  package.json pyproject.toml 2>/dev/null

# Check for event schemas
find . -name "*.avsc" -o -name "*.proto" \
  -o -path "*/events/*" | head -10
EVENT ARCHITECTURE CONTEXT:
Current State: No events | Basic pub/sub | Full CQRS/ES
Throughput: <events per second>
Ordering: None | Per-entity | Global
Retention: <how long to store events>
IF throughput > 10K/s: recommend Kafka
IF ordering per-entity only: Kafka partitions by key
IF need replay: Kafka or NATS JetStream (not RabbitMQ)
IF simple fan-out: SNS/SQS or RabbitMQ
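The per-entity ordering rule above rests on key-based partitioning: every event for one entity hashes to the same partition, so those events are consumed in order relative to each other. A minimal sketch (Kafka's default partitioner uses murmur2; sha256 here is just a stand-in with the same stable-hash property, and `NUM_PARTITIONS` is illustrative):

```python
import hashlib

NUM_PARTITIONS = 12

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Map an entity key to a stable partition number."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Every event keyed "order-42" lands on the same partition,
# so order-42's events keep their relative order.
assert partition_for("order-42") == partition_for("order-42")
```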
MESSAGE BROKER SELECTION:
| Feature    | Kafka         | RabbitMQ  | SQS/SNS        | NATS           |
|------------|---------------|-----------|----------------|----------------|
| Throughput | Very high     | High      | High           | Very high      |
| Latency    | ~5 ms         | ~1 ms     | ~50 ms         | ~0.1 ms        |
| Ordering   | Per-partition | Per-queue | FIFO (opt-in)  | Per-subject    |
| Replay     | Yes           | No        | No             | JetStream only |
THRESHOLDS:
Kafka: use when > 10K events/sec or need replay
RabbitMQ: use when < 10K/sec and need routing
SQS/SNS: use when AWS-native and < 50K/sec
NATS: use when need sub-ms latency
EVENT ENVELOPE (required fields):
event_id: UUID
event_type: "order.placed" (past tense)
event_version: "1.2"
source: "order-service"
timestamp: ISO 8601
correlation_id: UUID (propagated across services)
data: { ... payload ... }
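The envelope above can be sketched as a small builder (field values and the `order.placed` payload are illustrative):

```python
import json
import uuid
from datetime import datetime, timezone
from typing import Optional

def make_envelope(event_type: str, source: str, data: dict,
                  correlation_id: Optional[str] = None,
                  event_version: str = "1.0") -> dict:
    """Wrap a domain payload in the standard event envelope."""
    return {
        "event_id": str(uuid.uuid4()),
        "event_type": event_type,        # past tense, e.g. "order.placed"
        "event_version": event_version,
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Propagate the inbound correlation_id; mint one only at chain start
        "correlation_id": correlation_id or str(uuid.uuid4()),
        "data": data,
    }

print(json.dumps(make_envelope("order.placed", "order-service",
                               {"order_id": 42, "total": 99.95}), indent=2))
```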
VERSIONING:
Backward compatible: new schema reads old data
Forward compatible: old schema reads new data
RULE: never modify existing fields, only add
RULE: new fields must have defaults
RULE: reserve removed field numbers (protobuf)
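The "only add fields, with defaults" rule is what makes old events readable by new consumers. A minimal sketch of the upgrade path (field names, versions, and the string version compare are illustrative simplifications; a real registry compares parsed versions):

```python
# Defaults for every field added after v1.0 (illustrative names)
DEFAULTS_BY_VERSION = {
    "1.1": {"currency": "USD"},   # field added in 1.1
    "1.2": {"channel": "web"},    # field added in 1.2
}

def upgrade_event(data: dict, from_version: str) -> dict:
    """Fill defaults for every field added after `from_version`."""
    upgraded = dict(data)
    for version, defaults in DEFAULTS_BY_VERSION.items():
        if version > from_version:  # naive compare; fine for "1.x" here
            for field, default in defaults.items():
                upgraded.setdefault(field, default)
    return upgraded

old = {"order_id": 42}            # written by a 1.0 producer
print(upgrade_event(old, "1.0"))  # a 1.2 consumer sees all fields
```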
RETRY POLICY:
| Attempt | Delay | Action |
|---------|-----------|-------------------------|
| 1 | Immediate | Process message |
| 2 | 1 second | Retry (transient) |
| 3 | 5 seconds | Retry (backoff) |
| 4 | 30 seconds| Retry (extended backoff)|
| 5 | → DLQ | Move to dead letter |
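The retry schedule above can be sketched as a wrapper that walks the delay table and dead-letters after the final attempt (handler and DLQ callables are assumptions of this sketch):

```python
import time

RETRY_DELAYS = [0, 1, 5, 30]  # seconds: attempts 1-4 from the table

def process_with_retry(handler, message, dead_letter, delays=RETRY_DELAYS):
    """Try the handler on the table's schedule; exhausted => DLQ."""
    last_error = None
    for delay in delays:
        time.sleep(delay)          # 0 on the first attempt
        try:
            handler(message)
            return True
        except Exception as exc:   # transient failure: fall through to retry
            last_error = exc
    dead_letter(message, last_error)  # attempt 5: move to dead letter
    return False
```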
DLQ RULES:
IF DLQ depth > 0: alert team within 5 minutes
IF DLQ depth > 100: page on-call
IF message age in DLQ > 24h: escalate to P1
Every consumer MUST have a DLQ configured
| Pattern                  | How it works                 |
|--------------------------|------------------------------|
| Idempotency key          | Store processed event IDs    |
| Naturally idempotent ops | Upserts, SET operations      |
| Optimistic locking       | Version check on write       |
| Dedup table              | Unique event_id column in DB |
RULE: Every consumer must be idempotent.
At-least-once delivery means duplicates WILL occur.
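The idempotency-key pattern from the table can be sketched like this (an in-memory set stands in for the dedup table a real consumer would use):

```python
processed_ids = set()   # in production: a dedup table keyed by event_id

def handle_once(envelope: dict, handler) -> bool:
    """Process each event_id at most once; duplicates are no-ops."""
    event_id = envelope["event_id"]
    if event_id in processed_ids:
        return False               # duplicate delivery: safely ignored
    handler(envelope["data"])
    processed_ids.add(event_id)    # mark only AFTER a successful handle,
    return True                    # so a crash mid-handle still retries
```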
Store state as immutable event sequence. Rebuild aggregate state by replaying events. Use snapshots every 100 events for performance.
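The replay-with-snapshots idea can be sketched as a reducer plus a rebuild loop (the order events and state shape are illustrative):

```python
from typing import Optional

SNAPSHOT_INTERVAL = 100   # snapshot every 100 events, per the rule above

def apply(state: dict, event: dict) -> dict:
    """Illustrative reducer: fold one event into aggregate state."""
    if event["event_type"] == "order.placed":
        return {**state, "status": "placed", "total": event["data"]["total"]}
    if event["event_type"] == "order.shipped":
        return {**state, "status": "shipped"}
    return state

def rebuild(events: list, snapshot: Optional[dict] = None) -> dict:
    """Replay events on top of the latest snapshot, if one exists."""
    state = dict(snapshot["state"]) if snapshot else {}
    start = snapshot["version"] if snapshot else 0
    for event in events[start:]:   # only events after the snapshot
        state = apply(state, event)
    return state
```

With a snapshot at version N, a rebuild replays at most `SNAPSHOT_INTERVAL` events instead of the whole stream.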
Separate write model (commands → event store) from read model (projections → query-optimized DB). Projection lag target: < 500ms for user-facing reads.
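The read side of that split can be sketched as a projection that folds events into a query-optimized view (the dict stands in for a real read store, and the event shapes are illustrative):

```python
orders_view = {}   # stand-in for a query-optimized read store

def project(envelope: dict) -> None:
    """Fold one event into the read model; the time from event commit
    to this update landing is the projection lag (< 500ms target)."""
    data = envelope["data"]
    if envelope["event_type"] == "order.placed":
        orders_view[data["order_id"]] = {"status": "placed",
                                         "total": data["total"]}
    elif envelope["event_type"] == "order.shipped":
        orders_view[data["order_id"]]["status"] = "shipped"
```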
EVENT ARCHITECTURE VALIDATION:
| Check | Status |
|------------------------------------|--------|
| Event envelope follows standard | ? |
| All events have correlation IDs | ? |
| Schema versioning strategy defined | ? |
| DLQ on all consumers | ? |
| Idempotent consumer verified | ? |
| Retry with exponential backoff | ? |
Commit: "event: <system> -- <N> event types, <broker>, <pattern>"
Never ask to continue. Loop autonomously until done.
1. Broker: kafka, rabbitmq, SQS/SNS, NATS configs
2. Schemas: *.avsc, *.proto, events/ directory
3. DLQ: dead-letter config, maxReceiveCount
4. Event sourcing: event_store table, Axon framework
FOR each event domain:
1. Design schema with envelope standard
2. Register in schema registry
3. Implement producer + consumer
4. Configure DLQ + retry policy
5. Verify idempotency with duplicate test
IF schema breaks compat: add a new field, don't modify existing ones
IF DLQ growing: inspect the handler, fix the root cause
Print: Event: {pattern}, {broker}, {N} event types, DLQ: {configured}. Verdict: {verdict}.
Log fields: timestamp, broker, event_types, dlq_configured, idempotency, status
KEEP if: schema registered AND DLQ configured
AND idempotent consumer verified
DISCARD if: schema breaks compat OR no DLQ
OR consumer not idempotent
STOP when ALL of:
- All events have schemas with compat checks
- DLQ on every consumer with backoff retry
- Idempotency verified for all consumers
OR when: user requests stop