Redis 7+ pattern library for backend services — data structures (strings, lists, sets, hashes, sorted sets, streams, bitmaps, hyperloglogs, geo, JSON via RedisJSON), caching patterns (cache-aside, write-through, write-behind, read-through), session storage, rate limiting (token bucket, fixed window, sliding window), distributed locks (SETNX + expiry, Redlock algorithm), pub/sub messaging, streams for event sourcing and message queues (XADD, XREAD, consumer groups), persistence strategies (RDB snapshots vs AOF append-only), Redis Cluster for sharding, Sentinel for HA, and modern clients (node-redis, ioredis for Node, redis-py for Python). Use when implementing caching, rate limiting, session storage, pub/sub, distributed locks, or leaderboards with Redis. Differentiates from mongodb-patterns and postgres-patterns by covering an in-memory data structure store used alongside a primary database, not as the primary data store.
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-stack-backend

This skill uses the workspace's default tool permissions.
Redis is an **in-memory data structure store** used for caching, session storage, rate limiting, pub/sub, distributed locks, and leaderboards.
Redis is not a primary database; always use it alongside Postgres/MongoDB.
SET user:1:name "Arnaud"
GET user:1:name
INCR user:1:visits
EXPIRE user:1:name 3600
HSET user:1 name "Arnaud" email "a@example.com" age 30
HGET user:1 name
HGETALL user:1
HINCRBY user:1 visits 1
A hash is preferable to several separate strings when the fields are accessed together.
LPUSH queue:jobs "job1"
RPOP queue:jobs
LLEN queue:jobs
Use for simple queues (for real message queueing, prefer Streams).
SADD tags:post1 "redis" "nosql" "cache"
SISMEMBER tags:post1 "redis"
SMEMBERS tags:post1
SINTER tags:post1 tags:post2
ZADD leaderboard 1000 "player1" 850 "player2" 1200 "player3"
ZRANGE leaderboard 0 -1 WITHSCORES
ZRANGEBYSCORE leaderboard 500 1500
ZREVRANK leaderboard "player1"
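The leaderboard commands above can be wrapped in a small client helper. A sketch in TypeScript: the `RedisLike` interface, `toZaddArgs`, and `topN` are illustrative names modeled on ioredis's variadic command signatures, not part of any library.

```typescript
// Illustrative interface: only the two sorted-set calls this sketch needs.
interface RedisLike {
  zadd(key: string, ...scoreMembers: (string | number)[]): Promise<unknown>
  zrevrange(key: string, start: number, stop: number, withScores: 'WITHSCORES'): Promise<string[]>
}

type Entry = { member: string; score: number }

// Pure helper: flatten entries into ZADD's "score member score member ..." order.
function toZaddArgs(entries: Entry[]): (string | number)[] {
  return entries.flatMap(e => [e.score, e.member])
}

// Parse the flat [member, score, member, score, ...] reply of WITHSCORES.
function parseWithScores(flat: string[]): Entry[] {
  const out: Entry[] = []
  for (let i = 0; i < flat.length; i += 2) {
    out.push({ member: flat[i], score: Number(flat[i + 1]) })
  }
  return out
}

// Top-N scores, highest first (ZREVRANGE 0 n-1 WITHSCORES).
async function topN(redis: RedisLike, key: string, n: number): Promise<Entry[]> {
  return parseWithScores(await redis.zrevrange(key, 0, n - 1, 'WITHSCORES'))
}
```

Scores live only in the sorted set; keep the authoritative player data in the primary database, per the note above.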
XADD events * userId 123 type login
XREAD COUNT 10 STREAMS events 0
XGROUP CREATE events processors 0
XREADGROUP GROUP processors worker1 COUNT 1 STREAMS events >
XACK events processors <message-id>
Streams are a persistent message queue with consumer groups (an alternative to Kafka/RabbitMQ for mid-size use cases).
// Cache-aside: read from cache, fall back to the DB, populate on miss
async function getUser(id: string) {
const cached = await redis.get(`user:${id}`)
if (cached) return JSON.parse(cached)
const user = await db.users.findById(id)
if (user) {
await redis.setex(`user:${id}`, 3600, JSON.stringify(user))
}
return user
}
// Invalidation
async function updateUser(id: string, data: any) {
await db.users.update(id, data)
await redis.del(`user:${id}`)
}
// Write-through: update the DB and refresh the cache in the same step
async function updateUser(id: string, data: any) {
const user = await db.users.update(id, data)
await redis.setex(`user:${id}`, 3600, JSON.stringify(user))
return user
}
Read-through: a DB wrapper that queries Redis first, falls back to the database on a miss, then populates the cache automatically.
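A read-through wrapper can be sketched generically. `CacheLike` is a minimal illustrative interface (its `get`/`setex` match ioredis's signatures), not a real library type:

```typescript
// Minimal cache interface so the sketch does not depend on a specific client.
interface CacheLike {
  get(key: string): Promise<string | null>
  setex(key: string, ttlSec: number, value: string): Promise<unknown>
}

// Generic read-through: cache first, loader on miss, populate with a TTL.
async function readThrough<T>(
  cache: CacheLike,
  key: string,
  ttlSec: number,
  load: () => Promise<T | null>,
): Promise<T | null> {
  const hit = await cache.get(key)
  if (hit !== null) return JSON.parse(hit) as T
  const value = await load()
  if (value !== null) await cache.setex(key, ttlSec, JSON.stringify(value))
  return value
}
```

Usage: `const user = await readThrough(redis, `user:${id}`, 3600, () => db.users.findById(id))`. Callers never touch the cache directly, so the TTL and serialization live in one place.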
async function rateLimit(userId: string, limit: number, windowSec: number) {
const key = `rate:${userId}:${Math.floor(Date.now() / 1000 / windowSec)}`
const count = await redis.incr(key)
if (count === 1) await redis.expire(key, windowSec)
if (count > limit) throw new Error('Rate limit exceeded')
}
Simple, but bursts are possible at the window boundary (up to 2× the limit across two adjacent windows).
async function slidingRateLimit(userId: string, limit: number, windowMs: number) {
const key = `rate:${userId}`
const now = Date.now()
const windowStart = now - windowMs
const pipeline = redis.pipeline()
pipeline.zremrangebyscore(key, 0, windowStart)
pipeline.zadd(key, now, `${now}-${Math.random()}`)
pipeline.zcard(key)
pipeline.expire(key, Math.ceil(windowMs / 1000))
const results = await pipeline.exec()
if (!results) throw new Error('Pipeline failed')
const count = results[2][1] as number
if (count > limit) throw new Error('Rate limit exceeded')
}
More precise, but more memory-hungry (one sorted-set entry per request).
For fully atomic rate limiting, use a Lua script (run server-side via EVAL); Redis executes Lua scripts atomically. See the official Redis docs.
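As an example of the EVAL approach, here is a token-bucket sketch. The bucket fields, the `rate:` key prefix, and the refill math are assumptions for illustration; `EvalClient` mirrors ioredis's `eval(script, numKeys, ...args)` signature.

```typescript
// Token bucket, fully atomic: state is read, refilled, and decremented
// inside one Lua script, so concurrent requests cannot race.
const TOKEN_BUCKET_LUA = `
local key = KEYS[1]
local capacity = tonumber(ARGV[1])
local refillPerSec = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local bucket = redis.call('HMGET', key, 'tokens', 'ts')
local tokens = tonumber(bucket[1]) or capacity
local ts = tonumber(bucket[2]) or now
tokens = math.min(capacity, tokens + (now - ts) * refillPerSec)
local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call('HSET', key, 'tokens', tokens, 'ts', now)
redis.call('EXPIRE', key, math.ceil(capacity / refillPerSec) * 2)
return allowed
`

interface EvalClient {
  eval(script: string, numKeys: number, ...args: (string | number)[]): Promise<unknown>
}

async function allowRequest(
  redis: EvalClient,
  userId: string,
  capacity: number,
  refillPerSec: number,
): Promise<boolean> {
  const res = await redis.eval(
    TOKEN_BUCKET_LUA, 1, `rate:${userId}`,
    capacity, refillPerSec, Math.floor(Date.now() / 1000),
  )
  return res === 1
}
```

Unlike the fixed window, the token bucket smooths bursts: a client gets `capacity` immediate requests, then sustains `refillPerSec` per second.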
async function withLock<T>(key: string, ttlSec: number, fn: () => Promise<T>): Promise<T> {
const lockKey = `lock:${key}`
const token = crypto.randomUUID()
const acquired = await redis.set(lockKey, token, 'EX', ttlSec, 'NX')
if (!acquired) throw new Error('Lock not acquired')
try {
return await fn()
} finally {
// Release only if we still hold the lock (token match)
const releaseScript = `
if redis.call("get", KEYS[1]) == ARGV[1] then
return redis.call("del", KEYS[1])
else
return 0
end
`
await redis.eval(releaseScript, 1, lockKey, token)
}
}
// Usage
await withLock('user:123:balance', 30, async () => {
const balance = await getBalance(123)
await updateBalance(123, balance - 100)
})
For a distributed lock across several independent Redis instances (high availability), use the redlock library, which implements the Redlock algorithm. Note that Martin Kleppmann has criticized this algorithm; for truly critical cases, prefer a stronger primitive (Postgres advisory locks, ZooKeeper).
// Publisher
await redis.publish('events:user-created', JSON.stringify({ userId: 123 }))
// Subscriber (requires a separate connection)
const sub = new Redis()
await sub.subscribe('events:user-created')
sub.on('message', (channel, message) => {
const event = JSON.parse(message)
// handle
})
Limitations: fire-and-forget (no persistence), messages are lost if no subscriber is listening, and there is no replay.
// Producer
await redis.xadd('events', '*', 'type', 'order.created', 'orderId', '123')
// Consumer group (create once)
await redis.xgroup('CREATE', 'events', 'order-processors', '0', 'MKSTREAM')
// Consumer
while (true) {
const messages = await redis.xreadgroup(
'GROUP', 'order-processors', 'worker1',
'COUNT', '10',
'BLOCK', '5000',
'STREAMS', 'events', '>'
)
if (!messages) continue
for (const [stream, entries] of messages) {
for (const [id, fields] of entries) {
try {
await processEvent(fields)
await redis.xack('events', 'order-processors', id)
} catch (err) {
// Will be reclaimed by another consumer after idle timeout
}
}
}
}
Streams are the better alternative to LPUSH/RPOP for real messaging (persistence, consumer groups, replay).
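The "reclaimed by another consumer" comment in the loop above only works if some worker actually claims stuck messages. A sketch using XAUTOCLAIM (Redis 6.2+); `StreamClient` is an illustrative interface modeled on ioredis's variadic commands, and the stream/group/consumer names are the ones used above.

```typescript
// XAUTOCLAIM reply: [nextCursor, entries, ...] where each entry is [id, fields].
type StreamEntry = [string, string[]]

interface StreamClient {
  xautoclaim(...args: (string | number)[]): Promise<[string, StreamEntry[], ...unknown[]]>
  xack(stream: string, group: string, id: string): Promise<number>
}

// Claim messages that have been pending longer than idleMs (e.g. after a
// worker crash), process them, and acknowledge them.
async function reclaimStuck(
  redis: StreamClient,
  idleMs: number,
  handle: (fields: string[]) => Promise<void>,
): Promise<number> {
  const [, entries] = await redis.xautoclaim(
    'events', 'order-processors', 'worker1', idleMs, '0-0', 'COUNT', 10,
  )
  for (const [id, fields] of entries) {
    await handle(fields)
    await redis.xack('events', 'order-processors', id)
  }
  return entries.length
}
```

Run this periodically (or at worker startup) so that messages abandoned mid-processing are eventually handled instead of sitting in the pending entries list forever.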
Binary snapshots at regular intervals.
save 900 1      # snapshot if >=1 change within 900 s (15 min)
save 300 10     # snapshot if >=10 changes within 300 s (5 min)
save 60 10000   # snapshot if >=10000 changes within 60 s (1 min)
A log of every write operation.
appendonly yes
appendfsync everysec
Enable RDB + AOF together. On restart, Redis replays the AOF (the more recent of the two).
One master + N replicas. Sentinel monitors the master and promotes a replica on failure.
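With ioredis, clients connect through the sentinels rather than to the master directly; the host names and the `mymaster` group name below are placeholders for your deployment.

```typescript
// import Redis from 'ioredis'

// ioredis asks the sentinels for the current master of the named group,
// connects to it, and follows failovers automatically.
const sentinelOptions = {
  sentinels: [
    { host: 'sentinel-1', port: 26379 },
    { host: 'sentinel-2', port: 26379 },
    { host: 'sentinel-3', port: 26379 },
  ],
  name: 'mymaster', // master group name configured in sentinel.conf
}

// const redis = new Redis(sentinelOptions)
```

The application never hardcodes the master's address, so a failover requires no config change or restart.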
Partitions data across several nodes (16384 hash slots).
npm install ioredis
import Redis from 'ioredis'
const redis = new Redis({
host: process.env.REDIS_HOST,
port: 6379,
password: process.env.REDIS_PASSWORD,
maxRetriesPerRequest: 3,
enableReadyCheck: true,
lazyConnect: false,
})
// Cluster
const cluster = new Redis.Cluster([
{ host: 'node1', port: 6379 },
{ host: 'node2', port: 6379 },
])
import { createClient } from 'redis'
const client = createClient({ url: process.env.REDIS_URL })
await client.connect()
pip install redis
import redis
r = redis.Redis(
host='localhost',
port=6379,
db=0,
decode_responses=True,
)
r.set('key', 'value')
value = r.get('key')
# Async
from redis import asyncio as aioredis
async def main():
r = aioredis.from_url("redis://localhost")
await r.set("key", "value")
value = await r.get("key")
await r.close()
Common pitfalls:
- FLUSHDB in prod → total data loss
- SET without EX on cache keys → TTL lost, memory leak
- RDB on cloud hosting without a persistent disk → data loss
- Eviction policy: allkeys-lru for caches, noeviction for queues
- CONFIG SET slowlog-log-slower-than 10000 (10 ms, in µs) to log slow commands
- Key naming convention: <service>:<entity>:<id>:<field>

Related skills:
- postgres-patterns
- mongodb-patterns
- realtime-websocket (in this plugin, coming soon)
- deploy-cloudflare / deploy-vercel