Implements multi-tier caching with Redis, in-memory caches, and CDN layers using cache-aside patterns, TTLs, and invalidation to reduce database load and improve read performance.
From database-cache-layer: install with `npx claudepluginhub nickloveinvesting/nick-love-plugins --plugin database-cache-layer`. This skill is limited to using the following tools:
- assets/README.md
- references/README.md
- scripts/README.md
- scripts/redis_setup.sh
Implement multi-tier caching strategies using Redis, application-level in-memory caches, and query result caching to reduce database load and improve read latency. This skill covers cache-aside, write-through, and write-behind patterns with proper invalidation strategies, TTL configuration, and cache stampede prevention.
Prerequisites: a running Redis instance (`docker run redis:7-alpine`) and `redis-cli` installed for cache inspection and debugging.

Profile database queries to identify caching candidates. Focus on queries that: execute more than 100 times per minute, take longer than 50ms, return data that changes less frequently than every 5 minutes, and produce results smaller than 1MB. Use pg_stat_statements or the MySQL slow query log.
Design the cache key schema with a consistent naming convention: service:entity:identifier:variant. Examples: app:user:12345:profile, app:products:category:electronics:page:1. Include a version prefix to enable bulk invalidation: v2:app:user:12345.
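A minimal key-builder sketch that enforces this convention and folds in the version prefix (the helper name and the `KEY_VERSION`/`SERVICE` constants are illustrative, not from any library):

```python
# Illustrative helper for the service:entity:identifier:variant convention.
KEY_VERSION = "v2"   # bump to bulk-invalidate every key at once
SERVICE = "app"

def cache_key(entity, identifier, *variant):
    """Build a versioned key, e.g. v2:app:user:12345:profile."""
    parts = [KEY_VERSION, SERVICE, entity, str(identifier), *map(str, variant)]
    return ":".join(parts)
```

Centralizing key construction like this prevents ad-hoc key strings from drifting out of the convention, which is a common source of key collisions and missed invalidations.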
Implement the cache-aside pattern for read-heavy data:
- `GET app:user:12345:profile` to attempt the cached read
- On a miss, query the database, then `SET app:user:12345:profile <json> EX 3600`
- `DEL app:user:12345:profile` to invalidate

Configure TTL values based on data change frequency: the less often the data changes, the longer its TTL can safely be, keeping each TTL below the acceptable staleness window for that data.
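The read and write paths above can be sketched in Python. The client parameter `r` and `load_from_db` are illustrative placeholders for a redis-py-style client and the real database query:

```python
import json

USER_TTL = 3600  # seconds; matches SET ... EX 3600 above

def get_user_profile(r, user_id, load_from_db):
    """Cache-aside read. `r` is any redis.Redis(decode_responses=True)-style
    client; `load_from_db` stands in for the real database query."""
    key = f"app:user:{user_id}:profile"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit
    profile = load_from_db(user_id)               # cache miss: query the DB
    r.set(key, json.dumps(profile), ex=USER_TTL)  # populate with a TTL
    return profile

def invalidate_user_profile(r, user_id):
    """Write path: call after the database mutation commits."""
    r.delete(f"app:user:{user_id}:profile")
```

With a real client, pass `r = redis.Redis(decode_responses=True)` and the actual query function; injecting the client also makes the functions easy to unit-test.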
Implement cache stampede prevention for high-traffic cache keys:
- Probabilistic early refresh: recompute at `TTL * 0.8` with probability `1 / concurrent_requests`
- Distributed locking: `SET key:lock NX EX 5` to let one request refresh while others serve stale data

Add an application-level L1 cache using an in-memory LRU cache (Node.js: lru-cache, Python: cachetools, Java: Caffeine) for per-process caching of ultra-hot data. Set the L1 TTL shorter than the Redis TTL (e.g., 60 seconds L1, 5 minutes Redis).
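The locking variant can be sketched as follows (a simplified sketch: `recompute` stands in for the expensive query, `r` is any redis-py-compatible client, and losers poll rather than serving stale data, which assumes the key is merely cold rather than expired-but-present):

```python
import json
import time

def get_with_lock(r, key, ttl, recompute, lock_ttl=5):
    """Stampede-safe read: on a miss, SET key:lock NX EX 5 elects one
    caller to recompute while the rest back off and re-read."""
    while True:
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)                      # hit: serve cached
        if r.set(key + ":lock", "1", nx=True, ex=lock_ttl):
            try:
                value = recompute()                        # lock winner refreshes
                r.set(key, json.dumps(value), ex=ttl)
                return value
            finally:
                r.delete(key + ":lock")                    # release early on success
        time.sleep(0.05)                                   # losers wait, then retry
```

The `EX 5` on the lock doubles as a safety valve: if the refreshing process dies, the lock self-expires and another caller can take over.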
Configure Redis for production:
- Set `maxmemory` to 75% of available RAM
- Set `maxmemory-policy allkeys-lru` for cache workloads
- Use `save ""` (disable RDB persistence) for pure cache use
- Set `tcp-keepalive 60` and `timeout 300`

Implement cache invalidation on data mutations. After INSERT, UPDATE, or DELETE operations, delete the corresponding cache key and any aggregate/list cache keys that include the modified data. Use Redis key patterns or tag-based invalidation for related keys.
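Those settings correspond to a redis.conf fragment like the following (a sketch; the `6gb` value assumes an instance with roughly 8 GB of RAM and should be sized to ~75% of yours):

```
# redis.conf — cache-tuned fragment
maxmemory 6gb
# evict least-recently-used keys across the whole keyspace
maxmemory-policy allkeys-lru
# pure cache: disable RDB snapshots
save ""
tcp-keepalive 60
timeout 300
```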
Add cache metrics instrumentation: track cache hit rate (hits / (hits + misses)), cache miss latency (time to populate from DB), Redis memory usage, eviction rate, and average key TTL remaining. Alert when hit rate drops below 80%.
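A minimal in-process tracker illustrating the hit-rate formula and the 80% alert threshold (a sketch; a real deployment would export these counters to a metrics system such as Prometheus or StatsD rather than keep them in-process):

```python
class CacheMetrics:
    """Track hits/misses and compute hit rate = hits / (hits + misses)."""
    def __init__(self):
        self.hits = 0
        self.misses = 0

    def record(self, hit):
        if hit:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

    def should_alert(self, threshold=0.80):
        """True when the hit rate drops below the alert line above."""
        return self.hit_rate < threshold
```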
Test cache behavior under load: verify cache hit rate reaches 90%+ for targeted queries, confirm cache invalidation works correctly on updates, and measure end-to-end latency improvement compared to direct database queries.
| Error | Cause | Solution |
|---|---|---|
| Redis connection refused | Redis server down or network issue | Implement circuit breaker pattern; fall through to database on cache unavailability; retry with exponential backoff |
| Cache stampede on popular key expiration | Many concurrent requests hit cache miss simultaneously | Use distributed locking or probabilistic early refresh; extend TTL with jitter (TTL + random(0, TTL*0.1)) |
| Stale data served after database update | Cache invalidation missed or delayed | Audit invalidation paths; use publish/subscribe for cache invalidation events; reduce TTL for sensitive data |
| Redis out of memory (OOM) | Cache size exceeds maxmemory setting | Enable allkeys-lru eviction; reduce TTLs; audit large keys with redis-cli --bigkeys; increase maxmemory |
| Cache key collision | Different data stored under the same key pattern | Include all discriminating parameters in the cache key; add content hash to key for variant detection |
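The jitter formula from the stampede row above, `TTL + random(0, TTL*0.1)`, can be written as a one-liner so that keys written at the same moment no longer expire together:

```python
import random

def jittered_ttl(base_ttl):
    """Spread expirations: TTL plus up to 10% random jitter."""
    return base_ttl + random.randint(0, base_ttl // 10)
```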
Caching a product catalog for an e-commerce site: Product detail pages query 3 tables (products, categories, reviews_summary). Cache the assembled product JSON in Redis with TTL of 10 minutes. Cache hit rate reaches 95% since products change rarely. Category pages use list cache keys app:products:category:electronics:sort:price:page:1 with 5-minute TTL. On product update, invalidate both the product key and all category list keys containing that product.
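The dual invalidation described here might look like the following sketch (key names follow the schema above and are illustrative; `SCAN` is used instead of `KEYS` so a busy instance is not blocked, and `r` is any redis-py-style client):

```python
def invalidate_product(r, product_id, category):
    """After a product UPDATE: drop the assembled product key and every
    cached category list page that may contain it."""
    r.delete(f"app:products:{product_id}")
    pattern = f"app:products:category:{category}:*"
    for key in list(r.scan_iter(match=pattern)):  # SCAN, not KEYS
        r.delete(key)
```

Tag-based invalidation (keeping a Redis set of list keys per product) avoids the scan entirely and is worth the bookkeeping once category pages number in the thousands.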
User session caching with Redis: Store session data as Redis hashes (HSET session:abc123 userId 456 role admin lastAccess 1705341234). Set TTL to 30 minutes with sliding expiration on each access (EXPIRE session:abc123 1800). Session reads drop from 2ms (PostgreSQL) to 0.1ms (Redis), eliminating 50,000 database queries per minute.
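A sliding-expiration read mirroring the HSET/EXPIRE commands above might look like this (assuming a redis-py-style client passed as `r`; `touch_session` is an illustrative name):

```python
import time

SESSION_TTL = 1800  # 30 minutes, sliding

def touch_session(r, session_id):
    """Read a session hash and slide its expiration on each access.
    Returns None for a missing or expired session."""
    key = f"session:{session_id}"
    data = r.hgetall(key)
    if not data:
        return None
    r.hset(key, mapping={"lastAccess": int(time.time())})
    r.expire(key, SESSION_TTL)   # sliding expiration: EXPIRE key 1800
    return data
```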
API response caching with stale-while-revalidate: Dashboard endpoint takes 3 seconds to compute. Cache the response with 5-minute TTL. When TTL expires, the first request triggers an async background refresh while serving the stale cached response. Subsequent requests within the refresh window also receive the stale response. Dashboard always loads in under 5ms from the client perspective.
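One way to sketch stale-while-revalidate on top of Redis (an assumption-laden sketch, not the only implementation: here the payload is stored without a Redis TTL and a separate `:fresh` marker carries the 5-minute TTL, so the stale copy survives expiry; `recompute` stands in for the slow dashboard query):

```python
import json
import threading

def get_swr(r, key, ttl, recompute, refresh_flag_ttl=60):
    """Stale-while-revalidate: serve the cached payload immediately; when
    the freshness marker lapses, let exactly one caller refresh async."""
    cached = r.get(key)
    if cached is None:
        return _refresh(r, key, ttl, recompute)        # cold start: compute inline
    if r.get(key + ":fresh") is None and r.set(
        key + ":refreshing", "1", nx=True, ex=refresh_flag_ttl
    ):
        threading.Thread(
            target=_refresh, args=(r, key, ttl, recompute), daemon=True
        ).start()                                      # single async refresher
    return json.loads(cached)                          # stale or fresh: always fast

def _refresh(r, key, ttl, recompute):
    value = recompute()
    r.set(key, json.dumps(value))                      # payload: no TTL
    r.set(key + ":fresh", "1", ex=ttl)                 # freshness marker holds TTL
    r.delete(key + ":refreshing")
    return value
```

The `:refreshing` lock (SET NX) ensures that only the first request in the refresh window triggers the 3-second recomputation; everyone else gets the stale response in milliseconds.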