Caching strategy skill. Activates when user needs to design cache layers (CDN, application, database, session), implement cache invalidation strategies (TTL, event-based, write-through, write-behind), configure Redis/Memcached/Varnish, or prevent cache stampedes. Triggers on: /godmode:cache, "add caching", "cache invalidation", "Redis setup", "cache strategy", "CDN configuration", "cache stampede", or when the orchestrator detects caching opportunities.
From godmode: npx claudepluginhub arbazkhan971/godmode. This skill uses the workspace's default tool permissions.
Triggers: /godmode:cache, or when /godmode:perf identifies slow queries or high latency.

CACHE ASSESSMENT:
Project: <name>
Current Caching: None | CDN only | App-level | Multi-layer
Performance Baseline: P50/P95 latency, Database QPS, Cache hit rate
HOT PATH ANALYSIS: Endpoint, QPS, Latency, Cacheable (Y/N), TTL
IMPACT ESTIMATE: Hit rate, latency reduction, DB load reduction
Multi-layer architecture: CDN/Edge -> Application (Redis/Memcached) -> DB Query Cache -> Source of Truth (Database).
Cache-Aside Pattern (Default): the application reads from the cache first; on a miss, it loads from the source of truth and populates the cache with a TTL. Writes go to the database, then invalidate (delete) the cached key.
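A minimal cache-aside sketch. An in-memory dict with expiry timestamps stands in for Redis here so the example is self-contained; `load_from_db` and the `user:{id}` key are illustrative assumptions, not part of the skill's API.

```python
import time

# In-memory stand-in for Redis: key -> (value, expires_at).
# A real setup would use a Redis client with the same get/set-with-TTL semantics.
_cache: dict = {}

def cache_get(key):
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    _cache.pop(key, None)  # expired or missing
    return None

def cache_set(key, value, ttl_seconds):
    _cache[key] = (value, time.time() + ttl_seconds)

def get_user(user_id, load_from_db):
    """Cache-aside: try cache first; on miss, load and populate."""
    key = f"user:{user_id}"
    value = cache_get(key)
    if value is None:
        value = load_from_db(user_id)           # source of truth
        cache_set(key, value, ttl_seconds=600)  # always set a TTL
    return value
```

On writes, the corresponding delete (`_cache.pop(key, None)`, or `DEL` in Redis) keeps the next read consistent.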
TTL-Based: TTL = max acceptable staleness. Static config: 24h. Product catalog: 10m. Search results: 1m. Real-time pricing: 10s. Always set a TTL.
Event-Based: On data change, publish event -> consumer deletes cache keys + purges CDN. Near-real-time, precise.
Write-Through: Write to cache AND DB synchronously. Always consistent. Higher write latency.
Write-Behind: Write to cache immediately, async flush to DB. Fast writes. Risk of data loss.
Default recommendation: Cache-aside + TTL + event-based invalidation.
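The event-based flow above can be sketched in-process. In a real deployment the event bus would be Redis pub/sub or a message queue, and the handler would also call the CDN's purge API; all names here are illustrative.

```python
# Minimal in-process event bus standing in for pub/sub.
subscribers = []

def publish(event):
    for handler in subscribers:
        handler(event)

# Stand-in application cache with an entity key and a derived list key.
cache = {"product:42": {"price": 10}, "product:list:featured": ["42"]}

def on_product_changed(event):
    # On data change: delete the entity key and any derived list keys.
    # A real handler would also purge the CDN path for this product.
    cache.pop(f"product:{event['id']}", None)
    cache.pop("product:list:featured", None)

subscribers.append(on_product_changed)

# A write to the source of truth publishes a change event:
publish({"type": "product.updated", "id": 42})
```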
Eviction policy: allkeys-lru (recommended). Set maxmemory.
Key naming: {entity}:{id}, {entity}:{id}:{field}, {entity}:list:{filter}
Data structures: STRING (single objects), HASH (objects with fields), SORTED SET (leaderboards), SET/HLL (unique counts), LIST (feeds), STRING+TTL (rate limiting), STRING+NX (distributed locks).
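The naming scheme can be centralized in small helpers (hypothetical names) so every service builds keys identically and invalidation can target them reliably:

```python
def cache_key(entity, entity_id, field=None):
    """Build {entity}:{id} or {entity}:{id}:{field} keys."""
    key = f"{entity}:{entity_id}"
    return f"{key}:{field}" if field else key

def list_key(entity, filter_expr):
    """Build {entity}:list:{filter} keys for cached query results."""
    return f"{entity}:list:{filter_expr}"
```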
Stampede prevention. Problem: a popular key expires -> thousands of concurrent DB queries for the same data. Mitigations: per-key mutex, probabilistic early refresh (PER), stale-while-revalidate, TTL jitter.
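A sketch of the mutex approach using an in-process per-key lock; a distributed deployment would use a Redis lock (SET with NX and EX) instead. Only one caller recomputes the expired key; the rest wait and then read the refreshed value.

```python
import threading
import time

_cache = {}        # key -> (value, expires_at)
_locks = {}        # key -> threading.Lock
_locks_guard = threading.Lock()

def _lock_for(key):
    with _locks_guard:
        return _locks.setdefault(key, threading.Lock())

def get_with_stampede_protection(key, recompute, ttl_seconds=60):
    entry = _cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    with _lock_for(key):
        # Re-check: another thread may have refreshed while we waited.
        entry = _cache.get(key)
        if entry and entry[1] > time.time():
            return entry[0]
        value = recompute()  # exactly one caller hits the database
        _cache[key] = (value, time.time() + ttl_seconds)
        return value
```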
Track: cache_hit_ratio (alert < 80%), cache_latency_seconds (P95 > 10ms), cache_eviction_total (> 100/min), cache_memory_bytes (> 85%), cache_error_total (> 0).
CACHE STRATEGY VALIDATION:
- All cache keys have TTL: PASS | FAIL
- Invalidation strategy defined: PASS | FAIL
- Stampede prevention on hot keys: PASS | FAIL
- Hit rate monitoring configured: PASS | FAIL
- Memory eviction policy configured: PASS | FAIL
- Key naming consistent: PASS | FAIL
- Sensitive data not cached unencrypted: PASS | FAIL
- Graceful degradation on cache failure: PASS | FAIL
VERDICT: <PASS | NEEDS REVISION>
# Inspect cache status
redis-cli INFO stats | grep hit
curl -s http://localhost:8080/cache/stats
# Flush the current database (destructive; use with care)
redis-cli FLUSHDB
# Cache diagnostics
redis-cli INFO stats | grep -E "hits|misses|evicted"
redis-cli --bigkeys
redis-cli DBSIZE
redis-cli MEMORY USAGE <key>
IF cache hit rate < 80%: audit key design and TTL values.
WHEN eviction rate > 100/min: increase maxmemory or audit key sizes.
IF P95 cache latency > 10ms: check network, connection pooling, key sizes.
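The hit-rate rule can be scripted by parsing `redis-cli INFO stats` output; `keyspace_hits` and `keyspace_misses` are standard INFO fields, and the 80% threshold matches the rule above.

```python
def hit_ratio(info_stats: str) -> float:
    """Compute hit ratio from `redis-cli INFO stats` output."""
    stats = {}
    for line in info_stats.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    hits = int(stats.get("keyspace_hits", 0))
    misses = int(stats.get("keyspace_misses", 0))
    total = hits + misses
    return hits / total if total else 0.0

sample = "keyspace_hits:900\r\nkeyspace_misses:100\r\nevicted_keys:5"
if hit_ratio(sample) < 0.80:
    print("ALERT: hit ratio below 80%, audit key design and TTLs")
```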
| Flag | Description |
|---|---|
| (none) | Full caching strategy design |
| --assess | Assess current caching |
| --redis | Configure Redis layer |
| --cdn | Design CDN strategy |
| --invalidation | Design invalidation only |
| --stampede | Implement stampede prevention |
| --monitor | Set up monitoring |
Never ask to continue. Loop autonomously until cache hit rate meets target and invalidation is verified.
CACHE STRATEGY COMPLETE:
Cache layers: <N> configured
Technology: <Redis | Memcached | CloudFront | other>
Keys designed: <N> patterns
Invalidation: <TTL | event-based | write-through>
Stampede prevention: <mutex | PER | none>
Hit rate target: <N>%
1. Cache infra: grep for redis, ioredis, memcached; check docker-compose
2. CDN: cloudflare, cloudfront configs; Cache-Control headers
3. Patterns: grep for .get(, .set(, .setex( in service code
4. Monitoring: grep for cache_hit, cache_miss metrics
Run caching tasks sequentially: design, then infrastructure, then monitoring.
| Failure | Action |
|---|---|
| Cache stampede (thundering herd) | Use lock-based refresh or stale-while-revalidate. Add jitter to TTLs. Never let all keys expire simultaneously. |
| Cache poisoning (wrong data cached) | Add cache key versioning. Invalidate on write. Verify cache content matches source on critical paths. |
| Redis OOM | Set maxmemory-policy to allkeys-lru. Audit key sizes with --bigkeys. Set TTLs on all cache keys. |
| Cache hit rate too low | Check key design matches query patterns. Verify TTL is long enough. Monitor which keys are evicted most. |
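The TTL-jitter advice in the stampede row reduces to a one-line helper (a sketch; the 10% default spread is an assumption):

```python
import random

def ttl_with_jitter(base_ttl_seconds: int, jitter_fraction: float = 0.1) -> int:
    """Randomize TTL by +/- jitter_fraction so keys written together
    do not all expire at the same instant."""
    jitter = base_ttl_seconds * jitter_fraction
    return int(base_ttl_seconds + random.uniform(-jitter, jitter))
```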
Append to .godmode/cache-results.tsv:
timestamp layer strategy hit_rate ttl_seconds invalidation_method status
One row per cache layer configured. Never overwrite previous rows.
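A minimal append helper for the results log (the column order follows the header above; opening the file in mode "a" guarantees previous rows are never overwritten):

```python
import time

def log_cache_result(path, layer, strategy, hit_rate,
                     ttl_seconds, invalidation_method, status):
    """Append one tab-separated row per configured cache layer."""
    row = [time.strftime("%Y-%m-%dT%H:%M:%S"), layer, strategy,
           str(hit_rate), str(ttl_seconds), invalidation_method, status]
    with open(path, "a") as f:
        f.write("\t".join(row) + "\n")
```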
After EACH cache change:
KEEP if: hit rate improved AND no stale data AND invalidation works on writes
DISCARD if: stale data served OR cache stampede possible OR hit rate decreased
On discard: revert. Fix invalidation logic before retrying.
STOP when ALL of:
- Cache hit rate meets target
- Invalidation verified on all write paths
- No stale data beyond TTL
- Stampede protection active