From ring-dev-team
Implements /readyz readiness probes and startup self-probes for Go/TypeScript services, validating dependencies (DB, cache, queue, TLS) with latency/status beyond K8s basics.
npx claudepluginhub lerianstudio/ring --plugin ring-dev-team
Scan the project to detect ALL external dependencies:
# Go: detect imports and connection patterns
grep -rn 'pgx\|pgxpool\|mongo\.\|mongo-driver\|redis\.\|valkey\|amqp\|rabbitmq\|s3\|aws' go.mod internal/ pkg/ cmd/
grep -rn 'NewPostgres\|NewMongo\|NewRedis\|NewRabbit\|NewValkey\|WithModule' internal/
# TypeScript/Next.js: detect connection patterns
grep -rn 'MongoClient\|mongoose\|pg\|Pool\|redis\|amqplib\|S3Client' package.json src/ app/ lib/
Build dependency map: PostgreSQL (pgx), MongoDB (mongo-driver), Redis/Valkey (go-redis), RabbitMQ (amqp091-go), S3 (aws-sdk), HTTP clients. For each, detect if TLS is configured (sslmode, tls=true, rediss://, amqps://).
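The TLS-detection signals above (sslmode, tls=true, rediss://, amqps://) can be checked from the connection string alone. A minimal sketch of such a helper — the name `connStringUsesTLS` is an illustration, not part of any library; it detects configuration *intent* only, and the live connection must still be verified by the checker itself:

```go
package main

import (
	"net/url"
	"strings"
)

// connStringUsesTLS infers TLS intent from a connection string using the
// signals listed above: TLS-implying URL schemes, Postgres sslmode
// parameters, and generic tls=true / ssl=true query flags.
func connStringUsesTLS(conn string) bool {
	if u, err := url.Parse(conn); err == nil {
		switch u.Scheme {
		case "rediss", "amqps", "https", "mongodb+srv": // TLS-implying schemes
			return true
		}
	}
	lower := strings.ToLower(conn)
	if strings.Contains(lower, "sslmode=require") ||
		strings.Contains(lower, "sslmode=verify-ca") ||
		strings.Contains(lower, "sslmode=verify-full") {
		return true // Postgres-style sslmode parameters
	}
	return strings.Contains(lower, "tls=true") || strings.Contains(lower, "ssl=true")
}
```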
SaaS deployment mode: TLS is MANDATORY for all database connections. No exceptions.
{
"status": "healthy",
"checks": {
"postgres": { "status": "up", "latency_ms": 2, "tls": true },
"mongodb": { "status": "up", "latency_ms": 3, "tls": true },
"rabbitmq": { "status": "up", "connected": true },
"valkey": { "status": "up", "latency_ms": 1, "tls": false }
},
"version": "1.2.3",
"deployment_mode": "saas"
}
- status: "healthy" if ALL checks pass, "unhealthy" if ANY fails
- latency_ms (for connections with ping) and tls (boolean) per dependency
- deployment_mode: from DEPLOYMENT_MODE env or inferred from config
- version: from build info or VERSION env

// internal/adapters/http/in/readyz.go
type DependencyCheck struct {
Status string `json:"status"`
LatencyMs int64 `json:"latency_ms,omitempty"`
TLS *bool `json:"tls,omitempty"`
Connected *bool `json:"connected,omitempty"`
Error string `json:"error,omitempty"`
}
type ReadyResponse struct {
Status string `json:"status"`
Checks map[string]DependencyCheck `json:"checks"`
Version string `json:"version"`
DeploymentMode string `json:"deployment_mode"`
}
func isCacheDependency(name string) bool {
normalized := strings.ToLower(name)
return strings.Contains(normalized, "redis") ||
strings.Contains(normalized, "valkey") ||
strings.Contains(normalized, "cache")
}
func ReadyHandler(deps Dependencies) fiber.Handler {
return func(c *fiber.Ctx) error {
ctx, cancel := context.WithTimeout(c.UserContext(), 5*time.Second)
defer cancel()
resp := ReadyResponse{
Status: "healthy",
Checks: make(map[string]DependencyCheck),
Version: buildVersion,
DeploymentMode: os.Getenv("DEPLOYMENT_MODE"),
}
// Each check: ping + measure latency + verify TLS
// Use 2s timeout per dependency, 1s for cache
for name, checker := range deps.HealthCheckers() {
timeout := 2 * time.Second
if isCacheDependency(name) {
timeout = 1 * time.Second
}
depCtx, depCancel := context.WithTimeout(ctx, timeout)
check := checker.Check(depCtx)
depCancel()
resp.Checks[name] = check
if check.Status != "up" {
resp.Status = "unhealthy"
}
}
if resp.Status != "healthy" {
return libHTTP.ServiceUnavailable(c, "UNHEALTHY", "Service Unhealthy", resp)
}
return libHTTP.OK(c, resp)
}
}
Each checker MUST verify TLS state from the connection options (e.g., connOpts.TLSConfig != nil for Go, mongoClient.options?.tls for TS). This is what would have caught the Monetarie bug.
RabbitMQ note: The amqp091-go library's *amqp.Connection does not reliably expose TLS state after dialing. For RabbitMQ, TLS detection MUST inspect the connection URL scheme (amqps:// = TLS, amqp:// = plaintext). The checker constructor MUST accept the connection URL alongside the *amqp.Connection object and derive tls: true/false from the scheme. Do not attempt to reflect on the live connection object for this purpose.
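The scheme inspection can be a one-liner; `rabbitTLSFromURL` is an illustrative name for what the checker constructor would call before storing the result alongside the *amqp.Connection:

```go
package main

import (
	"net/url"
	"strings"
)

// rabbitTLSFromURL derives TLS state from the connection URL scheme, per the
// rule above: amqps:// means TLS, amqp:// means plaintext.
func rabbitTLSFromURL(rawURL string) (bool, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return false, err
	}
	return strings.EqualFold(u.Scheme, "amqps"), nil
}
```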
"SaaS deployment mode: TLS is MANDATORY" means two separate things that are both required:
| Concern | Responsibility | Mechanism |
|---|---|---|
| Surface TLS state | /readyz probe | Reports "tls": true/false per dependency in JSON response |
| Enforce TLS | Bootstrap / connection code | MUST refuse to start if DEPLOYMENT_MODE=saas and TLS is not configured |
MUST implement both. Surfacing without enforcement means the service starts silently insecure. Enforcement without surfacing means the Tenant Manager cannot confirm TLS posture post-provisioning. Neither alone is sufficient.
Bootstrap enforcement pattern (Go):
if os.Getenv("DEPLOYMENT_MODE") == "saas" && connOpts.TLSConfig == nil {
return nil, fmt.Errorf("TLS is required in SaaS mode but not configured for %s", depName)
}
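A TypeScript sketch of the same gate for the Next.js side — `assertTLSInSaaS` is a name introduced here, not an existing helper; call it from connection setup before the first client is created:

```typescript
// Hypothetical bootstrap gate: refuse to start in SaaS mode when a
// dependency's connection options do not have TLS configured.
function assertTLSInSaaS(depName: string, tlsConfigured: boolean): void {
  if (process.env.DEPLOYMENT_MODE === "saas" && !tlsConfigured) {
    throw new Error(
      `TLS is required in SaaS mode but not configured for ${depName}`,
    );
  }
}
```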
Same pattern at app/api/admin/health/readyz/route.ts: ping each dependency, measure latency, check TLS, return 200/503 with the same JSON contract. Use Response.json() with appropriate status code.
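A sketch of that route under the same JSON contract. The dependency pings are injected so the aggregation stays testable; `timedCheck`, `buildReadyResponse`, and the ping helper names are assumptions — only `Response.json()` and the contract above come from the text:

```typescript
// Sketch for app/api/admin/health/readyz/route.ts.
type DependencyCheck = {
  status: "up" | "down";
  latency_ms?: number;
  tls?: boolean;
  error?: string;
};

// timedCheck pings one dependency, measuring latency and tagging TLS state.
async function timedCheck(
  ping: () => Promise<void>,
  tls: boolean,
): Promise<DependencyCheck> {
  const start = Date.now();
  try {
    await ping();
    return { status: "up", latency_ms: Date.now() - start, tls };
  } catch (err) {
    return { status: "down", latency_ms: Date.now() - start, tls, error: String(err) };
  }
}

// buildReadyResponse aggregates checks into the shared JSON contract plus
// the HTTP status code (200 when all up, 503 otherwise).
function buildReadyResponse(checks: Record<string, DependencyCheck>) {
  const healthy = Object.values(checks).every((c) => c.status === "up");
  return {
    status: healthy ? 200 : 503,
    body: {
      status: healthy ? "healthy" : "unhealthy",
      checks,
      version: process.env.VERSION ?? "unknown",
      deployment_mode: process.env.DEPLOYMENT_MODE ?? "unknown",
    },
  };
}

// The route handler then becomes (pingPostgres/pgTLS are placeholders for
// real connection wiring):
// export async function GET(): Promise<Response> {
//   const checks = { postgres: await timedCheck(pingPostgres, pgTLS) };
//   const { status, body } = buildReadyResponse(checks);
//   return Response.json(body, { status });
// }
```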
| Stack | Ready Path | Health Path |
|---|---|---|
| Go API | /readyz | /health |
| Go Worker | /readyz on HEALTH_PORT | /health on HEALTH_PORT |
| Next.js | /api/admin/health/readyz | same as Ready Path |
Next.js exposes a single /api/admin/health/readyz endpoint which serves both readiness and health checks.
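For the Go API row, the corresponding K8s probe wiring might look like the following — a sketch only; port, timings, and thresholds are assumptions, not values from this document:

```yaml
# Deployment spec fragment: /readyz gates traffic, /health gates restarts.
readinessProbe:
  httpGet:
    path: /readyz
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 15
  failureThreshold: 3
```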
The app MUST run all readiness checks at boot and log results BEFORE accepting traffic.
// cmd/app/main.go or internal/bootstrap/selfprobe.go
func RunSelfProbe(ctx context.Context, deps Dependencies, logger Logger) error {
logger.Infow("startup_self_probe_started",
"probe", "self",
)
results := make(map[string]DependencyCheck)
allHealthy := true
for name, checker := range deps.HealthCheckers() {
check := checker.Check(ctx)
results[name] = check
if check.Status == "up" {
logger.Infow("self_probe_check",
"probe", "self",
"name", name,
"status", check.Status,
"duration_ms", check.LatencyMs,
"tls", check.TLS,
)
} else {
logger.Errorw("self_probe_check",
"probe", "self",
"name", name,
"status", check.Status,
"duration_ms", check.LatencyMs,
"error", check.Error,
)
allHealthy = false
}
}
if !allHealthy {
logger.Errorw("startup_self_probe_failed",
"probe", "self",
"results", results,
)
return fmt.Errorf("self-probe failed: one or more dependencies unreachable")
}
logger.Infow("startup_self_probe_passed",
"probe", "self",
"results", results,
)
return nil
}
Self-probe failure MUST affect /health:
var selfProbeOK atomic.Bool // package-level
func init() { selfProbeOK.Store(false) } // unhealthy until proven otherwise
// At startup, after self-probe succeeds:
if err := RunSelfProbe(ctx, deps, logger); err != nil {
// selfProbeOK stays false — /health returns 503
// K8s liveness probe will restart the pod
} else {
selfProbeOK.Store(true)
}
// /health handler
f.Get("/health", func(c *fiber.Ctx) error {
if !selfProbeOK.Load() {
return libHTTP.ServiceUnavailable(c, "UNHEALTHY", "Self-probe failed", nil)
}
return libHTTP.HealthWithDependencies(deps)(c)
})
This is the key insight: /health is no longer just "process alive." It's "startup self-probe passed AND lib-commons runtime dependency state is healthy." A pod that starts but can't reach its databases will be restarted by K8s instead of silently serving errors, and runtime dependency or circuit-breaker failures are still surfaced through the standard lib-commons health handler.
Runtime behavior after startup:
- Self-probe passes: /health reflects the result; /readyz operates normally.
- Self-probe fails: /health returns 503; K8s restarts the pod via the liveness probe.
- Optionally re-run the self-probe on a timer via the SELF_PROBE_INTERVAL env.

Next.js: instrumentation.ts register() executes once at process startup and BLOCKS before the first request is served — this IS the self-probe point for Next.js. Use it.
MUST NOT call process.exit() on probe failure inside register(). Doing so prevents K8s from collecting a useful log tail. Instead:
1. register(): run all dependency checks; if any fail, set a module-level flag (let startupHealthy = false).
2. The /api/admin/health/readyz route handler checks this flag.
3. K8s polls /api/admin/health/readyz, sees 503, and withholds traffic — no process.exit() needed.

// instrumentation.ts
let startupHealthy = false;
let startupChecks: Record<string, DependencyCheck> = {};
export async function register() {
const results = await runAllChecks();
startupChecks = results;
startupHealthy = Object.values(results).every(c => c.status === "up");
// log results here — process stays alive regardless
}
export { startupHealthy, startupChecks };
The /api/admin/health/readyz route imports startupHealthy and startupChecks from instrumentation.ts and returns 200 or 503 accordingly.
These two mechanisms are complementary, not redundant:
| Mechanism | When | Purpose |
|---|---|---|
| Self-probe | STARTUP — before first request | Validates dependencies are reachable before traffic is allowed |
| /readyz | RUNTIME — per request | Validates dependencies are still reachable as K8s readinessProbe |
| /health | RUNTIME — per request | Reflects self-probe result AND lib-commons runtime circuit-breaker state |
A pod that passes startup self-probe can still fail /readyz later (e.g., DB goes away mid-run). A pod that fails self-probe should never receive traffic in the first place. Both gates are necessary.
Verify /readyz endpoint, RunSelfProbe function, and /health self-probe wiring all exist.
| Rationalization | Why It's WRONG | Required Action |
|---|---|---|
| "K8s TCP probe is enough" | TCP ≠ app ready. Monetarie incident: pod alive, Mongo dead. | Implement /readyz |
| "/health covers it" | /health without self-probe is blind to dep failures | Add self-probe, wire to /health |
| "TLS check is overhead" | TLS mismatch = silent failure for every query | Check TLS per dependency |
| "Only backend needs this" | Console (frontend) caused the incident | All apps, no exceptions |
| "Dependencies are reliable" | Networks partition. Configs drift. Certs expire. | Check every time |
| "Too many checks slow startup" | Bounded per-dependency timeouts keep overhead low. Incident costs hours. | No excuse |
| "Service has only one dependency" | One broken dependency = total outage. Complexity argument is irrelevant at zero scale. Self-probe is three lines of code. | Implement self-probe, no exceptions |