From load-balancer-tester
Validates load balancer traffic distribution, health checks, failover, session persistence, and SSL for NGINX, HAProxy, AWS ALB/NLB, GCP, Kubernetes Ingress.
Install:

```shell
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin load-balancer-tester
```
Validate load balancer behavior including traffic distribution algorithms, health check mechanisms, failover scenarios, session persistence, and SSL termination. Supports testing for NGINX, HAProxy, AWS ALB/NLB, GCP Load Balancers, and Kubernetes Ingress controllers.
Generates configurations for AWS ALB/NLB, GCP load balancers, Nginx, and HAProxy with health checks, SSL termination, routing rules, sticky sessions, and monitoring.
Detects load test infrastructure (k6, Artillery, Gatling, JMeter), designs scenarios for critical endpoints, executes stress tests, and analyzes results against thresholds. For pre-release validation and scaling.
Creates and runs load tests with k6, JMeter, and Artillery for web apps and APIs. Validates performance under stress, spike, soak, and scalability scenarios to detect bottlenecks.
Requires an HTTP client (curl, wrk, hey, or k6) for sending test traffic, and backends that return an identifying response header (X-Backend-Server, Server) to determine which instance served the request.

Troubleshooting:

| Error | Cause | Solution |
|---|---|---|
| All requests hit the same backend | Session affinity enabled unintentionally or DNS caching | Disable sticky sessions for distribution tests; use different source IPs; bypass DNS cache |
| Health check passes but backend is unhealthy | Health check endpoint does not reflect actual application health | Configure health checks to hit a deep endpoint that verifies database connectivity |
| 502 Bad Gateway during failover | Health check interval too long; load balancer still routing to failed backend | Reduce health check interval and failure threshold; verify deregistration delay settings |
| SSL certificate error | Certificate does not match domain or is expired | Verify certificate SAN entries; check expiration date; ensure full certificate chain is configured |
| Connection refused on backend port | Firewall or security group blocking load balancer to backend traffic | Verify security group rules allow traffic from load balancer subnet; check backend listen address |
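For the SSL row above, certificate problems can be inspected offline with openssl. A minimal sketch: the self-signed certificate generated here is only a stand-in for one fetched from the load balancer (e.g. via `openssl s_client -connect lb.test.com:443`), and the domain names are assumptions.

```shell
#!/bin/bash
set -euo pipefail

# Stand-in certificate for illustration (requires OpenSSL 1.1.1+ for -addext);
# in practice, point the x509 commands at the cert served by your LB.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 30 -subj "/CN=app.test.com" \
  -addext "subjectAltName=DNS:app.test.com,DNS:www.app.test.com" 2>/dev/null

# List SAN entries -- the domain you are testing must appear here.
openssl x509 -in cert.pem -noout -ext subjectAltName

# Fail if the certificate expires within the next 24 hours.
openssl x509 -in cert.pem -noout -checkend 86400 \
  && echo "Certificate valid for at least another day"
```

The `-checkend` threshold is in seconds, so the same command works as an early-warning check in CI (e.g. 30 days = 2592000).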
Traffic distribution test with curl:

```bash
#!/bin/bash
set -euo pipefail

# Count which backend serves each of 100 requests, using the
# hostname field returned by the /health endpoint.
declare -A counts
for i in $(seq 1 100); do
  backend=$(curl -s -H "Host: app.test.com" http://lb.test.com/health \
    | jq -r '.hostname')
  counts[$backend]=$(( ${counts[$backend]:-0} + 1 ))
done

echo "Traffic distribution:"
for backend in "${!counts[@]}"; do
  echo "  $backend: ${counts[$backend]} requests"
done
```
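The distribution script only prints counts; deciding whether the spread is acceptable can be automated. A sketch with hard-coded example counts and an assumed 20% tolerance (reasonable for round-robin; weighted or least-connections algorithms need different expectations):

```shell
#!/bin/bash
set -euo pipefail

# Example counts; in practice, populate this array from the test loop.
declare -A counts=( [backend-a]=48 [backend-b]=52 )

total=0
for c in "${counts[@]}"; do total=$(( total + c )); done
expected=$(( total / ${#counts[@]} ))
tolerance=$(( expected * 20 / 100 ))  # 20% deviation allowed (assumption)

for backend in "${!counts[@]}"; do
  diff=$(( counts[$backend] - expected ))
  diff=${diff#-}  # absolute value
  if (( diff > tolerance )); then
    echo "UNEVEN: $backend served ${counts[$backend]}, expected ~$expected"
  else
    echo "OK: $backend served ${counts[$backend]}"
  fi
done
```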
Failover test sequence:

```bash
#!/bin/bash
set -euo pipefail

# 1. Verify both backends serve traffic
curl -s http://lb.test.com/health   # Backend A
curl -s http://lb.test.com/health   # Backend B

# 2. Stop Backend A
docker stop backend-a

# 3. Verify all traffic goes to Backend B (no errors)
for i in $(seq 1 10); do
  curl -sf http://lb.test.com/health || echo "FAIL: request $i"
done

# 4. Restart Backend A and verify it rejoins
docker start backend-a
sleep 10  # Wait for health check interval
curl -s http://lb.test.com/health   # Should see Backend A again
```
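The `sleep 10` in step 4 only works if it exceeds the load balancer's detection window. Worst-case timings can be derived from the health check settings; the values below are assumptions to illustrate the arithmetic, so substitute your own configuration:

```shell
#!/bin/bash
set -euo pipefail

interval=5              # seconds between health checks (assumed)
unhealthy_threshold=3   # consecutive failures before marking down (assumed)
healthy_threshold=2     # consecutive passes before rejoining (assumed)

# A failed backend can keep receiving traffic for up to this long:
detect_down=$(( interval * unhealthy_threshold ))
# A restarted backend waits roughly this long before rejoining:
detect_up=$(( interval * healthy_threshold ))

echo "Mark down after up to ${detect_down}s; rejoin after ~${detect_up}s"
```

With these example settings the rejoin window is exactly 10s, so sleeping for `detect_up + interval` before the final check gives some margin.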
k6 load test against load balancer:

```javascript
import http from 'k6/http';
import { check } from 'k6';

export const options = { vus: 50, duration: '30s' };

export default function () {
  const res = http.get('http://lb.test.com/api/data');
  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });
}
```
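Checks record pass/fail rates but do not fail the run; k6's `thresholds` option does that natively. Alternatively, when gating CI on an exported summary, the JSON can be checked with jq. A sketch: the `results.json` written here is a hand-crafted stand-in shaped like k6's `--summary-export` output, and the 500ms/1% limits are assumptions.

```shell
#!/bin/bash
set -euo pipefail

# Stand-in for: k6 run --summary-export results.json script.js
cat > results.json <<'EOF'
{"metrics":{"http_req_duration":{"p(95)":212.4},"http_req_failed":{"value":0.002}}}
EOF

# Pass only if p95 latency < 500ms and error rate < 1% (assumed limits).
pass=$(jq '(.metrics."http_req_duration"."p(95)" < 500)
           and (.metrics."http_req_failed".value < 0.01)' results.json)
if [ "$pass" = "true" ]; then
  echo "Thresholds met"
else
  echo "Thresholds failed"
  exit 1
fi
```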