From replit-pack
Load tests Replit deployments with k6 and autocannon, tunes Autoscale, sizes Reserved VMs, optimizes cold starts, and plans production capacity.
Install the skill:

```bash
npx claudepluginhub jeremylongshore/claude-code-plugins-plus-skills --plugin replit-pack
```
Load testing, scaling strategies, and capacity planning for Replit deployments. Covers Autoscale behavior tuning, Reserved VM right-sizing, cold start optimization, database connection scaling, and capacity benchmarking.
| Deployment Type | Scaling Behavior | Cold Start | Best For |
|---|---|---|---|
| Autoscale | 0 to N instances based on traffic | Yes (5-30s) | Variable traffic |
| Reserved VM | Fixed resources, always-on | No | Consistent traffic |
| Static | CDN-backed, infinite scale | No | Frontend assets |
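One way to see the Autoscale cold-start penalty directly is to compare the first hit after an idle period against a warm follow-up request. A minimal sketch — the 2s and 3x thresholds are illustrative assumptions, not Replit-documented values:

```typescript
// Rough cold-start probe: compare the first request after idle to a warm one.
// The 2000ms floor and 3x ratio are illustrative, not official Replit numbers.
export function likelyColdStart(firstMs: number, warmMs: number): boolean {
  return firstMs > 2000 && firstMs > warmMs * 3;
}

export async function probe(url: string) {
  const time = async (): Promise<number> => {
    const start = Date.now();
    await fetch(url); // Node 18+ global fetch
    return Date.now() - start;
  };
  const first = await time(); // may include container start-up
  const warm = await time();  // served by the now-running instance
  return { first, warm, coldStart: likelyColdStart(first, warm) };
}
```

Run `probe` once after letting the deployment sit idle; a large gap between `first` and `warm` is the cold start you will be optimizing below.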
```bash
# Quick benchmark with autocannon (fetched on demand via npx)
npx autocannon -c 10 -d 30 https://your-app.replit.app/health
# -c 10: 10 concurrent connections
# -d 30: 30-second duration
#
# Output shows:
# - Requests/sec
# - Latency (p50, p95, p99)
# - Throughput (bytes/sec)
# - Error count
```
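Raw benchmark numbers are easier to act on as a pass/fail gate in CI. A sketch, assuming you extract p99 latency, error count, and requests/sec from the run yourself — the thresholds are example values to tune against your own SLO:

```typescript
// Gate a deploy on benchmark results — thresholds are example values only.
interface BenchSummary {
  p99Ms: number;  // 99th-percentile latency in ms
  errors: number; // failed requests during the run
  rps: number;    // average requests per second
}

export function meetsSlo(b: BenchSummary): boolean {
  return b.errors === 0 && b.p99Ms < 2000 && b.rps >= 50;
}
```

Failing the build when `meetsSlo` returns false catches regressions before they reach production traffic.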
```javascript
// load-test.js — comprehensive Replit load test
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate, Trend } from 'k6/metrics';

const errorRate = new Rate('errors');
const coldStartTrend = new Trend('cold_start_duration');

export const options = {
  stages: [
    { duration: '1m', target: 5 },  // Warm up
    { duration: '3m', target: 20 }, // Normal load
    { duration: '2m', target: 50 }, // Peak load
    { duration: '1m', target: 0 },  // Cool down
  ],
  thresholds: {
    http_req_duration: ['p(95)<2000'], // 95% of requests under 2s
    errors: ['rate<0.05'],             // Error rate under 5%
  },
};

const BASE_URL = __ENV.DEPLOY_URL || 'https://your-app.replit.app';

export default function () {
  // Health check
  const healthRes = http.get(`${BASE_URL}/health`);
  check(healthRes, {
    'health returns 200': (r) => r.status === 200,
    'health under 1s': (r) => r.timings.duration < 1000,
  });
  errorRate.add(healthRes.status !== 200);

  // Responses slower than 5s are treated as probable cold starts
  if (healthRes.timings.duration > 5000) {
    coldStartTrend.add(healthRes.timings.duration);
  }

  // API endpoint
  const apiRes = http.get(`${BASE_URL}/api/status`);
  check(apiRes, {
    'api returns 200': (r) => r.status === 200,
  });

  sleep(1);
}
```
```bash
# Run k6 load test
k6 run --env DEPLOY_URL=https://your-app.replit.app load-test.js

# With JSON output for later analysis
k6 run --out json=results.json load-test.js
```
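The JSON output is newline-delimited: each line is either a metric definition or a `Point` carrying one sample. A sketch that computes the p95 of `http_req_duration` from `results.json` — the field names follow k6's documented JSON format, but verify them against your k6 version's output:

```typescript
// Summarize k6 --out json results: p95 of http_req_duration.
import { readFileSync } from 'node:fs';

export function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[idx];
}

export function p95FromK6Json(raw: string): number {
  const samples: number[] = [];
  for (const line of raw.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    if (entry.type === 'Point' && entry.metric === 'http_req_duration') {
      samples.push(entry.data.value);
    }
  }
  return percentile(samples, 95);
}

// Usage: console.log(p95FromK6Json(readFileSync('results.json', 'utf8')));
```

Tracking this number per deploy turns the load test into a regression check rather than a one-off experiment.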
Autoscale cold starts happen when Replit has to start a new container instance — typically on the first request after a period of no traffic. Expect 5-30 seconds depending on app size.

Reduction strategies:

1. Minimize startup imports (lazy-load heavy modules)
2. Use a smaller Nix dependency set
3. Pre-connect the database in the background (don't block startup)
4. Keep the package count low
5. Run compiled JavaScript (not tsx at runtime)
Before (slow cold start):

```toml
run = "npx tsx src/index.ts"   # compiles TS at startup
```

After (fast cold start):

```toml
build = "npm run build"        # compiles during deploy
run = "node dist/index.js"     # runs pre-compiled JS
```
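Lazy-loading (strategy 1) can be as simple as caching a loader so the expensive import happens on first use instead of at boot. A minimal sketch — the module in the usage comment is a hypothetical example:

```typescript
// Defer a heavy import until first use instead of paying for it at startup.
export function lazy<T>(loader: () => T): () => T {
  let cached: T | undefined;
  return () => (cached ??= loader());
}

// Example: only load a (hypothetical) report generator when first requested.
// const getReports = lazy(() => require('./reports'));
// app.get('/report', (req, res) => res.send(getReports().render()));
```

Everything behind `lazy` stays off the cold-start critical path; only the first request that needs it pays the load cost.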
```toml
# .replit — optimized for fast cold start
[deployment]
build = ["sh", "-c", "npm ci --production && npm run build"]
run = ["sh", "-c", "node dist/index.js"]
deploymentTarget = "autoscale"
```
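Pre-connecting the database without blocking startup (strategy 3) boils down to opening the listener first and firing the warm-up without awaiting it. A sketch, where `warm` stands in for something like `pool.query('SELECT 1')`:

```typescript
// Accept traffic immediately; run async init in the background.
export function startNonBlocking(
  listen: () => void,
  warm: () => Promise<void>,
): void {
  listen(); // the port is open before any slow init runs
  warm().catch((err) => console.error('warm-up failed:', err));
}

// Example (hypothetical server/pool):
// startNonBlocking(() => server.listen(3000), async () => { await pool.query('SELECT 1'); });
```

The first request may still wait on the connection, but the health check and Autoscale readiness no longer do.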
Choose a VM size based on load test results:

- Peak CPU < 30% → downsize (save money)
- Peak CPU > 70% → upsize (prevent throttling)
- Peak memory > 80% → upsize (prevent OOM)

Machine sizes:

| Size | Best For |
|---|---|
| 0.25 vCPU / 512 MB | Simple APIs, < 50 req/s |
| 0.5 vCPU / 1 GB | Standard apps, < 200 req/s |
| 1 vCPU / 2 GB | Moderate traffic, < 500 req/s |
| 2 vCPU / 4 GB | High traffic, < 1000 req/s |
| 4 vCPU / 8-16 GB | Compute-heavy, > 1000 req/s |

To change: Deployment Settings > Machine Size > Select new tier. A redeploy is required for the new size to apply.
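The sizing rules above are mechanical enough to encode. A sketch that maps load-test peaks to an action, with thresholds copied directly from those rules:

```typescript
// Map load-test peaks to a scaling action using the thresholds above.
type ScaleAction = 'upsize' | 'downsize' | 'keep';

export function scaleAction(peakCpuPct: number, peakMemPct: number): ScaleAction {
  if (peakMemPct > 80) return 'upsize';   // prevent OOM
  if (peakCpuPct > 70) return 'upsize';   // prevent throttling
  if (peakCpuPct < 30) return 'downsize'; // save money
  return 'keep';
}
```

Memory is checked first because an OOM kill is worse than throttling; adjust the ordering if your workload differs.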
```typescript
// Tune the PostgreSQL pool for Replit container limits
import { Pool } from 'pg';

// Small container (0.25 vCPU / 512 MB)
const smallPool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
  max: 3,                   // Few connections
  idleTimeoutMillis: 10000, // Release idle connections quickly
});

// Medium container (1 vCPU / 2 GB)
const mediumPool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
  max: 10,                  // More headroom
  idleTimeoutMillis: 30000,
});

// Large container (4 vCPU / 8 GB)
const largePool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false },
  max: 20,
  idleTimeoutMillis: 60000,
});

// Dynamic pool sizing from a rough memory heuristic.
// Note: process.memoryUsage().rss reports the process's current footprint,
// not the container's limit — treat this as an approximation.
function createOptimalPool(): Pool {
  const memMB = Math.round(process.memoryUsage().rss / 1024 / 1024);
  const maxConns = memMB < 256 ? 3 : memMB < 1024 ? 10 : 20;
  return new Pool({
    connectionString: process.env.DATABASE_URL,
    ssl: { rejectUnauthorized: false },
    max: maxConns,
    idleTimeoutMillis: 30000,
    connectionTimeoutMillis: 5000,
  });
}
```
## Capacity Assessment
### Current State
- Deployment type: [Autoscale / Reserved VM]
- Machine size: [vCPU / RAM]
- Peak RPS: [from load test]
- P95 latency: [from load test]
- Cold start time: [Autoscale only]
### Load Test Results
| Metric | Idle | Normal (20 VU) | Peak (50 VU) |
|--------|------|----------------|--------------|
| RPS | 0 | X | Y |
| P50 latency | - | Xms | Yms |
| P95 latency | - | Xms | Yms |
| Error rate | - | X% | Y% |
| Memory | XMB | XMB | XMB |
### Recommendations
1. [Scale action based on results]
2. [Database pool adjustment]
3. [Cold start mitigation]
4. [Cost optimization]
### Scaling Triggers
- CPU > 70% sustained: upgrade VM
- Memory > 80%: upgrade VM or fix leak
- P95 > 2s: add caching or optimize queries
- Error rate > 1%: investigate root cause
| Issue | Cause | Solution |
|---|---|---|
| Cold start > 15s | Heavy startup | Pre-compile, lazy imports |
| Connection pool exhausted | Too many concurrent requests | Increase pool.max or add queueing |
| OOM during load test | Memory leak under load | Profile with /debug/memory |
| Inconsistent results | Autoscale scaling up | Warm up before measuring |
For reliability patterns, see replit-reliability-patterns.