Guides HTTP load and stress testing with oha: real-time TUI, latency correction for coordinated omission, and HTTP/2 and HTTP/3 support; compares oha against wrk, vegeta, and hey for API performance benchmarking.
Expert knowledge for HTTP load testing using oha, a Rust-based load generator with real-time TUI visualization and proper latency measurement.
Traditional load testers measure only the time from request send to response received. This misses queuing delays when the server slows down, leading to optimistic latency numbers.
Example: if your target is 100 RPS but the server can only handle 50 RPS, requests queue up behind the slow responses. A send-to-receive measurement still reports only the service time, while the latency a client would actually experience (queuing delay plus service time) keeps growing.
oha addresses this with --latency-correction (enabled by default).
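To see why this matters, here is a minimal simulation of the 100-RPS-target, 50-RPS-server example, using plain awk so no load generator is needed. The 10 ms send interval and 20 ms service time are illustrative assumptions, not measurements.

```shell
# Simulate 100 requests intended at 10 ms intervals (100 RPS target)
# against a server that needs 20 ms per request (50 RPS capacity).
out=$(awk 'BEGIN {
  interval = 10; service = 20; finish = 0
  for (i = 0; i < 100; i++) {
    intended = i * interval                          # when the request *should* start
    start = (intended > finish) ? intended : finish  # but it waits in the queue
    finish = start + service
  }
  uncorrected = finish - start    # send-to-receive: always the 20 ms service time
  corrected = finish - intended   # includes time spent queuing
  printf "uncorrected: %d ms, corrected: %d ms", uncorrected, corrected
}')
echo "$out"
```

The uncorrected number stays flat at the service time, while the corrected latency of the last request includes over a second of queuing delay, which is exactly the gap `--latency-correction` closes.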
| Percentile | Meaning | Use Case |
|---|---|---|
| p50 (median) | Half of requests faster | Typical user experience |
| p90 | 90% of requests faster | Most users' experience |
| p99 | 99% of requests faster | Tail latency, SLA targets |
| p99.9 | 99.9% of requests faster | Worst-case scenarios |
Rule of thumb: Focus on p99 for SLAs. A 100ms p50 with 2s p99 indicates serious tail latency issues.
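As a quick illustration of what these ranks mean, here is a sketch using 100 synthetic samples of 1-100 ms. Real tools may interpolate between samples, but the idea is the same: with the samples sorted, pN is the value N% of the way up.

```shell
# With 100 sorted samples, pN is simply the value at position N.
pcts=$(seq 1 100 | sort -n | awk '{ v[NR] = $1 }
  END { printf "p50=%d p90=%d p99=%d", v[50], v[90], v[99] }')
echo "$pcts"
```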
| Feature | oha | wrk | vegeta | hey |
|---|---|---|---|---|
| Latency correction | Yes (default) | No | No | No |
| Real-time TUI | Yes | No | No | No |
| HTTP/2 | Yes | No | Yes | Yes |
| HTTP/3 (experimental) | Yes | No | No | No |
| Scripting | No | Lua | No | No |
| CI-friendly output | Yes (JSON) | Limited | Yes | Yes |
| Active maintenance | Yes | Limited | Yes | No |
# macOS
brew install oha
# Cargo (any platform)
cargo install oha
# Verify
oha --version
# 200 requests with 50 concurrent connections
oha -n 200 -c 50 https://api.example.com/health
# Run for 30 seconds
oha -z 30s -c 50 https://api.example.com/health
# Target specific requests per second (QPS)
oha -q 100 -z 30s https://api.example.com/health
# POST with JSON body
oha -m POST -H "Content-Type: application/json" -d '{"key":"value"}' https://api.example.com/data
# Custom headers
oha -H "Authorization: Bearer TOKEN" -H "X-Custom: value" https://api.example.com/protected
# Request body from file
oha -m POST -D @request.json https://api.example.com/data
# HTTP/2
oha --http-version 2 https://api.example.com/health
# Disable keep-alive (new connection per request)
oha --disable-keepalive https://api.example.com/health
# Request timeout
oha --timeout 10s https://api.example.com/slow-endpoint
# Disable TUI (for scripts/CI)
oha --no-tui -n 1000 https://api.example.com/health
# JSON output for parsing
oha --no-tui -j -n 1000 https://api.example.com/health
# Disable latency correction (compare with corrected)
oha --no-tui --disable-latency-correction -n 1000 https://api.example.com/health
# Fast sanity check - 100 requests, 10 connections
oha -n 100 -c 10 --no-tui https://api.example.com/health
# 5 minutes at 100 RPS with 50 connections
oha -z 5m -q 100 -c 50 https://api.example.com/endpoint
# Gradually increase load
for qps in 50 100 200 400 800; do
echo "Testing at $qps RPS..."
oha --no-tui -j -z 30s -q $qps https://api.example.com/health | jq '.summary'
done
# Shows the coordinated omission effect
echo "With correction:"
oha --no-tui -z 30s -q 500 https://api.example.com/health
echo "Without correction:"
oha --no-tui --disable-latency-correction -z 30s -q 500 https://api.example.com/health
oha -m POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $TOKEN" \
-d '{"user_id": 123, "action": "test"}' \
-z 60s -c 20 \
https://api.example.com/events
The real-time TUI shows live progress, the current request rate, a response-time histogram, and per-status-code counts while the test runs.
oha --no-tui -j -n 1000 https://api.example.com/health | jq '.'
Key fields:
.summary.successRate - Percentage of 2xx responses.summary.total - Total requests sent.summary.slowest - Maximum latency.summary.fastest - Minimum latency.summary.average - Mean latency.latencyDistribution - Percentile breakdown (p50, p90, p99, etc.).statusCodeDistribution - Count per HTTP status code| Metric | Good | Concerning | Critical |
|---|---|---|---|
| Success rate | >99.9% | 99-99.9% | <99% |
| p99/p50 ratio | <5x | 5-10x | >10x |
| Error rate | 0% | <1% | >1% |
| p99 latency | <SLA target | Near SLA | >SLA |
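These thresholds can be enforced in CI. A sketch, assuming the JSON field paths described above and a report saved from `oha --no-tui -j`; the stub file generated here stands in for a real run, and the 250 ms SLA target is an illustrative assumption.

```shell
# Stand-in report; in CI this would come from:
#   oha --no-tui -j -z 30s -q 100 "$URL" > result.json
cat > result.json <<'EOF'
{"summary": {"successRate": 1.0},
 "latencyDistribution": {"50": 0.040, "99": 0.180}}
EOF

# Gate: fail on <99.9% success, p99 above the SLA target, or p99/p50 > 10x.
ok=$(jq -r '
  if (.summary.successRate >= 0.999)
     and (.latencyDistribution."99" <= 0.250)
     and (.latencyDistribution."99" / .latencyDistribution."50" <= 10)
  then "pass" else "fail" end' result.json)

echo "gate: $ok"
[ "$ok" = "pass" ] || exit 1
```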
wrk: high-throughput benchmarking with Lua scripting.
# Basic test
wrk -t12 -c400 -d30s https://api.example.com/health
# With Lua script for custom requests
wrk -t12 -c400 -d30s -s script.lua https://api.example.com/
Use when: Need Lua scripting for complex request patterns or maximum throughput testing.
Limitation: Does not handle coordinated omission.
vegeta: Go-based load tester with an attack/report workflow.
# Generate constant load
echo "GET https://api.example.com/health" | vegeta attack -rate=100/s -duration=30s | vegeta report
# JSON output
echo "GET https://api.example.com/health" | vegeta attack -rate=100/s -duration=30s | vegeta encode --to json
# Plot latencies
echo "GET https://api.example.com/health" | vegeta attack -rate=100/s -duration=30s | vegeta plot > plot.html
Use when: Need CI-friendly pipeline workflow or latency plots.
Limitation: Does not correct for coordinated omission.
hey: simple HTTP load generator (successor to ab).
# 10000 requests, 100 concurrent
hey -n 10000 -c 100 https://api.example.com/health
# Rate limited
hey -n 10000 -c 100 -q 50 https://api.example.com/health
Use when: Quick ad-hoc testing, familiar with ab.
Limitation: Unmaintained, no coordinated omission handling.
| Context | Command |
|---|---|
| Quick test | oha --no-tui -n 100 -c 10 $URL |
| CI pipeline | oha --no-tui -j -z 30s -q 100 $URL |
| JSON parsing | oha --no-tui -j $URL \| jq '.summary' |
| Success rate | oha --no-tui -j $URL \| jq '.summary.successRate' |
| Latency p99 | oha --no-tui -j $URL \| jq '.latencyDistribution."99"' |
| Fail on errors | oha --no-tui -j $URL \| jq -e '.summary.successRate >= 0.999' |
| Flag | Description | Default |
|---|---|---|
| -n, --number | Total requests to send | 200 |
| -c, --connections | Concurrent connections | 50 |
| -z, --duration | Test duration (e.g., 30s, 5m) | - |
| -q, --query-per-second | Target QPS rate limit | unlimited |
| Flag | Description |
|---|---|
| -m, --method | HTTP method (GET, POST, etc.) |
| -H, --header | Add header (repeatable) |
| -d, --data | Request body |
| -D, --data-file | Request body from file |
| Flag | Description |
|---|---|
| --no-tui | Disable real-time TUI |
| -j, --json | JSON output (requires --no-tui) |
| --disable-latency-correction | Disable coordinated omission fix |
| Flag | Description |
|---|---|
| --http-version | 1.0, 1.1, or 2 |
| --disable-keepalive | New connection per request |
| --timeout | Request timeout |
| --connect-timeout | Connection timeout |