Build Reactor Netty HTTP, TCP, or UDP clients and servers with reactive request handling, lifecycle hooks, and resource-aware startup or shutdown. Use this skill when the work is centered on `HttpServer`, `HttpClient`, `TcpServer`, `TcpClient`, `UdpServer`, or `UdpClient` rather than low-level Netty pipeline APIs.
```
npx claudepluginhub ririnto/sinon --plugin netty
```

This skill uses the workspace's default tool permissions.
Build one Reactor Netty application path end to end: pick the transport, configure the builder, compose inbound and outbound flow, and shut resources down cleanly without dropping into low-level Netty internals.
Use this skill for:
- `HttpServer`, `HttpClient`, `TcpServer`, `TcpClient`, `UdpServer`, and `UdpClient`
- `.handle((inbound, outbound) -> ...)` and HTTP route handlers as the main composition points
- `bindNow()`, `connectNow()`, terminal response retrieval in top-level sample code, and `onDispose().block()`
- lifecycle hooks such as `doOnBound`, `doOnConnected`, and `doOnConnection`

When the task centers on `ChannelPipeline`, `ByteBuf.release()`, `ChannelFuture`, or custom codecs, move the answer to those lower-level Netty APIs directly instead of the builder surface.

Keep low-level Netty concerns out of this common path:
- `ServerBootstrap`, `Bootstrap`, `ChannelPipeline`, and handler ordering
- `ByteBuf` ownership and manual `release()`

Pick the dependency that matches the transport:
- `reactor-netty-http` for HTTP and WebSocket work
- `reactor-netty-core` for TCP or UDP work

Pick the builder that matches the protocol: `HttpServer` / `HttpClient`, `TcpServer` / `TcpClient`, or `UdpServer` / `UdpClient`. Compose flow with `.route(...)` or `.handle((inbound, outbound) -> ...)`. `bindNow()` returns a `DisposableServer`; `connectNow()` returns a `Connection`. Server lifecycle hooks are `doOnBind`, `doOnBound`, `doOnConnection`, and `doOnUnbound`; client hooks are `doOnConnect`, `doOnConnected`, and `doOnDisconnected`. Shut down with `onDispose().block()` or explicit disposal.

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.projectreactor</groupId>
      <artifactId>reactor-bom</artifactId>
      <version>${reactor.bom.version}</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>

<dependencies>
  <dependency>
    <groupId>io.projectreactor.netty</groupId>
    <artifactId>reactor-netty-http</artifactId>
  </dependency>
</dependencies>
```
Use `reactor-netty-core` instead when the work is only TCP or UDP.
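For a TCP- or UDP-only project, the dependency block can swap the artifact; a sketch, assuming the same BOM-managed version as above:

```xml
<dependencies>
  <dependency>
    <groupId>io.projectreactor.netty</groupId>
    <artifactId>reactor-netty-core</artifactId>
  </dependency>
</dependencies>
```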
Hold on to the disposables you create:
- `DisposableServer` from `bindNow()`
- `Connection` from `connectNow()`

Compose inbound and outbound flow as `Publisher` chains. Attach `doOnBind`, `doOnBound`, `doOnChannelInit`, `doOnConnection`, `doOnConnected`, and `doOnDisconnected` only when they change startup, channel extension, or shutdown behavior.

| Method | Use when | Return shape |
|---|---|---|
| `.responseSingle((resp, content) -> ...)` | you need the status code plus the full body aggregated at once | `Mono<T>` — body is fully buffered |
| `.responseContent()` | you want streaming/chunked processing of the response body | `Flux<ByteBuf>` — each chunk arrives as it is received |
Aggregated body (status check + full body in one signal):
```java
String body = HttpClient.create()
        .get().uri("http://localhost:8080/hello")
        .responseSingle((resp, content) -> {
            if (resp.status().code() != 200) {
                return Mono.error(new RuntimeException("unexpected: " + resp.status()));
            }
            return content.asString();
        })
        .block();
```
Streaming body (process chunks as they arrive):
```java
List<String> chunks = HttpClient.create()
        .get().uri("http://localhost:8080/stream")
        .responseContent()
        .asString()
        .collectList()
        .block();
```
Use `onErrorResume` for per-route fallback logic and let unexpected errors propagate to the subscriber:
Route-level fallback (return an error response for 500):
```java
.route(routes -> routes
        .get("/hello", (req, res) -> res.sendString(Mono.just("Hello")))
        .get("/fail", (req, res) -> Mono.error(new RuntimeException("boom")))
        .errorHandler(500, (req, res) -> res.sendString(
                Mono.just("Error: " + req.uri()))))
```
Handler-level recovery (map errors to fallback values):
```java
HttpClient.create()
        .get().uri("http://localhost:8080/hello")
        .responseSingle((resp, content) -> content.asString())
        .onErrorResume(TimeoutException.class,
                timeout -> Mono.just("fallback (timeout)"))
        .block();
```
Do not use blocking try/catch inside reactive lambdas. Reactive errors travel through the error channel, not Java exceptions.
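When a blocking call is genuinely unavoidable (a legacy JDBC lookup, a synchronous file read), the "deliberate reactive bridge" looks like wrapping the call and moving it off the event loop. A minimal sketch using reactor-core's `Schedulers.boundedElastic()`; `blockingLookup` here is a hypothetical stand-in for any blocking API:

```java
import java.time.Duration;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class BlockingBridge {

    // Hypothetical blocking call standing in for JDBC, file I/O, etc.
    static String blockingLookup(String key) {
        try {
            Thread.sleep(50); // simulate blocking latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "value-for-" + key;
    }

    public static void main(String[] args) {
        // fromCallable defers the blocking work; subscribeOn moves it onto
        // the bounded-elastic pool instead of a Netty event-loop thread.
        String result = Mono.fromCallable(() -> blockingLookup("k1"))
                .subscribeOn(Schedulers.boundedElastic())
                .block(Duration.ofSeconds(5));
        System.out.println(result);
    }
}
```

Inside a route handler you would return the `Mono` instead of calling `block()`; the terminal `block()` here exists only to make the sketch runnable as a `main` method.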
Optional builder features:
- `warmup()` is optional but useful when startup latency matters.
- `.runOn(...)` only when the application truly needs custom loop resources.
- `.secure(...)` when the common path must switch to TLS.
- `.responseTimeout(...)` or channel options when a client must fail fast.
- `.wiretap(true)` first for traffic-level troubleshooting.
- `.metrics(true)` only when the application already has a metrics strategy.

HTTP server:
```java
DisposableServer server = HttpServer.create()
        .port(8080)
        .route(routes -> routes
                .get("/hello", (request, response) -> response.sendString(Mono.just("Hello, World!")))
                .post("/echo", (request, response) -> response.status(201)
                        .addHeader("X-Mode", "echo")
                        .sendString(request.receive().asString().map(body -> "echo: " + body))))
        .bindNow();
server.onDispose().block();
```
HTTP client:
`ByteBufFlux.fromString` converts a `String` publisher into a `Flux<ByteBuf>`, the idiomatic way to send string bodies with Reactor Netty HTTP. It lives in the `reactor.netty` package in `reactor-netty-core`, which `reactor-netty-http` pulls in transitively.
```java
import reactor.netty.ByteBufFlux;

String body = HttpClient.create()
        .post()
        .uri("http://localhost:8080/echo")
        .send(ByteBufFlux.fromString(Mono.just("hello")))
        .responseSingle((response, content) -> {
            if (response.status().code() != 201) {
                return Mono.error(new IllegalStateException("unexpected status: " + response.status()));
            }
            return content.asString().map(text -> response.responseHeaders().get("X-Mode") + ":" + text);
        })
        .block();
```
Warmup before first bind or connect:
warmup() pre-initializes event loops without binding a port. The HttpServer builder remains reusable — bindNow() can be called afterward.
```java
HttpServer server = HttpServer.create().port(8080);
server.warmup().block();
DisposableServer bound = server.bindNow();
bound.onDispose().block();
```
TCP server:
```java
DisposableServer server = TcpServer.create()
        .port(9000)
        .handle((inbound, outbound) -> outbound.sendString(
                inbound.receive().asString().map(text -> "echo: " + text)))
        .bindNow();
server.onDispose().block();
```
TCP client:
```java
Connection connection = TcpClient.create()
        .host("localhost")
        .port(9000)
        .handle((inbound, outbound) -> outbound.sendString(Mono.just("ping"))
                .then()
                .thenMany(inbound.receive().asString().doOnNext(System.out::println))
                .then())
        .connectNow();
connection.onDispose().block();
```
UDP server:
`UdpServer.bindNow()` returns `Connection` (not `DisposableServer`).
```java
Connection udpServer = UdpServer.create()
        .port(9001)
        .handle((inbound, outbound) -> outbound.sendObject(inbound.receiveObject()))
        .bindNow();
udpServer.onDispose().block();
```
UDP client:
```java
Connection connection = UdpClient.create()
        .host("localhost")
        .port(9001)
        .handle((inbound, outbound) -> outbound.sendString(Mono.just("ping"))
                .then()
                .thenMany(inbound.receive().asString().doOnNext(System.out::println))
                .then())
        .connectNow();
connection.onDispose().block();
```
Lifecycle hook examples:
Server lifecycle (bind, bound, unbound):
```java
DisposableServer server = HttpServer.create()
        .port(8080)
        .doOnBind(config -> System.out.println("binding " + config.host() + ":" + config.port()))
        .doOnBound(bound -> System.out.println("bound " + bound.port()))
        .doOnUnbound(bound -> System.out.println("unbound " + bound.port()))
        .bindNow();
server.onDispose().block();
```
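Where "explicit disposal" is needed instead of blocking on `onDispose()`, `disposeNow(Duration)` (defined on `DisposableChannel`, which `DisposableServer` extends) bounds how long shutdown blocks. A minimal sketch, binding on an ephemeral port (`0`) so it never collides with a real service:

```java
import java.time.Duration;
import reactor.netty.DisposableServer;
import reactor.netty.http.server.HttpServer;

public class GracefulShutdown {

    static boolean bindAndDispose() {
        // Port 0 asks the OS for any free port; fine for a shutdown demo.
        DisposableServer server = HttpServer.create()
                .port(0)
                .bindNow();

        // disposeNow(timeout) blocks until the channel closes or the timeout
        // elapses, making shutdown deterministic rather than fire-and-forget.
        server.disposeNow(Duration.ofSeconds(3));
        return server.isDisposed();
    }

    public static void main(String[] args) {
        System.out.println("disposed: " + bindAndDispose());
    }
}
```

In a long-running process, the same call fits naturally inside a JVM shutdown hook.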
Channel init (access the low-level Netty ChannelPipeline before handlers run). Use this when a channel-level option or handler must be set per-connection:
```java
HttpServer.create()
        .port(8080)
        .doOnChannelInit((observer, channel, remoteAddress) -> {
            channel.pipeline().addFirst(new TrafficLoggingHandler());
        })
        .bindNow();
```
| Decision | Default | Escalate when |
|---|---|---|
| HTTP vs TCP vs UDP | choose the builder that matches the application protocol | the task needs low-level Netty framing or codecs |
| HTTP routing vs `.handle(...)` | use `.route(...)` for standard HTTP endpoints | use `.handle(...)` when you need lower-level response composition |
| `.responseSingle()` vs `.responseContent()` | `.responseSingle()` for status + aggregated body | use `.responseContent()` for streaming/chunked response processing |
| error handling strategy | propagate errors to subscriber; use errorHandler for route-level fallbacks | per-handler onErrorResume recovery is needed for specific exception types |
| default resources vs custom resources | stay on defaults first | open event-loop-and-resources.md for isolation or custom sizing |
| plain text vs TLS | start plain for local flow | open ssl-tls.md when certificates or HTTPS are required |
| simple lifecycle vs operational tuning | start with bind/connect + dispose | open timeouts-and-pool-tuning.md or metrics-and-observability.md when production tuning appears |
| plain HTTP/TCP/UDP vs WebSocket | keep HTTP/TCP/UDP in the common path | open websocket.md for WebSocket upgrade flow |
Before finishing, confirm that:
- errors are handled reactively (`onErrorResume`, `errorHandler`) rather than with blocking try/catch inside lambdas
- the response retrieval method (`.responseSingle()` vs `.responseContent()`) matches the use case

| Anti-pattern | Why it fails | Correct move |
|---|---|---|
| blocking inside `.handle(...)`, route handlers, or response mapping | reactive execution stalls and hides latency under ordinary flow | keep blocking at process boundaries or isolate it behind a deliberate reactive bridge |
| dropping into `ChannelPipeline` customization for ordinary HTTP or TCP tasks | the solution leaves the builder-based common path and becomes harder to maintain | stay on Reactor Netty builders unless low-level Netty internals are the actual blocker |
| creating custom loop resources for every server or client by default | resource churn and disposal complexity rise without a clear benefit | stay on shared defaults first and open the resource reference only when isolation is required |
| adding lifecycle hooks everywhere | startup and connection flow become noisy without changing behavior | attach doOn... hooks only where they affect diagnostics, setup, or teardown |
| turning on wiretap or metrics as permanent defaults | noise or overhead grows in paths that do not need it | enable operational features deliberately for diagnostics or an existing observability strategy |
| using Reactor Netty when the real problem is codec or buffer ownership | the builder API stops being the main abstraction and guidance becomes misleading | move the answer to framing, codecs, buffer ownership, and other lower-level Netty APIs directly |
Open these only when the common path is no longer enough:
| Blocker | Open |
|---|---|
| custom loop resources, shared providers, or explicit disposal ordering | event-loop-and-resources.md |
| response timeout, connect timeout, retry, or connection pool tuning | timeouts-and-pool-tuning.md |
| HTTPS, custom trust, or mTLS | ssl-tls.md |
| wiretap, metrics, or access logging | metrics-and-observability.md |
| WebSocket client or server flow | websocket.md |
Return: