From dt-brigid
Multiplayer game networking patterns — architecture models, state synchronization, lag compensation, matchmaking, lobby systems, and Godot 4 multiplayer API. Load when designing or implementing networked multiplayer features.
npx claudepluginhub dreamteam-hq/brigid --plugin dt-brigid

This skill uses the workspace's default tool permissions.
**Client-server (authoritative).** The server is the single source of truth. Clients send inputs; the server simulates and broadcasts results.

**Client-authoritative.** Clients own certain state (typically their own movement) and the server relays it.

**Peer-to-peer.** Every client connects to every other client. No central server.

**Relay.** P2P-style game logic, but traffic routes through a central relay. Steam Networking, Epic Online Services, and Photon all offer this.
| Aspect | Dedicated Server | Listen Server |
|---|---|---|
| Hosting | Separate process/machine | One player's machine hosts |
| Performance | Consistent, controllable | Depends on host's hardware/connection |
| Fairness | Equal latency for all | Host has a zero-latency advantage |
| Cost | Infrastructure cost | Free (player hosts) |
| Availability | Always on | Gone when host leaves (unless host migration) |
| Best for | Competitive, persistent worlds | Casual co-op, LAN parties |
| Game Type | Recommended Model | Why |
|---|---|---|
| Competitive FPS/TPS | Client-server authoritative + dedicated | Anti-cheat, fair latency, server-side hit detection |
| Fighting game | P2P with rollback netcode | Frame precision, 2-player, rollback hides latency |
| Co-op PvE (2-4 players) | Listen server or relay | Low cheat risk, no infra cost |
| MMO / persistent world | Dedicated server cluster | Scale, persistence, authority |
| Battle royale (64-100 players) | Dedicated server, aggressive interest management | Bandwidth, fairness, scale |
| Mobile casual (8-16 players) | Relay server | NAT traversal solved, low infra cost |
| RTS (2-8 players) | P2P lockstep or rollback | Deterministic simulation, minimal bandwidth |
| Turn-based | Client-server (lightweight) | Simple state sync, no latency sensitivity |
Server sends full world snapshots at a fixed tick rate. Clients buffer snapshots and interpolate between them for smooth rendering.
Server tick 10: [Player A at (10, 0), Player B at (5, 3)]
Server tick 11: [Player A at (11, 0), Player B at (5, 4)]
Client renders between tick 10 and 11 at render time,
interpolating positions smoothly.
Only transmit what changed since the last acknowledged snapshot.
Full snapshot: { pos: (10,0), health: 100, ammo: 30, armor: 50 }
Delta (tick+1): { pos: (11,0) } # only position changed
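A minimal encoding sketch of this idea, assuming an illustrative EntityState layout and a one-byte dirty mask; production systems also track which snapshot each client last acknowledged, so the diff base is per-client:

// Sketch: write a 1-byte change mask, then only the fields that changed.
// EntityState and the field set are illustrative, not a fixed wire format.
using System;
using System.IO;
using Godot;

public record struct EntityState(Vector2 Pos, int Health, int Ammo, int Armor);

[Flags]
public enum DirtyFlags : byte { None = 0, Pos = 1, Health = 2, Ammo = 4, Armor = 8 }

public static class DeltaEncoder
{
    public static void WriteDelta(BinaryWriter w, EntityState prev, EntityState curr)
    {
        var dirty = DirtyFlags.None;
        if (curr.Pos != prev.Pos) dirty |= DirtyFlags.Pos;
        if (curr.Health != prev.Health) dirty |= DirtyFlags.Health;
        if (curr.Ammo != prev.Ammo) dirty |= DirtyFlags.Ammo;
        if (curr.Armor != prev.Armor) dirty |= DirtyFlags.Armor;

        w.Write((byte)dirty);                       // 1-byte change mask
        if (dirty.HasFlag(DirtyFlags.Pos)) { w.Write(curr.Pos.X); w.Write(curr.Pos.Y); }
        if (dirty.HasFlag(DirtyFlags.Health)) w.Write(curr.Health);
        if (dirty.HasFlag(DirtyFlags.Ammo)) w.Write(curr.Ammo);
        if (dirty.HasFlag(DirtyFlags.Armor)) w.Write(curr.Armor);
    }
}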
Only send each client the data it can perceive or that affects it; a sketch of the simplest strategy follows the table below.
| Strategy | Description | Best For |
|---|---|---|
| Distance-based | Entities beyond radius are culled | Open world, MMO |
| Area of Interest (AOI) | Grid/cell-based regions | Large maps, battle royale |
| Team-based | Only send teammate data + visible enemies | Team shooters |
| Priority-based | Closer/more important entities update more frequently | Bandwidth-constrained |
| Visibility-based | Ray/frustum checks against world geometry | High-fidelity shooters |
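A sketch of the distance-based row, assuming Node2D entities and an illustrative RelevanceRadius; production systems typically layer priority and visibility checks on top:

// Sketch: distance-based interest management.
// Only entities within RelevanceRadius of the observer enter its snapshot.
using System.Collections.Generic;
using System.Linq;
using Godot;

public static class InterestManagement
{
    private const float RelevanceRadius = 50.0f;   // tune per game

    public static IEnumerable<Node2D> RelevantTo(Node2D observer, IEnumerable<Node2D> all)
    {
        var r2 = RelevanceRadius * RelevanceRadius;
        // Squared distance avoids a sqrt per entity per tick
        return all.Where(e => e != observer
            && observer.Position.DistanceSquaredTo(e.Position) <= r2);
    }
}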
| Concept | Typical Values | Purpose |
|---|---|---|
| Server tick rate | 20-128 Hz | Physics simulation, state authority |
| Client send rate | 20-64 Hz | Input transmission to server |
| Client frame rate | 60-240 Hz | Rendering (interpolation fills gaps) |
| Snapshot send rate | 10-30 Hz | State broadcast to clients |
Use _physics_process() for networked simulation, _process() for visual interpolation.

Rule of thumb for a 20-tick server with 16 players:
Per-entity update: ~40-80 bytes (position, rotation, velocity, state flags)
Per-tick upstream: ~20-40 bytes (input only)
Per-tick downstream: entities_in_relevance * bytes_per_entity * tick_rate
Example: 16 players, 64 bytes/entity, 20 ticks/sec
= 16 * 64 * 20 = 20,480 bytes/sec = ~20 KB/s per client downstream
The client immediately applies its own input locally without waiting for server confirmation.
// Client predicts own movement immediately
public override void _PhysicsProcess(double delta)
{
var input = GatherInput();
SaveInputToBuffer(input, _currentTick);
ApplyMovement(input, delta); // predict locally
SendInputToServer(input, _currentTick);
}
When the server corrects the client, rewind to the corrected state and replay all unacknowledged inputs.
private void OnServerStateReceived(ServerState serverState, int serverTick)
{
// Compare server state to what we predicted at that tick
var predicted = GetPredictedState(serverTick);
if(!StatesMatch(predicted, serverState))
{
// Snap to server state
Position = serverState.Position;
_velocity = serverState.Velocity;
// Replay all inputs from serverTick+1 to currentTick
for(var tick = serverTick + 1; tick <= _currentTick; tick++)
{
var bufferedInput = GetBufferedInput(tick);
ApplyMovement(bufferedInput, _tickDelta);
}
}
}
Render other players (and server-authoritative entities) between received snapshots rather than at their latest known position.
// Interpolate remote players between two server snapshots
private void InterpolateRemoteEntity(RemoteEntity entity, double renderTime)
{
var t0 = entity.SnapshotBuffer[^2]; // older snapshot
var t1 = entity.SnapshotBuffer[^1]; // newer snapshot
var alpha = (float)((renderTime - t0.Time) / (t1.Time - t0.Time));
alpha = Mathf.Clamp(alpha, 0f, 1f);
entity.VisualPosition = t0.Position.Lerp(t1.Position, alpha);
entity.VisualRotation = Mathf.LerpAngle(t0.Rotation, t1.Rotation, alpha);
}
Network jitter means inputs arrive at irregular intervals. Buffer inputs server-side to smooth processing.
Without buffer: tick 10 (input arrives), tick 11 (no input!), tick 12 (2 inputs arrive)
With buffer (2): tick 10 (buffered), tick 11 (buffered), tick 12 (buffered) — smooth
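A sketch of such a buffer, with an assumed PlayerInput type and an illustrative two-tick target depth; the buffer trades a little added latency for smooth, stall-free simulation:

// Sketch: server-side input buffer that smooths over network jitter.
using System.Collections.Generic;

public record struct PlayerInput(Godot.Vector2 Move, bool Fire);

public class InputBuffer
{
    private const int TargetDepth = 2;            // ticks of smoothing (adds latency)
    private readonly Queue<PlayerInput> _queue = new();
    private PlayerInput _lastInput;

    public void Enqueue(PlayerInput input) => _queue.Enqueue(input);

    // Called once per server tick.
    public PlayerInput DequeueForTick()
    {
        if (_queue.Count == 0)
            return _lastInput;                    // starved: repeat last input, don't stall
        while (_queue.Count > TargetDepth)
            _queue.Dequeue();                     // overfull: drop oldest to cap latency
        _lastInput = _queue.Dequeue();
        return _lastInput;
    }
}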
The server rewinds the world to the time the shooting client saw it, then performs the hit check.
Client shoots at tick 50 (sees enemy at position X due to interpolation delay)
Server receives shot at tick 53
Server rewinds enemy positions to tick 50 - interpolation_buffer
Server performs raycast against historical positions
If hit: apply damage at current tick
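A sketch of the rewind step, assuming the server keeps a short ring of historical positions per entity; the circle-versus-ray test stands in for whatever hit geometry the game actually uses:

// Sketch: server-side lag compensation via historical position rewind.
using System.Collections.Generic;
using Godot;

public partial class LagCompensator : Node
{
    private const int HistoryTicks = 32;   // ~1.6 s of history at 20 Hz
    // Per-entity history: entityId -> (tick -> position at that tick)
    private readonly Dictionary<long, Dictionary<int, Vector2>> _history = new();

    public void RecordTick(long entityId, int tick, Vector2 position)
    {
        if (!_history.TryGetValue(entityId, out var buffer))
            _history[entityId] = buffer = new Dictionary<int, Vector2>();
        buffer[tick] = position;
        buffer.Remove(tick - HistoryTicks);   // evict expired entries
    }

    // Hit test against where the target WAS at the tick the shooter saw.
    public bool HitCheck(long targetId, int shooterViewTick,
                         Vector2 rayOrigin, Vector2 rayDir, float range, float targetRadius)
    {
        if (!_history.TryGetValue(targetId, out var buffer) ||
            !buffer.TryGetValue(shooterViewTick, out var historicalPos))
            return false;                     // no history for that tick: reject the shot
        var dir = rayDir.Normalized();
        var along = Mathf.Clamp((historicalPos - rayOrigin).Dot(dir), 0f, range);
        var closest = rayOrigin + dir * along;
        return closest.DistanceTo(historicalPos) <= targetRadius;
    }
}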
Primarily used in fighting games and other frame-precise genres. Each client runs the simulation, rolling back and replaying when remote inputs arrive late.
Local frame 10: Predict remote player repeats last input
Frame 11: Remote input for frame 10 arrives — it was different!
Rollback to frame 10, apply correct input, fast-forward to frame 11
If visual state changed: correction is visible but brief
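A sketch of that rollback loop, with game-specific SaveState/LoadState/Simulate hooks left as stubs and a fixed-size history window (all names illustrative):

// Sketch: rollback when a late remote input contradicts our prediction.
public record struct GameInput(byte Buttons);
public record struct GameState;

public class RollbackSession
{
    private const int MaxRollbackFrames = 8;   // how far back we can rewind
    private readonly GameState[] _savedStates = new GameState[MaxRollbackFrames];
    private readonly GameInput[] _localInputs = new GameInput[MaxRollbackFrames];
    private readonly GameInput[] _remoteInputs = new GameInput[MaxRollbackFrames]; // predictions until confirmed
    private int _currentFrame;

    public void OnRemoteInputReceived(int frame, GameInput actual)
    {
        var idx = frame % MaxRollbackFrames;
        if (_remoteInputs[idx].Equals(actual))
            return;                                   // prediction held, no rollback needed
        _remoteInputs[idx] = actual;
        LoadState(_savedStates[idx]);                 // rewind to the mispredicted frame
        for (var f = frame; f < _currentFrame; f++)   // re-simulate up to the present
        {
            var i = f % MaxRollbackFrames;
            Simulate(_localInputs[i], _remoteInputs[i]);
            _savedStates[(f + 1) % MaxRollbackFrames] = SaveState();
        }
    }

    // Game-specific hooks, stubbed here
    private GameState SaveState() => default;
    private void LoadState(GameState state) { }
    private void Simulate(GameInput local, GameInput remote) { }
}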
| Class | Role |
|---|---|
| `MultiplayerAPI` | Abstract API — manages peers, RPCs, object replication |
| `SceneMultiplayer` | Default implementation of `MultiplayerAPI` for scene trees |
| `MultiplayerPeer` | Abstract transport layer — swap implementations |
| `ENetMultiplayerPeer` | UDP-based transport (reliable + unreliable channels) |
| `WebSocketMultiplayerPeer` | WebSocket transport (for HTML5 exports) |
| `MultiplayerSpawner` | Auto-spawn nodes across peers |
| `MultiplayerSynchronizer` | Auto-sync properties across peers |
// Server
public void StartServer(int port = 9999, int maxClients = 16)
{
var peer = new ENetMultiplayerPeer();
var error = peer.CreateServer(port, maxClients);
if(error != Error.Ok)
{
GD.PushError($"Failed to create server: {error}");
return;
}
Multiplayer.MultiplayerPeer = peer;
Multiplayer.PeerConnected += OnPeerConnected;
Multiplayer.PeerDisconnected += OnPeerDisconnected;
}
// Client
public void ConnectToServer(string address = "127.0.0.1", int port = 9999)
{
var peer = new ENetMultiplayerPeer();
var error = peer.CreateClient(address, port);
if(error != Error.Ok)
{
GD.PushError($"Failed to connect: {error}");
return;
}
Multiplayer.MultiplayerPeer = peer;
Multiplayer.ConnectedToServer += OnConnected;
Multiplayer.ConnectionFailed += OnConnectionFailed;
Multiplayer.ServerDisconnected += OnServerDisconnected;
}
// Define RPCs with [Rpc] attribute
// Modes: AnyPeer, Authority (only multiplayer authority can call)
// CallLocal: true also runs the RPC on the sender (default false, remote peers only)
// TransferMode: Unreliable, UnreliableOrdered, Reliable
[Rpc(MultiplayerApi.RpcMode.AnyPeer, CallLocal = true, TransferMode = MultiplayerPeer.TransferModeEnum.Reliable)]
public void SendChatMessage(string message)
{
// Runs on all peers when called with Rpc()
_chatDisplay.AddMessage(message);
}
[Rpc(MultiplayerApi.RpcMode.Authority, CallLocal = false, TransferMode = MultiplayerPeer.TransferModeEnum.UnreliableOrdered)]
public void SyncPosition(Vector2 pos, Vector2 vel)
{
Position = pos;
_velocity = vel;
}
// Calling RPCs
_ = Rpc(MethodName.SendChatMessage, "Hello everyone!"); // Call on ALL peers
_ = RpcId(_targetPeerId, MethodName.SyncPosition, pos, vel); // Call on specific peer
| Mode | Guaranteed Delivery | Ordered | Use For |
|---|---|---|---|
| `reliable` | Yes | Yes | Chat, damage, spawning, game events |
| `unreliable` | No | No | Position updates (stale data is useless) |
| `unreliable_ordered` | No | Yes | Frequent state sync (latest matters, skip stale) |
Automatically replicates node creation across peers.
// In the scene tree:
// Game
// ├── MultiplayerSpawner (spawn_path = Players, auto_spawn_list = [player_scene])
// └── Players/
// Server spawns — all clients automatically get the node
private void SpawnPlayer(long peerId)
{
var player = _playerScene.Instantiate();
player.Name = peerId.ToString();
GetNode("Players").AddChild(player);
// MultiplayerSpawner handles replication to other peers
}
Automatically replicates property changes from authority to other peers.
// Attach MultiplayerSynchronizer as child of the node to sync
// Configure synced properties in the inspector or via code:
// In the scene tree:
// Player (authority = peer_id)
// ├── MultiplayerSynchronizer
// │ replication_config:
// │ position → always (unreliable)
// │ health → on_change (reliable)
// │ animation_state → always (unreliable_ordered)
// └── Sprite2D
Replication modes: `always` (replicated every network tick), `on_change` (replicated only when the value changes), and `never` (excluded from sync).
// Server assigns authority over player nodes
private void OnPeerConnected(long peerId)
{
var player = SpawnPlayer(peerId);
player.SetMultiplayerAuthority((int)peerId);
// Now peerId owns this node — their RPCs with Authority mode work
}
// Check authority in game logic
public override void _PhysicsProcess(double delta)
{
if(!IsMultiplayerAuthority())
{
return; // only the authority processes input for this node
}
var input = GatherInput();
ApplyMovement(input, delta);
}
// Same API, different transport — swap ENet for WebSocket
public void StartWebSocketServer(int port = 9999)
{
var peer = new WebSocketMultiplayerPeer();
peer.CreateServer(port);
Multiplayer.MultiplayerPeer = peer;
}
// Client
public void ConnectWebSocket(string url = "ws://127.0.0.1:9999")
{
var peer = new WebSocketMultiplayerPeer();
peer.CreateClient(url);
Multiplayer.MultiplayerPeer = peer;
}
CREATE → WAITING → READY_CHECK → COUNTDOWN → IN_GAME → POST_GAME → DISSOLVE
                                                           ↓
                                                    RETURN_TO_LOBBY
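A sketch of this flow as an explicit transition table; the failure transitions (ready check or countdown aborting back to WAITING) are assumptions, not part of the diagram above:

// Sketch: lobby state machine with an explicit allowed-transition table.
using System;
using System.Collections.Generic;

public enum LobbyState { Create, Waiting, ReadyCheck, Countdown, InGame, PostGame, Dissolve }

public class Lobby
{
    private static readonly Dictionary<LobbyState, LobbyState[]> Allowed = new()
    {
        [LobbyState.Create]     = new[] { LobbyState.Waiting },
        [LobbyState.Waiting]    = new[] { LobbyState.ReadyCheck, LobbyState.Dissolve },
        [LobbyState.ReadyCheck] = new[] { LobbyState.Countdown, LobbyState.Waiting },
        [LobbyState.Countdown]  = new[] { LobbyState.InGame, LobbyState.Waiting },
        [LobbyState.InGame]     = new[] { LobbyState.PostGame },
        [LobbyState.PostGame]   = new[] { LobbyState.Dissolve, LobbyState.Waiting }, // return to lobby
    };

    public LobbyState State { get; private set; } = LobbyState.Create;

    public void TransitionTo(LobbyState next)
    {
        if (!Allowed.TryGetValue(State, out var targets) || Array.IndexOf(targets, next) < 0)
            throw new InvalidOperationException($"Illegal lobby transition {State} -> {next}");
        State = next;
    }
}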
| Strategy | Algorithm | Best For |
|---|---|---|
| Skill-based (ELO) | Simple: win = +K, lose = -K, scaled by opponent rating | 1v1 games, chess-like |
| Skill-based (Glicko-2) | Adds rating deviation and volatility | Games with irregular play frequency |
| Skill-based (TrueSkill) | Bayesian, supports teams | Team-based competitive |
| Region-based | Route to nearest datacenter | Latency-sensitive, global player base |
| Latency-based | Measure RTT, group low-latency peers | P2P games, fighting games |
| Hybrid | Skill + region + latency constraints | Most production multiplayer games |
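For reference, the classic Elo update from the table's first row; K = 32 is a common choice, not a fixed constant:

// Classic Elo update: expected score from the rating gap, then move each
// rating toward the actual result, scaled by K.
using System;

public static class Elo
{
    // scoreA: 1 = A wins, 0.5 = draw, 0 = A loses
    public static (double NewA, double NewB) Update(double ratingA, double ratingB, double scoreA, double k = 32)
    {
        var expectedA = 1.0 / (1.0 + Math.Pow(10, (ratingB - ratingA) / 400.0));
        var expectedB = 1.0 - expectedA;
        return (ratingA + k * (scoreA - expectedA),
                ratingB + k * ((1.0 - scoreA) - expectedB));
    }
}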
When the host disconnects in a listen-server model, the session either ends (return everyone to the lobby or matchmaking) or migrates: a new host is elected (commonly the lowest remaining peer ID), authoritative state is restored from the most recent replicated snapshot, and the remaining peers reconnect to the new host.
| Channel | Delivery | Ordering | Use Case |
|---|---|---|---|
| Reliable ordered | Guaranteed | In-order | Chat, game events, inventory changes |
| Reliable unordered | Guaranteed | Any order | Asset loading, non-sequential data |
| Unreliable ordered | Best-effort | Skip stale | Position/state sync (drop old, keep latest) |
| Unreliable unordered | Best-effort | Any order | VoIP, particle effects, cosmetic events |
# Binary message format — NOT JSON
[Header: 2 bytes]
├── Message type (1 byte, up to 256 message types)
└── Flags (1 byte: compressed, fragmented, priority)
[Sequence number: 2 bytes]
[Payload: variable]
└── Tightly packed fields, not key-value pairs
# Example: position update = 13 bytes total
[type=0x01][flags=0x00][seq=1234][x:float32][y:float32][rotation:uint8]
vs JSON equivalent: {"type":"pos","x":123.45,"y":67.89,"r":180} = 43 bytes
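A sketch of packing that 13-byte layout with BinaryWriter (little-endian, matching the field order above); quantizing rotation to a uint8 gives 256 steps across 360 degrees:

// Sketch: pack the example position update into the 13-byte binary layout.
using System.IO;

public static class Wire
{
    public const byte MsgTypePos = 0x01;

    public static byte[] PackPositionUpdate(ushort sequence, float x, float y, float rotationDegrees)
    {
        using var ms = new MemoryStream(13);
        using var w = new BinaryWriter(ms);
        w.Write(MsgTypePos);                               // type: 1 byte
        w.Write((byte)0x00);                               // flags: 1 byte
        w.Write(sequence);                                 // seq: 2 bytes
        w.Write(x);                                        // float32
        w.Write(y);                                        // float32
        w.Write((byte)(rotationDegrees / 360f * 256f));    // rotation quantized to 1 byte
        return ms.ToArray();
    }
}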
# Include protocol version in handshake
Client → Server: [HANDSHAKE][protocol_version=3][client_version="1.2.0"]
Server → Client: [HANDSHAKE_ACK] or [VERSION_MISMATCH][min_supported=2][current=3]
The most effective anti-cheat is a server that validates everything.
| Principle | Implementation |
|---|---|
| Never trust the client | Server validates all inputs and state transitions |
| Clients send inputs, not results | "I pressed attack" not "I dealt 50 damage to Player B" |
| Server owns the clock | Server tick is authoritative; reject inputs with impossible timestamps |
| Validate movement | Check speed, acceleration, collision against server-side world |
| Validate actions | Rate-limit attacks, ability usage, item consumption |
| Cheat | How It Works | Server-Side Mitigation |
|---|---|---|
| Speed hack | Client modifies movement speed | Server validates position delta per tick |
| Teleport | Client sets arbitrary position | Server rejects position jumps exceeding max velocity |
| Aimbot | Client auto-targets enemies | Server validates aim consistency, impossible reaction times |
| Wallhack | Client renders hidden enemies | Interest management — never send data about enemies the player cannot perceive |
| Damage hack | Client reports inflated damage | Server calculates all damage; clients never report damage values |
| Packet manipulation | Modify packets in transit | Encrypt and authenticate packets; reject tampered messages |
| Replay attack | Re-send valid old packets | Sequence numbers + timestamp windows; reject duplicates |
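A sketch of the first two mitigations (speed hack and teleport), where MaxSpeed and the tolerance factor are illustrative, server-owned values:

// Sketch: server-side movement validation.
using Godot;

public partial class MovementValidator : Node
{
    private const float MaxSpeed = 8.0f;      // units/sec, from server-side game rules
    private const float Tolerance = 1.15f;    // slack for float error and tick drift

    public bool ValidateMove(Vector2 lastServerPos, Vector2 claimedPos, float tickDelta)
    {
        var maxDelta = MaxSpeed * tickDelta * Tolerance;
        // Reject any jump exceeding what max speed allows in one tick;
        // on failure the caller snaps the client back to lastServerPos.
        return lastServerPos.DistanceTo(claimedPos) <= maxDelta;
    }
}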
| Architecture | Typical Capacity | Bottleneck |
|---|---|---|
| Single server, action game | 16-64 players | CPU (simulation), bandwidth |
| Single server, MMO zone | 200-500 players | Bandwidth, interest management |
| Sharded MMO | 1000-10000+ per shard | Database, cross-shard communication |
| Battle royale | 100 per match | Bandwidth at match start, CPU mid-game |
| Pattern | When to Use |
|---|---|
| Containerized game servers (Docker/K8s) | Standard for dedicated servers; quick scaling |
| Auto-scaling groups | Scale server fleet based on matchmaker queue depth |
| Regional deployment | Deploy close to players; use anycast or region-based routing |
| Spot/preemptible instances | Cost savings for non-ranked matches that can tolerate interruption |
| Agones (K8s game server orchestrator) | Open-source; integrates with K8s for game server lifecycle |
// Godot: launch multiple instances from editor
// Debug menu → Run Multiple Instances → set to 2-4
// Or use OS.Execute() to launch headless server + client windows
// Command line (--server and --client are custom args your game must parse,
// e.g. via OS.GetCmdlineArgs(); only --headless is a built-in Godot flag):
// godot --headless --server # server instance
// godot --client --connect=127.0.0.1 # client instance(s)
| Condition | How to Simulate | What It Reveals |
|---|---|---|
| Latency (50-200ms) | Network emulator (clumsy, tc, Godot debug) | Prediction/interpolation quality |
| Packet loss (1-10%) | Network emulator | Reliability of state sync |
| Jitter (variable latency) | Random delay injection | Jitter buffer adequacy |
| Bandwidth limit | Throttle outbound | Compression and prioritization effectiveness |
| Disconnect/reconnect | Kill and restart client process | Reconnection flow, state recovery |
| Reordered packets | Network emulator | Sequence number handling |
# Godot: export with --headless flag or use a server export preset
# Disable rendering, audio, and input — server only needs simulation
# Export preset: "Linux Server" with:
# - Display server: headless
# - Audio driver: Dummy
# - Rendering: disabled or minimal
| Skill | When to Load |
|---|---|
| `gamedev-godot` | Godot engine fundamentals, scene architecture, C# scripting |
| `gamedev-voice-arthur` | Co-developer voice and tone for game project output |
| `dotnet-architecture` | C# server architecture patterns (Clean Architecture, DDD for game servers) |
| `dotnet-error-handling` | Server-side error handling, resilience patterns |
Before shipping a multiplayer feature, verify that none of the following anti-patterns have crept in:
Letting clients report game outcomes instead of inputs. "I killed Player B" instead of "I fired at position (x, y) at tick N." The server must adjudicate.
Sending full world state to every client every tick. Use interest management, delta compression, and priority systems. A player across the map does not need 60Hz updates.
TCP's head-of-line blocking and guaranteed delivery cause stalls when a packet is lost. Use UDP with selective reliability (ENet does this). TCP is fine for login, chat, and lobby — not for position updates.
Serializing game state as JSON wastes 3-5x bandwidth compared to binary encoding. JSON is human-readable — use it for debugging tools and REST APIs, not for 60Hz game state sync.
Assuming network latency is constant. Real networks have variable delay. Without a jitter buffer, entity movement stutters even at low average latency. Always buffer incoming data.
Testing multiplayer only on localhost (0ms latency, 0% packet loss). The game feels perfect locally but breaks at real-world network conditions. Always test with simulated latency and packet loss before shipping.
Implementing custom encryption for game traffic instead of using established protocols (DTLS, TLS). Custom crypto will have vulnerabilities. Use proven libraries.
Running matchmaking on a single server instance. When it goes down, no one can play — even if game servers are healthy. Matchmaking, authentication, and game servers should be independently scalable and deployable.
Assuming the simulation is deterministic across platforms or builds (required for lockstep/rollback) without continuous verification. Floating-point behavior varies across compilers, platforms, and optimization levels. Test with checksums every frame.
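For that last check, a sketch of a per-frame checksum using FNV-1a over raw float bits; peers exchange the value every frame, and the first mismatching frame pinpoints where the simulations diverged (the fields hashed are illustrative):

// Sketch: per-frame state checksum for verifying lockstep/rollback determinism.
// Hashing exact float bit patterns catches even 1-ulp divergence between peers.
using System;
using System.Collections.Generic;
using Godot;

public static class SimChecksum
{
    public static uint Compute(IEnumerable<(long Id, Vector2 Pos, int Health)> entities)
    {
        var hash = 2166136261u;   // FNV-1a offset basis
        foreach (var (id, pos, health) in entities)
        {
            hash = Mix(hash, unchecked((uint)id));
            hash = Mix(hash, unchecked((uint)(id >> 32)));
            hash = Mix(hash, unchecked((uint)BitConverter.SingleToInt32Bits(pos.X)));
            hash = Mix(hash, unchecked((uint)BitConverter.SingleToInt32Bits(pos.Y)));
            hash = Mix(hash, unchecked((uint)health));
        }
        return hash;
    }

    private static uint Mix(uint hash, uint value)
    {
        for (var i = 0; i < 4; i++)   // fold in one byte at a time, FNV-1a style
        {
            hash ^= (value >> (i * 8)) & 0xFF;
            hash *= 16777619;         // FNV prime
        }
        return hash;
    }
}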