gRPC and Protocol Buffers skill. Proto design, code generation, streaming patterns, gRPC-web, load balancing, service mesh integration. Triggers on: /godmode:grpc, "gRPC service", "proto file", "streaming RPC", "gRPC-web".
From the godmode plugin (arbazkhan971/godmode). This skill uses the workspace's default tool permissions.
/godmode:grpc

```bash
# Detect existing proto files and tools
find . -name "*.proto" -not -path "*/node_modules/*" \
  2>/dev/null | head -20

# Check for buf configuration
ls buf.yaml buf.gen.yaml 2>/dev/null

# Check gRPC dependencies
grep "grpc\|tonic\|grpc-go\|grpcio" \
  go.mod Cargo.toml package.json pyproject.toml \
  2>/dev/null
```
GRPC DISCOVERY:
Language: <Go|Rust|Java|Python|TypeScript>
Framework: <tonic|grpc-go|grpc-java|grpc-node|grpcio>
Proto version: proto3
Consumers: <internal|mobile|browser via gRPC-web>
Patterns: <unary|server-stream|client-stream|bidi>
IF no buf.yaml: create one (not raw protoc)
IF no protos: scaffold from API requirements
IF browser clients: add gRPC-web or Connect
```proto
service <Entity>Service {
  rpc Get<Entity>(Get<Entity>Request) returns (<Entity>) {}
  rpc List<Entities>(List<Entities>Request) returns (List<Entities>Response) {}
  rpc Create<Entity>(Create<Entity>Request) returns (<Entity>) {}
  rpc Watch<Entities>(Watch<Entities>Request) returns (stream <Entity>Event) {}
}
```
PROTO RULES:
1. proto3 syntax always
2. Package = company.domain.v1
3. Enum zero value = UNSPECIFIED
4. Field numbers are permanent — never reuse
5. FieldMask for partial updates
6. Request/Response per RPC (never share)
7. Idempotency key on create/update
8. google.protobuf.Timestamp for time
9. Reserve removed field numbers/names
10. Keep messages < 100 fields
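A minimal resource file applying these rules. Names (`acme.orders`, `Order`) are illustrative placeholders, not prescribed:

```proto
syntax = "proto3";

package acme.orders.v1;  // rule 2: company.domain.v1

import "google/protobuf/timestamp.proto";

message Order {
  // rule 9: field 2 and "legacy_status" were removed; reserve forever
  reserved 2;
  reserved "legacy_status";

  string id = 1;
  OrderStatus status = 3;
  google.protobuf.Timestamp create_time = 4;  // rule 8: never int64 epoch
}

enum OrderStatus {
  ORDER_STATUS_UNSPECIFIED = 0;  // rule 3: zero value = UNSPECIFIED
  ORDER_STATUS_PENDING = 1;
  ORDER_STATUS_SHIPPED = 2;
}
```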
FILE LAYOUT:
protos/<company>/<domain>/v1/
  <service>.proto
  resources.proto
  enums.proto
  events.proto
```bash
# Lint protos
buf lint

# Check for breaking changes
buf breaking --against '.git#branch=main'

# Generate code
buf generate
```
GENERATION RULES:
Generated code NEVER committed — regen in CI
Pin plugin versions in buf.gen.yaml
Run buf lint before generation
Run buf breaking before merge
Generate for ALL consumer languages in one pass
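A sketch of the two buf configs these rules imply, in the v2 config format. Module path, plugin names, and versions are illustrative; pin whatever plugins your consumer languages actually need:

```yaml
# buf.yaml
version: v2
modules:
  - path: protos
lint:
  use:
    - STANDARD
breaking:
  use:
    - FILE

# buf.gen.yaml
version: v2
plugins:
  - remote: buf.build/protocolbuffers/go:v1.34.2  # pinned, never "latest"
    out: gen/go
    opt: paths=source_relative
  - remote: buf.build/grpc/go:v1.4.0              # pinned
    out: gen/go
    opt: paths=source_relative
```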
THRESHOLDS:
buf lint errors: must be 0
buf breaking regressions: must be 0
IF breaking change needed: add new field,
reserve old — never modify existing
| Pattern       | Use Case                   |
|---------------|----------------------------|
| Unary         | CRUD operations            |
| Server stream | Feeds, watches, large data |
| Client stream | Batch uploads, aggregation |
| Bidirectional | Chat, collaboration, sync  |
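The four shapes differ only in where `stream` appears. The service template above covers unary and server streaming; sketch signatures for the other two (method and message names illustrative):

```proto
// Client streaming: many requests, one response (batch upload)
rpc UploadChunks(stream UploadChunksRequest)
    returns (UploadChunksResponse) {}

// Bidirectional: both sides stream independently (chat, sync)
rpc Sync(stream SyncMessage) returns (stream SyncMessage) {}
```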
BEST PRACTICES:
Set deadlines on all RPCs (prevent hung conns)
Implement keepalive pings (detect dead conns)
Use flow control / backpressure (prevent OOM)
Send heartbeats on long-lived streams
Implement reconnection with resume token
THRESHOLDS:
Unary deadline: 5s default, 30s max
Stream keepalive: every 30s
Backpressure buffer: 1000 messages max
IF stream idle > 60s without heartbeat: close
OPTIONS:
Envoy Proxy (grpc-web filter): production-grade, no application code changes
Connect (Buf): no proxy needed, unary + server streaming, recommended
grpc-web npm client: simplest, limited streaming support
LIMITATIONS:
No client or bidi streaming from browsers (fetch cannot stream request bodies)
Server streaming delivered via chunked responses
CORS configuration required for cross-origin calls
CHALLENGE: HTTP/2 long-lived connections mean L4 LBs
route all RPCs to one backend.
SOLUTION: L7 load balancing per individual RPC.
STRATEGIES:
1. Proxy L7 (recommended): Envoy/Nginx/Traefik
2. Client-side: built into grpc-go, grpc-java
3. xDS / service mesh: Istio, Linkerd, Consul
HEALTH CHECK:
Implement grpc.health.v1.Health on every server
IF no health check: load balancer can't route
STATUS CODES (use correctly):
OK, CANCELLED, INVALID_ARGUMENT, NOT_FOUND,
ALREADY_EXISTS, PERMISSION_DENIED,
UNAUTHENTICATED, RESOURCE_EXHAUSTED,
UNAVAILABLE, DEADLINE_EXCEEDED, INTERNAL
INTERCEPTOR CHAIN (in order):
Recovery → Logging → Metrics → Tracing → Auth
METRICS:
grpc_server_handled_total{method,code}
grpc_server_handling_seconds{method}
TEST LAYERS:
Proto: buf lint + buf breaking
Unit: mocked dependencies
Integration: grpcurl / Evans against real server
Streaming: 0 msgs, 1 msg, many, cancel, error
Load: ghz benchmark tool
Contract: buf breaking before merge
STREAMING EDGE CASES:
Cancel mid-stream, error mid-stream,
connection drop, concurrent send/receive,
deadlock detection for bidi streams
GRPC COMPLETE:
Services: <N>, RPCs: <N> unary / <M> streaming
buf lint: PASS, buf breaking: PASS
Health check: implemented
TLS: <mTLS|server-only|plaintext>
Commit: `"grpc: <service> — <N> RPCs, streaming"`
Never ask to continue. Loop autonomously until done.
VERIFY:
1. Proto files: find . -name "*.proto"
2. Language: go.mod, Cargo.toml, package.json
3. Buf config: buf.yaml, buf.gen.yaml
4. Health check: grep grpc.health in source
5. Streaming: grep stream in .proto files
gRPC: {N} services, {M} RPCs (unary/streaming).
buf lint: {status}. Health: {status}.
LB: {strategy}. TLS: {type}.
timestamp project services rpcs streaming_rpcs buf_lint breaking commit_sha
KEEP if: buf lint PASS AND buf breaking PASS
AND health check present
DISCARD if: lint error OR breaking change
OR missing health check
STOP when: buf lint 0 errors AND breaking 0
AND health check implemented AND all RPCs
have deadlines and per-RPC messages
OR user requests stop OR max 10 iterations