From go-agent-skills
Detects Go performance anti-patterns like unnecessary allocations, inefficient string handling, slice/map growth, and suggests optimizations with sync.Pool, benchmarking, and pprof profiling.
```sh
npx claudepluginhub eduardo-sl/go-agent-skills --plugin go-agent-skills
```
Profile first, optimize second. Never optimize without a benchmark proving the problem.
strconv over fmt for primitive conversions:

```go
// ✅ Good — zero allocations for simple conversions
s := strconv.Itoa(42)
f := strconv.FormatFloat(3.14, 'f', 2, 64)

// ❌ Bad — fmt.Sprintf allocates
s := fmt.Sprintf("%d", 42)
```
```go
// ✅ Good — use strings.Builder for concatenation
var b strings.Builder
for _, s := range parts {
	b.WriteString(s)
}
result := b.String()

// ❌ Bad — repeated concatenation allocates on every +
result := ""
for _, s := range parts {
	result += s
}
```
```go
// ✅ Good — single allocation of the backing array
users := make([]User, 0, len(ids))
for _, id := range ids {
	users = append(users, getUser(id))
}

// ✅ Good — map with capacity hint
lookup := make(map[string]User, len(users))

// ❌ Bad — repeated growing
var users []User // starts at 0, grows via doubling
```
sync.Pool for frequently allocated, short-lived objects:

```go
var bufPool = sync.Pool{
	New: func() interface{} {
		return new(bytes.Buffer)
	},
}

func process(data []byte) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	buf.Write(data)
	return buf.String()
}
```
```go
// ✅ Good — concrete type in loop
func sum(vals []int64) int64 {
	var total int64
	for _, v := range vals {
		total += v
	}
	return total
}

// ❌ Bad — interface{} causes boxing/unboxing
func sum(vals []interface{}) int64 { ... }
```
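On Go 1.18+, generics offer a middle path: one source function, but each instantiation is compiled for its concrete element type, so there is no interface boxing. A minimal sketch (the `Number` constraint here is an assumption written inline; `golang.org/x/exp/constraints` provides similar ready-made interfaces):

```go
package main

import "fmt"

// Number is an illustrative constraint covering the element types we sum.
type Number interface {
	~int | ~int64 | ~float64
}

// sumOf compiles to a concrete loop per instantiation: no interface boxing.
func sumOf[T Number](vals []T) T {
	var total T
	for _, v := range vals {
		total += v
	}
	return total
}

func main() {
	fmt.Println(sumOf([]int64{1, 2, 3}))    // concrete int64 instantiation
	fmt.Println(sumOf([]float64{1.5, 2.5})) // concrete float64 instantiation
}
```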
Avoid reflect in performance-critical paths: if you need reflection-like behavior at scale, use code generation (go generate, stringer, protocol buffers) instead.
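To make the code-generation suggestion concrete, here is a hand-written sketch of roughly what `stringer` emits for an enum: one concatenated name string plus an index table, looked up with plain slicing and no reflection at runtime. The `Weekday` type and its values are hypothetical:

```go
package main

import "fmt"

type Weekday int

const (
	Sunday Weekday = iota
	Monday
	Tuesday
)

// Approximately what `//go:generate stringer -type=Weekday` would produce:
// a single name string and an index table, indexed without reflection.
const _Weekday_name = "SundayMondayTuesday"

var _Weekday_index = [...]uint8{0, 6, 12, 19}

func (d Weekday) String() string {
	if d < 0 || int(d) >= len(_Weekday_index)-1 {
		return fmt.Sprintf("Weekday(%d)", int(d))
	}
	return _Weekday_name[_Weekday_index[d]:_Weekday_index[d+1]]
}

func main() {
	fmt.Println(Monday) // fmt picks up the String() method, prints "Monday"
}
```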
```go
// ✅ Good — contiguous memory, cache-friendly (struct of slices)
type Points struct {
	X []float64
	Y []float64
}

// ❌ Slower — pointer chasing per element
type Points []*Point
```
```go
// ✅ Use capacity hints
m := make(map[string]int, expectedSize)

// ✅ For read-heavy concurrent access, use sync.Map.
// But ONLY when keys are stable — sync.Map has higher overhead
// for writes than a mutex-protected map.

// ✅ For fixed key sets, consider a slice with index mapping
// instead of a map.
```
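A sketch of the slice-with-index-mapping idea for a small fixed key set (the `Status` type is an illustrative assumption): because the keys are small consecutive integers, they can double as array indexes, so a lookup is a bounds check plus pointer arithmetic, with no hashing or bucket probes.

```go
package main

import "fmt"

// Status is an illustrative fixed key set.
type Status int

const (
	StatusPending Status = iota
	StatusActive
	StatusClosed
	numStatuses // sentinel: size of the key set
)

// counts replaces a map[Status]int; indexing avoids hashing entirely.
var counts [numStatuses]int

func bump(s Status) { counts[s]++ }

func main() {
	bump(StatusActive)
	bump(StatusActive)
	bump(StatusClosed)
	fmt.Println(counts[StatusActive]) // prints 2
}
```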
ALWAYS write benchmarks before and after optimization:

```go
// Package-level var prevents the compiler from eliminating the call.
var result string

func BenchmarkFoo(b *testing.B) {
	// Setup outside the timed loop
	input := generateInput()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		result = Foo(input) // assign to package-level var to prevent elision
	}
}
```
Run benchmarks with memory profiling:

```sh
go test -bench=BenchmarkFoo -benchmem -count=5 ./...
```

Compare before/after with benchstat:

```sh
go test -bench=. -count=10 > old.txt
# make changes
go test -bench=. -count=10 > new.txt
benchstat old.txt new.txt
```
Capture CPU and memory profiles from benchmarks:

```sh
go test -cpuprofile=cpu.prof -bench=BenchmarkFoo .
go tool pprof cpu.prof

go test -memprofile=mem.prof -bench=BenchmarkFoo .
go tool pprof -alloc_space mem.prof
```

For live profiling of a running service:

```go
import _ "net/http/pprof"

// Access at http://localhost:6060/debug/pprof/
go func() {
	log.Println(http.ListenAndServe("localhost:6060", nil))
}()
```
log/slog is the right default for most services. But when benchmarks show logging is a bottleneck (high-frequency hot paths, >100k log lines/sec), consider zero-allocation loggers.

```go
// slog allocates per log call — fine for most services
slog.Info("request handled",
	slog.String("method", method),
	slog.Int("status", status),
)

// In hot paths where benchmarks prove logging is a bottleneck,
// use zap's zero-allocation core:
logger, _ := zap.NewProduction()
logger.Info("request handled",
	zap.String("method", method),
	zap.Int("status", status),
)
// zap avoids allocations by using a field pool and typed fields
```
| Scenario | Logger |
|---|---|
| General service logging | log/slog (stdlib, zero dependencies) |
| High-frequency hot path (>100k lines/sec) | go.uber.org/zap (zero-alloc) |
| Extreme throughput with JSON | github.com/rs/zerolog (zero-alloc JSON) |
A bridge pattern keeps callers on the standard API while zap does the work:

```go
// Use the slog API everywhere, backed by zap's performance
zapLogger, _ := zap.NewProduction()
slogHandler := zapslog.NewHandler(zapLogger.Core())
logger := slog.New(slogHandler)

// Code uses the standard slog API — swap the backend without changing callers
logger.Info("request handled",
	slog.String("method", method),
	slog.Int("status", status),
)
```
```go
// ❌ Bad — logging inside a tight loop
for _, item := range millions {
	slog.Info("processing item", slog.String("id", item.ID))
	process(item)
}

// ✅ Good — sample or batch log
for i, item := range millions {
	process(item)
	if i%10000 == 0 {
		slog.Info("progress", slog.Int("processed", i), slog.Int("total", len(millions)))
	}
}

// ✅ Good — log a summary after the loop
slog.Info("batch complete", slog.Int("count", len(millions)))
```
NEVER switch loggers without a benchmark proving the need.
slog is fast enough for the vast majority of Go services.
| Anti-Pattern | Fix |
|---|---|
| `fmt.Sprintf` for simple int→string | `strconv.Itoa` |
| String concatenation in loop | `strings.Builder` |
| Slice without preallocation | `make([]T, 0, n)` |
| Map without capacity hint | `make(map[K]V, n)` |
| `regexp.Compile` inside function | Compile once at package level |
| `json.Marshal` in hot path | Use code-gen (easyjson, sonic) |
| Logging in tight loop | Batch or sample |
| `defer` in very tight inner loop | Manual cleanup (rare, benchmark first) |
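One row of the table deserves a concrete sketch: compiling a regexp is expensive, so hoist it to a package-level `regexp.MustCompile` that runs once at init rather than on every call. The `isUUID` helper and its pattern are illustrative, not from the original skill:

```go
package main

import (
	"fmt"
	"regexp"
)

// ✅ Compiled once at package init, reused on every call.
var uuidRe = regexp.MustCompile(`^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$`)

func isUUID(s string) bool {
	return uuidRe.MatchString(s)
}

// ❌ Bad — recompiles the pattern on every call.
func isUUIDSlow(s string) bool {
	re, err := regexp.Compile(`^[0-9a-f-]+$`)
	if err != nil {
		return false
	}
	return re.MatchString(s)
}

func main() {
	fmt.Println(isUUID("123e4567-e89b-12d3-a456-426614174000")) // prints true
}
```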
Most Go code is not performance-critical. Readability and correctness ALWAYS take priority over micro-optimizations. Only apply these patterns when a profile or benchmark has shown the code in question is a real bottleneck on a hot path.
Premature optimization is still the root of all evil, even in Go.