From dotnet-skills
Tuning GC and memory. GC modes, LOH/POH, Gen0/1/2, Span/Memory deep patterns, ArrayPool, profiling.
npx claudepluginhub wshaddix/dotnet-skills
Garbage collection and memory management for .NET applications. Covers GC modes (workstation vs server, concurrent vs non-concurrent), Large Object Heap (LOH) and Pinned Object Heap (POH), generational tuning (Gen0/1/2), memory pressure notifications, deep Span<T>/Memory<T> ownership patterns beyond basics, buffer pooling with ArrayPool<T> and MemoryPool<T>, weak references, finalizers vs IDisposable, and memory profiling with dotMemory and PerfView.
Out of scope: Span<T>/Memory<T> syntax introduction and basic usage -- see [skill:dotnet-performance-patterns]. Microbenchmarking setup -- see [skill:dotnet-benchmarkdotnet]. CLI diagnostic tools (dotnet-counters, dotnet-trace, dotnet-dump) -- see [skill:dotnet-profiling]. Channel<T> producer/consumer patterns -- see [skill:dotnet-channels].
Cross-references: [skill:dotnet-performance-patterns] for Span<T>/Memory<T> basics and sealed devirtualization, [skill:dotnet-profiling] for runtime diagnostic tools (dotnet-counters, dotnet-trace, dotnet-dump), [skill:dotnet-channels] for backpressure patterns that interact with memory management, [skill:dotnet-file-io] for MemoryMappedFile usage and POH buffer patterns in file I/O.
| Aspect | Workstation | Server |
|---|---|---|
| GC threads | Single thread | One thread per logical core |
| Heap segments | Single heap | One heap per core |
| Pause latency | Lower | Higher (more memory scanned) |
| Throughput | Lower | Higher |
| Default for | Console apps, desktop | ASP.NET Core web apps |
```xml
<!-- In the .csproj file -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
</PropertyGroup>
```

Or in runtimeconfig.json:

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  }
}
```
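You can verify at runtime which mode actually took effect -- `GCSettings` reflects the configuration the runtime started with (a minimal console sketch; the output depends on how the process was configured):

```csharp
using System;
using System.Runtime;

// GCSettings reflects the GC configuration the runtime started with
Console.WriteLine($"Server GC:    {GCSettings.IsServerGC}");
Console.WriteLine($"Latency mode: {GCSettings.LatencyMode}");
```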
| Mode | Behavior | Use when |
|---|---|---|
| Concurrent (default) | Gen2 collection runs alongside application threads | Latency-sensitive (web APIs, UI) |
| Non-concurrent | Application threads pause during Gen2 collection | Maximum throughput, batch processing |
```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Concurrent": true
    }
  }
}
```
DATAS (Dynamic Adaptation To Application Sizes) dynamically adjusts GC heap size based on application memory usage patterns. It was introduced in .NET 8 and is enabled by default for Server GC starting in .NET 9. DATAS reduces memory footprint for applications with variable load by shrinking the heap during low-activity periods.
```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.DynamicAdaptationMode": 1
    }
  }
}
```
Set to 0 to disable DATAS if you observe excessive GC frequency in steady-state workloads.
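The same knob can be set per process via environment variable, following the usual `DOTNET_` prefix convention for runtime configuration switches:

```shell
# Disable DATAS for this process tree (0 = off, 1 = on)
export DOTNET_GCDynamicAdaptationMode=0
```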
Regions replace the older segment-based heap management. Each region is a small, fixed-size block of memory that the GC can allocate and free independently, which improves memory utilization and gives the GC finer-grained control over reclaiming address space. Regions are enabled by default in .NET 7+.
No configuration is needed -- regions are the default. To revert to segments (rarely needed):
```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Regions": false
    }
  }
}
```
| Generation | Contains | Collection frequency | Collection cost |
|---|---|---|---|
| Gen0 | Newly allocated objects | Very frequent (milliseconds) | Very cheap (small heap) |
| Gen1 | Objects surviving Gen0 | Frequent | Cheap |
| Gen2 | Long-lived objects | Infrequent | Expensive (full heap scan) |
Objects promote from Gen0 to Gen1 to Gen2 as they survive collections. The GC budget for Gen0 is tuned dynamically -- when Gen0 fills, a Gen0 collection triggers.
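Promotion can be observed directly with `GC.GetGeneration`. A minimal sketch (forcing collections like this is for demonstration only, never production code):

```csharp
using System;

var obj = new object();
Console.WriteLine(GC.GetGeneration(obj)); // 0 -- newly allocated

GC.Collect(); // survives one collection -> promoted
Console.WriteLine(GC.GetGeneration(obj)); // typically 1

GC.Collect(); // survives another -> promoted again
Console.WriteLine(GC.GetGeneration(obj)); // typically 2

GC.KeepAlive(obj); // prevent early collection in Release builds
```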
```shell
# Real-time GC metrics
dotnet-counters monitor --process-id <PID> \
  --counters System.Runtime[gen-0-gc-count,gen-1-gc-count,gen-2-gc-count,gc-heap-size]
```

```csharp
// Programmatic GC observation
var gen0 = GC.CollectionCount(0);
var gen1 = GC.CollectionCount(1);
var gen2 = GC.CollectionCount(2);
var totalMemory = GC.GetTotalMemory(forceFullCollection: false);
var memoryInfo = GC.GetGCMemoryInfo();

logger.LogInformation(
    "GC: Gen0={Gen0} Gen1={Gen1} Gen2={Gen2} Heap={HeapMB:F1}MB",
    gen0, gen1, gen2, totalMemory / (1024.0 * 1024));
```
Objects >= 85,000 bytes are allocated on the LOH. LOH collections only happen during Gen2 collections, and by default the LOH is not compacted (causing fragmentation).
```csharp
// Force LOH compaction (use sparingly -- expensive)
GCSettings.LargeObjectHeapCompactionMode =
    GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();
```
| Strategy | Implementation |
|---|---|
| ArrayPool<T> for large arrays | ArrayPool<byte>.Shared.Rent(100_000) |
| MemoryPool<T> for IMemoryOwner pattern | MemoryPool<byte>.Shared.Rent(100_000) |
| Pre-allocate and reuse | Create large buffers once at startup |
| Avoid frequent large string concat | Use StringBuilder or string.Create |
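The 85,000-byte threshold is observable: LOH objects are reported as generation 2 from the moment they are allocated, since the LOH is logically part of Gen2 (a minimal sketch):

```csharp
using System;

var small = new byte[80_000];  // below threshold: small-object heap
Console.WriteLine(GC.GetGeneration(small)); // typically 0

var large = new byte[100_000]; // >= 85,000 bytes: allocated on the LOH
Console.WriteLine(GC.GetGeneration(large)); // 2
```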
The POH is a dedicated heap for objects that must remain at a fixed memory address (pinned). Before .NET 5, pinning objects on the regular heap prevented compaction. The POH isolates pinned objects so they do not block compaction of Gen0/1/2 heaps.
```csharp
// Allocate on POH -- useful for I/O buffers passed to native code
byte[] buffer = GC.AllocateArray<byte>(4096, pinned: true);
// The buffer's address will not change, safe for native interop
// and overlapped I/O without explicit GCHandle pinning
```
Use POH for:
- Long-lived buffers passed to native code via interop
- Overlapped I/O buffers, e.g. Socket.ReceiveAsync

See [skill:dotnet-performance-patterns] for Span<T>/Memory<T> introduction and basic slicing. This section covers ownership semantics and lifetime management for shared buffers.
```csharp
// Rent from MemoryPool and manage lifetime with IDisposable
using IMemoryOwner<byte> owner = MemoryPool<byte>.Shared.Rent(4096);
Memory<byte> buffer = owner.Memory[..4096]; // Slice to exact size needed

// Pass the Memory<T> to async I/O
int bytesRead = await stream.ReadAsync(buffer, cancellationToken);
Memory<byte> data = buffer[..bytesRead];

// Process the data
await ProcessDataAsync(data, cancellationToken);
// owner.Dispose() returns the buffer to the pool
```
When transferring buffer ownership between components, use IMemoryOwner<T> to make lifetime responsibility explicit:
```csharp
public sealed class MessageParser
{
    // Caller transfers ownership -- this method is responsible for disposal
    public async Task ProcessAsync(
        IMemoryOwner<byte> messageOwner,
        CancellationToken ct)
    {
        using (messageOwner)
        {
            Memory<byte> data = messageOwner.Memory;
            // Parse and process...
            await HandleMessageAsync(data, ct);
        }
        // Buffer returned to pool on dispose
    }
}
```
```csharp
// Span<T> enforces stack-only usage (ref struct)
// These are compile-time errors:
// Span<byte> field;                  // Cannot store in class/struct field
// async Task Foo(Span<byte> s);      // Cannot use in async method
// var list = new List<Span<byte>>(); // Cannot use as generic type argument

// When you need heap storage or async, use Memory<T> instead
public async Task ProcessAsync(Memory<byte> buffer, CancellationToken ct)
{
    // Can use Memory<T> in async methods
    int bytesRead = await stream.ReadAsync(buffer, ct);

    // Convert to Span<T> for synchronous processing within a method
    Span<byte> span = buffer.Span;
    ParseHeader(span[..bytesRead]);
}
```
ArrayPool<T> reduces GC pressure by reusing array allocations. Always return rented arrays, and never assume the returned array is exactly the requested size.
```csharp
// Rent and return pattern
byte[] buffer = ArrayPool<byte>.Shared.Rent(minimumLength: 4096);
try
{
    // IMPORTANT: Rented array may be larger than requested
    int bytesRead = await stream.ReadAsync(
        buffer.AsMemory(0, 4096), cancellationToken);
    ProcessData(buffer.AsSpan(0, bytesRead));
}
finally
{
    // clearArray: true when buffer contained sensitive data
    ArrayPool<byte>.Shared.Return(buffer, clearArray: false);
}
```
```csharp
// Create a custom pool for specific allocation patterns
var pool = ArrayPool<byte>.Create(
    maxArrayLength: 1_048_576, // 1 MB max array
    maxArraysPerBucket: 50);   // Keep up to 50 arrays per size bucket

// Use for workloads with predictable buffer sizes
byte[] buffer = pool.Rent(65_536);
try
{
    // Process...
}
finally
{
    pool.Return(buffer);
}
```
MemoryPool<T> wraps ArrayPool<T> and returns IMemoryOwner<T> for RAII-style lifetime management:
```csharp
// MemoryPool returns IMemoryOwner<T> -- dispose to return
using IMemoryOwner<byte> owner = MemoryPool<byte>.Shared.Rent(8192);
Memory<byte> buffer = owner.Memory;

// Slice to exact size (owner.Memory may be larger)
int bytesRead = await stream.ReadAsync(buffer[..8192], ct);
await ProcessAsync(buffer[..bytesRead], ct);
// Dispose returns the underlying array to the pool
```
| Guideline | Rationale |
|---|---|
| Always return rented buffers in finally or using | Leaked buffers defeat the purpose of pooling |
| Slice to exact size before processing | Rented arrays may be larger than requested |
| Use clearArray: true for sensitive data | Pool reuse could expose secrets to other consumers |
| Do not cache rented arrays in long-lived fields | Holds pool buffers indefinitely, reducing availability |
| Prefer MemoryPool<T> over raw ArrayPool<T> | Disposal-based lifetime is harder to misuse |
Weak references allow the GC to collect the target object when no strong references remain. Use for caches where reclamation under memory pressure is acceptable.
```csharp
public sealed class ImageCache
{
    private readonly ConcurrentDictionary<string, WeakReference<byte[]>> _cache = new();

    public byte[]? TryGet(string key)
    {
        if (_cache.TryGetValue(key, out var weakRef)
            && weakRef.TryGetTarget(out var data))
        {
            return data;
        }
        return null;
    }

    public void Set(string key, byte[] data)
    {
        _cache[key] = new WeakReference<byte[]>(data);
    }

    // Periodically clean up dead references
    public void Purge()
    {
        foreach (var key in _cache.Keys)
        {
            if (_cache.TryGetValue(key, out var weakRef)
                && !weakRef.TryGetTarget(out _))
            {
                _cache.TryRemove(key, out _);
            }
        }
    }
}
```
Avoid weak references for small objects, where the WeakReference<T> overhead outweighs the benefit. For most caching scenarios, prefer MemoryCache with size limits and expiration policies. Weak references are a last resort when you need GC-driven eviction.
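For contrast, a size-limited MemoryCache (from the Microsoft.Extensions.Caching.Memory package; the key name and limits below are illustrative) gives policy-driven eviction that does not depend on GC timing:

```csharp
using System;
using Microsoft.Extensions.Caching.Memory;

var cache = new MemoryCache(new MemoryCacheOptions
{
    // SizeLimit is unitless -- entries declare their own size,
    // so by convention here 1 unit = 1 byte
    SizeLimit = 100 * 1024 * 1024
});

byte[] image = new byte[50_000];
cache.Set("thumbnail:42", image, new MemoryCacheEntryOptions()
    .SetSize(image.Length) // counts against SizeLimit
    .SetSlidingExpiration(TimeSpan.FromMinutes(5)));

bool found = cache.TryGetValue("thumbnail:42", out byte[]? cached);
```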
Implement IDisposable to release unmanaged resources deterministically:
```csharp
public sealed class NativeBufferWrapper : IDisposable
{
    private IntPtr _handle;
    private bool _disposed;

    public NativeBufferWrapper(int size)
    {
        _handle = Marshal.AllocHGlobal(size);
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        Marshal.FreeHGlobal(_handle);
        _handle = IntPtr.Zero;
        // No GC.SuppressFinalize needed -- no finalizer
    }
}
```
Finalizers run on the GC finalizer thread when an object is collected. They are a safety net for unmanaged resources that were not disposed explicitly.
```csharp
public class UnmanagedResourceHolder : IDisposable
{
    private IntPtr _handle;
    private bool _disposed;

    public UnmanagedResourceHolder(int size)
    {
        _handle = Marshal.AllocHGlobal(size);
    }

    ~UnmanagedResourceHolder()
    {
        Dispose(disposing: false);
    }

    public void Dispose()
    {
        Dispose(disposing: true);
        GC.SuppressFinalize(this);
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;
        _disposed = true;
        if (disposing)
        {
            // Free managed resources
        }
        // Free unmanaged resources
        if (_handle != IntPtr.Zero)
        {
            Marshal.FreeHGlobal(_handle);
            _handle = IntPtr.Zero;
        }
    }
}
```
| Cost | Impact |
|---|---|
| Objects with finalizers survive at least one extra GC | Promotes to Gen1/Gen2, increasing memory pressure |
| Finalizer thread is single-threaded | Slow finalizers block all other finalization |
| Execution order is non-deterministic | Cannot depend on other finalizable objects |
| Not guaranteed to run on process exit | Critical cleanup may not execute |
Rule: Use sealed classes with IDisposable (no finalizer) unless you own unmanaged handles. Only add a finalizer as a safety net for unmanaged resources.
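When you do own a raw unmanaged handle, deriving from SafeHandle is usually safer than hand-writing a finalizer: the runtime gives SafeHandle a critical finalizer and reference-counts it across P/Invoke calls. A minimal sketch wrapping AllocHGlobal (the class name is illustrative):

```csharp
using System;
using System.Runtime.InteropServices;

public sealed class HGlobalHandle : SafeHandle
{
    public HGlobalHandle(int size)
        : base(invalidHandleValue: IntPtr.Zero, ownsHandle: true)
    {
        SetHandle(Marshal.AllocHGlobal(size));
    }

    public override bool IsInvalid => handle == IntPtr.Zero;

    // Called exactly once, by Dispose or by the critical finalizer
    protected override bool ReleaseHandle()
    {
        Marshal.FreeHGlobal(handle);
        return true;
    }
}
```

Dispose it deterministically with `using var buffer = new HGlobalHandle(4096);` -- if the caller forgets, the critical finalizer still releases the memory.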
Inform the GC about unmanaged memory allocations so it accounts for them in collection decisions:
```csharp
public sealed class NativeImageBuffer : IDisposable
{
    private readonly IntPtr _buffer;
    private readonly long _size;
    private bool _disposed;

    public NativeImageBuffer(long sizeBytes)
    {
        _size = sizeBytes;
        _buffer = Marshal.AllocHGlobal((IntPtr)sizeBytes);
        GC.AddMemoryPressure(sizeBytes);
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        Marshal.FreeHGlobal(_buffer);
        GC.RemoveMemoryPressure(_size);
    }
}
```
```csharp
// React to memory pressure in application logic
var memoryInfo = GC.GetGCMemoryInfo();
double loadPercent = (double)memoryInfo.MemoryLoadBytes
    / memoryInfo.TotalAvailableMemoryBytes * 100;

if (loadPercent > 85)
{
    logger.LogWarning("High memory pressure: {Load:F1}%", loadPercent);
    // Shed load: reduce cache sizes, reject non-critical requests
}
```
dotMemory provides heap snapshots and allocation tracking with a visual UI. Use it for investigating memory leaks and high-allocation hot paths. A typical workflow: attach to the process, take a baseline snapshot, exercise the suspect scenario, take a second snapshot, then compare the two to find the objects that survived and the retention paths keeping them alive.
PerfView is a free Microsoft tool for detailed GC and allocation analysis. It uses ETW (Event Tracing for Windows) events for low-overhead profiling.
```shell
# Collect GC and allocation events for 30 seconds
PerfView.exe /GCCollectOnly /MaxCollectSec:30 collect

# Collect allocation stacks (higher overhead)
PerfView.exe /ClrEvents:GC+Stack /MaxCollectSec:30 collect
```
Key PerfView views include GCStats (per-GC pause times, reasons, and generation sizes) and the GC Heap Alloc stacks view (which call paths allocate the most).
- Measure before and after with [MemoryDiagnoser] to confirm improvement.
- Return every rented buffer in try/finally, or use IMemoryOwner<T> with using.
- ArrayPool<T>.Rent() may return an array larger than requested. Always slice to the exact size needed before processing.
- Prefer a sealed class with IDisposable (no finalizer) for managed-only cleanup.
- Use GC.AddMemoryPressure() to hint at unmanaged memory instead of leaving the GC unaware of native allocations.
- Use ArrayPool<T> to rent and return large buffers instead of allocating new arrays.