Optimizes V8 garbage collection in JavaScript/Node.js apps by explaining Scavenge, Mark-Sweep-Compact, and strategies to reduce pauses and allocation pressure.
npx claudepluginhub intense-visions/harness-engineering --plugin harness-claude
Understand V8's generational garbage collector — young generation Scavenge, old generation Mark-Sweep-Compact, incremental and concurrent marking — to minimize GC pauses and reduce allocation pressure in performance-critical code.
Use this skill when:
- --trace-gc output shows frequent or long GC events under load
- performance.measureUserAgentSpecificMemory() or performance.memory shows high heap usage

Understand the generational hypothesis. Most objects die young — they are allocated, used briefly, and become garbage. V8 exploits this by dividing the heap into generations: a small young generation for fresh allocations, collected frequently and cheaply, and a larger old generation for survivors, collected less often and at much greater cost.
Understand Scavenge (minor GC). The young generation uses a semi-space copying collector: live objects are copied from the active semi-space to the empty one, and everything left behind is reclaimed wholesale. Pause time scales with the number of live objects rather than heap size, which keeps Scavenge pauses short. Objects that survive two Scavenges are promoted to the old generation.
Understand Mark-Sweep-Compact (major GC). The old generation uses a tracing collector: it marks every object reachable from the roots, sweeps unmarked memory onto free lists, and compacts fragmented pages by moving live objects together. Because it must visit the entire old generation, major GC is far more expensive than Scavenge.
Reduce allocation pressure in hot paths. Every allocation eventually triggers GC. In code that runs 60 times per second (animation loops) or thousands of times per second (request handlers), minimize allocations:
// BAD — creates new object every frame (60 objects/second, all become garbage)
function animate() {
  const position = { x: calcX(), y: calcY() };
  applyPosition(position);
  requestAnimationFrame(animate);
}

// GOOD — reuse object, zero allocations per frame
const position = { x: 0, y: 0 };
function animate() {
  position.x = calcX();
  position.y = calcY();
  applyPosition(position);
  requestAnimationFrame(animate);
}
Implement object pooling for frequently created/destroyed objects:
class ParticlePool {
  constructor(size) {
    // Pre-allocate every particle up front so the hot path never allocates.
    this.pool = Array.from({ length: size }, () => ({
      x: 0,
      y: 0,
      vx: 0,
      vy: 0,
      active: false,
    }));
    this.nextFree = 0;
  }

  acquire() {
    // Ring buffer: cycles through slots; if every particle is still
    // active, the oldest one gets recycled.
    const obj = this.pool[this.nextFree];
    obj.active = true;
    this.nextFree = (this.nextFree + 1) % this.pool.length;
    return obj;
  }

  release(obj) {
    obj.active = false;
    obj.x = obj.y = obj.vx = obj.vy = 0;
  }
}
Monitor GC in Node.js with --trace-gc:
# Shows every GC event with type, duration, and heap sizes
node --trace-gc server.js
# Output example:
# [12345:0x1234] 100 ms: Scavenge 4.2 (8.0) -> 2.1 (8.0) MB, 1.3 / 0.0 ms
# [12345:0x1234] 5000 ms: Mark-sweep 45.2 (64.0) -> 32.1 (64.0) MB, 85.3 / 0.0 ms
Use performance.measureUserAgentSpecificMemory() for browser heap measurement:
// Requires cross-origin isolation headers
// (Cross-Origin-Opener-Policy: same-origin and
//  Cross-Origin-Embedder-Policy: require-corp)
if ('measureUserAgentSpecificMemory' in performance) {
  const result = await performance.measureUserAgentSpecificMemory();
  console.log('Total JS heap:', result.bytes);
  for (const breakdown of result.breakdown) {
    console.log(breakdown.types, breakdown.bytes);
  }
}
V8 divides the heap into several spaces: new space (the young generation, collected by Scavenge), old space (promoted objects, collected by Mark-Sweep-Compact), large object space (objects too big for a regular page, never moved), code space (JIT-compiled machine code), and, in some V8 versions, a separate map space for hidden classes.
V8's garbage collector (Orinoco) uses three strategies to reduce pause times:
Incremental marking — breaks the Mark phase into small steps (1-5ms each) interleaved with JavaScript execution. Instead of one 100ms mark phase, runs 20 steps of 5ms.
Concurrent marking — runs marking on background threads while JavaScript executes on the main thread. The main thread only pauses briefly for the final "remark" step.
Parallel Scavenge — uses multiple threads for the young generation copy, reducing Scavenge pause from 3ms to <1ms.
A real-time trading dashboard received WebSocket price updates at 100 messages/second. Each message handler created a new price tick object: { symbol, price, timestamp, change }. The resulting allocation rate was 50MB/s, triggering a Scavenge every 100ms and a major GC every 10 seconds (200ms pause).
Fix: implemented object pooling with a ring buffer of 1,000 pre-allocated tick objects. When a new tick arrives, the oldest inactive tick is recycled. Allocation rate dropped from 50MB/s to 2MB/s (only new strings for symbol names). Scavenge frequency dropped to every 4 seconds, and major GC pauses dropped from 200ms to <5ms because the old generation held a stable set of pool objects.
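The recycling scheme from this case study might look roughly like this (the class name and pool size are illustrative, not the dashboard's actual code):

```javascript
// Minimal ring buffer of pre-allocated tick objects: the oldest slot
// is overwritten in place instead of allocating a new object per message.
class TickPool {
  constructor(size = 1000) {
    this.ticks = Array.from({ length: size }, () => ({
      symbol: '', price: 0, timestamp: 0, change: 0,
    }));
    this.next = 0; // index of the oldest slot, recycled first
  }

  // Recycle the oldest tick in place; only the symbol string is new memory.
  write(symbol, price, timestamp, change) {
    const t = this.ticks[this.next];
    t.symbol = symbol;
    t.price = price;
    t.timestamp = timestamp;
    t.change = change;
    this.next = (this.next + 1) % this.ticks.length;
    return t;
  }
}
```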
A Node.js API server serialized responses using JSON.stringify on objects up to 50MB. At peak load (100 requests/second), peak old-gen usage reached 3.8GB (close to the 4GB --max-old-space-size limit), triggering 300ms major GC pauses every 30 seconds.
Fix: switched to streaming JSON serialization (json-stream-stringify) which serializes incrementally, producing string chunks that are flushed immediately and collected by minor GC. Peak old-gen usage dropped from 3.8GB to 1.2GB because large intermediate string objects no longer accumulated in old space. Major GC pauses dropped to 15ms.
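The streaming idea can be sketched with a plain generator (illustrative only; the actual fix used the json-stream-stringify package, whose API is not reproduced here):

```javascript
// Serialize an array incrementally: each yielded chunk is a short-lived
// string the minor GC can reclaim, instead of one giant result string
// that sits in old space until the whole response is flushed.
function* streamArray(items) {
  yield '[';
  let first = true;
  for (const item of items) {
    if (!first) yield ',';
    first = false;
    yield JSON.stringify(item); // one small chunk per element
  }
  yield ']';
}

// Usage sketch: write chunks straight to the response stream.
// for (const chunk of streamArray(rows)) res.write(chunk);
```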
Creating objects in hot loops. array.map(item => ({ ...item, computed: calc(item) })) creates a new object per item. At 60fps with 100 items, that is 6,000 objects/second becoming garbage. Use in-place mutation or pre-allocated arrays when GC sensitivity is critical.
String concatenation in loops. Each str += chunk creates a new string; the old one becomes garbage. For building large strings, use an array and join(), or use a single template literal.
// BAD — O(n) strings become garbage
let html = '';
for (const item of items) {
  html += `<div>${item.name}</div>`; // new string each iteration
}

// GOOD — one allocation at the end
const parts = items.map((item) => `<div>${item.name}</div>`);
const html = parts.join('');
Not reusing arrays/objects across animation frames. Creating new arrays or objects each frame for position calculations, collision detection, or particle updates creates constant Scavenge pressure. Pre-allocate and reuse.
Relying on --expose-gc and manual global.gc() in production. Manual GC calls cause a full stop-the-world pause at the worst possible time (when you call it). V8's automatic GC is highly optimized to find the best time to collect. Manual GC is only useful for benchmarking and debugging.
Promoting short-lived objects to old generation. Holding references to temporary objects across multiple GC cycles (in closures, module-scope variables, caches without eviction) causes them to be promoted to old generation. Old generation collection is much more expensive. Ensure temporary objects go out of scope quickly.
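One common promotion trap, a module-scope cache without eviction, can be avoided with a simple size bound (a minimal sketch; production services usually reach for an LRU library):

```javascript
// An unbounded module-scope Map keeps every entry alive forever, so all
// entries are promoted to the old generation. Bounding the cache and
// evicting in insertion order keeps the retained set small and stable.
const MAX_ENTRIES = 1000;
const cache = new Map();

function cacheSet(key, value) {
  if (cache.size >= MAX_ENTRIES) {
    // Map iterates in insertion order: the first key is the oldest entry.
    const oldestKey = cache.keys().next().value;
    cache.delete(oldestKey);
  }
  cache.set(key, value);
}
```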
--trace-gc flag documentation — https://nodejs.org/api/cli.html#--trace-gc