
GC Pressure & Object Pooling

Advanced · 19 min read

What Is GC Pressure?

Here's something that trips up even experienced developers: GC pressure isn't about memory usage — it's about allocation rate. An application using 50 MB of stable memory creates almost no GC work. An application creating and discarding 50 MB of objects per second creates enormous GC pressure, even if peak memory never exceeds 10 MB. Sounds backwards, right? Let me explain.

Every allocation in New Space moves the bump pointer forward. When New Space fills up, the Scavenger runs. High allocation rate means more frequent Scavenge cycles, which means more time spent tracing, copying, and updating references.

// Low GC pressure: allocate once, reuse
const buffer = new Float64Array(1000);
function process(data) {
  for (let i = 0; i < data.length; i++) {
    buffer[i] = transform(data[i]);
  }
  return summarize(buffer, data.length);
}

// High GC pressure: allocate every call
function process(data) {
  const buffer = data.map(transform);  // New array + N new objects every call
  return summarize(buffer);
}

The second version creates a new array and potentially thousands of intermediate values on every call. In a hot path (called 60 times per second in an animation loop), this triggers a Scavenge every few frames.

Mental Model

Think of GC pressure like a kitchen during a dinner rush. If chefs use disposable plates (allocate new objects), the dishwasher (garbage collector) runs constantly. If they use reusable plates (object pooling), the dishwasher barely activates. The food quality is identical — the difference is entirely in the cleanup overhead.

Measuring GC with --trace-gc

Before you optimize anything, you need to measure. The --trace-gc V8 flag shows every GC event with timing:

node --trace-gc your-script.js

Output:

[4321:0x...]   10.2 ms: Scavenge 2.1 (4.0) -> 1.5 (4.0) MB, 0.8 / 0.0 ms
[4321:0x...]   15.7 ms: Scavenge 2.3 (4.0) -> 1.4 (4.0) MB, 0.6 / 0.0 ms
[4321:0x...]  210.5 ms: Mark-sweep 15.2 (20.0) -> 8.1 (20.0) MB, 3.2 / 0.0 ms

Reading the output:

  • Scavenge: Young Generation collection. 2.1 (4.0) → 1.5 (4.0) MB = used heap before/after, with committed heap in parentheses. 0.8 ms = pause time
  • Mark-sweep: Old Generation collection. 15.2 → 8.1 MB = used heap before/after. 3.2 ms = pause time

What to look for:

  • Frequent Scavenges (every few ms): high allocation rate in hot code
  • Growing "after" size: objects surviving longer than expected — potential memory leak or premature promotion
  • Long Mark-sweep pauses: large Old Space with many live objects

For more detail, use --trace-gc-verbose or the Chrome DevTools Performance panel's "GC" track.

Quiz
Your --trace-gc output shows Scavenges happening every 5ms with pause times of 0.8ms. What does this indicate?

Common GC Pressure Sources

You probably write code that creates GC pressure without realizing it. Let's look at the usual suspects.

1. Array Method Chaining

Every .map(), .filter(), .reduce() that returns a new array allocates. And we chain these things constantly:

// Creates 2 intermediate arrays + all their elements
const result = data
  .filter(x => x.active)     // Array 1
  .map(x => x.value * 2)     // Array 2
  .reduce((sum, v) => sum + v, 0);

// Single pass, zero intermediate allocations
let result = 0;
for (let i = 0; i < data.length; i++) {
  if (data[i].active) {
    result += data[i].value * 2;
  }
}

The functional version is cleaner to read but creates two intermediate arrays on every call. In a hot path, the loop version eliminates all intermediate allocations.

2. Object Spread and Destructuring

// Every iteration creates a new object
items.forEach(item => {
  const enriched = { ...item, timestamp: Date.now(), processed: true };
  emit(enriched);
});

// Mutation avoids allocation (when item ownership is clear)
items.forEach(item => {
  item.timestamp = Date.now();
  item.processed = true;
  emit(item);
});

3. Closures in Hot Loops

This one's sneaky. Every closure is a heap-allocated function object + a context object:

// BAD: creates 10,000 closures
for (let i = 0; i < 10000; i++) {
  elements[i].addEventListener('click', () => handleClick(i));
}

// BETTER: one function, use data attributes
function handleClickEvent(e) {
  handleClick(Number(e.currentTarget.dataset.index));
}
for (let i = 0; i < 10000; i++) {
  elements[i].dataset.index = i;
  elements[i].addEventListener('click', handleClickEvent);
}

4. String Concatenation in Loops

// Each += creates a new string (old string becomes garbage)
let html = '';
for (const item of items) {
  html += `<div>${item.name}</div>`;  // N string allocations
}

// Array join: each part is still allocated, but only one full-length string is built
const parts = items.map(item => `<div>${item.name}</div>`);
const html = parts.join('');

Quiz
Which of these patterns creates the MOST GC pressure in a hot loop running 60 times per second?

Object Pooling Pattern

The fix for high-churn hot paths is an old trick from game development. Object pooling pre-allocates a fixed set of objects and recycles them instead of creating and destroying:

class ObjectPool {
  #pool;
  #factory;
  #reset;

  constructor(factory, reset, initialSize = 32) {
    this.#factory = factory;
    this.#reset = reset;
    this.#pool = Array.from({ length: initialSize }, () => factory());
  }

  acquire() {
    if (this.#pool.length > 0) {
      return this.#pool.pop();
    }
    // Pool exhausted — grow by creating a new object
    return this.#factory();
  }

  release(obj) {
    this.#reset(obj);
    this.#pool.push(obj);
  }
}

// Usage: particle system
const particlePool = new ObjectPool(
  () => ({ x: 0, y: 0, vx: 0, vy: 0, life: 0, active: false }),
  (p) => { p.x = 0; p.y = 0; p.vx = 0; p.vy = 0; p.life = 0; p.active = false; },
  1000
);

function spawnParticle(x, y) {
  const p = particlePool.acquire();
  p.x = x; p.y = y;
  p.vx = Math.random() * 2 - 1;
  p.vy = Math.random() * -3;
  p.life = 60;
  p.active = true;
  return p;
}

function killParticle(p) {
  p.active = false;
  particlePool.release(p);
}

No objects are created or destroyed during normal operation. The Scavenger has nothing to collect. GC pressure drops to near zero for the particle system.

Pool Sizing and Growth Strategy

The initial pool size should cover the expected peak usage. If the pool runs dry, new objects are created (the pool grows). If the pool accumulates too many unused objects, memory is wasted.

Strategies:

  • Fixed pool: never grow. Fail or block when exhausted. Simplest, predictable memory
  • Growing pool: create new objects when exhausted. Never shrink. Risk: unbounded growth
  • Bounded growth: allow growth up to a max. Objects beyond the max are not pooled on release
  • Shrinking pool: periodically trim the pool if usage is consistently below capacity

For most JavaScript applications, a growing pool with an upper bound is the pragmatic choice.
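That bounded-growth strategy can be sketched as a small variant of the ObjectPool class above (class and parameter names here are illustrative):

```javascript
// Bounded-growth pool (sketch): grow on demand, but stop pooling
// objects beyond maxSize, letting the GC reclaim the overflow.
class BoundedPool {
  #pool = [];
  #factory;
  #reset;
  #maxSize;

  constructor(factory, reset, maxSize = 256) {
    this.#factory = factory;
    this.#reset = reset;
    this.#maxSize = maxSize;
  }

  acquire() {
    // Reuse a pooled object if available, otherwise create a new one
    return this.#pool.pop() ?? this.#factory();
  }

  release(obj) {
    if (this.#pool.length < this.#maxSize) {
      this.#reset(obj);
      this.#pool.push(obj);  // keep for reuse
    }
    // else: drop the reference; the GC collects it normally
  }
}
```

The key line is the size check in release(): under load the pool grows as needed, but after a spike the excess objects are simply not re-pooled, so memory drifts back down.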

Quiz
What is the main trade-off of object pooling?

The Flyweight Pattern

Flyweight is a lighter alternative to pooling. Instead of recycling full objects, you share immutable data and only vary the extrinsic state:

// WITHOUT flyweight: 10,000 objects with duplicated data
const cells = grid.map((_, i) => ({
  row: Math.floor(i / cols),
  col: i % cols,
  style: { background: '#fff', border: '1px solid #ccc', font: '14px monospace' },
  type: 'text',
}));
// 10,000 style objects — most identical

// WITH flyweight: shared immutable style objects
const CELL_STYLES = {
  text: Object.freeze({ background: '#fff', border: '1px solid #ccc', font: '14px monospace' }),
  header: Object.freeze({ background: '#f0f0f0', border: '1px solid #999', font: '14px bold monospace' }),
};

const cells = grid.map((_, i) => ({
  row: Math.floor(i / cols),
  col: i % cols,
  style: CELL_STYLES.text,  // Shared reference, not a copy
  type: 'text',
}));
// 2 style objects shared across 10,000 cells

Flyweight reduces both allocation count and memory usage. It works best when many objects share the same immutable sub-structure.
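When styles are built dynamically rather than known up front, the sharing can be automated with a small interning factory. A sketch (getStyle and styleCache are illustrative names, not a standard API):

```javascript
// Flyweight factory (sketch): intern style objects by key so identical
// styles are created once and shared by reference thereafter.
const styleCache = new Map();

function getStyle(background, border, font) {
  const key = `${background}|${border}|${font}`;
  let style = styleCache.get(key);
  if (!style) {
    style = Object.freeze({ background, border, font });
    styleCache.set(key, style);
  }
  return style;
}
```

Two calls with the same arguments return the exact same frozen object, so 10,000 cells asking for the same style still cost one allocation.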

Pre-Allocated Typed Arrays

For numeric data in hot paths, you can go even further. Typed arrays eliminate object overhead entirely:

// BAD: array of objects — each object is a heap allocation
const particles = [];
for (let i = 0; i < 10000; i++) {
  particles.push({ x: 0, y: 0, vx: 0, vy: 0 });
}

// GOOD: struct-of-arrays with typed arrays
const count = 10000;
const x  = new Float32Array(count);
const y  = new Float32Array(count);
const vx = new Float32Array(count);
const vy = new Float32Array(count);

function updateParticles(dt) {
  for (let i = 0; i < count; i++) {
    x[i] += vx[i] * dt;
    y[i] += vy[i] * dt;
  }
}

The typed array version:

  • Zero per-element allocation: all values are inline in the typed array buffer
  • Zero GC pressure: updating values mutates in place
  • Cache-friendly: sequential memory access patterns for SIMD-style auto-vectorization
  • Fixed memory: the buffer size is allocated once and never changes
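Removing particles from this layout also needs care: filtering or splicing the arrays would allocate. A common allocation-free technique is swap-remove with a live count. A standalone sketch (MAX, spawn, and kill are illustrative names; the arrays are redeclared here so the snippet runs on its own):

```javascript
const MAX = 10000;
const x  = new Float32Array(MAX), y  = new Float32Array(MAX);
const vx = new Float32Array(MAX), vy = new Float32Array(MAX);
let liveCount = 0;  // slots [0, liveCount) are live

function spawn(px, py) {
  const i = liveCount++;       // claim the next free slot
  x[i] = px; y[i] = py;
  vx[i] = 0; vy[i] = 0;
  return i;
}

function kill(i) {
  const last = --liveCount;
  x[i] = x[last];  y[i] = y[last];   // swap-remove: copy the last live slot down
  vx[i] = vx[last]; vy[i] = vy[last];
}
```

One caveat: kill() moves the last live particle into slot i, so any stored index for that particle is invalidated. That trade-off is usually fine for fire-and-forget effects like particles.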

Quiz
Why do typed arrays produce less GC pressure than arrays of objects?

Avoiding Allocations in Hot Loops

Reuse Output Objects

// BAD: new object every frame
function getMousePosition(event) {
  return { x: event.clientX, y: event.clientY };
}

// GOOD: reuse a module-level object
const _mousePos = { x: 0, y: 0 };
function getMousePosition(event) {
  _mousePos.x = event.clientX;
  _mousePos.y = event.clientY;
  return _mousePos;
}

The trade-off: the caller must use the result immediately or copy it — the next call overwrites the shared object. This is a valid pattern for hot paths where the result is consumed synchronously (e.g., per-frame calculations).
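The aliasing pitfall is easy to demonstrate. A sketch (the DOM event is replaced with plain coordinates so the snippet runs anywhere):

```javascript
const _mousePos = { x: 0, y: 0 };
function getMousePosition(cx, cy) {     // stand-in for the event-based version
  _mousePos.x = cx;
  _mousePos.y = cy;
  return _mousePos;
}

const a = getMousePosition(10, 20);
const snapshot = { ...a };              // copy if the result must outlive the next call
const b = getMousePosition(30, 40);
// a === b: both alias the shared object, so a.x is now 30, not 10
// snapshot.x is still 10 — the copy paid one allocation to stay valid
```

If callers frequently need to keep results, the copy reintroduces the allocation you were avoiding, and this pattern stops being a win.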

Hoist Allocations Out of Loops

// BAD: regex created every iteration
for (const line of lines) {
  const match = line.match(/^(\w+):\s*(.+)$/);
  if (match) process(match[1], match[2]);
}

// GOOD: regex created once
const HEADER_RE = /^(\w+):\s*(.+)$/;
for (const line of lines) {
  const match = line.match(HEADER_RE);
  if (match) process(match[1], match[2]);
}

Use Primitives Over Wrapper Objects

// BAD: unnecessary boxing
const count = new Number(42);     // Number wrapper object on the heap
const name = new String("Alice"); // String wrapper object on the heap

// GOOD: primitives — an SMI needs no heap allocation at all
const count = 42;
const name = "Alice";  // V8 internalizes string literals, so duplicates are shared

When NOT to Pool

Before you go pooling everything in sight — slow down. Object pooling has costs. Don't apply it everywhere:

  • Cold paths: functions called rarely don't benefit. GC pressure is a hot-path problem
  • Small objects in low-frequency code: the pool management overhead exceeds the GC savings
  • Objects with complex initialization: if resetting an object is as expensive as creating one, pooling adds complexity without benefit
  • When V8's escape analysis works: if TurboFan proves an object doesn't escape, it is scalar-replaced and never heap-allocated at all. Pooling (a long-lived shared object) would defeat this optimization

Profile first with --trace-gc or Chrome DevTools. If Scavenge pauses are under 1ms and infrequent, pooling is premature optimization.

Quiz
When is object pooling likely NOT worth the added complexity?

Key Rules

  1. GC pressure is about allocation rate, not memory usage. High churn = frequent Scavenges = main thread pauses.
  2. Use --trace-gc to measure GC frequency and pause times before optimizing. Don't guess.
  3. Array method chains (.map().filter().reduce()) create intermediate arrays. Use loops for hot paths.
  4. Object spread, closures in loops, and string concatenation are common hidden allocation sources.
  5. Object pooling eliminates allocation churn: pre-allocate, acquire, use, release, reuse.
  6. Typed arrays store numeric data inline — zero per-element GC overhead. Ideal for large numeric datasets.
  7. Flyweight pattern shares immutable sub-structures across many objects, reducing both allocation count and memory.
  8. Don't pool in cold paths. Profile first. If Scavenge pauses are negligible, pooling is premature optimization.