
Ring Buffer Pattern for Zero-Allocation Systems

Advanced · 16 min read

The Problem With push/shift

You've probably written something like this a hundred times. The most natural way to maintain a fixed-size window of recent values in JavaScript is:

const history = [];
const MAX_SIZE = 1000;

function record(value) {
  history.push(value);
  if (history.length > MAX_SIZE) {
    history.shift();  // Remove the oldest entry
  }
}

This works. But shift() is hiding a terrible secret: it's O(n). It removes the first element and moves every other element down by one index. With 1,000 entries, that's 999 element copies per call. At 60fps, that's 59,940 unnecessary memory moves per second. Ouch.

Worse, if value is an object, every push allocates and every shift creates garbage for the GC to collect.

// 60fps animation frame timing
let lastTimestamp = 0;

function onFrame(timestamp) {
  record({ timestamp, duration: timestamp - lastTimestamp });
  lastTimestamp = timestamp;
  // Every frame: 1 allocation (new object) + 1 deallocation (shifted object)
  // At 60fps: 60 allocations + 60 deallocations per second
  requestAnimationFrame(onFrame);
}

A ring buffer eliminates both problems: O(1) operations and zero allocations.

Mental Model

Picture a clock face. Instead of a growing line of values where you add to one end and remove from the other, imagine placing values around the clock. A "write" hand points to where the next value goes. A "read" hand points to the oldest value. When the write hand completes a full circle, it overwrites the oldest values — no shifting, no growing, no shrinking. The array size never changes.

Ring Buffer Fundamentals

The concept is dead simple. A ring buffer is a fixed-size array with two pointers:

  • Head (write pointer): where the next value will be written
  • Tail (read pointer): where the oldest value lives

Both pointers wrap around using modulo arithmetic (% capacity). The implementation below stores the head and the current size rather than an explicit tail, deriving the tail position on demand:

class RingBuffer {
  #buffer;
  #head = 0;    // Next write position
  #size = 0;    // Current number of elements
  #capacity;

  constructor(capacity) {
    this.#capacity = capacity;
    this.#buffer = new Array(capacity);
  }

  push(value) {
    this.#buffer[this.#head] = value;
    this.#head = (this.#head + 1) % this.#capacity;
    if (this.#size < this.#capacity) {
      this.#size++;
    }
  }

  get(index) {
    if (index >= this.#size) return undefined;
    const tail = (this.#head - this.#size + this.#capacity) % this.#capacity;
    return this.#buffer[(tail + index) % this.#capacity];
  }

  get length() { return this.#size; }
  get capacity() { return this.#capacity; }
}

Every operation is O(1). The array is allocated once and never resized. The modulo operation makes the pointers wrap around seamlessly.
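To see the wraparound in action, here is a short demo; the class body is repeated in condensed form so the snippet runs standalone:

```javascript
// Condensed copy of the RingBuffer above, so this snippet is self-contained.
class RingBuffer {
  #buffer; #head = 0; #size = 0; #capacity;
  constructor(capacity) {
    this.#capacity = capacity;
    this.#buffer = new Array(capacity);
  }
  push(value) {
    this.#buffer[this.#head] = value;
    this.#head = (this.#head + 1) % this.#capacity;
    if (this.#size < this.#capacity) this.#size++;
  }
  get(index) {
    if (index >= this.#size) return undefined;
    const tail = (this.#head - this.#size + this.#capacity) % this.#capacity;
    return this.#buffer[(tail + index) % this.#capacity];
  }
  get length() { return this.#size; }
}

const ring = new RingBuffer(3);
ring.push('a'); ring.push('b'); ring.push('c');
ring.push('d');            // Wraps: silently overwrites 'a', the oldest value

console.log(ring.length);  // 3, size is capped at capacity
console.log(ring.get(0));  // 'b', the oldest surviving value
console.log(ring.get(2));  // 'd', the newest value
```

Note that the fourth push costs exactly the same as the first three: one write, one increment, no copying.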

Quiz
A ring buffer has capacity 4 and contains [A, B, C, D] with head at index 0. What happens when you push(E)?

Typed Array Ring Buffer

And this is where it gets really good. For numeric data, combining the ring buffer with TypedArrays eliminates all GC pressure:

class Float64RingBuffer {
  #buffer;
  #head = 0;
  #size = 0;
  #capacity;

  constructor(capacity) {
    this.#capacity = capacity;
    this.#buffer = new Float64Array(capacity);
  }

  push(value) {
    this.#buffer[this.#head] = value;
    this.#head = (this.#head + 1) % this.#capacity;
    if (this.#size < this.#capacity) this.#size++;
  }

  get(index) {
    if (index >= this.#size) return undefined;
    const tail = (this.#head - this.#size + this.#capacity) % this.#capacity;
    return this.#buffer[(tail + index) % this.#capacity];
  }

  latest() {
    if (this.#size === 0) return undefined;
    return this.#buffer[(this.#head - 1 + this.#capacity) % this.#capacity];
  }

  average() {
    if (this.#size === 0) return 0;
    let sum = 0;
    const tail = (this.#head - this.#size + this.#capacity) % this.#capacity;
    for (let i = 0; i < this.#size; i++) {
      sum += this.#buffer[(tail + i) % this.#capacity];
    }
    return sum / this.#size;
  }

  get length() { return this.#size; }
}

Look at what you get with this approach:

  • Single allocation: the Float64Array is allocated once in the constructor
  • No GC pressure: push() writes a primitive float into the buffer. No objects created
  • Cache-friendly: typed arrays are contiguous memory. Sequential access patterns get hardware prefetch benefits
  • Predictable memory: capacity * 8 bytes (Float64), forever. No growth, no fragmentation
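Two of these claims are easy to verify in a console: a typed array's byte size is fixed at construction, and every write stores a raw number, coercing anything else:

```javascript
// A Float64Array's memory is fixed at construction: capacity * 8 bytes.
const samples = new Float64Array(1000);
console.log(samples.byteLength);  // 8000, and it never changes

// Writes store primitive doubles in place: no objects, no heap allocation.
samples[0] = 16.7;
console.log(samples[0]);          // 16.7

// Non-numeric values are coerced to a number, never stored by reference:
samples[1] = { x: 1 };
console.log(samples[1]);          // NaN
```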
Quiz
Why is a Float64Array ring buffer 'zero allocation' during operation, while an Array ring buffer is not?

Multi-Channel Ring Buffer

In the real world, you rarely track just one value; real applications record several parallel streams of data. A struct-of-arrays approach keeps each channel in its own typed array:

class TelemetryBuffer {
  #timestamps;
  #fps;
  #memory;
  #head = 0;
  #size = 0;
  #capacity;

  constructor(capacity) {
    this.#capacity = capacity;
    this.#timestamps = new Float64Array(capacity);
    this.#fps = new Float32Array(capacity);
    this.#memory = new Float32Array(capacity);
  }

  record(timestamp, fps, memoryMB) {
    this.#timestamps[this.#head] = timestamp;
    this.#fps[this.#head] = fps;
    this.#memory[this.#head] = memoryMB;
    this.#head = (this.#head + 1) % this.#capacity;
    if (this.#size < this.#capacity) this.#size++;
  }

  averageFps() {
    if (this.#size === 0) return 0;
    let sum = 0;
    const tail = (this.#head - this.#size + this.#capacity) % this.#capacity;
    for (let i = 0; i < this.#size; i++) {
      sum += this.#fps[(tail + i) % this.#capacity];
    }
    return sum / this.#size;
  }
}

// Usage in animation loop
const telemetry = new TelemetryBuffer(300); // Last 5 seconds at 60fps
let lastTimestamp = 0;

function onFrame(timestamp) {
  const fps = 1000 / (timestamp - lastTimestamp);
  // performance.memory is non-standard (Chrome only); fall back to 0 elsewhere
  const mem = (performance.memory?.usedJSHeapSize ?? 0) / 1048576;
  telemetry.record(timestamp, fps, mem);  // Zero allocations
  lastTimestamp = timestamp;
  requestAnimationFrame(onFrame);
}

Use Cases

Frame Time History (Performance Monitoring)

const frameTimes = new Float64RingBuffer(120); // Last 2 seconds at 60fps
let lastTimestamp = 0;

function measureFrame(timestamp) {
  frameTimes.push(timestamp - lastTimestamp);
  lastTimestamp = timestamp;

  // Detect jank: any frame > 16.7ms (60fps budget)
  if (frameTimes.latest() > 16.7) {
    console.warn(`Frame drop: ${frameTimes.latest().toFixed(1)}ms`);
  }

  // Rolling average for smooth FPS display
  const avgFrameTime = frameTimes.average();
  fpsDisplay.textContent = `${(1000 / avgFrameTime).toFixed(0)} fps`;
}

Event History Buffer (Gesture Velocity)

const eventBuffer = new RingBuffer(50); // Last 50 events

document.addEventListener('mousemove', (e) => {
  eventBuffer.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

function getRecentVelocity() {
  if (eventBuffer.length < 2) return 0;
  const latest = eventBuffer.get(eventBuffer.length - 1);
  const prev = eventBuffer.get(eventBuffer.length - 2);
  const dt = latest.t - prev.t;
  if (dt === 0) return 0;
  const dx = latest.x - prev.x;
  const dy = latest.y - prev.y;
  return Math.sqrt(dx * dx + dy * dy) / dt;
}

Undo/Redo with Fixed Memory

const undoBuffer = new RingBuffer(100); // Last 100 states

function recordState(state) {
  undoBuffer.push(structuredClone(state));
}

function undo() {
  if (undoBuffer.length < 2) return null;
  // The current state is at the end; the previous state is second-to-last
  return undoBuffer.get(undoBuffer.length - 2);
}
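The undo() above only peeks at the previous state; real undo/redo also needs a cursor that moves back and forth, with new records discarding the redo tail. Below is a minimal sketch of that idea. The UndoHistory class, its field names, and its API are hypothetical, not part of the RingBuffer above:

```javascript
// One possible cursor-based design. Logical indices grow without bound and
// are mapped into the ring with % capacity; #cursor points one past the
// state the user currently sees. All names here are illustrative.
class UndoHistory {
  #states;
  #capacity;
  #top = 0;     // Logical index one past the newest recorded state
  #bottom = 0;  // Logical index of the oldest state still in the ring
  #cursor = 0;  // Logical index one past the current state

  constructor(capacity) {
    this.#capacity = capacity;
    this.#states = new Array(capacity);
  }

  record(state) {
    this.#top = this.#cursor;  // Recording after an undo discards the redo tail
    this.#states[this.#top % this.#capacity] = structuredClone(state);
    this.#top++;
    this.#cursor = this.#top;
    if (this.#top - this.#bottom > this.#capacity) {
      this.#bottom = this.#top - this.#capacity;  // Oldest state was overwritten
    }
  }

  undo() {
    // Keep at least one state: you can't undo past the oldest snapshot
    if (this.#cursor - this.#bottom < 2) return null;
    this.#cursor--;
    return this.#states[(this.#cursor - 1) % this.#capacity];
  }

  redo() {
    if (this.#cursor >= this.#top) return null;
    this.#cursor++;
    return this.#states[(this.#cursor - 1) % this.#capacity];
  }
}

const history = new UndoHistory(100);
history.record({ text: 'a' });
history.record({ text: 'ab' });
history.record({ text: 'abc' });

console.log(history.undo().text);  // 'ab'
console.log(history.redo().text);  // 'abc'
```

Memory stays bounded at capacity states, exactly like the simpler buffer, but you get both directions of time travel.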
Quiz
Why is a ring buffer better than an array with push/shift for a real-time telemetry system recording at 60Hz?

Ring Buffer vs push/shift: The Numbers Don't Lie

// Benchmark: 1,000,000 push operations with capacity 1000

// push/shift array
const arr = [];
for (let i = 0; i < 1_000_000; i++) {
  arr.push(i);
  if (arr.length > 1000) arr.shift();
}
// ~450ms, ~1000 GC events (if storing objects)

// Ring buffer
const ring = new Float64RingBuffer(1000);
for (let i = 0; i < 1_000_000; i++) {
  ring.push(i);
}
// ~12ms, 0 GC events

That's roughly 37x faster with zero GC impact. And the gap only widens with larger capacities, because shift() is O(n) while ring buffer push() is always O(1).

Bitwise Modulo for Power-of-2 Sizes

If the capacity is a power of 2, you can replace the modulo operation with a bitwise AND, which is faster on most CPUs:

// Instead of: this.#head = (this.#head + 1) % this.#capacity;
// Use:        this.#head = (this.#head + 1) & (this.#capacity - 1);

This works because n % 2^k === n & (2^k - 1) for non-negative integers. A bitwise AND executes in a single cycle on modern CPUs, while integer division (which modulo compiles to in the general case) takes on the order of 20 cycles, though JITs can often optimize modulo by a compile-time constant anyway. At 60Hz this is micro-optimization territory, but in audio processing at 44.1kHz it's measurable.
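A quick sanity check of the identity, plus a counterexample showing why a non-power-of-2 capacity breaks it:

```javascript
// For power-of-2 capacities, & (capacity - 1) matches % capacity exactly.
const capacity = 256;
const mask = capacity - 1;  // 255 = 0b11111111

for (let n = 0; n < 10_000; n++) {
  if ((n % capacity) !== (n & mask)) throw new Error(`mismatch at ${n}`);
}
console.log('AND mask matches modulo for all tested values');

// With a non-power-of-2 capacity the mask has holes, so the trick fails:
console.log(5 % 3);  // 2
console.log(5 & 2);  // 0, because 0b101 & 0b010 keeps no bits
```

The mask only works when capacity - 1 is a solid run of 1-bits, which is exactly the power-of-2 case the quiz below asks about.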

When Not to Use Ring Buffers

Before you refactor everything into ring buffers, a few caveats:

  • Variable-size data: if elements have different sizes, a typed-array ring buffer wastes space. Consider a regular ring buffer or a different structure
  • Random deletion: ring buffers only discard the oldest element. If you need to remove arbitrary elements, use a different data structure
  • Unbounded history: if you need to keep ALL data (not just the last N), a ring buffer isn't appropriate — use a growable array or stream to disk
  • Infrequent access: if data is recorded once per second, the performance difference between ring buffer and push/shift is negligible
Quiz
A ring buffer with capacity 256 uses bitwise AND instead of modulo: head = (head + 1) & 255. Why must the capacity be a power of 2 for this to work?

Key Rules

  1. Ring buffers use fixed-size arrays with modulo-wrapped head/tail pointers. O(1) push, O(1) read, O(1) memory.
  2. Array push/shift is O(n) for shift. Ring buffers replace this with a pointer increment — orders of magnitude faster at scale.
  3. Typed-array ring buffers produce zero GC pressure. Every write is a raw memory store, no heap allocation.
  4. Use struct-of-arrays (separate typed arrays per channel) for multi-field telemetry data.
  5. Power-of-2 capacities enable bitwise AND instead of modulo — single-cycle wrapping.
  6. Ring buffers discard the oldest data when full. If you need unbounded history, use a different structure.
  7. Ideal for: frame timing, telemetry, event buffering, audio processing, undo history with fixed depth.