
Transferable Objects & Zero-Copy

Advanced · 18 min read

The 302ms vs 0.4ms Benchmark

Look at this benchmark that changed how I think about worker communication:

const buffer = new ArrayBuffer(32 * 1024 * 1024); // 32MB

// Approach 1: Structured clone (copy)
console.time('clone');
worker.postMessage(buffer);
console.timeEnd('clone'); // ~302ms — 32MB copied byte by byte

// Approach 2: Transfer (zero-copy)
console.time('transfer');
worker.postMessage(buffer, [buffer]);
console.timeEnd('transfer'); // ~0.4ms — ownership moved, zero bytes copied

console.log(buffer.byteLength); // 0 — the buffer is neutered

That's a 750x speedup. Not 2x, not 10x — seven hundred and fifty times faster. The transferred buffer moves from the main thread's memory space to the worker's without copying a single byte. The original becomes "neutered" (detached) — its byteLength drops to 0, reads through its views return undefined, and most view methods throw.

This is the single most important optimization in worker communication, and most developers either don't know about it or misunderstand when it applies.

Mental Model

Structured clone is like photocopying a document — you keep the original and the other person gets a copy. Transferring is like handing over the original document itself. No photocopying happens, so it's instant regardless of document size. But you don't have the document anymore — it's gone from your desk. The document didn't move in physical space (the memory address is the same); what moved is the permission to access it. The browser updates its internal bookkeeping to say "this memory now belongs to the worker thread, not the main thread."

How Transfer Works Under the Hood

When you transfer an ArrayBuffer, the browser doesn't move memory. It updates the backing store ownership:

  1. The main thread's ArrayBuffer is detached — its internal pointer is cleared
  2. The worker's new ArrayBuffer is created pointing to the same backing store in memory
  3. No bytes are copied, no allocation happens — just pointer reassignment

This is possible because ArrayBuffer represents a contiguous block of memory with no internal references to JavaScript objects. The engine can safely hand the raw memory to another thread because there's nothing to "interpret" — it's just bytes.

const buffer = new ArrayBuffer(1024);
const view = new Uint8Array(buffer);
view[0] = 42;

console.log(buffer.byteLength);  // 1024
console.log(view[0]);            // 42

worker.postMessage(buffer, [buffer]);

console.log(buffer.byteLength);  // 0 — detached
console.log(view[0]);            // undefined — reads through a view of a detached buffer return undefined; methods like view.set() throw a TypeError

Quiz: What happens to a TypedArray view (like Uint8Array) after its underlying ArrayBuffer is transferred to a worker?

What's Transferable

Not everything can be transferred. Here's the complete list of transferable types as of 2025:

  Type               Use Case
  ArrayBuffer        Raw binary data, typed arrays, image data
  MessagePort        Bidirectional communication channels
  ImageBitmap        Pre-decoded image data for canvas rendering
  OffscreenCanvas    Worker-side canvas rendering
  ReadableStream     Streaming data to/from workers
  WritableStream     Streaming data to/from workers
  TransformStream    Stream transformation pipelines
  VideoFrame         Video processing pipelines
  AudioData          Audio processing pipelines

The critical thing to notice: plain objects, arrays, strings, Maps, and Sets are NOT transferable. They always go through structured clone. Only types that represent ownership of an underlying resource (memory block, port, canvas, stream) can be transferred.

// This COPIES the object — transfer list has no effect on it
const data = { name: 'Alice', scores: [95, 87, 92] };
worker.postMessage(data); // structured clone, always a copy

// This TRANSFERS the buffer — zero copy
const pixels = new ArrayBuffer(1920 * 1080 * 4);
worker.postMessage(pixels, [pixels]); // transfer

// You can mix: clone the metadata, transfer the buffer
const message = {
  width: 1920,
  height: 1080,
  pixels: pixels, // this specific field will be transferred
};
worker.postMessage(message, [pixels]);
// message.pixels.byteLength === 0 after this
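You can verify the restriction without a worker: structuredClone accepts the same transfer option, and engines reject non-transferables in the list rather than silently copying them. A quick check (runnable in Node 17+ or any modern browser):

```javascript
// Attempting to transfer a plain object fails loudly — engines raise a
// DOMException (typically named "DataCloneError") for a transfer-list entry
// that is not a Transferable type.
const user = { name: 'Alice', scores: [95, 87, 92] };

try {
  structuredClone({ data: user }, { transfer: [user] });
} catch (err) {
  console.log('rejected:', err.name); // typically "DataCloneError"
}

// The same call with an ArrayBuffer succeeds and detaches the source
const buf = new ArrayBuffer(1024);
const out = structuredClone({ data: buf }, { transfer: [buf] });
console.log(buf.byteLength, out.data.byteLength); // 0 1024
```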
Quiz: You want to send an array of 100,000 user objects to a worker as fast as possible. What is the most efficient approach?

The Transfer List API

The transfer list is the second argument to postMessage. It tells the browser which objects in the message should be transferred rather than cloned:

// Syntax: worker.postMessage(message, transferList)
// transferList is an array of Transferable objects present in message

const buffer1 = new ArrayBuffer(1024);
const buffer2 = new ArrayBuffer(2048);
const port = new MessageChannel().port1;

worker.postMessage(
  {
    config: { width: 100 },  // cloned
    data: buffer1,            // transferred (in transfer list)
    extra: buffer2,           // transferred (in transfer list)
    channel: port,            // transferred (in transfer list)
  },
  [buffer1, buffer2, port]   // transfer list
);

With structuredClone, the transfer list works the same way:

const original = new ArrayBuffer(1024);
const clone = structuredClone(
  { data: original, label: 'test' },
  { transfer: [original] }
);
// original.byteLength === 0
// clone.data.byteLength === 1024
Common Trap

A common mistake is putting an object in the transfer list that isn't in the message. The transfer list must contain objects that are reachable from the first argument. If you transfer a buffer that isn't referenced by the message, the transfer succeeds (the buffer is detached) but the worker never receives it — the data is effectively destroyed.

const buffer = new ArrayBuffer(1024);
const other = new ArrayBuffer(512);
worker.postMessage({ data: buffer }, [buffer, other]);
// buffer is received by the worker
// other is detached but NOT in the message — data lost!
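
One way to catch this bug in development is a guard that walks the message and confirms every transfer-list entry is actually reachable. This is a hypothetical helper, not a standard API — a sketch that handles plain objects and arrays, and won't see an ArrayBuffer referenced only through a TypedArray view:

```javascript
// Hypothetical dev-time guard: throw before posting if a transfer-list entry
// is not reachable from the message, instead of silently destroying its data.
function assertTransferablesReachable(message, transferList) {
  const pending = new Set(transferList);
  const seen = new Set();
  (function walk(value) {
    if (value === null || typeof value !== 'object' || seen.has(value)) return;
    seen.add(value);
    pending.delete(value); // found this object in the message graph
    for (const key of Object.keys(value)) walk(value[key]);
  })(message);
  if (pending.size > 0) {
    throw new Error(`${pending.size} transferable(s) not reachable from message`);
  }
}

const buffer = new ArrayBuffer(1024);
const other = new ArrayBuffer(512);

assertTransferablesReachable({ data: buffer }, [buffer]);           // ok
// assertTransferablesReachable({ data: buffer }, [buffer, other]); // throws
```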

Performance: When Transfer Wins vs When It Doesn't

Transfer is not always faster. There's a threshold below which the overhead of transfer (detaching the source, creating the target object in the worker) exceeds the structured clone cost:

// For very small buffers, clone can be faster
const tiny = new ArrayBuffer(64);  // 64 bytes
// Clone: ~0.01ms (memcpy 64 bytes is trivial)
// Transfer: ~0.05ms (object detach/create overhead)

// Around ~1KB, clone and transfer are roughly break-even
const medium = new ArrayBuffer(1024);
// Clone: ~0.02ms
// Transfer: ~0.05ms (comparable; exact numbers vary by engine)

// For large buffers, transfer dominates
const large = new ArrayBuffer(10 * 1024 * 1024); // 10MB
// Clone: ~100ms
// Transfer: ~0.3ms

The crossover point varies by device and browser, but a safe heuristic: transfer ArrayBuffers larger than 1KB, clone anything smaller.
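
You can measure the crossover on your own hardware without spinning up a worker, since structuredClone exercises the same serialization machinery as postMessage. A rough benchmark sketch (Node 17+ or a modern browser); the absolute numbers are illustrative and vary by engine:

```javascript
// Compare copy (clone) vs move (transfer) cost across buffer sizes.
function measure(size, useTransfer, iterations = 20) {
  let total = 0;
  for (let i = 0; i < iterations; i++) {
    const buf = new ArrayBuffer(size); // fresh buffer each run (transfer detaches it)
    const start = performance.now();
    structuredClone({ data: buf }, useTransfer ? { transfer: [buf] } : undefined);
    total += performance.now() - start;
  }
  return total / iterations; // mean ms per operation
}

for (const size of [64, 1024, 1024 * 1024, 32 * 1024 * 1024]) {
  const clone = measure(size, false).toFixed(3);
  const transfer = measure(size, true).toFixed(3);
  console.log(`${size} bytes: clone ${clone}ms, transfer ${transfer}ms`);
}
```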

Quiz: For a 64-byte ArrayBuffer, which is faster: structured clone or transfer?

The Encode-Transfer-Decode Pattern

For structured data (arrays of objects), the highest-performance approach is to encode it into a flat ArrayBuffer, transfer it, and decode on the other side:

// Encoding: main thread
function encodeUsers(users) {
  const FIELDS = 3; // id, age, score
  const packed = new Float64Array(users.length * FIELDS);
  for (let i = 0; i < users.length; i++) {
    const offset = i * FIELDS;
    packed[offset] = users[i].id;
    packed[offset + 1] = users[i].age;
    packed[offset + 2] = users[i].score;
  }
  return packed; // a Float64Array; its .buffer is what gets transferred
}

const encoded = encodeUsers(users);
worker.postMessage(encoded.buffer, [encoded.buffer]);

// Decoding: worker thread
self.onmessage = (event) => {
  const FIELDS = 3;
  const view = new Float64Array(event.data);
  const count = view.length / FIELDS;

  for (let i = 0; i < count; i++) {
    const offset = i * FIELDS;
    const id = view[offset];
    const age = view[offset + 1];
    const score = view[offset + 2];
    // process each user...
  }
};

This is how high-performance applications (games, data visualization, audio/video processing) handle worker communication. The encoding/decoding overhead is O(n) but with extremely fast per-element operations (direct memory writes), and the transfer itself is O(1).

For string fields, you'd use TextEncoder/TextDecoder and pack strings into a separate ArrayBuffer with a length-prefixed format. At that point, consider whether the complexity is worth it — sometimes structured clone for 10K objects at 20ms is perfectly acceptable.
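
For completeness, here is one hypothetical length-prefixed layout using TextEncoder/TextDecoder: each string is stored as a 4-byte length followed by its UTF-8 bytes, so the whole batch travels as one transferable buffer. Names like packStrings are illustrative, not a standard API:

```javascript
// Pack an array of strings into a single ArrayBuffer:
// [u32 length][utf8 bytes][u32 length][utf8 bytes]...
function packStrings(strings) {
  const encoder = new TextEncoder();
  const chunks = strings.map((s) => encoder.encode(s));
  const total = chunks.reduce((sum, c) => sum + 4 + c.byteLength, 0);
  const buffer = new ArrayBuffer(total);
  const view = new DataView(buffer);
  const bytes = new Uint8Array(buffer);
  let offset = 0;
  for (const chunk of chunks) {
    view.setUint32(offset, chunk.byteLength); // length prefix
    bytes.set(chunk, offset + 4);             // UTF-8 payload
    offset += 4 + chunk.byteLength;
  }
  return buffer; // transfer with postMessage(buffer, [buffer])
}

function unpackStrings(buffer) {
  const decoder = new TextDecoder();
  const view = new DataView(buffer);
  const bytes = new Uint8Array(buffer);
  const strings = [];
  let offset = 0;
  while (offset < buffer.byteLength) {
    const length = view.getUint32(offset);
    strings.push(decoder.decode(bytes.subarray(offset + 4, offset + 4 + length)));
    offset += 4 + length;
  }
  return strings;
}
```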

ImageBitmap Transfer

ImageBitmap is the transferable alternative to ImageData. It represents a pre-decoded bitmap that can be transferred to a worker for processing or drawn directly to a canvas:

// Main thread: decode image and transfer to worker
const response = await fetch('/photo.jpg');
const blob = await response.blob();
const bitmap = await createImageBitmap(blob);

worker.postMessage({ image: bitmap }, [bitmap]);
// bitmap is now detached — cannot be drawn on main thread

// Worker: process the image
self.onmessage = async (event) => {
  const bitmap = event.data.image;
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  const ctx = canvas.getContext('2d');
  ctx.drawImage(bitmap, 0, 0);

  const imageData = ctx.getImageData(0, 0, bitmap.width, bitmap.height);
  // Apply filters, transformations, etc.
  applyGrayscale(imageData);
  ctx.putImageData(imageData, 0, 0);

  const result = canvas.transferToImageBitmap();
  self.postMessage({ processed: result }, [result]);
};
Common Mistakes

  1. Trying to transfer plain objects or arrays.
     Why it fails: plain objects have internal references, prototype chains, and property descriptors that cannot be safely moved between threads. Only types representing raw resources (memory blocks, ports, canvases, streams) can transfer ownership.
     Instead: only ArrayBuffer, MessagePort, ImageBitmap, OffscreenCanvas, ReadableStream, WritableStream, TransformStream, VideoFrame, and AudioData are transferable.

  2. Accessing a transferred object after postMessage.
     Why it fails: after transfer, the object is detached. Its ArrayBuffer has byteLength 0, indexed reads through a TypedArray view return undefined, and most view methods throw. Nulling the reference prevents accidental use and makes intent clear.
     Instead: treat transferred objects as consumed — null out your reference immediately after transfer.

  3. Transferring small buffers under 1KB for performance.
     Why it fails: transfer has fixed overhead for detachment and metadata. For buffers under ~1KB, memcpy is faster. Only transfer when the data is large enough (typically over 1KB) that copy time dominates.
     Instead: let structured clone handle small data — the transfer overhead exceeds the copy cost for tiny buffers.

  4. Putting objects in the transfer list that are not in the message.
     Why it fails: if a transferable in the transfer list is not referenced by the message, it gets detached (destroyed) but never arrives at the receiver. The data is silently lost.
     Instead: only transfer objects that are reachable from the message argument.

Challenge: Zero-Copy Image Processing

Try to solve it before peeking at the answer.

// Build an image processing pipeline that:
// 1. Loads an image from a URL
// 2. Sends it to a worker for processing (grayscale filter)
// 3. Receives the result back on the main thread
// 4. Draws it to a visible canvas
//
// Requirements:
// - Zero unnecessary copies (use transfers everywhere possible)
// - Handle errors gracefully
// - Clean up resources after processing

// Hint: createImageBitmap, OffscreenCanvas, transferToImageBitmap

Key Rules
  1. Transfer moves ownership of a resource without copying data. The source object becomes detached (neutered) and unusable after transfer.
  2. Only specific types are transferable: ArrayBuffer, MessagePort, ImageBitmap, OffscreenCanvas, ReadableStream, WritableStream, TransformStream, VideoFrame, AudioData.
  3. Transfer is O(1) regardless of data size. Structured clone is O(n). For a 32MB ArrayBuffer, transfer is ~750x faster than clone.
  4. Transfer has fixed overhead — for buffers under ~1KB, structured clone (memcpy) is actually faster. Measure before optimizing small transfers.
  5. Every object in the transfer list must be reachable from the message. Unreferenced transferables are detached but never delivered — data is silently destroyed.