
Web Workers and Multithreading

Advanced · 18 min read

The Main Thread Is a Bottleneck

Here's something that burns people in production: JavaScript runs on a single thread. Your event handlers, your DOM updates, your layout calculations, your garbage collection — all fighting for the same thread. Run a 200ms JSON parse and your scroll handler freezes. Run image processing and your buttons stop responding.

The browser actually has multiple threads (networking, compositing, rasterization), but your JavaScript gets exactly one. Until you ask for more.

Web Workers give you real OS-level threads. Not fake concurrency like setTimeout chunking. Actual parallel execution on separate CPU cores. The catch? Workers cannot touch the DOM. They live in isolation. And that constraint is exactly what makes them safe.

Mental Model

Think of Web Workers like a back office. The main thread is the front desk — it handles customers (user interactions) and updates the display (DOM). Heavy paperwork (data processing, parsing, crypto) gets sent to the back office via a message slot (postMessage). The back office does the work and slides the result back. The front desk never stops serving customers.

Dedicated Workers: The Basics

A dedicated worker is a thread owned by a single page. You create it, send it messages, and it sends messages back.

const worker = new Worker('/worker.js');

worker.postMessage({ type: 'parse', data: rawCSV });

worker.onmessage = (event) => {
  const parsed = event.data;
  renderTable(parsed);
};

worker.onerror = (event) => {
  console.error('Worker error:', event.message);
};

Inside worker.js:

self.onmessage = (event) => {
  const { type, data } = event.data;

  if (type === 'parse') {
    const result = parseCSV(data);
    self.postMessage(result);
  }
};

That's the entire API. postMessage sends data in, onmessage receives data out. The data is structured-cloned — deep copied — so neither side can accidentally mutate the other's state.
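The same algorithm is exposed directly as the global structuredClone() function, which makes the copy semantics easy to see without spinning up a worker:

```javascript
// structuredClone uses the same algorithm postMessage applies to message data
const original = { rows: [1, 2, 3], meta: new Map([['source', 'csv']]) };
const copy = structuredClone(original);

copy.rows.push(4);
console.log(original.rows.length); // 3: the clone is fully independent

// Not everything survives: functions and DOM nodes throw a DataCloneError
// structuredClone(() => {}); // DataCloneError
```

Maps, Sets, Dates, typed arrays, and nested objects all clone cleanly; functions, DOM nodes, and anything holding live resources do not.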


Transferable Objects: Zero-Copy Messaging

Structured cloning works great for small data. But what if you're sending a 50MB ArrayBuffer? Copying 50MB blocks the main thread during the postMessage call itself.

Transferable objects solve this. Instead of copying, you transfer ownership. The ArrayBuffer moves to the worker, and the main thread's reference becomes zero-length — unusable.

const buffer = new ArrayBuffer(50 * 1024 * 1024);

worker.postMessage({ pixels: buffer }, [buffer]);

console.log(buffer.byteLength); // 0 — transferred, not copied

The second argument to postMessage is an array of transferable objects. After transfer, the data exists only in the worker. Zero copy overhead.

Transferable types include:

  • ArrayBuffer (and typed arrays that wrap them)
  • MessagePort
  • OffscreenCanvas
  • ImageBitmap
  • ReadableStream / WritableStream / TransformStream

For example, an OffscreenCanvas moves to a worker the same way:

const offscreen = canvas.transferControlToOffscreen();
worker.postMessage({ canvas: offscreen }, [offscreen]);
Common Trap

After transferring an ArrayBuffer, any TypedArray view (like Uint8Array) that pointed to it also becomes unusable. The buffer is detached (older specs called this "neutered"), and all views are invalidated. If you need to keep a copy on the main thread, clone it before transferring:

const copy = buffer.slice(0);
worker.postMessage(buffer, [buffer]);
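You can watch detachment happen without a worker at all, since the global structuredClone() accepts the same transfer list as postMessage:

```javascript
const buffer = new ArrayBuffer(16);
const view = new Uint8Array(buffer);

// Same semantics as worker.postMessage(buffer, [buffer]):
const moved = structuredClone(buffer, { transfer: [buffer] });

console.log(buffer.byteLength); // 0 (detached)
console.log(view.byteLength);   // 0 (the view is invalidated too)
console.log(moved.byteLength);  // 16 (the data lives on in the moved buffer)
```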


SharedWorker: One Worker, Multiple Tabs

A SharedWorker is shared across all pages from the same origin. Open your app in 5 tabs? They all talk to the same worker instance.

const shared = new SharedWorker('/shared-worker.js');

shared.port.start();
shared.port.postMessage({ type: 'subscribe', channel: 'updates' });

shared.port.onmessage = (event) => {
  updateUI(event.data);
};

Inside the shared worker:

const ports = new Set();

self.onconnect = (event) => {
  const port = event.ports[0];
  ports.add(port);

  port.onmessage = (event) => {
    for (const p of ports) {
      p.postMessage({ from: 'shared', data: event.data });
    }
  };

  port.start();
};

The key difference: SharedWorker uses MessagePort instead of direct postMessage. Each connecting page gets its own port through the connect event.

Use cases for SharedWorker:

  • WebSocket connection sharing — one socket, all tabs receive updates
  • Cross-tab state synchronization — shopping cart, auth state
  • Shared cache — one worker fetches data, all tabs read from it

SharedWorker browser support

SharedWorker is supported in Chrome, Edge, Firefox, and Opera. Safari dropped support for years (citing security and complexity) before restoring it in Safari 16. If you need cross-tab communication on older Safari versions, use BroadcastChannel instead — it covers most use cases and works everywhere.
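BroadcastChannel is a one-liner by comparison: every instance bound to the same channel name, in every same-origin tab, receives each message (the 'cart' channel name here is just an example):

```javascript
// Tab A: listen for cart changes made in any other tab
const updates = new BroadcastChannel('cart');
updates.onmessage = (event) => {
  console.log('cart changed elsewhere:', event.data);
};

// Tab B: announce a change (a channel never receives its own messages)
const sender = new BroadcastChannel('cart');
sender.postMessage({ itemCount: 3 });
```

Unlike SharedWorker there is no shared computation or state, only fan-out messaging — which is why it covers most, rather than all, of the use cases above.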

Worker Communication Patterns

Request/Response with ID Tracking

For complex apps, you need to match responses to requests. Use an ID system:

let nextId = 0;
const pending = new Map();

function workerRequest(worker, payload) {
  return new Promise((resolve, reject) => {
    const id = nextId++;
    pending.set(id, { resolve, reject });
    worker.postMessage({ id, ...payload });
  });
}

worker.onmessage = (event) => {
  const { id, result, error } = event.data;
  const handler = pending.get(id);
  if (!handler) return;
  pending.delete(id);
  error ? handler.reject(new Error(error)) : handler.resolve(result);
};

const result = await workerRequest(worker, { type: 'compress', data: blob });
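The worker side just echoes the id back with every reply. A sketch with the routing pulled out into a plain function (the compress handler is a placeholder, not real compression):

```javascript
// Handlers for each request type; implementations are placeholders.
const handlers = {
  compress: async (data) => ({ bytes: data.length }),
};

// Route one request and build the reply envelope, echoing the id.
async function handleRequest(message) {
  const { id, type, data } = message;
  try {
    return { id, result: await handlers[type](data) };
  } catch (err) {
    return { id, error: err.message };
  }
}

// Inside the worker, wire it to the message port:
// self.onmessage = async (e) => self.postMessage(await handleRequest(e.data));
```

Note that an unknown type falls into the catch (calling an undefined handler throws), so the main thread's pending promise always settles one way or the other.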

Streaming Results with MessageChannel

For long-running tasks that produce incremental results:

const channel = new MessageChannel();

channel.port1.onmessage = (event) => {
  if (event.data.done) {
    channel.port1.close();
    return;
  }
  updateProgress(event.data.progress);
};

worker.postMessage(
  { type: 'process', port: channel.port2 },
  [channel.port2]
);
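On the worker side, the received port becomes the progress channel. A sketch with the loop factored into a plain function, assuming the message also carries the items to process (processItem stands in for the real per-chunk work):

```javascript
// Stream incremental progress through a MessagePort, then signal completion.
// The main thread (above) closes its end when it sees { done: true }.
async function processWithProgress(port, items, processItem) {
  for (let i = 0; i < items.length; i++) {
    await processItem(items[i]);
    port.postMessage({ progress: (i + 1) / items.length });
  }
  port.postMessage({ done: true });
}

// Inside the worker:
// self.onmessage = (e) => processWithProgress(e.data.port, e.data.items, doWork);
```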

Comlink: Workers Without the Boilerplate

The raw postMessage API gets tedious fast. Comlink (by Surma, ex-Google) wraps workers so they feel like regular async function calls.

import * as Comlink from 'comlink';

const api = Comlink.wrap(new Worker('/heavy-math.js'));

const result = await api.fibonacci(40);
const stats = await api.computeStats(dataset);

Inside the worker:

import * as Comlink from 'comlink';

const api = {
  fibonacci(n) {
    if (n <= 1) return n;
    return this.fibonacci(n - 1) + this.fibonacci(n - 2);
  },
  computeStats(data) {
    return { mean: mean(data), stddev: stddev(data) };
  }
};

Comlink.expose(api);

Comlink uses Proxy and MessageChannel under the hood. Every method call becomes a postMessage round-trip, but the DX is dramatically better. You write normal async code; Comlink handles serialization and matching.


When to Use Workers (and When Not To)

Workers shine for CPU-bound work that would block the main thread:

  • CSV/JSON parsing (large files): parsing a 10MB+ file blocks the main thread for 100ms+
  • Image processing (filters, resize): pixel manipulation is pure computation
  • Cryptographic operations: hashing and encryption are CPU-intensive
  • Search indexing: building search indices over content
  • Data transformation: sorting, filtering, aggregating large datasets
  • WASM execution: run compiled C/Rust code off the main thread
  • Syntax highlighting: tokenizing large code blocks

Workers are not helpful for:

  • DOM manipulation (they cannot access it)
  • Simple fetch requests (the network is already async)
  • Small computations (the overhead of postMessage exceeds the savings)
  • Anything that takes less than ~16ms (you won't notice the difference)

The real cost of postMessage

Each postMessage call has overhead: the structured clone algorithm runs synchronously on the sending thread (blocking it during serialization), then the data is queued for the receiving thread. For small messages (under 1KB), this overhead is negligible — microseconds. For large objects (1MB+), serialization alone can take 5-10ms. This is why Transferable objects matter: they bypass serialization entirely. The rule of thumb — if the data you are sending is larger than the computation you are offloading, the worker is making things worse, not better.
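You can get a feel for the sender-side cost locally, since structuredClone() runs the same serialization that postMessage does (sizes and timings vary by machine):

```javascript
const big = new Uint8Array(20 * 1024 * 1024); // 20MB

let t0 = performance.now();
const copied = structuredClone(big); // full copy, like a plain postMessage
const cloneMs = performance.now() - t0;

t0 = performance.now();
const moved = structuredClone(big, { transfer: [big.buffer] }); // ownership moves
const transferMs = performance.now() - t0;

console.log({ cloneMs, transferMs }); // the transfer is typically far faster
console.log(big.byteLength); // 0 (detached on the sender, just as with postMessage)
```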

Thread Pool Pattern

Creating and destroying workers is expensive (~40ms each). For frequent short tasks, maintain a pool:

class WorkerPool {
  #workers;
  #queue = [];
  #available;

  constructor(url, size = navigator.hardwareConcurrency || 4) {
    this.#workers = Array.from({ length: size }, () => new Worker(url));
    this.#available = new Set(this.#workers);
  }

  async run(data) {
    const worker = await this.#getWorker();

    return new Promise((resolve, reject) => {
      worker.onmessage = (e) => {
        this.#release(worker);
        resolve(e.data);
      };
      worker.onerror = (e) => {
        this.#release(worker);
        reject(new Error(e.message));
      };
      worker.postMessage(data);
    });
  }

  #getWorker() {
    if (this.#available.size > 0) {
      const worker = this.#available.values().next().value;
      this.#available.delete(worker);
      return Promise.resolve(worker);
    }
    return new Promise((resolve) => {
      this.#queue.push(resolve);
    });
  }

  #release(worker) {
    if (this.#queue.length > 0) {
      this.#queue.shift()(worker);
    } else {
      this.#available.add(worker);
    }
  }

  terminate() {
    this.#workers.forEach((w) => w.terminate());
  }
}

const pool = new WorkerPool('/task-worker.js', 4);

const results = await Promise.all(
  chunks.map((chunk) => pool.run({ type: 'process', chunk }))
);

navigator.hardwareConcurrency tells you how many logical cores are available. For CPU-bound work, creating more workers than cores gives no benefit — they would just timeshare the same cores.

Worker Limitations

Workers live in a sandboxed environment. They cannot:

  • Access document, window, or any DOM API
  • Read or modify the page's DOM tree
  • Access localStorage or sessionStorage
  • Use alert(), confirm(), or prompt()

Workers can use:

  • fetch, XMLHttpRequest
  • IndexedDB
  • WebSocket
  • setTimeout / setInterval
  • crypto.subtle
  • OffscreenCanvas
  • importScripts() (classic workers) or import (module workers)

Module Workers

Modern workers support ES modules:

const worker = new Worker('/worker.js', { type: 'module' });

Inside a module worker, you use standard import statements instead of importScripts(). This enables tree-shaking, better tooling, and the same module graph your main app uses.
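With bundlers like Vite and webpack 5, the idiomatic way to reference a module worker is the new URL(..., import.meta.url) pattern, which lets the bundler find and process the worker file at build time (the file paths here are illustrative):

```javascript
// main.js: the bundler rewrites this URL at build time
const worker = new Worker(new URL('./parse-worker.js', import.meta.url), {
  type: 'module',
});

// parse-worker.js: a module worker uses standard imports
// import { parseCSV } from './csv-utils.js';
// self.onmessage = (e) => self.postMessage(parseCSV(e.data));
```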

What developers do vs. what they should do:

  • Creating a new Worker for every small task. Worker creation costs ~40ms and allocates a new thread; for tasks under 50ms, the overhead makes it slower than running on the main thread. Instead, use a worker pool for frequent short-lived tasks, or keep one long-lived worker for related operations.

  • Sending large objects via postMessage without transferring. Structured cloning runs synchronously on the sender's thread; a 50MB ArrayBuffer copy can block the main thread for 20ms+, defeating the purpose of using a worker. Instead, use Transferable objects (ArrayBuffer, OffscreenCanvas) for large data to avoid costly deep copies.

  • Trying to share DOM references with workers. Workers run in a separate global scope (DedicatedWorkerGlobalScope) with no access to document or window; this isolation is what makes them thread-safe, since no two threads can mutate the DOM simultaneously. Instead, send the data the worker needs as plain objects. Workers cannot access the DOM by design.
Key Rules
  1. Workers run on real OS threads — use them for CPU-heavy work that would block the main thread (parsing, crypto, image processing)
  2. postMessage deep-copies data via structured clone — use Transferable objects for large buffers to avoid copying overhead
  3. SharedWorker shares one instance across all same-origin tabs — great for WebSocket sharing and cross-tab state
  4. Module workers (type: 'module') support ES imports and enable tree-shaking
  5. Use navigator.hardwareConcurrency to size your worker pool — more workers than cores gives no parallel benefit