
Server-Sent Events and Long Polling

Advanced · 16 min read

The Most Underrated Browser API

Everyone reaches for WebSocket the moment they need "real-time." But here's the thing most people miss: the majority of real-time features are unidirectional. Notifications, live feeds, stock tickers, build logs, AI chat streaming — the server pushes, the client listens. You don't need a bidirectional channel for that.

Server-Sent Events (SSE) give you exactly this: a persistent HTTP connection where the server streams events to the client. No upgrade handshake, no custom protocol, no library needed. Just EventSource and you're done.

How SSE Works Under the Hood

SSE is just HTTP with a specific content type. The server sets Content-Type: text/event-stream and keeps the connection open, writing events in a simple text format:

event: message
id: 42
data: {"user": "alice", "text": "hello"}

event: typing
id: 43
data: {"user": "bob"}

That's it. Each event is separated by a blank line. The format has four fields:

  • data: — The payload. Multiple data: lines are concatenated with newlines.
  • event: — Custom event type. Defaults to "message" if omitted.
  • id: — Event ID for resumption. The browser sends this back as Last-Event-ID on reconnect.
  • retry: — Reconnection delay in milliseconds. The browser respects this.
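Put together, a hypothetical stream using all four fields might look like this (the values are illustrative; the browser joins the two data: lines of the second block with a newline):

```
retry: 10000

event: message
id: 44
data: first line of the payload
data: second line of the payload
```

The retry-only block changes the reconnection delay without dispatching an event; the second block dispatches a "message" event whose data is the two lines joined.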
On the client, EventSource parses this format for you — listeners fire per event type:

const eventSource = new EventSource('/api/events');

eventSource.addEventListener('message', (event) => {
  const data = JSON.parse(event.data);
});

eventSource.addEventListener('typing', (event) => {
  const data = JSON.parse(event.data);
});

eventSource.addEventListener('error', (event) => {
  if (eventSource.readyState === EventSource.CONNECTING) {
    // Browser is auto-reconnecting
  }
});
Mental Model

Think of SSE like a radio broadcast. The server is the station, the client tunes in. The station broadcasts continuously. If the client loses signal (connection drops), it automatically retunes and the station can resume from where the client left off (using the last event ID). Compare this to WebSocket, which is like a phone call — both sides talk, and if the line drops, you have to redial manually.

Automatic Reconnection and Resume

This is SSE's killer feature and the reason it should be your default for server-to-client streaming. When the connection drops, the browser:

  1. Waits for the retry interval (default ~3 seconds, configurable by server)
  2. Reconnects to the same URL
  3. Sends Last-Event-ID header with the last received event ID
  4. The server can resume from that point

You get all of this for free. No library. No code. The browser just does it.
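On the wire, the reconnect request is ordinary HTTP. Continuing the earlier example where event 43 was the last one received (the host name is illustrative):

```
GET /api/events HTTP/1.1
Host: example.com
Accept: text/event-stream
Last-Event-ID: 43
```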

// Server (Node.js / Express)
app.get('/api/events', (req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  const lastId = parseInt(req.headers['last-event-id'] ?? '0', 10);

  // Replay missed events
  const missed = getEventsSince(lastId);
  for (const event of missed) {
    res.write(`id: ${event.id}\ndata: ${JSON.stringify(event.data)}\n\n`);
  }

  // Stream new events
  const unsubscribe = eventBus.subscribe((event) => {
    res.write(`id: ${event.id}\ndata: ${JSON.stringify(event.data)}\n\n`);
  });

  req.on('close', unsubscribe);
});
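The handler above leans on getEventsSince and eventBus without defining them. A minimal in-memory backing for both — a bounded buffer of recent events plus live subscribers — could look like this (the class and buffer limit are assumptions, not part of the original):

```typescript
// Hypothetical store behind getEventsSince/eventBus: keeps the last
// N events for replay and notifies live subscribers of new ones.
type StoredEvent = { id: number; data: unknown };
type Listener = (event: StoredEvent) => void;

class EventStore {
  private events: StoredEvent[] = [];
  private listeners = new Set<Listener>();
  private nextId = 1;

  constructor(private maxBuffered = 1000) {}

  publish(data: unknown): StoredEvent {
    const event = { id: this.nextId++, data };
    this.events.push(event);
    // Bounded buffer: oldest events fall off the front.
    if (this.events.length > this.maxBuffered) this.events.shift();
    for (const listener of this.listeners) listener(event);
    return event;
  }

  // Everything after lastId that is still buffered.
  getEventsSince(lastId: number): StoredEvent[] {
    return this.events.filter((e) => e.id > lastId);
  }

  subscribe(listener: Listener): () => void {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  }
}
```

The bound matters: a client that reconnects after the buffer has rotated past its Last-Event-ID silently misses events, so size it to cover your longest expected outage.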
Quiz
What happens when an SSE connection drops and the client had received events with IDs?

The Fetch-Based SSE Approach

EventSource has limitations: no custom headers (so no Authorization: Bearer ...), no POST method, no request body. For authenticated streams, you need the fetch-based approach using ReadableStream:

async function streamEvents(
  url: string,
  token: string,
  onEvent: (event: { type: string; data: unknown }) => void
) {
  const response = await fetch(url, {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'text/event-stream',
    },
  });

  if (!response.ok || !response.body) {
    throw new Error(`Stream failed: ${response.status}`);
  }

  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    buffer += value;
    const events = buffer.split('\n\n');
    buffer = events.pop() ?? '';

    for (const raw of events) {
      if (!raw.trim()) continue;
      const lines = raw.split('\n');
      let type = 'message';
      let data = '';

      for (const line of lines) {
        if (line.startsWith('event: ')) type = line.slice(7);
        else if (line.startsWith('data: ')) data += (data ? '\n' : '') + line.slice(6);
      }

      if (data) onEvent({ type, data: JSON.parse(data) }); // skip data-less events
    }
  }
}
Common Trap

The fetch-based approach does NOT get automatic reconnection. You lose the browser's built-in retry logic, Last-Event-ID header management, and the readyState tracking. If you go this route, you must implement reconnection yourself. Only use this when you genuinely need custom headers.
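What that reconnection might look like, sketched under stated assumptions: `connect` stands in for a streamEvents-style call that receives the last seen event ID and reports new IDs as they arrive, and the exponential backoff with a cap is a design choice of this sketch (the browser's built-in behavior uses a flat, server-settable retry instead):

```typescript
type Connect = (
  lastEventId: string,
  onId: (id: string) => void
) => Promise<void>;

// Exponential backoff, capped.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 30000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function connectWithRetry(
  connect: Connect,
  maxAttempts = 5,
  baseMs = 1000
): Promise<string> {
  let lastEventId = '';
  let attempt = 0;
  while (attempt < maxAttempts) {
    try {
      await connect(lastEventId, (id) => {
        lastEventId = id; // mirrors the browser's Last-Event-ID tracking
        attempt = 0;      // receiving data proves the connection recovered
      });
      return lastEventId; // server ended the stream cleanly
    } catch {
      attempt++;
      await new Promise((r) => setTimeout(r, backoffDelay(attempt, baseMs)));
    }
  }
  throw new Error(`gave up after ${maxAttempts} failed attempts`);
}
```

Resetting the attempt counter once events flow is the detail people forget: without it, a connection that drops every few hours eventually hits the retry ceiling.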

Long Polling: The Battle-Tested Fallback

Before SSE, before WebSocket, there was long polling. The technique is elegantly simple:

  1. Client sends a request
  2. Server holds the response open until it has data (or a timeout, typically 30-60 seconds)
  3. Server sends the response
  4. Client immediately sends a new request
  5. Repeat
async function longPoll(
  url: string,
  cursor: string,
  onMessage: (data: unknown, newCursor: string) => void,
  signal: AbortSignal
) {
  while (!signal.aborted) {
    try {
      const res = await fetch(`${url}?cursor=${encodeURIComponent(cursor)}`, { signal });
      if (!res.ok) {
        await new Promise((r) => setTimeout(r, 5000));
        continue;
      }
      const { data, nextCursor } = await res.json();
      cursor = nextCursor;
      onMessage(data, nextCursor);
    } catch (err) {
      if (signal.aborted) break;
      await new Promise((r) => setTimeout(r, 5000));
    }
  }
}
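The loop above is the client half. Step 2 — holding the response open until data or timeout — is the server's job; here is a sketch of that core, with `poll` and `wake` standing in for whatever event store you use (both names are hypothetical):

```typescript
// Return immediately if data is already available; otherwise hold
// until new data arrives or the timeout fires.
async function holdUntilData<T>(
  poll: () => T[],        // events available right now
  wake: Promise<void>,    // resolves when new data is published
  timeoutMs: number
): Promise<T[]> {
  const immediate = poll();
  if (immediate.length > 0) return immediate; // no need to hold

  const timeout = new Promise<void>((resolve) => setTimeout(resolve, timeoutMs));
  await Promise.race([wake, timeout]);
  return poll(); // may be empty on timeout — the client just polls again
}
```

An HTTP handler would call this, write the result as JSON with the next cursor, and end the response; an empty array on timeout is normal and simply triggers the client's next poll.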

Long polling works everywhere. Every HTTP client, every proxy, every CDN understands it. It's the ultimate fallback.

But it has costs: each "event" requires a full HTTP request/response cycle (headers and all), there's inherent latency between when data is available and when the next poll arrives, and each open connection consumes a server thread (or at least a connection slot).

Quiz
What is the primary disadvantage of long polling compared to SSE?

SSE vs WebSocket: When Each Wins

This is not a "which is better" question. It's a "which fits your use case" question.

| Concern | SSE | WebSocket |
| --- | --- | --- |
| Direction | Server → Client only | Bidirectional |
| Protocol | HTTP (just a long-lived response) | Custom protocol over TCP |
| Reconnection | Automatic with event ID resume | Manual (build it yourself) |
| Authentication | Cookies work; custom headers need the fetch approach | Token in URL or first message |
| Proxy/CDN compatibility | Excellent (it's just HTTP) | Can be problematic (some proxies drop it) |
| Max connections per domain | 6 (HTTP/1.1); effectively unlimited (HTTP/2) | No browser limit |
| Binary data | No (text only) | Yes (ArrayBuffer, Blob) |
| Overhead per message | ~20-50 bytes (text format) | 2-14 bytes (binary frame) |
| Browser support | Universal (except IE, which is dead) | Universal |

Use SSE when: notifications, live feeds, build logs, AI streaming responses, dashboards, stock tickers — anything where only the server pushes data.

Use WebSocket when: chat, collaborative editing, multiplayer games, any scenario where the client sends frequent messages to the server.

Use long polling when: SSE isn't available (very rare today), you need maximum compatibility with ancient infrastructure, or your events are infrequent (less than once every few seconds).

Quiz
For streaming AI chat responses (like ChatGPT), which transport is the best fit?

Edge Cases That Bite in Production

Proxy Buffering

Some reverse proxies (Nginx, Cloudflare) buffer responses before forwarding them. This kills SSE because the client won't receive events until the buffer fills. The fixes:

# Nginx
location /api/events {
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection '';
    proxy_http_version 1.1;
    chunked_transfer_encoding off;
}

On the server, also send a comment line periodically to keep the connection active:

const keepAlive = setInterval(() => {
  res.write(': keepalive\n\n');
}, 15000);

// Clean up, or the interval writes to a dead socket forever
req.on('close', () => clearInterval(keepAlive));

The : keepalive line (starting with a colon) is a comment in the SSE spec — the browser ignores it, but it keeps intermediaries from closing an "idle" connection.

The HTTP/1.1 Connection Limit

Browsers limit HTTP/1.1 connections to 6 per domain. Each SSE connection counts. If you open 6 SSE streams to the same domain, no other HTTP requests can go through. HTTP/2 multiplexes over a single TCP connection, so this limit doesn't apply — but make sure your infrastructure supports HTTP/2.
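If you must stay on HTTP/1.1, the usual workaround is to multiplex all event types through one SSE connection and fan them out client-side. A minimal dispatcher sketch (names are illustrative; you would feed dispatch from a single EventSource whose payloads carry a channel field):

```typescript
type Handler = (data: unknown) => void;

// Fans one stream of { channel, data } payloads out to per-channel
// subscribers, so the page holds exactly one SSE connection.
class SseMultiplexer {
  private handlers = new Map<string, Set<Handler>>();

  dispatch(channel: string, data: unknown): void {
    for (const handler of this.handlers.get(channel) ?? []) handler(data);
  }

  subscribe(channel: string, handler: Handler): () => void {
    let set = this.handlers.get(channel);
    if (!set) this.handlers.set(channel, (set = new Set()));
    set.add(handler);
    return () => set!.delete(handler);
  }
}
```

Wiring it up would be one onmessage listener that parses the event and calls `mux.dispatch(payload.channel, payload.data)`.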

| What developers do | Why it's a problem | What they should do |
| --- | --- | --- |
| Using WebSocket for server-to-client-only features | SSE gives you automatic reconnection, event ID resume, and HTTP compatibility for free. WebSocket requires you to build all of this manually, and you're paying for bidirectional capability you don't use. | Use SSE with EventSource for unidirectional server pushes |
| Not setting Cache-Control headers on SSE endpoints | Without these headers, CDNs and proxies may cache or close your SSE connection, breaking the stream. Some will buffer the entire response before forwarding. | Always set Cache-Control: no-cache and Connection: keep-alive |
| Ignoring the 6-connection-per-domain limit in HTTP/1.1 | Opening multiple SSE connections on HTTP/1.1 quickly exhausts the browser's connection pool, blocking all other requests to your domain. | Ensure your infrastructure uses HTTP/2, or multiplex events through a single SSE connection |
Interview Question

Design: Live Dashboard with 50 Metrics

You're building a monitoring dashboard that shows 50 real-time metrics, each updating at different frequencies (some every second, some every minute). The dashboard has 10K concurrent viewers. Design the server-to-client push architecture. Consider: should each metric be a separate SSE stream or should you multiplex? How do you handle viewers on slow connections? What happens when a viewer opens the dashboard for the first time — do they get current values immediately?