
TCP and TLS Handshakes

Intermediate · 16 min read

The Cost of Saying Hello

After DNS gives your browser an IP address, you might think data starts flowing immediately. It doesn't. Before a single byte of your HTML can travel across the wire, your browser and the server need to have two separate conversations — one to establish a reliable connection (TCP), and another to encrypt it (TLS).

These conversations are called handshakes, and they cost you real time. A TCP handshake is 1 round trip. A TLS 1.2 handshake adds 2 more. That's 3 round trips before any data moves. At 50ms per round trip, you just burned 150ms staring at a blank screen.

TLS 1.3 changed the game. But to appreciate why, you need to understand what came before.

The Mental Model


Imagine ordering coffee at a new cafe. TCP is the handshake: you walk up and say "Hi, I'd like to order" (SYN). The barista says "Sure, I'm ready, what do you want?" (SYN-ACK). You say "Great, here's my order" (ACK). TLS is the whisper agreement: before you say your credit card number, you agree on a secret code so nobody overhearing can steal it. TLS 1.2 takes two back-and-forths to agree. TLS 1.3 does it in one, because you both already know the popular code systems.

TCP: The Three-Way Handshake

TCP (Transmission Control Protocol) provides reliable, ordered delivery of data. Unlike UDP, which is fire-and-forget, TCP guarantees that every byte arrives in order and retransmits anything that gets lost.

But reliability has a cost: the three-way handshake.

The three-way handshake takes exactly 1 RTT (round-trip time): the client sends a SYN, waits for the SYN-ACK, then sends its ACK. Because that final ACK can carry the first data payload (the HTTP request), the net overhead is 1 RTT.
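You can observe this cost directly: Python's `socket.create_connection()` does not return until the three-way handshake completes. A self-contained sketch against a loopback listener (so it runs offline; over a real network the elapsed time approximates one RTT):

```python
import socket
import time

# Listen on a loopback port; the kernel completes handshakes into the
# backlog even before accept() is called, so no extra thread is needed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

# create_connection() returns only after SYN / SYN-ACK / ACK completes,
# so the elapsed time is roughly one round trip (near zero on loopback).
start = time.perf_counter()
client = socket.create_connection((host, port))
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"TCP handshake completed in {elapsed_ms:.3f} ms")
client.close()
server.close()
```

On loopback this prints a fraction of a millisecond; pointed at a cross-continent server, the same measurement would show your real RTT.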

Why Not Just Start Sending Data?

The three-way handshake exists to:

  1. Verify both parties are alive — the SYN-ACK proves the server received the client's SYN; the final ACK proves the client received the server's SYN-ACK
  2. Synchronize sequence numbers — both sides agree on numbering so packets can be reordered if they arrive out of sequence
  3. Prevent ghost connections — without the SYN-ACK round trip, stale duplicate packets from old connections could open phantom connections (SYN flood attacks abuse this step by starting handshakes they never finish)

Quiz
What is the minimum number of round trips required to complete a TCP three-way handshake?

TLS: Encrypting the Connection

Once TCP is established, HTTPS sites add another handshake layer: TLS (Transport Layer Security). This negotiates encryption so nobody between your browser and the server can read or tamper with the data.

TLS 1.2: The Two-Round-Trip Handshake

TLS 1.2 was the standard for over a decade. Its handshake works like this:

Execution Trace

  1. ClientHello — the client sends its supported cipher suites, a random number, and the TLS version. (Start of first round trip)
  2. ServerHello — the server picks a cipher suite and sends its certificate and its own random number. (Server's half of first round trip)
  3. Key Exchange — the client verifies the certificate, generates a pre-master secret, and sends it encrypted to the server. (Start of second round trip)
  4. Finished — both sides derive session keys; the server sends its 'Finished' confirmation. (End of second round trip)
  5. Encrypted data — application data can now flow encrypted. (2 RTTs total before the first byte)

That's 2 additional RTTs on top of TCP's 1 RTT. For a new HTTPS connection with TLS 1.2:

TCP handshake:   1 RTT
TLS 1.2:         2 RTTs
─────────────────────────
Total:           3 RTTs before data

At 100ms RTT (typical for cross-continent), that's 300ms of handshaking before a single byte of HTML arrives.

TLS 1.3: The One-Round-Trip Revolution

TLS 1.3 (finalized in 2018, RFC 8446) made a fundamental change: the client sends its key share in the very first message, alongside the ClientHello. The server can compute the shared secret immediately and start sending encrypted data back in the first response.

Execution Trace

  1. ClientHello + Key Share — the client sends its supported cipher suites AND its key share upfront, guessing which key exchange the server will use.
  2. ServerHello + Key Share + Encrypted Data — the server picks a cipher, sends its own key share, and can start sending encrypted data immediately. (1 RTT total)
  3. Client Finished — the client computes the shared secret and sends its confirmation; data is already flowing.

1 RTT for TLS 1.3 vs 2 RTTs for TLS 1.2. For a new HTTPS connection:

TCP handshake:   1 RTT
TLS 1.3:         1 RTT
─────────────────────────
Total:           2 RTTs before data

That saves a full 100ms at a 100ms RTT — a massive win for every new HTTPS connection.
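The round-trip arithmetic in this section is easy to capture in a small helper — a sketch using this article's figures (the function names are mine):

```python
def handshake_rtts(tls_version: str, resumed: bool = False) -> int:
    """Round trips before application data flows on a new connection:
    TCP costs 1 RTT; TLS 1.2 adds 2; TLS 1.3 adds 1; a resumed TLS 1.3
    session using 0-RTT adds none."""
    tcp = 1
    if tls_version == "1.2":
        tls = 2
    elif tls_version == "1.3":
        tls = 0 if resumed else 1
    else:
        raise ValueError(f"unsupported TLS version: {tls_version}")
    return tcp + tls

rtt_ms = 100  # typical cross-continent round trip
for version, resumed in [("1.2", False), ("1.3", False), ("1.3", True)]:
    total = handshake_rtts(version, resumed)
    label = f"TLS {version}" + (" (0-RTT resumption)" if resumed else "")
    print(f"{label}: {total} RTTs = {total * rtt_ms} ms before data")
```

Run it and the 300ms vs 200ms vs 100ms difference between TLS 1.2, TLS 1.3, and 0-RTT resumption falls straight out of the arithmetic.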

How TLS 1.3 achieves 1-RTT

The trick is that TLS 1.3 removed support for legacy key exchange methods and only allows a small set of modern algorithms (like X25519 and P-256). Since there are so few options, the client can optimistically generate a key share for its preferred algorithm and send it in the first message. If the server supports it (and it almost always does), the handshake completes in 1 RTT. TLS 1.2 couldn't do this because it supported dozens of cipher suites.
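You can check what your own client stack supports and negotiates with Python's standard ssl module — a sketch (the live-handshake part is commented out so the snippet runs offline, and example.org is just a placeholder host):

```python
import socket
import ssl

# Whether this Python/OpenSSL build supports TLS 1.3 at all:
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

# To see what a live handshake actually negotiated, uncomment and run
# against any HTTPS server:
# ctx = ssl.create_default_context()
# with socket.create_connection(("example.org", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
#         print(tls.version())   # e.g. 'TLSv1.3'
#         print(tls.cipher())    # the negotiated cipher suite
```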

0-RTT Resumption: The Speed Cheat Code

TLS 1.3 has one more trick: if you've connected to a server before, you can resume with zero additional round trips. The previous session generates a PSK (Pre-Shared Key) that both sides remember. On reconnection, the client sends the PSK and encrypted application data in the very first packet.

Returning visitor with 0-RTT:

TCP handshake:   1 RTT
TLS 1.3 0-RTT:   0 RTTs  (data sent with ClientHello)
─────────────────────────
Total:           1 RTT before data

Common Trap

0-RTT data is vulnerable to replay attacks. An attacker who captures the initial 0-RTT packet can resend it, causing the server to process the request twice. This is why 0-RTT should only be used for idempotent requests (GET, HEAD) — never for POST requests that transfer money or change state. Servers must implement replay protection or limit 0-RTT to safe operations. Most CDNs handle this correctly, but it's worth verifying.
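A minimal sketch of that server-side rule (the names are illustrative, not from any particular framework): accept only idempotent methods as early data, and make everything else wait for the full handshake. RFC 8470 defines the 425 Too Early status code for exactly this purpose.

```python
# Methods that are safe to process from replayable 0-RTT early data.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def accept_early_data(method: str) -> bool:
    """Return True if a request arriving as 0-RTT early data may be
    processed. Anything state-changing must wait for the completed
    handshake (e.g. by answering 425 Too Early to force a retry)."""
    return method.upper() in SAFE_METHODS

print(accept_early_data("GET"))    # True
print(accept_early_data("POST"))   # False
```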

Quiz
How many total RTTs does a brand-new HTTPS connection take with TLS 1.3?

TCP Slow Start: Why New Connections Are Slow

Even after the handshakes complete, TCP doesn't let data flow at full speed immediately. It uses an algorithm called slow start to probe for available bandwidth.

Here's how it works: TCP starts with a small congestion window (typically 10 TCP segments, about 14KB). For each acknowledged segment, the window grows. It roughly doubles every round trip:

RTT 1:   14 KB (10 segments)
RTT 2:   28 KB
RTT 3:   56 KB
RTT 4:  112 KB
RTT 5:  224 KB
...continues until packet loss occurs

This means your first 14KB of HTML arrives after 1 RTT of data transfer, but delivering 100KB requires several round trips as the window grows.

This is why the first 14KB of your HTML matters so much. If your critical CSS and initial HTML fit within 14KB (compressed), they arrive in the first data round trip. If they exceed 14KB, you're waiting for slow start to ramp up.
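Under the idealized doubling model above, you can compute how many data round trips a given payload needs — a sketch (real stacks deviate once loss or congestion avoidance kicks in):

```python
def rtts_to_deliver(payload_kb: float, initial_window_kb: float = 14.0) -> int:
    """Data round trips needed under idealized slow start: the congestion
    window starts at ~14 KB (10 segments) and doubles every RTT."""
    delivered, window, rtts = 0.0, initial_window_kb, 0
    while delivered < payload_kb:
        delivered += window
        window *= 2
        rtts += 1
    return rtts

print(rtts_to_deliver(14))    # 1 RTT: fits the initial window
print(rtts_to_deliver(100))   # 4 RTTs: 14 + 28 + 56 + 112 covers 100 KB
```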

14KB is compressed size, not raw size

The 14KB slow start threshold refers to the data on the wire after compression. A 50KB HTML file that gzip-compresses to 12KB fits in the first congestion window. Always measure compressed sizes when optimizing for the critical rendering path.
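A quick way to check whether your page fits the window is to measure its gzipped size — a sketch using synthetic, highly repetitive markup (real pages compress far less dramatically, so measure your own output):

```python
import gzip

# Synthetic HTML: repetitive markup compresses extremely well.
html = b"<!doctype html><html>" + b"<p>hello world</p>" * 2000 + b"</html>"
compressed = gzip.compress(html, compresslevel=9)

print(f"raw: {len(html)} bytes, gzipped: {len(compressed)} bytes")
print("fits first congestion window:", len(compressed) <= 14 * 1024)
```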

Quiz
Why does TCP start with a small congestion window instead of sending data at full speed?

Congestion Control: What Happens When the Pipe Is Full

Slow start keeps doubling the congestion window until one of two things happens:

  1. Packet loss is detected — the window drops dramatically (TCP Reno halves it, TCP CUBIC uses a cubic function to recover)
  2. The slow start threshold is reached — growth switches from exponential to linear (congestion avoidance mode)

Modern operating systems use TCP CUBIC (the Linux default) or TCP BBR (Google's algorithm). BBR is notable because it doesn't wait for packet loss to detect congestion — it measures RTT and bandwidth to find the optimal sending rate. Google reported throughput improvements of 2-25x on its internal B4 backbone and measurable gains on YouTube after deploying BBR.

TCP BBR vs CUBIC

Traditional loss-based algorithms (Reno, CUBIC) treat packet loss as the signal to slow down. This works, but it means they fill buffers until packets drop — causing "bufferbloat" (high latency). BBR takes a different approach: it periodically probes for bandwidth and RTT, building a model of the network path. It aims to send at the bottleneck bandwidth without filling intermediate buffers. BBR v2, which Google has since deployed for its own traffic, improves fairness with CUBIC flows and handles packet loss more gracefully than BBR v1.
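On Linux you can inspect (and, per socket, even change) the congestion control algorithm via the TCP_CONGESTION socket option — a sketch that degrades gracefully on other platforms:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
algo = None
# TCP_CONGESTION is a Linux-only socket option; guard for portability.
if hasattr(socket, "TCP_CONGESTION"):
    raw = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
    algo = raw.split(b"\x00", 1)[0].decode()   # e.g. 'cubic' or 'bbr'
s.close()
print("congestion control algorithm:", algo)
```

On most Linux machines this prints `cubic`; a system-wide switch to BBR is typically done via the `net.ipv4.tcp_congestion_control` sysctl.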

Connection Reuse: Avoiding Handshake Costs

Since handshakes are expensive, reusing connections is critical:

  • HTTP/1.1 Keep-Alive — connections stay open for multiple requests (default behavior)
  • HTTP/2 multiplexing — a single connection carries all requests to an origin
  • HTTP/3 — uses QUIC, which has its own connection migration (survives network changes)
  • preconnect — completes TCP + TLS before resources are needed
<!-- Complete DNS + TCP + TLS for critical origins early -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>

The single most impactful thing you can do for connection performance is reduce the number of unique origins your page depends on. Each new origin means a new TCP + TLS handshake.
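Counting unique origins is a one-liner — a sketch with a hypothetical resource list:

```python
from urllib.parse import urlsplit

# Hypothetical resources for a page; each distinct scheme + host + port
# (an "origin") costs its own TCP + TLS handshake on first use.
resources = [
    "https://example.com/styles/app.css",
    "https://example.com/js/app.js",
    "https://fonts.gstatic.com/s/roboto/v30/roboto.woff2",
    "https://cdn.example.net/img/hero.jpg",
]
origins = {f"{u.scheme}://{u.netloc}" for u in map(urlsplit, resources)}
print(f"{len(origins)} unique origins → {len(origins)} new handshakes")
```

Here two of the four resources share an origin, so the page opens three connections, not four — consolidating assets onto fewer origins trims that number further.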

Common Mistakes

  • Thinking TLS adds significant overhead to every request. A single HTTP/2 connection to an origin handles all requests: the TLS handshake happens once when the connection opens, and every subsequent request flows through the same encrypted connection with zero additional TLS overhead. TLS handshake cost is per-connection, not per-request — with keep-alive and HTTP/2, you pay it once.
  • Ignoring the 14KB critical CSS threshold. TCP slow start means only ~14KB flows in the first data round trip; exceeding it means waiting for the congestion window to grow, adding RTTs to your critical rendering path. Keep critical CSS and initial HTML under 14KB compressed so they fit in the first TCP congestion window.
  • Serving a site over HTTP instead of HTTPS 'for performance'. TLS 1.3 reduced the handshake to 1 RTT (0-RTT for return visits), modern HTTP features (HTTP/2, HTTP/3, Service Workers, Brotli) all require HTTPS, and Google ranks HTTPS sites higher. The 'HTTP is faster' argument died with TLS 1.2 — the security, SEO, and feature benefits far outweigh 1 RTT.

Key Takeaways

Key Rules
  1. TCP handshake costs 1 RTT. TLS 1.3 adds 1 more. A new HTTPS connection takes 2 RTTs minimum before data flows.
  2. TLS 1.3 halved the TLS handshake from 2 RTTs to 1 by sending key shares in the first message. 0-RTT resumption can eliminate TLS latency for return visits.
  3. TCP slow start limits the first data round trip to ~14KB. Critical CSS and HTML should fit within this window.
  4. Connection reuse (keep-alive, HTTP/2 multiplexing) avoids repeated handshake costs. Fewer unique origins means fewer handshakes.
  5. 0-RTT is fast but vulnerable to replay attacks. Only use it for idempotent (safe) requests like GET.