HTTP/1.1 — The Baseline
HTTP/1.1 (1997) added persistent connections (keep-alive) over HTTP/1.0’s connection-per-request model. A single TCP connection can serve multiple requests sequentially.
The fundamental constraint: head-of-line blocking. Requests on a single connection are processed in order. If request 1 is slow, requests 2 and 3 wait. Browsers work around this by opening 6 parallel TCP connections per origin.
HTTP/1.1 optimizations that survive to today:
- Keep-Alive (connection reuse)
- Chunked transfer encoding (stream large responses)
- Conditional requests (If-Modified-Since, ETag) for caching
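Server-side, the ETag revalidation flow looks roughly like this. A minimal sketch; handle_get and the header shapes are illustrative, not a real framework API:

```python
import hashlib

def handle_get(body: bytes, request_headers: dict) -> tuple[int, dict, bytes]:
    """Answer a GET, honoring If-None-Match (illustrative sketch)."""
    # Strong ETag derived from the content; any stable hash works.
    etag = '"' + hashlib.sha256(body).hexdigest()[:16] + '"'
    if request_headers.get("If-None-Match") == etag:
        # Client's cached copy is still current: 304, no body re-sent.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, body
```

The first request gets a 200 plus an ETag; replaying it with If-None-Match set to that ETag gets a body-less 304, saving the transfer.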
HTTP/1.1 patterns that are counterproductive with HTTP/2:
- Domain sharding (splitting assets across subdomains for more than 6 connections)
- Concatenating JS/CSS into single files — HTTP/2 multiplexing makes this unnecessary and hurts cache granularity
HTTP/2 — Multiplexing Over TLS
HTTP/2 (2015) introduces streams — logical request/response pairs over a single TCP connection. Multiple streams are multiplexed: you can have 50 concurrent requests on one connection. Each frame is tagged with a stream ID, allowing interleaving.
HTTP/1.1: REQ1 → RESP1 → REQ2 → RESP2 (serial)
HTTP/2:   REQ1 ]
          REQ2 ] ← multiplexed over one TCP connection
          REQ3 ]
          RESP2, RESP1, RESP3 (responses arrive in any order)
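The stream-ID tagging can be shown with a toy demultiplexer. This is not real HTTP/2 framing, just the reassembly idea: frames from different streams interleave freely because each carries its stream ID.

```python
from collections import defaultdict

def demux(frames):
    """Reassemble interleaved frames by stream ID.
    frames: iterable of (stream_id, payload, end_stream) tuples."""
    buffers, complete = defaultdict(bytes), {}
    for stream_id, payload, end_stream in frames:
        buffers[stream_id] += payload
        if end_stream:
            complete[stream_id] = buffers[stream_id]
    return complete

# Frames from two concurrent requests interleaved on one connection:
wire = [(1, b"<html>", False), (3, b'{"ok":', False),
        (1, b"</html>", True), (3, b"true}", True)]
```

Each stream reassembles independently, so neither response waits for the other to finish.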
Key HTTP/2 features:
- Header compression (HPACK): HTTP headers are repetitive. HPACK uses a shared dynamic table to send diffs instead of full headers. Saves 40–80% header overhead.
- Server push: server can proactively send resources before the client asks. In practice, poorly adopted — browsers deprecated support.
- Stream prioritisation: clients hint at priority. Servers may or may not respect it.
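The dynamic-table idea behind HPACK can be sketched as follows. Purely illustrative: real HPACK also has a static table, integer-indexed entries, and Huffman coding.

```python
class ToyHeaderTable:
    """Send full header bytes once, then a short index on repeats."""
    def __init__(self):
        self.seen = {}  # header name/value pairs already sent on this connection

    def encode(self, headers):
        out = []
        for name, value in headers.items():
            if self.seen.get(name) == value:
                out.append(("indexed", name))          # a few bytes on the wire
            else:
                out.append(("literal", name, value))   # full bytes, then remember
                self.seen[name] = value
        return out
```

Headers like user-agent and accept are identical on every request, so after the first request they shrink to small index references, which is where the 40–80% savings come from.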
The catch: HTTP/2 still runs over TCP. A single lost packet blocks all streams (TCP-level head-of-line blocking). On lossy networks (mobile, satellite), HTTP/2 can be slower than HTTP/1.1.
HTTP/3 — QUIC Underneath
HTTP/3 (RFC 9114, 2022) replaces TCP with QUIC — a UDP-based transport protocol originally built by Google, since standardised by the IETF (RFC 9000). QUIC provides the reliability of TCP but with independent stream delivery: a lost packet blocks only the stream it belongs to, not all streams.
HTTP/2: [ TCP (stream 1, 2, 3 blocked by one lost packet) ]
HTTP/3: [ QUIC stream 1 | QUIC stream 2 | QUIC stream 3 ]
(loss in stream 2 doesn't block 1 or 3)
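The difference can be modeled with a small delivery simulation (a hypothetical model, not a real transport): in-order delivery is enforced across the whole connection for TCP, but only within each stream for QUIC.

```python
def readable(packets, lost_seq, per_stream):
    """Return the packets the application can read after one packet is lost.
    packets: list of (seq, stream_id). Ordering is enforced per stream
    (QUIC-like) when per_stream is True, else per connection (TCP-like)."""
    stalled = set()
    out = []
    for seq, stream in packets:
        domain = stream if per_stream else "connection"
        if seq == lost_seq:
            stalled.add(domain)  # everything later in this domain must wait
        elif domain not in stalled:
            out.append((seq, stream))
    return out

packets = [(1, "A"), (2, "B"), (3, "A"), (4, "B")]
```

With packet 2 (stream B) lost, the TCP-like model delivers only packet 1, while the QUIC-like model still delivers packets 1 and 3 on stream A.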
QUIC additional benefits:
- 0-RTT connection resumption: a client reconnecting to a known server can send application data in its very first packet, with zero handshake round-trips. A new HTTP/2 connection needs 1 RTT for TCP plus 1 RTT for TLS 1.3 = 2 RTTs minimum.
- Connection migration: QUIC connection IDs aren’t tied to IP:port tuples. Switching from Wi-Fi to cellular doesn’t break the connection.
- Built-in encryption: TLS 1.3 is mandatory in QUIC. No unencrypted QUIC.
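The handshake savings translate directly into time-to-first-byte. A back-of-the-envelope model (the 100 ms RTT is an assumed figure, and processing time is ignored):

```python
def ttfb_ms(rtt_ms, handshake_rtts):
    # Handshake round-trips before the request can be sent,
    # plus one RTT for the request/response exchange itself.
    return (handshake_rtts + 1) * rtt_ms

RTT = 100
http2_new = ttfb_ms(RTT, 2)   # TCP (1 RTT) + TLS 1.3 (1 RTT) -> 300 ms
quic_new = ttfb_ms(RTT, 1)    # combined transport + TLS handshake -> 200 ms
quic_0rtt = ttfb_ms(RTT, 0)   # resumption: data rides in the first flight -> 100 ms
```

On a high-latency mobile link the 0-RTT case cuts time-to-first-byte to a third of a fresh HTTP/2 connection.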
When to Use What
| Scenario | Recommendation |
|---|---|
| API server behind load balancer | HTTP/2 (TLS) |
| CDN / edge | HTTP/3 if CDN supports it (Cloudflare, Fastly do) |
| Internal microservices | HTTP/2 via gRPC |
| Public API, browser-first | REST over HTTP/1.1 or HTTP/2 |
| Mobile-heavy traffic | HTTP/3 — connection migration + 0-RTT wins |
What to Know for Interviews
- HTTP/2 multiplexing means resource bundling is less important — but caching granularity matters more (one changed file in a bundle busts the whole bundle).
- Content-Encoding: gzip/br (Brotli) compress response bodies. Brotli achieves ~20% better compression than gzip for text.
- Cache-Control: immutable + content-hashed filenames is the right static asset strategy regardless of HTTP version.
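Compression gains on repetitive text are easy to demonstrate with the stdlib (gzip only here; Brotli needs a third-party package):

```python
import gzip

# Markup is highly repetitive, which is exactly what DEFLATE exploits.
html = b"<div class='card'>repetitive markup</div>" * 500
compressed = gzip.compress(html)

ratio = len(compressed) / len(html)  # a tiny fraction for text this repetitive
```

The same reasoning is why compressing already-compressed assets (JPEG, WOFF2) buys nothing: there is no redundancy left to remove.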