

Multiplex many database connections over one HTTP connection

Pattern

When a database protocol is tunneled over HTTP/2 or HTTP/3, make one HTTP connection carry many logical database sessions — each session is just an HTTP stream. The backend sees a small, stable fleet of HTTP connections; the logical session count is decoupled from the underlying transport connection count. Per-invocation / per-request clients (serverless, edge, browsers) get the benefit of a connection pool without having to hold persistent TCP sockets themselves.

Canonical statement

PlanetScale, 2022-08-18: "connection multiplexing with HTTP/2, protocol compression with gzip, brotli, or snappy, all of which lead to a better experience for serverless environments." (Source: sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript.)

PlanetScale, 2023-01-04: "The HTTP API multiplexes many traditional MySQL connections over a single HTTP connection, reducing the need to open many connections and maintain a connection pool." (Source: sources/2026-04-21-planetscale-faster-mysql-with-http3.)

Mechanism

  1. Database protocol wrapped in HTTP requests — each SQL statement or transaction maps to one HTTP request/response round trip on a stream.
  2. HTTP/2 or HTTP/3 stream multiplexing carries N streams on one connection. The client library sees parallel statements completing independently; the transport layer interleaves their frames.
  3. Edge-side fan-in — an edge node terminating HTTP connections from many clients holds a small pool of backhaul HTTP connections to origin, multiplexing everything over those.
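Step 1 can be sketched as a tiny client where each statement is one request/response and a logical session is just a token carried per request. The payload shape, session token, and `Transport` abstraction below are illustrative assumptions, not PlanetScale's actual wire protocol:

```typescript
// Sketch: each SQL statement maps to one HTTP request/response on a
// stream. The JSON payload format here is an assumption for illustration.
interface QueryResult {
  session: string;
  rows: unknown[];
  echoedQuery: string;
}

type Transport = (body: string) => Promise<string>;

class HttpSession {
  // A logical database session is just a token carried in each request;
  // the HTTP connection underneath is shared with other sessions.
  constructor(private send: Transport, private sessionId: string) {}

  async execute(sql: string): Promise<QueryResult> {
    const body = JSON.stringify({ session: this.sessionId, query: sql });
    return JSON.parse(await this.send(body)) as QueryResult;
  }
}

// A fake transport standing in for fetch() over one shared HTTP/2
// connection; a real client would POST to the database's HTTP API.
const sharedConnection: Transport = async (body) => {
  const { session, query } = JSON.parse(body);
  return JSON.stringify({ session, rows: [], echoedQuery: query });
};

// Two logical sessions, one underlying connection; their statements
// complete independently, as parallel streams would.
const a = new HttpSession(sharedConnection, "session-a");
const b = new HttpSession(sharedConnection, "session-b");
const [ra, rb] = await Promise.all([
  a.execute("SELECT 1"),
  b.execute("SELECT 2"),
]);
```

Note that neither session ever "owns" the transport: concurrency comes from issuing more requests, not from opening more connections.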

Combined, a fleet of M serverless invocations concurrently talking to the database sees:

M invocations
    ↓ (each on its own HTTP/2 stream)
1 shared HTTP/2 connection to edge POP
    ↓ (fan-in, multiplexed)
small fleet of warm HTTP/2 backhaul connections to origin
MySQL server
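The fan-in above can be sketched as an edge node that routes each invocation's stream onto a small fixed pool of warm backhaul connections. The class names and round-robin assignment are illustrative, not any vendor's implementation:

```typescript
// Sketch of edge-side fan-in: M invocations, K backhaul connections.
class BackhaulConnection {
  streamsCarried = 0;
  openStream(): void {
    this.streamsCarried += 1; // a new HTTP/2 stream, not a new socket
  }
}

class EdgePop {
  private next = 0;
  private readonly pool: BackhaulConnection[];

  constructor(poolSize: number) {
    this.pool = Array.from({ length: poolSize }, () => new BackhaulConnection());
  }

  // Each invocation's traffic becomes a stream on some warm connection.
  route(): void {
    this.pool[this.next % this.pool.length].openStream();
    this.next += 1;
  }

  get connectionsToOrigin(): number {
    return this.pool.length;
  }

  get totalStreams(): number {
    return this.pool.reduce((n, c) => n + c.streamsCarried, 0);
  }
}

// 1,000 concurrent invocations; origin still sees only 4 connections.
const edge = new EdgePop(4);
for (let i = 0; i < 1000; i++) edge.route();
```

The point of the sketch is the decoupling: `totalStreams` scales with invocations while `connectionsToOrigin` stays constant.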

Why it's a distinct pattern from plain connection pooling

A classical connection pool (DBCP, PgBouncer, ProxySQL) sits between a long-lived client process and the database, holding N authenticated TCP connections. Clients borrow and return pooled sockets.

This pattern pushes the multiplexing into the client protocol itself, so:

  • Serverless clients don't need a process-lifetime pool — there's no long-lived client process to own the pool. The HTTP/2 connection is the pool.
  • The backend's per-connection state disappears — it sees HTTP connections, not MySQL connections. No per-MySQL-connection thread, no per-connection buffers, no per-connection auth state.
  • Transport multiplexing is standard-library work, not database-proxy work. The client's fetch() / HTTP/2 library already does it. The server's HTTP stack already does it. No bespoke pooling proxy required.
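The contrast can be sketched in code (both sides illustrative, not any library's API): a classical pool has an explicit borrow/return lifecycle and a hard size cap, while the multiplexed client has no checkout step at all — another concurrent query is just another stream:

```typescript
// Classical pool: explicit checkout lifecycle, capped by pool size.
class SocketPool {
  private free: string[];

  constructor(sockets: string[]) {
    this.free = [...sockets];
  }

  borrow(): string | undefined {
    return this.free.pop(); // undefined when the pool is exhausted
  }

  giveBack(socket: string): void {
    this.free.push(socket); // the caller must remember this step
  }
}

const pool = new SocketPool(["sock-1", "sock-2"]);
const borrowed = [pool.borrow(), pool.borrow(), pool.borrow()];
const exhausted = borrowed[2] === undefined; // third borrower is out of luck

// HTTP multiplexing has no analogue of borrow/giveBack: a third
// concurrent query is simply a third stream on the shared connection.
let streamsOnSharedConnection = 0;
const openStream = () => ++streamsOnSharedConnection;
[1, 2, 3].forEach(openStream);
```

Real HTTP/2 connections do have a stream-concurrency limit (SETTINGS_MAX_CONCURRENT_STREAMS), but it is negotiated by the transport, not managed by application-level borrow/return code.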

Why it matters for serverless

Without it: per-invocation MySQL connection setup — each Worker / Edge function invocation opens a fresh MySQL connection. Three cost dimensions:

  • Latency: TCP handshake + TLS handshake + MySQL auth handshake per invocation. On a high-latency link, this is 100+ ms before a single query runs.
  • Connection-pool exhaustion: at scale, thousands of concurrent invocations all holding a MySQL connection would trivially exhaust MySQL's connection ceiling (concepts/connection-pool-exhaustion).
  • Backend resource cost: per-connection memory × N concurrent invocations.
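The latency dimension above can be made concrete with a back-of-envelope sketch. The round-trip counts are illustrative assumptions (TLS 1.3 without session resumption, one round trip for the MySQL auth handshake); real numbers vary with TLS version and resumption:

```typescript
// Per-invocation setup cost, fresh connection vs. warm stream.
const rttMs = 50; // a high-latency edge-to-origin link

// Fresh MySQL connection per invocation:
// TCP handshake (1 RTT) + TLS handshake (1 RTT) + MySQL auth (1 RTT).
const freshConnectionSetupMs = 3 * rttMs; // 150 ms before the first query

// New stream on an already-warm HTTP/2 connection: no new handshakes,
// so setup costs zero additional round trips.
const warmStreamSetupMs = 0 * rttMs;
```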

With it: the M invocations multiplex onto one HTTP connection to the edge, and the edge multiplexes onto a small pool of backhaul HTTP connections to origin. The per-invocation cost collapses to "open an HTTP stream" — microseconds, not milliseconds.

Contrast with external proxy pools

Cloudflare's Hyperdrive solves the same problem from the compute tier: the Worker speaks a Hyperdrive binding; Hyperdrive holds persistent connections to the customer's database. Same economic shape — N invocations → 1 pooled backend connection — different architectural home.

Structural comparison:

Dimension            | This pattern (HTTP-multiplex)    | Proxy pool (Hyperdrive)
Who owns the pool    | Database vendor's edge tier      | Compute vendor's proxy
Wire protocol        | HTTP/2 or HTTP/3                 | Native DB binary protocol
Works for any DB?    | Requires vendor-shipped HTTP API | Yes — protocol-transparent
Per-region isolation | Database vendor's choice         | Compute vendor's choice

Deployments can pick based on whether the database vendor ships an HTTP API (patterns/http-api-alongside-binary-protocol); when it doesn't, the compute-side proxy is the answer.
