CONCEPT

Global routing for databases

Global routing for databases is the architectural idea of routing a client's database connection to the closest edge point-of-presence first, then backhauling queries to the actual origin database cluster over a small set of warm, long-held, multiplexed connections on the provider's private network. The client experience is connecting to a nearby edge; the database experience is a tiny, stable fleet of backhaul connections from the edge. Structurally, this is what a CDN does for HTTP, applied to a stateful database wire protocol.

Canonical statement

PlanetScale, 2022-08-18: "Our new infrastructure and APIs also enable global routing. Similar to a CDN, global routing reduces latency drastically in situations where a client is connecting from a geographically distant location, which is common within serverless and edge compute environments. A client connects to the closest geographic edge in our network, then backhauls over long-held connection pools over our internal network to reach the actual destination. Within the US, connections from US West to a database in US East reduce latency by 100ms or more in some cases, even more as the distance increases." (Source: sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript.)

Restated and expanded in 2024 when the full architecture is disclosed: "For the past few years, we have been quietly building and architecting a database connectivity layer structured like a CDN, this layer we call the PlanetScale Global Network." (Source: sources/2026-04-21-planetscale-introducing-global-replica-credentials.)

Components (as disclosed by PlanetScale)

  1. Edge point of presence (POP) — the POP closest to the client terminates the client's TCP + TLS + MySQL handshake (concepts/mysql-connection-termination-at-edge).
  2. Latency-based DNS routing — clients resolving aws.connect.psdb.cloud (or similar) land at the edge POP closest to the client's DNS resolver. (On AWS: Route 53 latency policy.)
  3. Warm backhaul pool — a small number of long-held, TLS-terminated, multiplexed connections from each edge POP to each origin region. Query traffic piggybacks on this warm pool.
  4. Mesh latency health-checking (concepts/mesh-latency-health-check) — edge POPs continuously ping each other over warm connections, recording peer-to-peer latency to keep the "nearest alive" decision current.
  5. Per-query routing without reconnection — the client's MySQL session lives at the edge. If the sorted cluster-latency order changes (region added, drained, failing), the next query picks the new clusters[0] transparently.
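The per-query routing decision (component 5) can be sketched as a re-sort over a health-checked, latency-ordered cluster list. This is a minimal illustration, not PlanetScale's implementation; all names, types, and numbers are assumptions:

```typescript
// Hypothetical sketch: an edge POP keeps a latency-sorted list of origin
// clusters (fed by mesh health checks) and picks clusters[0] per query,
// so a topology change never forces the client to reconnect.

interface Cluster {
  region: string;
  latencyMs: number; // measured peer-to-peer over warm connections
  healthy: boolean;
}

// Re-sort on every routing decision: drained or failing clusters drop out,
// and a newly added closer region wins on the very next query.
function pickCluster(clusters: Cluster[]): Cluster | undefined {
  return clusters
    .filter((c) => c.healthy)
    .sort((a, b) => a.latencyMs - b.latencyMs)[0];
}

const clusters: Cluster[] = [
  { region: "us-east-1", latencyMs: 68, healthy: true },
  { region: "us-west-2", latencyMs: 5, healthy: true },
];

console.log(pickCluster(clusters)?.region); // nearest healthy region
clusters[1].healthy = false;                // us-west-2 drains
console.log(pickCluster(clusters)?.region); // next query routes elsewhere, no reconnect
```

The client's MySQL session never sees the switch: it stays pinned to the edge POP while the backhaul target changes underneath it.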

The latency amortisation argument

For a one-shot query from a client in Reno NV to a MySQL database in us-east-1:

  • Without global routing: the client's TCP+TLS+MySQL-auth handshake runs end-to-end to us-east-1, paying ~70 ms per round trip for the handshakes alone, and every subsequent query round-trip pays the cross-continent cost.
  • With global routing: the client handshake runs to the nearest edge (Reno → us-west-2, ~5 ms RTT). Cross-continent traffic rides the warm backhaul connection — no handshake cost, no TLS cost, and the connection pool is amortised across many clients.
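The arithmetic behind the two bullets can be made concrete. The round-trip counts and latencies below are illustrative assumptions (1 RTT TCP + 2 RTT TLS + 1 RTT MySQL auth), not measured PlanetScale numbers:

```typescript
// Back-of-envelope sketch of the handshake amortisation argument.
const HANDSHAKE_RTTS = 4;     // assumed: TCP (1) + TLS (2) + MySQL auth (1)
const crossContinentRtt = 70; // ms, Reno -> us-east-1 (illustrative)
const edgeRtt = 5;            // ms, Reno -> us-west-2 (illustrative)

// Without global routing: every handshake round trip crosses the continent,
// then the query itself pays one more cross-continent round trip.
const coldDirect = HANDSHAKE_RTTS * crossContinentRtt + crossContinentRtt;

// With global routing: the handshake runs only to the nearby edge; the query
// rides the already-warm backhaul, paying one cross-continent RTT with no setup.
const viaEdge = HANDSHAKE_RTTS * edgeRtt + crossContinentRtt;

console.log(coldDirect - viaEdge); // ms saved on a one-shot query
```

Under these assumed numbers the one-shot query saves well over the "100ms or more" the 2022 post claims, and the gap widens with distance or with TLS configurations needing more round trips.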

The 2022 PlanetScale post claims "100ms or more" saved on US West → US East. The 2024 global-replica-credentials post shows the routing table sorted by measured latency: the client's "next hop" is always clusters[0] in practice.

Contrast with traditional CDNs

CDNs solve static read-caching at the edge: content is replicated, and the origin is contacted only on a cache miss. Global database routing solves dynamic read/write path shortening — there is no cache, every query reaches the origin, but the handshake cost and connection-count pressure are absorbed at the edge. PlanetScale itself calls this out: "By doing this at the MySQL connection layer is what separates us from a traditional CDN or Network Accelerator."

Alternative: client-side connection pooling via proxy

(contrast shape)

Cloudflare Hyperdrive solves the same serverless-to-remote-database latency problem from the compute side: the Worker speaks a Hyperdrive binding, and Hyperdrive holds the long-lived connection to the customer's Postgres / MySQL. Global routing for databases is the database-vendor side of the same coin: meet the client halfway, with no third-party proxy needed at the compute tier.

Seen in

  • sources/2026-04-21-planetscale-one-million-connections — canonical benchmarked empirical anchor for the CDN shape's connection-scaling claim. Liz van Dijk (PlanetScale, 2022-11-01) sustains 1M concurrent open connections through the edge-routing layer plus the VTTablet in-cluster pool, explicitly framed: "PlanetScale's Global Routing Infrastructure provides another horizontally scalable layer of connectivity, which we put to the test recently to help us prepare for the broader rollout of our serverless driver." This demonstrates that the CDN shape's latency-reduction benefit composes with a connection-scaling benefit at the proxy tier. The target itself is framed as "no more than an arbitrary number. After all, our architecture is designed to keep scaling horizontally beyond that point" — the fabric shape is intrinsically horizontally scalable.

  • sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript — canonical first public mention, framed as the infrastructure behind the serverless driver launch.

  • sources/2026-04-21-planetscale-introducing-global-replica-credentials — full architectural disclosure; names the system (PlanetScale Global Network), specifies the protobuf data model (Credential / Route / Endpoint), and documents mesh-latency health-checking with latency-sorted cluster lists.