

CDN-like database connectivity layer

Pattern

Structure the database-access path like a CDN. Client connections land at the nearest edge point-of-presence, where the provider-owned fabric terminates the wire protocol (TCP + TLS + database-auth handshake). Queries then travel to the actual origin database cluster over a small, warmed, multiplexed pool of long-held backhaul connections on the provider's private network.
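The split above — expensive handshake on the short link, bulk traffic on a warm long link — can be sketched in a few lines. This is a minimal illustration, not PlanetScale's implementation; the class names, pool size, and string-valued "connections" are all hypothetical stand-ins.

```python
import queue

class BackhaulPool:
    """A small, warmed pool of long-held connections to the origin cluster.

    A 'connection' here is just a placeholder string; a real edge node
    would hold live, encrypted sessions on the provider's private network.
    """
    def __init__(self, origin: str, size: int = 4):
        self._conns = queue.Queue()
        for i in range(size):
            self._conns.put(f"{origin}#conn{i}")  # pre-warmed at startup

    def execute(self, sql: str) -> str:
        conn = self._conns.get()           # borrow a warm connection
        try:
            return f"result of {sql!r} via {conn}"
        finally:
            self._conns.put(conn)          # returned: reused across many sessions

class EdgePOP:
    """Terminates the client-side handshake; multiplexes queries over backhaul."""
    def __init__(self, name: str, backhaul: BackhaulPool):
        self.name = name
        self.backhaul = backhaul

    def handshake(self, client_id: str) -> dict:
        # TCP + TLS + database-auth handshake completes here, on the short link.
        return {"client": client_id, "edge": self.name}

    def query(self, session: dict, sql: str) -> str:
        # Bulk query traffic rides the warm long link to origin.
        return self.backhaul.execute(sql)
```

Many client-edge sessions share one small backhaul pool — the N-to-1 decoupling the pattern depends on.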

Canonical instance

PlanetScale Global Network. First mention in the August 2022 serverless driver launch: "Our new infrastructure and APIs also enable global routing. Similar to a CDN, global routing reduces latency drastically… A client connects to the closest geographic edge in our network, then backhauls over long-held connection pools over our internal network to reach the actual destination." (Source: sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript.)

Full architectural disclosure in April 2024: "For the past few years, we have been quietly building and architecting a database connectivity layer structured like a CDN, this layer we call the PlanetScale Global Network." (Source: sources/2026-04-21-planetscale-introducing-global-replica-credentials.)

Composition (PlanetScale reference design)

  • Edge POPs terminate the native database wire protocol — not just TLS. The client's MySQL handshake completes at the edge. The edge holds a live MySQL session open on the client side (canonical concept: concepts/mysql-connection-termination-at-edge).
  • Latency-based DNS routing picks the edge POP nearest the client's DNS resolver.
  • Control-plane-watched routing table: per-credential Route record in etcd names the allowable cluster list. Edge nodes watch for changes and react in near-real-time (patterns/etcd-watched-route-mutation).
  • Mesh health-checking over warmed peer connections: edge POPs continuously ping each other, maintaining peer-to-peer latency measurements. The routing table's cluster list is sorted by measured latency, so clusters[0] is always the best next hop (patterns/warm-mesh-connection-pool, concepts/mesh-latency-health-check).
  • Per-query routing without reconnection: the client session lives at the edge; routing decisions happen per-query, not per-connection. Adding, draining, or failing a region is therefore transparent to clients.
  • Connection multiplexing on backhaul: a small pool of warm encrypted connections to origin, reused across many client-edge sessions.
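The routing-table mechanics above — a per-credential cluster list kept sorted by mesh-measured latency, consulted per-query so `clusters[0]` is always the best next hop — can be sketched as follows. Field names and latency figures are hypothetical; the real Route records live in etcd and are pushed to edge nodes via watches.

```python
from dataclasses import dataclass

@dataclass
class Route:
    """Per-credential routing record, as an edge node might cache it after
    an etcd watch fires. The cluster list is re-sorted whenever mesh
    health-checks report new peer latencies."""
    credential: str
    clusters: list  # allowable cluster names

    def resort(self, latencies_ms: dict) -> None:
        # Mesh health-checks update peer latencies; re-sort the allowable list
        # so the cheapest next hop is always first.
        self.clusters.sort(key=lambda c: latencies_ms.get(c, float("inf")))

    def next_hop(self) -> str:
        return self.clusters[0]

route = Route("cred-123", ["us-east-1", "eu-west-1", "ap-south-1"])
route.resort({"us-east-1": 62.0, "eu-west-1": 18.0, "ap-south-1": 140.0})
# Each query calls next_hop(); the client's session at the edge is untouched,
# so a latency change or region drain re-routes without any reconnection.
```

When a latency update or a drained region changes the sort order, only the next query's hop changes — nothing about the client-side MySQL session does.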

Why it matters

  1. Solves concepts/edge-to-origin-database-latency from the database side — compute doesn't move, data doesn't move, but the expensive parts of each connection (handshake, auth) happen on a short link; bulk query traffic rides a warm long link.
  2. Makes serverless / edge compute viable for stateful database workloads. Every invocation is cold from the client side; the edge absorbs that cost.
  3. Operationally, it gives the vendor a single connection-admission plane: every connection goes through fabric the vendor owns, which is where rate limits, credential rotation, tenant routing, deprecation policy, and incident response can be applied uniformly.
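Point 1 is easy to make concrete with back-of-envelope arithmetic. The RTT figures below are illustrative assumptions, not published numbers; the handshake is counted as roughly three round trips (TCP, TLS, MySQL auth).

```python
# Hypothetical round-trip times; PlanetScale does not publish these figures.
edge_rtt_ms = 5       # client <-> nearest edge POP (short link)
origin_rtt_ms = 80    # client <-> origin database region (long link)
handshake_rtts = 3    # TCP + TLS + MySQL auth handshake, roughly

# Cold connect straight to origin: every handshake round trip pays the long link.
direct_setup_ms = handshake_rtts * origin_rtt_ms
# Edge termination: the handshake pays only the short link; queries then
# ride the already-warm backhaul, so no per-connection setup at origin.
edge_setup_ms = handshake_rtts * edge_rtt_ms

print(direct_setup_ms, edge_setup_ms)  # 240 15
```

Under these assumed numbers a cold serverless invocation saves ~225 ms of connection setup — which is why every-invocation-is-cold workloads are the motivating case.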

The "CDN for HTTP" analogy is not literal

Classical CDNs cache static content at the edge and fetch from origin only on miss. This pattern doesn't cache database state — every query reaches origin — but it borrows the CDN's topology: edge-termination + warm backhaul + latency-based DNS + peer-to-peer latency map. PlanetScale calls this out in the 2024 post: "By doing this at the MySQL connection layer is what separates us from a traditional CDN or Network Accelerator."

Sibling shapes

  • Proxy at the compute tier (Cloudflare Hyperdrive) — structurally inverse. The compute vendor owns the pooled long-lived connection; the database vendor's server is unchanged. Works for any database, not just ones whose vendor built a global network.
  • In-region connection multiplexer (Vitess vttablet, Figma's figcache) — same N-to-1 decoupling, but within one DC/region. This pattern is the globally-distributed variant.
  • Classical CDN (Cloudflare, Fastly, Akamai) — same fabric shape, but caches responses. This pattern forwards every query.

Execution details

  • Edge node count and location — PlanetScale doesn't publish exact POP topology; the 2024 disclosure only confirms that aws.connect.psdb.cloud resolves via Route 53 latency policy across AWS regions.
  • Routing table replication model: Route records are global (safe to replicate, carry no auth); Credential records stay region-local (carry auth). Canonical concepts/credential-route-endpoint-triple.
  • Protocol-level: supports both MySQL binary protocol and the PlanetScale HTTP API — the global network's edge termination is the same, regardless of which protocol the client speaks.
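The Route-vs-Credential replication split can be sketched as a data-model policy. Field names here are hypothetical — the actual protobuf schemas are disclosed in the 2024 post — but the rule itself (auth-free Route records replicate globally, auth-bearing Credential records stay region-local) is as described.

```python
from dataclasses import dataclass

@dataclass
class Route:
    """Globally replicated: names the allowable clusters, carries no secrets."""
    credential_id: str
    clusters: list

@dataclass
class Credential:
    """Region-local: carries auth material, so it is never replicated globally."""
    credential_id: str
    password_hash: str
    home_region: str

def replicates_globally(record) -> bool:
    # Replication policy: safe-to-share Route records go everywhere;
    # anything carrying auth stays in its home region.
    return isinstance(record, Route)
```

An edge POP can therefore route any credential's queries, but can only complete the auth handshake against state held in (or fetched from) the credential's home region.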

Seen in

  • sources/2026-04-21-planetscale-one-million-connections — benchmarked empirical anchor for the pattern's horizontal-scalability property. Liz van Dijk (PlanetScale, 2022-11-01) sustains 1M concurrent open connections through the CDN-shaped layer plus the VTTablet in-cluster pool, ramping up in ~2 minutes via an AWS Lambda 1,000 × 1,000 fan-out. The target is characterised as "no more than an arbitrary number. After all, our architecture is designed to keep scaling horizontally beyond that point" — the pattern is intrinsically horizontally scalable in both latency (more edges = more locality) and connection acceptance (more edges = more capacity). Stacks with patterns/two-tier-connection-pooling — the edge layer plus VTTablet form a two-tier proxy pool.

  • sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript — first public mention, one paragraph, announcing the fabric as backing the new driver. "Within the US, connections from US West to a database in US East reduce latency by 100ms or more in some cases."

  • sources/2026-04-21-planetscale-introducing-global-replica-credentials — full architectural disclosure. Names the system, discloses protobuf schemas, walks through mesh-latency-sorted routing and per-query re-routing without reconnection.