PlanetScale Global Network

What it is

The PlanetScale Global Network is PlanetScale's CDN-like database-connectivity layer. It is the path every MySQL connection takes to reach a PlanetScale database cluster. Its job is to terminate MySQL (and gRPC) connections at the edge closest to the client, pool / multiplex / TLS-terminate those connections, and tunnel their queries back to the origin cluster over a small number of long-lived, warmed-up, encrypted backhaul connections.

Canonical description from the 2024-04-17 launch post: "For the past few years, we have been quietly building and architecting a database connectivity layer structured like a CDN, this layer we call the PlanetScale Global Network." (Source: sources/2026-04-21-planetscale-introducing-global-replica-credentials.)

Responsibilities

Quoted verbatim from the launch post:

  • Terminating every MySQL connection globally.
  • Handling [PlanetScale's] RPC API (to support things like database-js).
  • Connection pooling at a near infinite scale.
  • TLS termination.
  • Routing connections to the database.

Architecture

Edge termination

The first hop from a client is the edge node in the nearest POP (point of presence). The edge node fully terminates:

  • the TCP connection
  • the TLS handshake
  • the MySQL protocol handshake

and holds the client's MySQL session open there. The edge functions the way an in-DC ProxySQL would — except that the "DC" here is wherever the client is, not wherever the DB is. This is the canonical instance of the MySQL-connection-termination-at-edge concept, which is structurally identical to an HTTP CDN's edge-TLS-termination primitive but applied to a stateful database wire protocol.

Queries are then multiplexed from the edge session over a small number of long-held encrypted connections to the origin cluster. "By terminating this closest to you, a handshake can happen faster. By doing this at the MySQL connection layer is what separates us from a traditional CDN or Network Accelerator."
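The shape of this multiplexing can be sketched in a few lines. Everything below is illustrative — class names, the round-robin policy, and session-id scheme are assumptions, not PlanetScale's implementation — but it shows the key inversion: handshake state lives at the edge while thousands of sessions share a handful of warm backhaul links.

```python
# Hypothetical sketch: many terminated client sessions share a small pool
# of long-lived, pre-warmed backhaul connections to the origin cluster.
import itertools


class BackhaulPool:
    """A few warm, long-held connections to the origin cluster."""

    def __init__(self, size: int):
        self.connections = [f"backhaul-{i}" for i in range(size)]
        self._rr = itertools.cycle(self.connections)

    def send(self, session_id: str, query: str) -> str:
        # Round-robin queries over the warm pool; the origin side
        # demultiplexes by session id, so thousands of client sessions
        # ride on a handful of links.
        conn = next(self._rr)
        return f"{conn} carried [{session_id}] {query}"


class EdgeNode:
    """Terminates TCP, TLS, and the MySQL handshake at the edge."""

    def __init__(self, pool: BackhaulPool):
        self.pool = pool
        self.sessions: dict[str, str] = {}

    def accept(self, client: str) -> str:
        session_id = f"sess-{len(self.sessions)}"
        self.sessions[session_id] = client  # handshake state stays here
        return session_id

    def query(self, session_id: str, sql: str) -> str:
        return self.pool.send(session_id, sql)


pool = BackhaulPool(size=2)  # two backhaul links...
edge = EdgeNode(pool)
sessions = [edge.accept(f"client-{i}") for i in range(1000)]  # ...1000 sessions
```

Note the asymmetry the sketch makes concrete: the session table grows with clients, the backhaul pool does not.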

The Credential / Route / Endpoint triple

PlanetScale's internal data model splits the connection-admission problem into three concerns. Canonical reference: concepts/credential-route-endpoint-triple.

  • Credential (proto: branch, password_sha256, role, expiration, tablet_type). Authoritative auth record. Lives only inside the origin DB cluster's region — never replicated out to the edge. Carries a TabletType enum (PRIMARY, REPLICA, RDONLY) that dictates which Vitess tablet class the caller is admitted to; this is how a single credential is tied to a read-only workload without requiring application-side primary/replica routing.

  • Route (proto: branch, repeated cluster, expiration). Username → cluster-list mapping. Carries no authentication information; safe to replicate globally to every edge region. For a primary-only credential, cluster=["us-east-1"]; for a multi-region replica credential, cluster=["us-east-1", "us-west-2", ...]. Mutated via etcd when a read-only region is added or removed.

  • Endpoint — the hostname the client connects to. Two shapes:

  • Direct: {region}.connect.psdb.cloud (specific edge region, chosen by caller).
  • Optimized: {provider}.connect.psdb.cloud — e.g. aws.connect.psdb.cloud, gcp.connect.psdb.cloud — backed by latency-based DNS routing (Route 53 latency policy on AWS). Resolves to the edge region closest to the client's DNS resolver, not the DB region. "you get routed through the closest region to you, which gives us the CDN effect."

The auth record stays local, the routing record goes global, the hostname is DNS-resolved to the nearest edge — a three-way split of control-plane vs data-plane concerns implemented by where each record physically lives.
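The disclosed proto shapes translate directly into plain data types. The field names below follow the post; the Python types, enum string values, and example instances are assumptions made for illustration.

```python
# Illustrative mirror of the disclosed Route / Credential proto shapes.
# Field names come from the launch post; concrete types are a sketch.
from dataclasses import dataclass, field
from enum import Enum


class TabletType(Enum):
    PRIMARY = "primary"
    REPLICA = "replica"
    RDONLY = "rdonly"


@dataclass
class Credential:
    # Authoritative auth record: lives only in the origin cluster's
    # region, never replicated to the edge.
    branch: str
    password_sha256: bytes
    role: str
    expiration: int            # e.g. a unix timestamp (assumed)
    tablet_type: TabletType    # which Vitess tablet class is admitted


@dataclass
class Route:
    # Username -> cluster-list mapping: carries no secrets, so it is
    # safe to replicate globally to every edge region.
    branch: str
    cluster: list[str] = field(default_factory=list)  # `repeated cluster`
    expiration: int = 0


# A read-only credential needs no application-side replica routing:
ro_cred = Credential(branch="main", password_sha256=b"\x00" * 32,
                     role="reader", expiration=0,
                     tablet_type=TabletType.REPLICA)

primary_route = Route(branch="main", cluster=["us-east-1"])
replica_route = Route(branch="main", cluster=["us-east-1", "us-west-2"])
```

The split is visible in the types themselves: `Credential` holds the hash and tablet class, `Route` holds only names of regions.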

etcd-watched route mutation

Both Route and Credential are stored in etcd. Edge nodes watch for changes and react in near-realtime. Canonical pattern: patterns/etcd-watched-route-mutation. "The Route and Credential are stored in etcd which we are able to watch for changes in near realtime and respond to mutations, or deletions as soon as they happen." Adding or removing a read-only region from a customer's cluster is implemented as a single-record mutation to that credential's Route; edge nodes pick up the change without reconnection.
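The watch pattern can be simulated without a real etcd cluster. The in-memory store below stands in for etcd's key watch; a production edge node would use an etcd client's watch API instead, and all names here are hypothetical.

```python
# Minimal in-memory sketch of the etcd-watched-route-mutation pattern:
# edge nodes subscribe to a route key and apply mutations the moment
# they land, without any client reconnection.
from typing import Callable


class WatchedStore:
    """Stand-in for an etcd keyspace with watches."""

    def __init__(self):
        self._data: dict[str, object] = {}
        self._watchers: dict[str, list[Callable]] = {}

    def watch(self, key: str, callback: Callable) -> None:
        self._watchers.setdefault(key, []).append(callback)

    def put(self, key: str, value: object) -> None:
        self._data[key] = value
        for cb in self._watchers.get(key, []):  # fire in "near realtime"
            cb(key, value)


class EdgeRouteTable:
    """An edge node's view of one credential's Route."""

    def __init__(self, store: WatchedStore, key: str):
        self.clusters: list[str] = []
        store.watch(key, self._on_change)

    def _on_change(self, key: str, clusters: list[str]) -> None:
        # Single-record mutation picked up live; existing client
        # sessions at the edge are untouched.
        self.clusters = clusters


store = WatchedStore()
edge = EdgeRouteTable(store, "routes/main")
store.put("routes/main", ["us-east-1"])               # primary-only
store.put("routes/main", ["us-east-1", "us-west-2"])  # read-only region added
```

Adding a read-only region is exactly one `put` against one key — which is why the post can describe it as a single-record mutation.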

Warm mesh + latency-sorted routing

Edge nodes continuously ping each other across warm connections as part of regular health-checking, recording peer-to-peer latency. Canonical concept: concepts/mesh-latency-health-check. Canonical pattern: patterns/warm-mesh-connection-pool — the same warm connection pool doubles as backhaul-for-queries and latency-measurement-substrate.

When a new Route arrives via etcd watch (or when latency values shift), each edge node sorts the cluster list by measured latency before marking the Route available. "This keeps the 'next hop' decision always clusters[0] in practice. In the event of a hard failure (if for some reason this entire region were down), we could go over to the next option if there were multiple choices." Failover is an emergent property of the sorted-list data structure: an unreachable peer reports infinite latency and naturally drops to the bottom.
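The sorted-list failover property is small enough to state as code. A minimal sketch, assuming the mesh health-check exposes a per-peer latency table (the function name and table shape are invented here):

```python
# Sketch of latency-sorted cluster lists: unreachable peers report
# infinite latency and sink to the bottom of the list, so failover
# falls out of the sort itself -- no separate failover code path.
import math


def sort_route(clusters: list[str], latency_ms: dict[str, float]) -> list[str]:
    # Peers absent from the health-check table count as unreachable.
    return sorted(clusters, key=lambda c: latency_ms.get(c, math.inf))


measured = {"us-east-1": 72.0, "us-west-2": 9.0, "eu-west-1": 140.0}
route = sort_route(["us-east-1", "us-west-2", "eu-west-1"], measured)
# Nearest peer first: the "next hop" decision is always route[0].

measured["us-west-2"] = math.inf     # entire region goes down
route = sort_route(route, measured)  # next hop quietly becomes us-east-1
```

There is no explicit "if primary_hop_failed" branch anywhere; the dead region simply sorts last.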

Per-query routing without reconnection

Because the client's MySQL session lives at the edge, not at origin, "the Route is utilized on a per-query basis, thus without needing to reconnect or anything, we can route you to the lowest latency next hop in realtime." When the sorted cluster order changes (peer latency shift, region added, region drained, region failed), the next query picks the new clusters[0] via the warmed mesh connection. No client-visible reconnection, no retry/backoff burden on the application.
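The per-query half of this can be sketched too: the session is pinned to the edge, and only the `clusters[0]` lookup happens per query. The router class below is hypothetical, not PlanetScale's code.

```python
# Sketch of per-query routing: the client session stays at the edge
# while every query independently consults the current sorted route.
class PerQueryRouter:
    def __init__(self):
        self.clusters: list[str] = []  # kept latency-sorted by the mesh

    def execute(self, sql: str) -> str:
        # Re-read clusters[0] on every query: a route change between two
        # queries takes effect immediately, with no client reconnect and
        # no retry/backoff burden pushed onto the application.
        target = self.clusters[0]
        return f"{target}: {sql}"


router = PerQueryRouter()
router.clusters = ["us-west-2", "us-east-1"]
first = router.execute("SELECT 1")    # served via us-west-2
router.clusters = ["us-east-1"]       # us-west-2 drained or failed
second = router.execute("SELECT 1")   # very next query moves to us-east-1
```

The client never observes the switch: its MySQL session object, held at the edge, is the same before and after.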

Why it shows up on this wiki

  • Canonical new pattern: patterns/cdn-like-database-connectivity-layer — the composition of edge termination + latency-DNS + control-plane-watched routing table + mesh latency + warmed multiplexed backhaul, applied to a stateful database wire protocol rather than HTTP.
  • Canonical database-tier answer to concepts/edge-to-origin-database-latency — compare Cloudflare's Hyperdrive as the Worker-side answer to the same problem; the Global Network is the database-side answer (meet the client halfway rather than caching at the compute tier).
  • Canonical instance of concepts/connection-multiplexing at global scale — sibling of Figma's systems/figcache (single-DC multiplexer tier for Redis) and Vitess's vttablet (in-region multiplexer for MySQL). Global Network is the globally-distributed variant.

Seen in

  • sources/2026-04-21-planetscale-one-million-connections — canonical benchmarked empirical anchor for the "nearly limitless connections" claim. Liz van Dijk (PlanetScale, 2022-11-01, ~2.5 months after the Serverless Driver launch) benchmarks 1,000,000 concurrent open connections against a PlanetScale database via an AWS Lambda 1,000-workers × 1,000-connections-per-worker fan-out, reaching the plateau in under 2 minutes and holding for ~5 minutes before drain. Names the explicit motivation: "PlanetScale's Global Routing Infrastructure provides another horizontally scalable layer of connectivity, which we put to the test recently to help us prepare for the broader rollout of our serverless driver." Confirms the two-tier pool architecture (VTTablet in-cluster pool + Global Routing Infrastructure edge pool) is the specific substrate behind the scaling claim; frames the target number as "no more than an arbitrary number. After all, our architecture is designed to keep scaling horizontally beyond that point." This is 62.5× above the 16,000-connection RDS MySQL instance cap canonicalised in sources/2026-04-21-planetscale-comparing-awss-rds-and-planetscale. Canonical new pattern patterns/two-tier-connection-pooling (app-tier + proxy-tier with the Global Routing Infrastructure as the edge-termination third tier on top). Canonical new concepts concepts/max-connections-ceiling, concepts/memory-overcommit-risk, concepts/lambda-fanout-benchmark. Short post (~600 words) but architecturally dense; clears Tier-3 on the strength of being the definitive empirical-anchor source for subsequent "PlanetScale scales connections" claims across the wiki corpus.

  • sources/2026-04-21-planetscale-introducing-the-planetscale-serverless-driver-for-javascript — first public mention of the fabric, Aug 2022, in the serverless-driver launch post: "Our new infrastructure and APIs also enable global routing. Similar to a CDN, global routing reduces latency drastically… A client connects to the closest geographic edge in our network, then backhauls over long-held connection pools over our internal network to reach the actual destination. Within the US, connections from US West to a database in US East reduce latency by 100ms or more in some cases, even more as the distance increases." Architecture details are held back at this point; the 2024 global-replica-credentials post is the full disclosure.

  • sources/2026-04-21-planetscale-introducing-global-replica-credentials — launch post for global replica credentials, structured as the canonical public disclosure of the Global Network architecture. Robenolt + Ekechukwu walk through edge termination, the Credential / Route / Endpoint triple, etcd-watched routing, latency-based DNS, mesh health-checking with latency-sorted cluster lists, and per-query routing without reconnection. Disclosed protobuf shapes for Route and Credential plus the TabletType enum (PRIMARY / REPLICA / RDONLY).