
PLANETSCALE 2023-10-05


PlanetScale — PlanetScale vs. Amazon RDS

Summary

Unsigned PlanetScale blog post (2023-10-05, re-fetched 2026-04-21) positioning PlanetScale against Amazon RDS as a scale-out MySQL alternative. The post is distinct from — and complements — the earlier Jarod Reyes 2021-09-30 comparison (see sources/2026-04-21-planetscale-comparing-awss-rds-and-planetscale) at a different URL (/blog/planetscale-vs-amazon-rds vs /blog/planetscale-vs-aws-rds).

The durable wiki contribution is a co-canonical architectural description — paired with the same-day PlanetScale-vs-Aurora post which carries the identical VTGate/VTTablet/topo-server paragraph — of the Vitess query-routing data path that sits underneath every PlanetScale MySQL database. The full five-hop flow and the role attribution between VTGate (routing) and VTTablet (pool + health):

"The way that PlanetScale load balances is possible because of a few Vitess components called VTTablet and VTGate as well as PlanetScale's edge infrastructure. VTGate is an application-level query routing layer while VTTablet behaves as a middleware between VTGate and MySQL. This facilitates the flow of connections from the application, to a load balancer, to VTGate, to VTTablet, and then finally to MySQL. The VTTablet will manage connection pooling and perform health checks for MySQL instances, updating its status in a topo-server. Meanwhile, VTGate determines available tablets and their roles via the topo server and reroutes traffic as needed. PlanetScale's edge infrastructure then acts as a frontend load balancer, terminating MySQL connections in the closest edge location."

This paragraph is one of two co-canonical sources on the wiki (alongside the sibling Aurora-comparison post) for the five-hop Vitess data path (app → edge LB → VTGate → VTTablet → MySQL), the division of labor between VTGate (routing) and VTTablet (connection pooling + health checks), and the role of the topo-server as the shared state that both components read/write for health and topology. Previously, ~18 wiki sources mentioned VTGate / VTTablet by name, but none had a dedicated architectural page; both now get canonical system pages anchored on these two 2023-10-05 posts.
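The quoted data path and the VTGate/VTTablet/topo-server division of labor can be sketched in a few dozen lines. This is an illustrative model only — the class names mirror the Vitess components, but the real implementations (and the actual topo-server API) differ.

```python
# Illustrative sketch of the two-tier health-aware routing described above.
# VTTablet publishes health/role into the topo-server; VTGate reads that
# shared state to route, never probing MySQL directly. Not real Vitess code.

class TopoServer:
    """Shared state store: VTTablet writes health/role, VTGate reads it."""
    def __init__(self):
        self._tablets = {}  # alias -> {"role": ..., "healthy": ...}

    def publish(self, alias, role, healthy):
        self._tablets[alias] = {"role": role, "healthy": healthy}

    def tablets(self):
        return dict(self._tablets)


class VTTablet:
    """Middleware in front of one MySQL instance: owns pooling + health checks."""
    def __init__(self, alias, role, topo):
        self.alias, self.role, self.topo = alias, role, topo

    def health_check(self, mysql_is_up):
        # In Vitess this is a periodic probe against the local mysqld;
        # here the probe result is passed in for illustration.
        self.topo.publish(self.alias, self.role, mysql_is_up)


class VTGate:
    """Router: picks a tablet from topo state; holds no health state itself."""
    def __init__(self, topo):
        self.topo = topo

    def route(self, query):
        want = ("primary"
                if query.strip().upper().startswith(("INSERT", "UPDATE", "DELETE"))
                else "replica")
        for alias, t in self.topo.tablets().items():
            if t["healthy"] and t["role"] == want:
                return alias
        raise RuntimeError(f"no healthy {want} tablet")


topo = TopoServer()
VTTablet("cell1-0001", "primary", topo).health_check(mysql_is_up=True)
VTTablet("cell1-0002", "replica", topo).health_check(mysql_is_up=True)
gate = VTGate(topo)
print(gate.route("SELECT 1"))            # cell1-0002 (replica)
print(gate.route("UPDATE t SET x = 1"))  # cell1-0001 (primary)
```

The decoupling is the point: when a VTTablet marks its MySQL unhealthy in the topo-server, every VTGate's next routing decision reflects it without any direct VTGate→MySQL probe.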

Marketing content (pricing, case-study quotes, generic "managed is better" positioning) makes up ~55–60% of the body and is not distilled to the wiki per Tier-3 scope rules. Everything preserved below is either the data-path architecture, the one novel operational-numbers line (technically-unlimited connections, connection pooling, queuing), or a load-bearing contrast with the existing RDS / Aurora / Vitess pages.

Key takeaways

  • Five-hop Vitess query data path — canonicalized. "application, to a load balancer, to VTGate, to VTTablet, and then finally to MySQL." The flow is the architectural core of every PlanetScale MySQL database. Edge LB is the first hop and terminates the MySQL protocol at the closest edge location (ties to edge MySQL termination and systems/planetscale-global-network). VTGate is the "application-level query routing layer"; VTTablet is the "middleware between VTGate and MySQL." (Source: verbatim, this post.)

  • Division of labor: VTGate = router, VTTablet = pool + health. VTGate "determines available tablets and their roles via the topo server and reroutes traffic as needed" — pure stateless query routing. VTTablet "will manage connection pooling and perform health checks for MySQL instances, updating its status in a topo-server" — connection lifecycle + health signal. The topo-server is the shared state store that publishes VTTablet health and role to VTGate; it's the decoupler that lets VTGate make routing decisions without direct MySQL health probes. See the new concepts/vitess-topo-server page and the two-tier health-aware query routing pattern distilled from this architecture.

  • Load balancing between production and replica is automatic via this topology. "Every single PlanetScale database gets all of this underlying infrastructure to ensure the database remains available. Load balancers will distribute traffic between the production branch and replicas as well as balance connections, IOPS, and resource usage." Canonicalizes that read/write splitting is a property of the VTGate+VTTablet+topo-server topology, not something the application has to implement — a structural contrast with RDS where "logic can be manually defined by the user at the application level to direct reads and writes to either instance." (Source: this post.)

  • Connection handling: "technically unlimited" on PlanetScale via queueing + pooling; 16k/instance on RDS. "With technically unlimited connections, PlanetScale is equipped to handle high concurrency. PlanetScale offers connection pooling, which scales with your cluster and enables connection requests to queue." The empirical anchor for the "technically unlimited" claim is the separate 1M-connection benchmark post. The architectural mechanism is VTTablet-managed two-tier connection pooling (front-end pool from app → VTGate; back-end pool from VTTablet → MySQL) with request queueing when the back-end pool is saturated. RDS's counterpart is RDS Proxy, which "may increase latency, but will not lead to an application failure and will prevent overwhelming the database with too many connections" (this post).

  • Sharding abstraction altitude contrast. "PlanetScale horizontally shards your data abstracted from the application-layer. … With RDS, users can horizontally scale read operations or add additional instances to distribute database operations to." The architectural point: PlanetScale does horizontal sharding at the proxy tier (VTGate decides which shard a query goes to based on VSchema / VIndex); RDS pushes the sharding problem to the application tier (customer writes read/write-split logic + shard routing themselves). Previously argued on the wiki at Reyes 2021 and Dicken IOPS-and-throughput; re-asserted here with the VTGate / VTTablet names attached.

  • Failover topology: two default replicas, with the production branch acting as the primary. "Production branches automatically failover to one of two default replicas to improve redundancy. The two replicas reduce the load on the primary branch, and enable users to scale read and write operations. Additional replicas and read-only regions are configurable." This is the first explicit statement on the wiki that PlanetScale's default replication factor is 3 (primary + 2 replicas) with automatic failover, complementing the richer coverage at Brian Morrison II's replication best-practices post. (Source: this post.)

  • Schema change / branching re-asserted as structural RDS gap. "PlanetScale makes it easy to build, test, and deploy database changes to production with minimal risk. Amazon RDS does not provide native tooling for schema changes, and does not support online schema changes or schema reverts." No new mechanism disclosed beyond what's already canonicalized at concepts/online-ddl, concepts/database-branching, and patterns/branch-based-schema-change-workflow; preserved here as a cross-reference.

  • MySQL compatibility vs MySQL engine distinction. "Amazon RDS is not the same as MySQL. MySQL is an open-source relational database management system (RDBMS). It is a database engine that is supported by services like RDS. … PlanetScale's Vitess offering is a MySQL-compatible database platform." Sharpens Vitess's positioning as a middleware speaking the MySQL wire protocol, not a MySQL fork. This is consistent with the existing Vitess page framing.
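The pooling-plus-queueing mechanism in the connection-handling takeaway above can be sketched as a bounded back-end pool whose saturation produces queueing rather than rejection. Pool size and names are assumptions for illustration, not PlanetScale's actual values.

```python
# Illustrative sketch of the mechanism behind "technically unlimited"
# connections: a fixed back-end pool of MySQL connections, with excess
# requests queueing until a pooled connection frees up. The ceiling is
# queue depth + pool capacity, not an absolute guarantee (see Caveats).
import queue

class BackendPool:
    def __init__(self, size):
        self._conns = queue.Queue()
        for i in range(size):
            self._conns.put(f"mysql-conn-{i}")

    def execute(self, sql, timeout=5.0):
        # Blocks (queues) when all back-end connections are busy, instead of
        # refusing the client or opening yet another MySQL connection.
        conn = self._conns.get(timeout=timeout)
        try:
            return f"{conn}: {sql}"
        finally:
            self._conns.put(conn)  # return the connection to the pool

pool = BackendPool(size=2)  # tiny pool: many clients share few MySQL conns
results = [pool.execute(f"SELECT {n}") for n in range(5)]
print(results[0])  # mysql-conn-0: SELECT 0
```

Five logical requests complete over two MySQL connections; concurrency beyond the pool size shows up as queueing latency (RDS Proxy's documented trade-off) rather than connection errors.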
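The sharding-altitude contrast above (proxy-tier vs application-tier) can be made concrete with a toy hash-based router. The hash scheme and shard names here are illustrative — Vitess's actual keyspace-ID computation is defined by the VSchema's VIndex functions, not this code.

```python
# Hypothetical sketch of proxy-tier sharding: the router hashes the sharding
# key to pick a shard, so the application never sees shard topology. The
# "-80" / "80-" names mimic Vitess-style shard range naming for a 2-shard
# keyspace; the md5-first-byte scheme is an assumption for illustration.
import hashlib

SHARDS = ["-80", "80-"]  # two shards splitting the keyspace-ID range at 0x80

def shard_for(sharding_key: int) -> str:
    # Hash the key to a pseudo keyspace ID, then map it onto a shard range.
    digest = hashlib.md5(str(sharding_key).encode()).digest()
    keyspace_id = digest[0]  # first byte suffices for a 2-shard split
    return SHARDS[0] if keyspace_id < 0x80 else SHARDS[1]

# The application issues a plain query with a key; the proxy tier (VTGate,
# in Vitess) decides which shard's VTTablet receives it.
for user_id in (1, 2, 3):
    print(user_id, "->", shard_for(user_id))
```

With RDS, the equivalent mapping (and the read/write split) lives in application code the customer writes and maintains; here it lives below the wire protocol, invisible to the app.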
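The failover bullet above (primary + two default replicas, automatic promotion) reduces to a small state transition, sketched below. The aliases and the "promote first healthy replica" selection rule are assumptions; the post does not disclose PlanetScale's actual promotion logic.

```python
# Minimal sketch of the default failover topology: one production primary
# plus two default replicas, with a healthy replica promoted automatically
# when the primary fails. Purely illustrative, not PlanetScale's mechanism.

cluster = {
    "primary": {"alias": "prod-0", "healthy": True},
    "replicas": [
        {"alias": "prod-1", "healthy": True},
        {"alias": "prod-2", "healthy": True},
    ],
}

def failover(cluster):
    """Promote the first healthy replica; the old primary is assumed dead."""
    candidates = [r for r in cluster["replicas"] if r["healthy"]]
    if not candidates:
        raise RuntimeError("no healthy replica to promote")
    new_primary = candidates[0]
    cluster["replicas"].remove(new_primary)
    cluster["primary"] = new_primary
    return new_primary["alias"]

cluster["primary"]["healthy"] = False  # primary fails a health check
print(failover(cluster))  # prod-1
```

The two-replica default matters here: even mid-failover, one replica remains to serve reads and to act as the next promotion candidate.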

Systems / concepts / patterns surfaced

  • New systems: systems/vtgate — canonical system page for the Vitess query routing layer. systems/vttablet — canonical system page for the Vitess connection-pool + health-check middleware that sits between VTGate and MySQL. Both were previously referenced by name across ~18 wiki sources without dedicated pages.
  • New concepts: concepts/vitess-topo-server — the shared state store that VTTablet writes health/role into and VTGate reads for routing. First architectural statement on the wiki.
  • New patterns: patterns/query-routing-proxy-with-health-aware-pool — the general architectural shape distilled from VTGate + VTTablet + topo-server (stateless router + stateful pool-owning middleware + shared health/role state store).
  • Extended: systems/vitess (canonical data-path paragraph), systems/planetscale (2023-era VTGate/VTTablet/topo-server architecture disclosure), systems/aws-rds (re-asserts RDS lacks platform-level LB / online DDL / branching), systems/mysql (compat-vs-engine distinction).

Operational numbers

  • Replication factor: 1 primary + 2 default replicas (production branch + 2), configurable for more replicas and read-only regions.
  • PlanetScale paid tier starts at $5/mo with resource-based pricing options.
  • Connection handling: "technically unlimited" via VTTablet connection pooling + queueing (empirical anchor: 1M benchmark).
  • Amazon RDS connection ceiling: not cited in this post; the 16,000-per-instance figure is preserved from Reyes 2021.

Caveats

  • Marketing voice. The post is unsigned PlanetScale developer-advocacy copy, not engineer-authored. Architecture content is ~40–45% of the body; the remaining ~55–60% is pricing, positioning, and case-study quotes (Block, Cursor, Intercom, MyFitnessPal — "hyper-growth" framing, no TCO methodology disclosed).
  • Topo-server implementation undisclosed. The post names the topo-server's role (VTTablet writes, VTGate reads) but not its implementation — in production Vitess the topo-server is typically etcd or Consul or ZooKeeper. The concepts/vitess-topo-server page preserves this ambiguity and links to the Vitess docs for implementation detail.
  • VTGate statelessness not explicitly asserted. The post says VTGate "determines available tablets and their roles via the topo server and reroutes traffic as needed" — the wording implies VTGate is stateless (reads topo-server, routes), but does not confirm it. Treat statelessness as a load-bearing design inference; a Vitess-internals post should canonicalize it.
  • Edge-infrastructure framing is partial. "PlanetScale's edge infrastructure then acts as a frontend load balancer, terminating MySQL connections in the closest edge location" — ties to systems/planetscale-global-network and concepts/mysql-connection-termination-at-edge but the edge ↔ VTGate connection is not described (same region? cross-region tunnel? connection pool between edge and VTGate?). Preserved as topology sketch; no mechanism disclosed.
  • "Technically unlimited" is the marketing phrasing, not a correctness claim. The architectural enabling mechanism is connection pooling + queueing; the ceiling is the queue depth + back-end pool capacity, not absolute. The 1M-benchmark post is the empirical anchor — benchmarked, not guaranteed.
  • Tier-3 scope disposition: this post clears the scope filter only on the strength of the VTGate / VTTablet / topo-server architectural paragraph, which canonicalizes three new wiki pages that were overdue given 18 sources referencing them. Marketing/pricing/case-study sections are explicitly not reproduced.
  • 2023-era data points: "PlanetScale is cloud-agnostic and supports multiple cloud platforms" (AWS + GCP, October 2023 state). Vendor landscape has evolved since; treat as point-in-time.
