PlanetScale — PlanetScale vs. Amazon Aurora¶
Summary¶
PlanetScale's 2023-10-05 vendor-comparison post positions PlanetScale (managed Vitess/MySQL) against Amazon Aurora across scaling, load-balancing, connection management, cost, management, change-management, and platform-lineage axes. The post is marketing-voice and most of its architectural substance is already canonicalised elsewhere on the wiki via the prior Reyes RDS-vs-PlanetScale (2021) and Morrison II branching-vs-Aurora-blue-green (2024-02) ingests. The durable new contributions from this post that clear the Tier-3 bar:
- First wiki canonicalisation of Aurora's storage-layer geometry: "automatically scales and replicates storage in 10GB increments, distributing it 6-times across 3 availability zones on a single unified storage layer" — 6-way replication across 3 AZs in 10GB chunks on a unified storage substrate decoupled from compute. Canonical Aurora storage-layer datum the wiki had not yet carried.
- First wiki datum on Aurora's 15-read-replicas-per-primary-instance ceiling across different availability zones.
- First wiki datum on Aurora Backtrack — a point-in-time recovery primitive that "can be used to restore the database to a point-in-time (PIT) to recover from DDL operations that introduce a breaking change" — Aurora's answer to the absence of native online-schema-change tooling.
- First wiki datum on Aurora Serverless V2 and I/O Optimized pricing mode for high-IOPS workloads — two Aurora pricing-surface primitives the wiki had not yet enumerated.
- First canonical wiki walkthrough of the full Vitess request-flow path: "application → load balancer → VTGate → VTTablet → MySQL" with explicit role attribution (VTGate = application-level query routing; VTTablet = middleware between VTGate and MySQL managing connection pooling + health checks; topo-server = role-state registry). Extends Vitess's existing canonical content with the end-to-end hop-sequence framing.
Everything else in the post is re-framing of canonicalised PlanetScale positions (16k-RDS-connection-ceiling, Vitess connection pooling, database branching + non-blocking schema changes, cost positioning on commodity EC2, vector search roadmap). Preserved as an additional PlanetScale vendor-voice anchor, with architectural density ~30–40 % of body.
Key takeaways¶
- Aurora storage is 6-way replicated across 3 AZs in 10GB increments. The post's verbatim canonical datum: "Aurora provides 15 replicas per primary instance in different availability zones. The system automatically scales and replicates storage in 10GB increments, distributing it 6-times across 3 availability zones on a single unified storage layer. It does this without affecting compute resources." This is the first wiki canonicalisation of Aurora's distributed-storage geometry; complements systems/amazon-aurora's existing blue/green + copy-on-write coverage with the substrate-level durability model. The compute-storage separation property ("without affecting compute resources") is the architectural lever that distinguishes Aurora from classic RDS MySQL / PostgreSQL. See concepts/storage-replication-for-durability.
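The geometry the post describes can be made concrete with a small sketch: data split into 10 GB segments, each stored 6 times across 3 AZs (so 2 copies per AZ). This is back-of-envelope arithmetic on the post's numbers only; it assumes nothing about Aurora internals beyond them (e.g. quorum sizes are deliberately not modelled).

```python
# Sketch of the storage geometry in the post: 10 GB segments, each
# replicated 6 ways across 3 AZs (2 copies per AZ), on a storage layer
# decoupled from compute. Pure arithmetic, no Aurora internals assumed.
import math

SEGMENT_GB = 10
COPIES = 6
AZS = 3

def storage_layout(db_size_gb: int) -> dict:
    """Segment count and raw replicated footprint for a database of db_size_gb."""
    segments = math.ceil(db_size_gb / SEGMENT_GB)
    return {
        "segments": segments,
        "copies_per_segment": COPIES,
        "copies_per_az": COPIES // AZS,  # 2 copies land in each AZ
        "raw_storage_gb": segments * SEGMENT_GB * COPIES,
    }

# A 250 GB database: 25 segments, each stored 6x (2 per AZ).
layout = storage_layout(250)
```

The point of the arithmetic is the durability multiplier: every logical gigabyte costs six physical gigabytes on the storage substrate, billed and scaled independently of compute.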
- Aurora supports 15 read replicas per primary instance across AZs. Per-instance ceiling on Aurora's horizontal-read-scaling axis. The post frames Aurora read scaling as "users can horizontally scale read operations or add additional instances to distribute database operations to", using an Aurora reader endpoint as the load-balanced read entry point. Canonical read-replica pattern with a specific Aurora-fleet ceiling.
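The reader-endpoint pattern reduces to statement-level routing: writes to the single writer endpoint, reads to the load-balanced reader endpoint fronting up to 15 replicas. A minimal sketch, with hypothetical endpoint names (Aurora's real cluster endpoints follow a similar `cluster-` / `cluster-ro-` shape, but these hostnames are placeholders):

```python
# Minimal reader-endpoint routing sketch. Endpoint hostnames are
# illustrative placeholders, not a real cluster.
WRITER = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"
MAX_READ_REPLICAS = 15  # Aurora's per-primary ceiling, per the post

def endpoint_for(sql: str) -> str:
    """Route reads to the reader endpoint, everything else to the writer."""
    is_read = sql.lstrip().lower().startswith(("select", "show"))
    return READER if is_read else WRITER
```

Behind the reader endpoint, Aurora load-balances across however many of the (at most 15) replicas exist; the application only ever sees the two DNS names.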
- Aurora Backtrack is Aurora's PITR-based schema-revert workaround. Aurora lacks native online schema change tooling ("Amazon Aurora does not provide native tooling for schema changes, and does not support online schema changes"). The post canonicalises Aurora Backtrack as the emergency revert path: "Although this is not technically a schema change revert, Aurora Backtrack can be used to restore the database to a point-in-time (PIT) to recover from DDL operations that introduce a breaking change." Companion datum to the Morrison II 2024-02 branching-vs-blue-green framing of Aurora's "no revert path" gap — this post names the one PIT-style escape hatch Aurora offers, subject to the blast-radius cost of rolling back the entire database to a timestamp.
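The operative constraint (detailed in the caveats below) is the backtrack window: the target timestamp must fall inside a configurable retention window of at most 72 hours, and the rewind applies to the whole cluster. A sketch of just that validation logic, as pure Python; the function name and shape are illustrative, not an AWS API:

```python
# Sketch of the Backtrack target constraint: the requested point-in-time
# must sit inside the cluster's configured backtrack window (at most
# 72 hours). Validation logic only; the real operation is a
# whole-cluster rewind issued via the RDS API.
from datetime import datetime, timedelta, timezone

MAX_WINDOW = timedelta(hours=72)

def validate_backtrack_target(target: datetime, window: timedelta,
                              now: datetime) -> None:
    """Raise ValueError if the requested PIT is not backtrack-reachable."""
    if window > MAX_WINDOW:
        raise ValueError("backtrack window cannot exceed 72 hours")
    if target > now:
        raise ValueError("cannot backtrack to a future timestamp")
    if now - target > window:
        raise ValueError("target is older than the configured backtrack window")

now = datetime(2023, 10, 5, 12, 0, tzinfo=timezone.utc)
validate_backtrack_target(now - timedelta(hours=6), timedelta(hours=24), now)  # ok
```

Anything older than the window, or any attempt to rewind a single table rather than the whole database, is out of scope for Backtrack, which is what makes it a workaround rather than a schema-revert primitive.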
- Vitess request-flow path: application → load balancer → VTGate → VTTablet → MySQL. Canonical wiki enumeration of the hop sequence. VTGate = "an application-level query routing layer". VTTablet = "middleware between VTGate and MySQL" — manages connection pooling + performs MySQL health checks + publishes state to the topo-server. VTGate uses topo-server state to determine available tablets + their roles and reroutes traffic as needed. PlanetScale's edge infrastructure sits in front of VTGate as a frontend load balancer, "terminating MySQL connections in the closest edge location" — the canonical CDN-like database connectivity pattern + two-tier pool already canonicalised in the 2022-11 van Dijk 1M-connections post.
- Aurora Serverless V2 + I/O Optimized are Aurora's high-variability and high-IOPS pricing primitives. "For users that experience high workload variability, Amazon Aurora offers Serverless V2. There are many ways to purchase Aurora, including paying-as-you-go, reserving resources, and opting into I/O Optimized. I/O Optimized is a pricing model available for customers with high Input/Output operations." First wiki enumeration of Aurora's pricing-surface variants beyond standard on-demand.
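The Standard-vs-I/O-Optimized trade-off the post gestures at is crossover arithmetic: Standard bills per I/O request, I/O-Optimized folds I/O into a higher compute rate. A back-of-envelope sketch; all rates below are illustrative placeholders, not current AWS prices.

```python
# Back-of-envelope Standard vs I/O-Optimized comparison. The per-million-I/O
# rate and the instance uplift are placeholder assumptions for illustration,
# not AWS list prices.
def standard_cost(instance_usd: float, io_requests_millions: float,
                  usd_per_million_io: float = 0.20) -> float:
    """Standard mode: compute plus metered I/O requests."""
    return instance_usd + io_requests_millions * usd_per_million_io

def io_optimized_cost(instance_usd: float,
                      instance_uplift: float = 0.30) -> float:
    """I/O-Optimized mode: I/O charges drop to zero, compute carries an uplift."""
    return instance_usd * (1 + instance_uplift)

# A high-IOPS workload (50,000M I/O requests/month on a $1,000/month instance)
# comes out far cheaper under I/O-Optimized at these assumed rates.
high_iops_standard = standard_cost(1000, 50_000)
high_iops_optimized = io_optimized_cost(1000)
```

Under these assumed rates the break-even point is where metered I/O spend equals the compute uplift; below it, Standard wins, which is why I/O-Optimized is pitched specifically at high-IOPS workloads.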
- PlanetScale's cost framing: commodity EC2 right-sizing vs Aurora's proprietary storage layer. The post argues PlanetScale is "objectively cheaper on infrastructure" because "PlanetScale right-sizes resources on many commodity-grade AWS EC2 instances, or the equivalent on other cloud providers, which prevents over provisioning and keeps cost in-line with the user's actual workload." Canonical framing: Aurora bundles compute + proprietary storage (with its own IOPS-pricing surface) while PlanetScale sits on commodity EC2 + commodity storage — same scale-out mechanics, different cost substrate.
- Multi-cloud vs AWS-only positioning. PlanetScale is cloud-agnostic (deployable on AWS, GCP, and others), with a PlanetScale Managed SKU running inside customer-owned AWS/GCP sub-accounts. Aurora is "intentionally hyper-compatible with AWS tooling" and AWS-only. Positioning axis for customers with multi-cloud or sovereign-cloud requirements.
- Vitess lineage claim: "only solution built on open-source Vitess, democratized." Historical + organisational positioning: PlanetScale was co-founded by Sugu Sougoumarane (original Vitess co-creator at YouTube, author of the 2022 consensus-algorithms-at-scale series), and PlanetScale employs several of the Vitess maintainers as top Vitess contributors. Not architectural but load-bearing for the "managed Vitess is PlanetScale's moat" positioning.
Systems / concepts / patterns surfaced¶
- New concepts: (none net-new — Aurora's 6-way-3-AZ storage geometry + 15-replica ceiling + Backtrack PITR are all captured as extensions to systems/amazon-aurora rather than standalone concept pages, per the Tier-3 minimal-viable-page discipline.)
- New patterns: (none — all architectural patterns in the post are already canonical: patterns/read-replicas-for-read-scaling, patterns/cdn-like-database-connectivity-layer, patterns/two-tier-connection-pooling, patterns/branch-based-schema-change-workflow.)
- Extended: systems/amazon-aurora (first canonical 6-way-3-AZ storage-geometry disclosure + 15-replica ceiling + Backtrack PITR + Serverless V2 + I/O Optimized pricing primitives); systems/vitess (full request-flow hop-sequence + VTGate/VTTablet/topo-server role attribution); systems/planetscale (2023-era Aurora-comparison voice); systems/mysql (2023-era MySQL-vs-Aurora-vs-PlanetScale framing); systems/aws-rds (contrast framing — Aurora as the "more performant" sibling of RDS).
Caveats¶
- Marketing voice. The post is PlanetScale-authored with no byline — not one of the canonical deep-internals voices (Dicken / Lambert / Noach / Taylor / Sougoumarane). Architectural density ~30–40 % of body; the rest is product positioning, customer logos (Block / Cursor / Intercom / MyFitnessPal), and pricing comparison.
- "Aurora does not support MySQL 8" claim is date-sensitive. The 2023-10-05 publication date pre-dates Aurora's general availability of MySQL 8 compatibility. Not reproduced as a canonical datum here — preserved as 2023-era context only.
- Vector search claim is roadmap. "PlanetScale will soon introduce vector search and storage, which is not currently available in MySQL." The subsequent 2024-10-22 Vectors public beta and 2026-04-21 Vectors GA posts deliver on this; retroactively the roadmap claim is validated.
- Connection-scaling claim is underspecified here. The post says "technically unlimited connections" without the 1M-concurrent-connections benchmark anchor that van Dijk 2022-11-01 provides. Read that post for the empirical datum.
- No quantified Aurora vs PlanetScale cost comparison. "PlanetScale plans tend to be objectively cheaper on infrastructure than running on Amazon Aurora" is asserted without $/IOPS or $/TB worked example. The 2024-02-15 Aurora pricing surprising costs post is the companion post with actual numbers.
- Aurora Backtrack scope not bounded. The post names Backtrack as the revert path for DDL breakage without naming the (a) configurable retention window (currently up to 72 hours); (b) Aurora-MySQL-only constraint (not supported on Aurora PostgreSQL); (c) blast radius (entire-database-rewind, not per-table); (d) write-unavailable during rewind. The 2024-02 Morrison II branching-vs-blue-green post takes the harder line that "Amazon does not permit you to fail back in any way" — the two claims need reconciliation: Backtrack exists but has a narrow scope (Aurora MySQL only, 72h window, whole-DB-rewind, write-unavailable), so Morrison's no-fail-back framing is correct for the "schema revert preserving post-merge writes" axis that PlanetScale's Schema Revert targets.
- Topo-server not named as etcd. The post says "VTTablet will manage connection pooling and perform health checks for MySQL instances, updating its status in a topo-server" without naming the underlying KV store (etcd). Detail elided at vendor-comparison altitude.
- Vitess lineage history elided. "Vitess is the open-source middleware technology developed at YouTube in 2010" — the YouTube-to-CNCF donation history is elided (Vitess was donated to CNCF in 2018).
Source¶
- Original: https://planetscale.com/blog/planetscale-vs-amazon-aurora
- Raw markdown: raw/planetscale/2026-04-21-planetscale-vs-amazon-aurora-34df82f7.md
Related¶
- systems/amazon-aurora — the other side of the comparison; this post adds the 6-way-3-AZ-10GB storage geometry + 15-replica ceiling + Backtrack + Serverless V2 + I/O Optimized datums.
- systems/planetscale — the system being positioned.
- systems/vitess — the substrate; this post canonicalises the full VTGate → VTTablet → MySQL request flow with topo-server role attribution.
- systems/mysql — both products' wire protocol.
- systems/aws-rds — Aurora's sibling RDS family.
- systems/aurora-global-database — cross-region Aurora companion.
- sources/2026-04-21-planetscale-comparing-awss-rds-and-planetscale — 2021 Reyes companion vendor-comparison post at the same altitude but targeting RDS rather than Aurora.
- sources/2026-04-21-planetscale-planetscale-branching-vs-amazon-aurora-bluegreen-deployments — 2024-02 Morrison II vendor-comparison post at the mechanism-altitude on blue/green-vs-branching; dual to this post.
- sources/2024-02-15-planetscale-amazon-aurora-pricing-surprising-costs — 2024-02 Morrison II cost-analysis companion with actual dollar numbers.
- sources/2026-04-21-planetscale-one-million-connections — 2022-11 van Dijk empirical anchor for the "technically unlimited connections" claim this post makes without numbers.
- patterns/cdn-like-database-connectivity-layer — PlanetScale's edge + VTGate + VTTablet two-tier substrate.
- patterns/two-tier-connection-pooling — edge pool + VTTablet in-cluster pool.
- patterns/branch-based-schema-change-workflow — PlanetScale's answer to Aurora's lack of online-schema-change tooling.