PlanetScale — PlanetScale branching vs. Amazon Aurora blue/green deployments¶
Summary¶
Brian Morrison II (PlanetScale, 2024-02-02, re-fetched 2026-04-21) publishes a point-by-point vendor comparison between Amazon Aurora's blue/green deployments feature and PlanetScale's database branching. Both ship under the superficially similar framing "clone production, change the clone, cut traffic over" but have fundamentally different architectures once the covers come off: Aurora uses a copy-on-write storage clone glued together with binlog replication between the two clusters; PlanetScale creates an isolated Vitess cluster per branch and applies schema changes as non-blocking gh-ost-style ghost-table migrations on the upstream branch at merge time.

The architectures diverge on every axis the post surveys: schema-change compatibility (Aurora is limited to binlog-replicatable changes; PlanetScale analyses the delta and generates ordered DDL), instance-class scaling (Aurora drops connections during switchover; PlanetScale does rolling tablet upgrades), version upgrades (Aurora requires a planned switchover with downtime; PlanetScale does validated rolling upgrades without a maintenance window), cost (Aurora doubles the compute footprint during the green-side lifetime; PlanetScale "only spin[s] up the resources you need to make schema changes"), reversibility (Aurora has no revert path — a new blue/green deployment must be created to undo; PlanetScale has Schema Revert, which preserves post-merge writes via the ghost table kept in sync), data consistency (Aurora's copy-on-write storage clone permits writes on both sides with no reconciliation mechanism; PlanetScale's branches are protected from DDL on the production side), and downtime (Aurora "drops all active connections" during switchover even on the "less than a minute" happy path; PlanetScale "does not drop connections for common maintenance tasks").
This is Brian Morrison II's fifth wiki ingest (after the 2023-02-09 Postgres-to-MySQL post, the 2023 declarative-schema posts, the 2024-01-08 isolation-levels post, and the 2023-11-15 replication-best-practices post). It is the first canonical wiki disclosure of Aurora blue/green deployments as a mechanism, and it fills the vendor-comparison axis contrasting the two schema-change approaches. The post is structurally a marketing comparison, but its architectural density is high (~80%) because every comparison axis is grounded in actual mechanism (binlog replication, copy-on-write storage, ghost tables, Vitess + Kubernetes routing) rather than feature bullets.
Key takeaways¶
- Aurora blue/green deployments = copy-on-write storage clone + binlog replication between the two clusters. Verbatim: "AWS will create a clone of your current Aurora cluster and spin up a brand new 'green' environment. Once the new cluster is configured, AWS will configure binlog replication between the two clusters to keep changes synchronized between them." The blue environment ships binlog events to the green environment; schema / config changes happen on green and the differences propagate. (Source: Morrison II, 2024.)
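The sync topology above can be sketched as a toy event-shipping loop. This is an illustrative model only (class and method names are invented here, not an AWS API): the blue cluster keeps taking writes and appends change events; the green clone replays whatever it has not yet applied.

```python
# Toy sketch of the blue/green sync topology: blue (production) applies
# writes and ships binlog-style events; green replays them to stay in
# sync. Illustrative only -- real binlog replication carries GTIDs,
# row images, and far more.
class Cluster:
    def __init__(self, name, rows=None):
        self.name = name
        self.rows = dict(rows or {})   # pk -> row
        self.binlog = []               # ordered change events

    def write(self, pk, row):
        self.rows[pk] = row
        self.binlog.append(("upsert", pk, row))

    def replicate_to(self, replica):
        # Ship every event the replica has not applied yet.
        for event in self.binlog[len(replica.applied):]:
            replica.apply(event)

class Replica(Cluster):
    def __init__(self, name, rows=None):
        super().__init__(name, rows)
        self.applied = []

    def apply(self, event):
        op, pk, row = event
        if op == "upsert":
            self.rows[pk] = row
        self.applied.append(event)

blue = Cluster("blue", {1: "a"})
green = Replica("green", blue.rows)    # stand-in for the storage clone
blue.write(2, "b")                     # production keeps taking writes
blue.replicate_to(green)               # binlog catch-up
assert green.rows == blue.rows
```

The point the model makes is the same one the post makes: green only stays correct for as long as the event stream covers every change, which is exactly why the permitted schema-change envelope is bounded by what binlog replication can express.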
- Switchover is the cutover primitive and has a scripted guardrails lifecycle. Verbatim list: AWS runs compatibility + long-running-operations checks, then (a) stops new writes and drops all connections, (b) waits for final writes and replication to catch up, (c) switches blue to read-only, (d) renames resources so green adopts the original names and blue gets an old- prefix. The left-over blue environment stays online post-switchover (incurring ongoing cost) but is not usable as a fallback — Aurora provides "no easy or automated way" to fail back. "This means that if you need to revert the changes made in the green environment, you will need to create a new blue/green deployment, undo any previously made changes, and perform a switchover to the blue environment."
- Aurora's permitted schema-change envelope is bounded by binlog-replication compatibility. Verbatim: "schema changes should be limited to what is currently supported by binlog replication in MySQL." Examples called out as supported: adding columns at the end of a table, creating indexes, dropping indexes — the minor `ALTER`s that binlog-replicate safely across a non-identical schema. Structural changes that the MySQL binlog protocol doesn't round-trip cleanly (column type narrowing, column renames, mid-table adds, charset migrations of non-trivial shape) are out of scope for Aurora blue/green.
- PlanetScale branching creates an isolated [[systems/vitess|Vitess]] cluster per branch. Verbatim: "Each branch in PlanetScale's branching technology for Vitess constitutes its own Vitess cluster and includes several infrastructure pieces." The substrate is explicitly named: data resides on a tablet (a Kubernetes pod running `mysqld` with a `vttablet` sidecar), `vtgate` is the "lightweight proxy routing service" routing MySQL traffic via the topology service, and automatic tablet replacement happens transparently ("If a tablet goes down for any reason, our systems automatically reroute traffic to a functional tablet and allocate another tablet to replace the downed instance"). Production branches always have at least one replica; paid tiers add more; development branches default to a low-cost instance shape (customisable on Enterprise).
- Schema changes merge from branch to production via deploy requests + a ghost table built at merge time. Verbatim: "PlanetScale approaches this differently by analyzing the delta between the two schemas and generating the DDL statements to execute on the upstream branch in the correct order. This operation is done using a deploy request. When the deploy request is merged, a 'ghost' table is created on the upstream database to apply the new schema. The data is then synchronized between the old table and this new 'ghost' table until you can put it into production." This is the canonical shadow-table mechanism that gh-ost pioneered.
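The merge-time flow can be sketched as a shadow-table copy-and-swap. This is a simplified model of the gh-ost-style mechanism the post names; the function, table names, and the in-memory "tables" dict are all illustrative.

```python
# Simplified ghost-table migration: build the new-schema table, backfill
# from the old table, mirror live writes into both, then swap. The old
# table is retained, which is what later enables Schema Revert.
def ghost_migrate(tables, name, transform, live_writes):
    ghost = {pk: transform(row) for pk, row in tables[name].items()}  # backfill
    for pk, row in live_writes:           # writes arriving during the copy
        tables[name][pk] = row            # applied to the production table
        ghost[pk] = transform(row)        # ...and mirrored into the ghost
    tables[name + "_old"] = tables[name]  # retain the former production table
    tables[name] = ghost                  # cutover: ghost table goes live

tables = {"users": {1: {"email": "a@x"}, 2: {"email": "b@x"}}}
add_column = lambda row: {**row, "verified": False}  # the schema change
ghost_migrate(tables, "users", add_column, live_writes=[(3, {"email": "c@x"})])
assert tables["users"][3] == {"email": "c@x", "verified": False}
assert 3 in tables["users_old"]           # old table stayed in sync too
```

The design point worth noting: because writes during the copy land in both tables, the cutover is a pointer flip rather than a stop-the-world rewrite, which is why the upstream branch never blocks.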
- PlanetScale supports Data Branching® — creating a branch with a copy of production data by restoring the latest production backup. Verbatim: "PlanetScale supports Data Branching®, allowing you to create a new branch with a copy of your production data by restoring the latest version of a production backup to the new branch. This enables developers to have an isolated environment to test new features or run analytics without affecting their production environment." Base plan and above.
- Instance-class resizing diverges sharply. On Aurora with blue/green: "changing the instance class to scale up or down … Switchover can reduce downtime for such operations, but it is still disruptive since all client connections will be dropped during the process." On PlanetScale: "we can seamlessly perform rolling upgrades to the tablets without taking down your database. This is based on our use of Vitess on Kubernetes. To make instance type changes, you'd select the new instance type, and our backend systems will do the rest. This allows your applications to continue to operate without being taken offline." This is the canonical wiki contrast between coordinated-fleet-switchover and rolling-upgrade idioms at the database tier.
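The rolling-upgrade contrast can be made concrete with a toy loop: tablets are replaced one at a time, so the fleet never drops below n−1 serving members. This is a model of the idiom, not Vitess's actual orchestration logic.

```python
# Rolling instance-class change: swap tablets one at a time so capacity
# never drops to zero -- contrast with blue/green switchover, which
# drops every client connection at once during cutover.
def rolling_upgrade(fleet, new_class, serving_log):
    for i in range(len(fleet)):
        serving = [t for j, t in enumerate(fleet) if j != i]
        serving_log.append(len(serving))   # capacity while tablet i is swapped
        fleet[i] = {"class": new_class, "serving": True}  # replacement tablet
    return fleet

fleet = [{"class": "small", "serving": True} for _ in range(3)]
log = []
fleet = rolling_upgrade(fleet, "large", log)
assert all(t["class"] == "large" for t in fleet)
assert min(log) == 2   # at least two of three tablets serving at every step
```

Clients still need retry logic for the one tablet in flight (the post elides this per-tablet quiesce cost), but the `vtgate` proxy tier absorbs the rerouting, which is why the macro-level experience is "seamless".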
- MySQL version upgrades: Aurora's default path (in the automatic-minor-upgrade maintenance window) "can be disruptive due to the downtime to apply the changes or due to undesired behavior changes in the database software"; blue/green reduces downtime but is "a labor-intensive task and requires significant planning". PlanetScale validates new versions centrally ("PlanetScale engineers carefully verify that new versions are compatible with the system before they are applied") then applies them via rolling upgrades "taking advantage of the Vitess routing and Kubernetes technologies".
- Cost asymmetry during the green-side lifetime. Verbatim: "if you have one write node and three reader nodes, there will be a point in time where you are effectively paying for eight total nodes (4 in blue and 4 in green) and only really using half of them. Once a switchover is performed, your blue environment is left running, which incurs additional costs." PlanetScale positions: "we only initially spin up the resources you need to make schema changes. If you want more resources or want to use your production data, you can do so, but it is not a necessity."
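The quoted node arithmetic reduces to a one-liner (a sketch of the post's 1-writer + 3-readers example; no pricing figures are assumed):

```python
# Aurora green-side lifetime compute: the green clone mirrors the blue
# fleet 1:1, so the node count doubles while only half of it serves.
def green_lifetime_nodes(writers, readers):
    blue = writers + readers
    green = blue              # clone is sized 1:1 with production
    return blue + green

total = green_lifetime_nodes(writers=1, readers=3)
assert total == 8             # "paying for eight total nodes"
assert total // 2 == 4        # "only really using half of them"
```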
- Reversibility: Aurora has no revert path; PlanetScale has Schema Revert. Aurora "does not permit you to fail back in any way" — to undo a green-side change after switchover, operators must "create a new blue/green deployment, undo any previously made changes, and perform a switchover to the blue environment". PlanetScale's [[patterns/instant-schema-revert-via-inverse-replication|Schema Revert]]: "we retain the former production table for a period of time but continue to sync changes into it. When you revert the changes, our system will simply flip the statuses of the two tables, making the old production table active again, but it will contain the writes since the merge." Canonical datum: revert preserves post-merge writes via inverse sync on the retained table.
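The revert semantics can be sketched as a status flip over the retained table. All function and table names here are illustrative; the point is only that the retained table keeps receiving post-merge writes, so the flip loses nothing.

```python
# Schema Revert sketch: after a merge the former production table is
# retained and kept in sync, so revert is a status flip that preserves
# writes made since the merge.
def merge(db):
    db["users_old"] = db["users"]          # retain former production table
    db["users"] = dict(db["users"])        # ghost table (new schema) goes live

def write(db, pk, row):
    db["users"][pk] = row                  # live write to the active table...
    db["users_old"][pk] = row              # ...synced into the retained table

def revert(db):
    db["users"], db["users_old"] = db["users_old"], db["users"]  # flip statuses

db = {"users": {1: "a"}}
merge(db)
write(db, 2, "b")                          # a post-merge write
revert(db)
assert db["users"] == {1: "a", 2: "b"}     # revert kept the post-merge write
```

Aurora has no analogue of the retained-and-synced table, which is exactly why its only "revert" is a second full blue/green deployment.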
- Data-consistency risk from the copy-on-write storage clone. Verbatim: "Amazon's blue/green deployment initially duplicates only compute resources and clones data storage using a copy-on-write mechanism. This can help with storage costs when running parallel environments but introduces potential data inconsistencies across environments. Since writes are allowed in the green environment, the same data can technically be changed in both environments. If this happens, Amazon has no easy or automated way to reconcile which version is correct. Resolving conflicts is challenging, and the responsibility for data consistency falls on you." PlanetScale branches are structurally isolated (separate clusters); safe-migrations-enabled branches are "protected from DDL statements" to prevent the analogous failure.
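The copy-on-write risk can be modelled directly: pages are shared until one side writes, and a row written on both sides leaves two versions with nothing to reconcile them. This is a toy model of the failure mode, not Aurora's storage engine.

```python
# Copy-on-write fork: both environments read shared base pages until one
# side writes, which copies the page privately. A row that diverges on
# both sides has no automated reconciliation -- that burden is the
# operator's.
class CowFork:
    def __init__(self, base):
        self.base = base                 # shared, immutable pages
        self.blue, self.green = {}, {}   # per-side divergent pages

    def write(self, side, key, value):
        getattr(self, side)[key] = value

    def read(self, side, key):
        return getattr(self, side).get(key, self.base.get(key))

    def conflicts(self):
        # keys written on both sides with differing values
        return [k for k in self.blue.keys() & self.green.keys()
                if self.blue[k] != self.green[k]]

fork = CowFork({"row1": "v0"})
fork.write("blue", "row1", "v1")    # production write after the fork
fork.write("green", "row1", "v2")   # green-side write (Aurora allows this)
assert fork.conflicts() == ["row1"] # two versions, no reconciliation path
```

The model also shows the cost property the post cites: storage is only duplicated for diverged keys, never for the shared base.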
- Switchover downtime is not zero. Aurora's marketed "less than a minute" is the happy path; long-running operations "still permitted by Amazon's guardrails" extend the window "since the process needs to wait for those to complete." PlanetScale "does not drop connections for common maintenance tasks" via the Vitess proxy tier (canonical composition with two-tier connection pooling).
Systems / concepts / patterns surfaced¶
- New systems: none — Amazon Aurora (already a stub) is extended, not created; no new system pages.
- New concepts: concepts/blue-green-deployment (canonical wiki definition of the generic deployment pattern plus the Aurora-specific instance, with the copy-on-write-clone + binlog-replication composition and scripted-switchover lifecycle); [[concepts/copy-on-write-storage-fork]] (the Aurora storage-clone mechanism as a canonical wiki concept — fast clone, shared pages until diverged, ongoing storage cost only for divergent pages); [[concepts/rolling-upgrade]] (canonical wiki definition of the fleet-tablet-rolling-replace idiom under Vitess + Kubernetes as contrast to coordinated switchover); concepts/mysql-version-upgrade (major + minor version upgrade as an operational primitive, with two strategies — maintenance window vs rolling upgrade); concepts/schema-revert (the canonical wiki definition of reverting a deployed schema change while preserving post-merge writes); [[concepts/ghost-table-migration]] (the gh-ost-style mechanism as a canonical concept page distinct from concepts/shadow-table — same substrate, different altitude); concepts/data-branching (PlanetScale's Data Branching® — the branch-with-restored-production-data variant of concepts/database-branching).
- New patterns: [[patterns/blue-green-database-deployment]] (the Aurora-family architectural pattern: storage clone + binlog replication + scripted guardrails + switchover with connection drain + resource rename + blue retained read-only post-cutover — distinct from application-tier blue/green where the database is shared); [[patterns/rolling-instance-upgrade]] (the Vitess-family alternative: swap tablets one at a time, drain connections gracefully per tablet, no coordinated cutover — canonical wiki contrast to blue/green at the database tier).
- Extended: systems/amazon-aurora (first canonical wiki disclosure of the blue/green mechanism + switchover lifecycle + copy-on-write clone + post-switchover cost + reversibility gap); [[systems/aws-rds]] (blue/green also applies to RDS instances); systems/planetscale (branching-vs-blue/green vendor-comparison framing); systems/vitess (this is the tablet-as-unit-of-rolling-upgrade canonical disclosure); systems/mysql (binlog-replication-as-blue/green-sync-mechanism constraint on the schema-change envelope); systems/gh-ost (PlanetScale uses the gh-ost mechanism for the ghost-table migration under deploy requests); concepts/binlog-replication (canonical wiki framing as the Aurora blue/green sync substrate); concepts/database-branching (canonical framing as the alternative to blue/green deployments); [[concepts/pre-flight-schema-conflict-check]] (branching + deploy-request lifecycle); [[patterns/shadow-table-online-schema-change]] (ghost table under PlanetScale deploy requests); [[patterns/instant-schema-revert-via-inverse-replication]] (Schema Revert retaining the post-merge ghost table as the inverse-replication substrate); [[patterns/branch-based-schema-change-workflow]] (canonical contrast to blue/green).
Operational numbers¶
- Aurora blue/green switchover happy path: "less than a minute" advertised; actual window = guardrails check duration + final write / replication catch-up + connection-drain + resource-rename + DNS / endpoint flip. All client connections dropped during the window.
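The switchover lifecycle enumerated above can be sketched as ordered steps. Function, field, and resource names here are illustrative stand-ins, not AWS identifiers; the value of the sketch is the ordering and the guardrail check.

```python
# Scripted switchover lifecycle, in order: guardrail check, stop writes /
# drop connections, drain replication, blue -> read-only, rename so
# green adopts the production endpoints.
def switchover(blue, green, log):
    assert not blue["long_running_ops"], "guardrail: wait for long ops"
    blue["accepting_writes"] = False; log.append("connections dropped")
    green["replica_lag"] = 0;         log.append("replication caught up")
    blue["read_only"] = True;         log.append("blue read-only")
    blue["name"], green["name"] = "mydb-old", "mydb"
    log.append("renamed: green is now production")

blue = {"name": "mydb", "accepting_writes": True, "read_only": False,
        "long_running_ops": []}
green = {"name": "mydb-green", "replica_lag": 3}
log = []
switchover(blue, green, log)
assert green["name"] == "mydb" and blue["name"] == "mydb-old"
assert log[0] == "connections dropped"   # downtime starts at step one
```

Note that connection dropping is the *first* step after the guardrail check, so every later step (catch-up, rename, endpoint flip) extends the downtime window — which is why long-running operations pull the "less than a minute" tail.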
- Compute cost during green lifetime: effectively 2× the production footprint (Aurora clones read replicas 1:1 with the primary). Worked example: 1 primary + 3 replicas = 4 nodes; green clone adds 4 more; total 8 during the green-side lifetime, of which 4 are actively serving.
- Post-switchover cost: blue environment "is left running" at the operator's discretion; no automatic teardown. Ongoing cost until explicitly deleted.
- Storage cost: copy-on-write mitigates the initial storage duplication cost; divergent pages accrue incrementally.
- Schema-change envelope: bounded by MySQL binlog replication compatibility — additive columns + indexes OK; mid-table column adds + column renames + type narrowing + some charset changes out of scope.
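The envelope check can be sketched as a naive classifier over the change shapes this post names. The category sets below come only from the post's examples, not from an exhaustive Aurora rule set; anything outside them is deliberately unclassified.

```python
# Naive classifier for the blue/green schema-change envelope, using only
# the change shapes named in the post. The real boundary is "whatever
# MySQL binlog replication tolerates across non-identical schemas".
REPLICATION_SAFE = {"add_column_at_end", "create_index", "drop_index"}
REPLICATION_UNSAFE = {"rename_column", "narrow_column_type",
                      "add_column_mid_table", "nontrivial_charset_change"}

def blue_green_compatible(change):
    if change in REPLICATION_SAFE:
        return True
    if change in REPLICATION_UNSAFE:
        return False
    raise ValueError(f"not classified by this post: {change}")

assert blue_green_compatible("create_index")
assert not blue_green_compatible("rename_column")
```

PlanetScale sidesteps this classification entirely: the ghost-table approach rebuilds the table under the new schema, so the change never has to round-trip through replication between mismatched schemas.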
- PlanetScale revert window: canonical wiki datum from the sibling [[sources/2026-04-21-planetscale-behind-the-scenes-how-schema-reverts-work|Schema Reverts post]] is 30 minutes (not restated in this post).
- PlanetScale tablet upgrade: rolling; no quantitative datum disclosed in the post (tablet-replacement latency, rolling-fleet upgrade duration, connection-drain overlap — all elided).
Caveats¶
- Vendor-comparison voice. This is a marketing post that positions PlanetScale favourably. Every comparison axis is framed to highlight Aurora limitations; the Aurora mechanism is described accurately but the framing is unambiguously advocative. Aurora strengths not engaged: SLA commitments, IAM + VPC integration depth, existing RDS ecosystem tooling, predictable pricing at the contracted instance tier.
- Aurora schema-change envelope not enumerated exhaustively. The post names the boundary ("supported by binlog replication") without listing every compatible vs incompatible `ALTER` shape. Operators evaluating Aurora blue/green should consult the official Aurora blue/green docs for the current enumeration.
- PlanetScale ghost-table mechanism hand-waved. The ghost-table + sync-then-swap pattern is named but the full `gh-ost` mechanism isn't walked through — the patterns/shadow-table-online-schema-change page and systems/gh-ost page together cover the details across related ingests.
- Rolling-upgrade latency + topology not quantified. PlanetScale's rolling tablet upgrade is claimed without telemetry — no tablet-replacement duration, no connection-drain overlap, no fleet-wide rolling-upgrade completion time. The qualitative "seamless" framing is accurate at the macro level but elides the per-tablet quiesce cost + client-side retry obligations (via the `vtgate` proxy tier).
- Data Branching® mechanism not detailed. The post names Data Branching® as "restoring the latest version of a production backup" without covering backup encryption, restore duration on large datasets, storage cost on the branch, or the relationship to the [[sources/2026-04-21-planetscale-faster-backups-with-sharding|VTBackup / Singularity]] substrate canonicalised in the sibling 2024-07-30 post.
- Copy-on-write storage clone conflict-resolution left to operator. Aurora's two-side-writeable window is a real failure mode but the post asserts it as a design criticism without describing operator workflows for avoiding it ("don't write to green" is the obvious answer; the post could have framed the copy-on-write clone as write-to-green-only-for-DDL rather than writes-allowed-on-both-sides).
- PlanetScale's "protected from DDL" claim has a scope. "Branches with safe migrations enabled (which is required to use deploy requests) are protected from DDL statements" — this is a configuration-dependent claim, not a structural one. Operators who disable safe migrations can re-open the Aurora-equivalent data-consistency risk.
- No cross-cluster Aurora-Global + PlanetScale-Portals comparison. Both products have cross-region read-replica stories; neither is engaged in this post. See the [[sources/2026-04-21-planetscale-introducing-planetscale-portals-read-only-regions|Portals post]] for the PlanetScale side; Aurora Global Database is the RDS-family analogue.
- No quantified cost comparison for the green-side lifetime. The post states Aurora effectively pays for the doubled node count during the green-environment lifetime but doesn't provide a worked dollar figure. The sibling [[sources/2026-04-21-planetscale-increase-iops-and-throughput-with-sharding|Dicken 2024-08-19 post]] provides the IOPS-cost-cliff analogue on a different axis.
- MySQL-only scope. Aurora Postgres has its own blue/green semantics (introduced later, different schema-change envelope); not engaged here. Likewise PlanetScale for Postgres hadn't launched in early 2024.
- "Less than a minute" switchover is nominal, not guaranteed. The post acknowledges this but doesn't survey actual distribution of real-world switchover durations. Long-running operations + replica lag + DNS propagation all pull the tail.
- 2024-02-02 publication — pre-dates PlanetScale Metal (March 2025). The rolling-tablet-upgrade framing is agnostic to Metal vs cloud-storage-backed tablets but the infrastructure substrate mentioned ("Vitess on Kubernetes") predates the Metal launch.
Source¶
- Original: https://planetscale.com/blog/planetscale-branching-vs-amazon-aurora-blue-green-deployments
- Raw markdown:
raw/planetscale/2026-04-21-planetscale-branching-vs-amazon-aurora-bluegreen-deployments-a388a681.md
Related¶
- systems/planetscale — product positioned against Aurora blue/green on seven axes.
- systems/amazon-aurora — the competitor; this is the canonical wiki disclosure of Aurora blue/green mechanism.
- systems/aws-rds — Aurora's parent family; blue/green applies to RDS too.
- systems/vitess — the Kubernetes substrate under PlanetScale branching + rolling upgrades.
- systems/gh-ost — the mechanism PlanetScale uses for ghost-table migrations under deploy requests.
- systems/mysql — binlog replication is the Aurora blue/green sync substrate.
- concepts/blue-green-deployment — the generic deployment pattern + Aurora's database-tier instance.
- concepts/copy-on-write-storage-fork — the Aurora storage clone mechanism.
- concepts/rolling-upgrade — the Vitess alternative to coordinated switchover.
- concepts/database-branching — PlanetScale's primitive compared against Aurora blue/green.
- concepts/data-branching — the branch-with-production-data variant.
- concepts/binlog-replication — the Aurora blue/green sync mechanism + its schema-change envelope constraint.
- concepts/schema-revert — PlanetScale's reversibility primitive contrasted with Aurora's no-revert gap.
- concepts/ghost-table-migration — the PlanetScale merge-time mechanism.
- concepts/mysql-version-upgrade — two strategies compared.
- concepts/deploy-request — the merge primitive on the PlanetScale side.
- patterns/blue-green-database-deployment — canonical Aurora-family pattern.
- patterns/rolling-instance-upgrade — canonical Vitess-family alternative.
- patterns/shadow-table-online-schema-change — the gh-ost mechanism on the PlanetScale side.
- patterns/instant-schema-revert-via-inverse-replication — PlanetScale's revert path.
- patterns/branch-based-schema-change-workflow — the branching-wrapped end-to-end workflow.
- companies/planetscale — company page.