PATTERN Cited by 2 sources
Near-atomic multi-change deployment¶
Problem¶
Traditional multi-ALTER deployment on MySQL has a nasty
shape: the engine serialises DDL, and individual ALTERs on
large tables take hours. Deploying three 8-hour changes
sequentially means 24 hours of partially-deployed schema.
During that window:
- The schema doesn't match source control.
- Incident response has no clean abort path — completed migrations are irreversible without authoring new DDL.
- Every new deploy-request stacked on top has ambiguous base state.
- Application behaviour is subtly different as each migration lands (some columns exist, some don't; some views reflect new definitions, some don't).
The problem is to deploy N schema changes as a single unit that completes all-together (or not-at-all) in an application-visible event measured in seconds, not hours.
Solution¶
Treat the set of schema changes in a deploy-request as a single deployment unit. Run every long-running migration's copy + catch-up phase in parallel, holding each in staged-then-sealed state. When every migration reports ready-to-complete, seal them all together — cut over each in rapid sequence (a few seconds apart), preceded by the immediate DDL changes that were deferred to the end.
The application-visible result is one cut-over event per migration but all within seconds — operationally indistinguishable from an atomic multi-table schema transaction, even though MySQL provides no such primitive.
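As a concrete illustration, here is a minimal sketch of the stage-then-seal contract: every migration backfills in parallel and parks in a staged state, and cut-over begins only after all of them report ready. The function and callback names (`deploy_near_atomically`, `stage`, `cut_over`) are hypothetical, not PlanetScale or Vitess API.

```python
import queue
import threading

def deploy_near_atomically(migrations, stage, cut_over):
    """Stage all migrations in parallel; seal them together.

    stage(m):    copy + binlog catch-up, then park in staged state (hours).
    cut_over(m): swap shadow table into place (seconds).
    """
    ready = queue.Queue()

    def run(m):
        stage(m)       # long-running backfill and catch-up
        ready.put(m)   # report ready-to-complete to the controller

    threads = [threading.Thread(target=run, args=(m,)) for m in migrations]
    for t in threads:
        t.start()

    # Gate: block until every migration is staged. Up to this point the
    # deployment is fully cancellable and no change is visible in production.
    for _ in migrations:
        ready.get()
    for t in threads:
        t.join()

    # Seal: cut-overs land in rapid sequence, in the pre-computed order,
    # not in the order the migrations happened to finish staging.
    for m in migrations:
        cut_over(m)
```

The key property is that no `cut_over` call can precede the last `stage` completion, which is exactly the all-together-or-not-at-all shape described above.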
Canonical verbatim shape (Source: sources/2026-04-21-planetscale-deploying-multiple-schema-changes-at-once):
"PlanetScale deploys your entire set of changes near-atomically, which means the production database schema remains stable throughout the deployment process and changes almost all at once when all changes are ready."
Mechanism¶
The pattern composes four building blocks:
1. schemadiff analysis phase (cost: milliseconds to seconds):
   - Parse current-production vs target-branch schemas.
   - Compute the diff DDL.
   - Partition diffs into equivalence classes.
   - Within each class, find a valid execution permutation by in-memory validity checking.
2. Shadow-table online schema change + staging:
   - For each long-running migration, create the shadow table, backfill under consistent snapshots, tail the binlog.
   - Hold each migration in catch-up state indefinitely (staged-then-sealed).
   - Publish a ready flag per migration to the deploy controller.
3. Gate coordination:
   - Deploy controller polls ready flags.
   - When every flag is ready, the gate opens.
   - During staging, the deployment is fully cancellable with zero production impact.
4. Near-atomic seal:
   - Apply cut-overs to every long-running migration in the pre-computed order (a few seconds apart).
   - Apply the immediate DDL (CREATE TABLE, ALTER VIEW) at the end, respecting per-class dependency ordering from schemadiff.
   - Stage [[patterns/instant-schema-revert-via-inverse-replication|inverse replication]] streams on every migration for the 30-minute revert window.
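The within-class ordering step can be sketched as a plain topological sort over a dependency graph of diff statements (e.g. an ALTER VIEW depends on the CREATE TABLE it references). This is an illustrative reconstruction, not schemadiff's actual implementation:

```python
from collections import defaultdict, deque

def valid_execution_order(diffs, depends_on):
    """Return a valid execution order for diff statements, or raise.

    diffs:      list of statement ids.
    depends_on: {stmt: set of stmts that must execute first}.
    """
    indegree = {d: 0 for d in diffs}
    dependents = defaultdict(list)
    for stmt, prereqs in depends_on.items():
        for p in prereqs:
            indegree[stmt] += 1
            dependents[p].append(stmt)

    # Kahn's algorithm: repeatedly emit statements with no pending prereqs.
    ready = deque(d for d in diffs if indegree[d] == 0)
    order = []
    while ready:
        d = ready.popleft()
        order.append(d)
        for nxt in dependents[d]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(diffs):
        # Cyclic dependencies: no valid execution permutation exists.
        raise ValueError("no valid execution permutation for this diff set")
    return order

# e.g. a view alteration must follow the table it references:
order = valid_execution_order(
    ["create_t", "alter_view_v"],
    {"alter_view_v": {"create_t"}},
)
# order == ["create_t", "alter_view_v"]
```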
When to use¶
Use near-atomic multi-change deployment when:
- Multiple schema changes are semantically one feature (app code depends on all of them together).
- Individual migrations take long enough that sequential apply creates an operationally painful window (hours to days).
- Full cancellability during staging has operational value (incident response, design iteration).
- The reverse-order revert's 30-minute window is enough rollback flexibility (vs. undoing changes by authoring new forward migrations).
Do NOT use it for:
- Single schema changes. One migration has no cross-migration coordination to exploit. Standard Online DDL is simpler.
- Schema changes that must be permanent immediately. The 30-minute revert window adds resource cost (inverse replication streams, storage). For schema changes that genuinely need to be irreversible immediately, the revert-window primitive is cost without benefit.
- Hundred-table branches. Resources are not infinite.
The post warns: "Altering a hundred tables in one
deployment request is not feasible and possibly not the
best utilization of database branching."
schemadiff will refuse branches exceeding the "reliably safe path" threshold. Decompose into multiple smaller deploy-requests.
Trade-offs¶
- Resource amplification during staging. Every staged migration holds a full shadow table; binlog tailing accumulates; schemadiff's in-memory validation scales with graph complexity. For N migrations on M-sized tables, storage cost during staging is O(N × M).
- Gate-readiness timing is a coordination problem. The longest-running migration gates the entire deployment. The deploy controller must handle per-migration failure (a single stuck migration blocks the gate); the post doesn't disclose the failure-recovery semantics.
- Cannot deliver true atomicity. The cut-over is "a few seconds apart," not instantaneous. An incident during the seal window could leave some migrations completed and some not. MySQL's lack of transactional DDL is the hard constraint; this pattern minimises the exposure, doesn't eliminate it.
- Revert window resource cost. Running pre-staged inverse replication streams for 30 minutes after seal, across all N migrations, is continuous work. Worth it for the operational property, but not free.
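The O(N × M) staging cost is easy to make concrete. A minimal worked example, with illustrative table sizes that are not taken from the source:

```python
def staging_storage_gb(table_sizes_gb):
    """Peak extra storage while all migrations are staged: each staged
    migration holds a full shadow copy of its table, so the overhead is
    the sum of the altered tables' sizes (O(N x M))."""
    return sum(table_sizes_gb)

# Three long-running ALTERs on 400 GB tables, held staged in parallel:
print(staging_storage_gb([400, 400, 400]))  # 1200
```

Sequential apply would only ever hold one shadow table at a time; parallel staging trades that peak-storage amplification for the seconds-wide cut-over window.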
Composes with¶
- patterns/stage-all-complete-together — the coordination primitive underneath the gate.
- patterns/topological-order-by-equivalence-class — the analytical primitive for within-class ordering.
- patterns/interleaved-multi-table-migration-copy-phases — the concurrency-bounded primitive for running multiple copy phases without overwhelming the primary.
- patterns/shadow-table-online-schema-change — the per-migration data-motion primitive.
- patterns/instant-schema-revert-via-inverse-replication — the per-migration reversibility primitive.
- patterns/expand-migrate-contract — an orthogonal pattern that sequences schema evolution over multiple deploy-requests. Near-atomic multi-change handles the within-a-deploy-unit scope; expand-migrate-contract handles the across-deploy-units, across-time scope. Both compose when a larger evolution needs to be split across multiple near-atomic deploys for safety.
Canonical implementation¶
PlanetScale's deploy-request system on MySQL +
Vitess is the canonical wiki instance.
The Shlomi Noach post (2023-08-29) is the public-facing
architectural description; the
Vitess
21 release notes (2026) confirm the mechanism continues
to evolve (more INSTANT DDL analysis, charset handling,
schemadiff capabilities).
No non-Vitess implementation is disclosed on the wiki as of 2026-04-23. Atlas CLI's declarative apply and Skeema / Bytebase GitOps-style schema-change platforms handle single-step apply but don't provide the gated-multi-migration coordination layer — they operate as clients of the underlying engine's single-DDL semantics, accepting the sequential-apply cost model.
Seen in¶
- sources/2026-04-21-planetscale-gated-deployments-addressing-the-complexity-of-schema-deployments-at-scale — earliest canonical wiki disclosure of the pattern. Shlomi Noach's 2022-09-06 Gated Deployments launch post introduces the deploy-unit framing (multi-change + multi-shard dimensions) and the app-facing property ("the deployment can be considered more atomic; up till the final stage, no change is reflected in production"). The 2022 post also canonicalises the scheduling rule "we run as much of the bulk work as possible upfront, sequentially, and then run the more lightweight work in parallel" — sequential copy + parallel tail is named at the launch-post altitude; the 2023 successor formalises it as interleaved-copy-phases. The 2022 post extends this pattern with the multi-shard and operator-scheduled-cutover dimensions.
- sources/2026-04-21-planetscale-deploying-multiple-schema-changes-at-once — canonical wiki first disclosure. Shlomi Noach introduces the pattern end-to-end: the 8-hour / 24-hour sequential-apply complaint, the copy-and-swap emulation as the enabling primitive, the equivalence-class partition for dependency resolution, the staged-then-sealed shape, the near-atomic cut-over window, the 30-minute reverse-order revert window. The architectural load-bearer for every PlanetScale multi-migration deploy-request.
Related¶
- concepts/near-atomic-schema-deployment
- concepts/gated-schema-deployment
- concepts/staged-then-sealed-migration
- concepts/cancel-before-cutover
- concepts/multi-shard-schema-sync
- concepts/reverse-order-revert
- concepts/schema-diff-equivalence-class
- concepts/schema-dependency-graph
- concepts/cutover-freeze-point
- concepts/online-ddl
- systems/vitess
- systems/vitess-schemadiff
- systems/vitess-vreplication
- systems/mysql
- systems/planetscale
- patterns/stage-all-complete-together
- patterns/topological-order-by-equivalence-class
- patterns/interleaved-multi-table-migration-copy-phases
- patterns/shadow-table-online-schema-change
- patterns/instant-schema-revert-via-inverse-replication
- patterns/operator-scheduled-cutover
- patterns/expand-migrate-contract
- companies/planetscale