
CONCEPT Cited by 1 source

Aurora storage quorum

Definition

Aurora storage quorum is the replication scheme Amazon Aurora uses internally within a single cluster to make writes durable. Redo-log entries produced by the writer compute node are forwarded to Aurora's dedicated storage appliances; data is held in 10 GiB segments, each replicated six ways across three availability zones (two copies per AZ); and the writer acknowledges the client only after at least four of the six copies have confirmed the write.
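A minimal sketch of the write path described above. All names here are hypothetical; Aurora's actual storage protocol is internal. The point is only the shape: fan the redo record out to all six copies, acknowledge as soon as four respond.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

WRITE_QUORUM = 4  # ack the client after 4 of the 6 copies confirm

def replicate_redo(record, storage_nodes, quorum=WRITE_QUORUM):
    """Forward a redo-log record to every segment copy and return
    True as soon as `quorum` of them acknowledge durability."""
    acks = 0
    with ThreadPoolExecutor(max_workers=len(storage_nodes)) as pool:
        futures = [pool.submit(node, record) for node in storage_nodes]
        for fut in as_completed(futures):
            if fut.result():          # True = this copy durably stored it
                acks += 1
                if acks >= quorum:
                    return True       # safe to respond to the application
    return False                      # quorum unreachable: the write blocks

# Toy storage nodes: six healthy copies that always ack.
nodes = [lambda rec: True for _ in range(6)]
assert replicate_redo(b"redo-entry", nodes)
```

With only three healthy copies the same call returns False, which mirrors the "writes block but no data loss" case described below the definition.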

Brian Morrison II's canonical framing (PlanetScale, 2024-01-24):

"Instead of storing the redo log entries directly on the attached volumes, they are forwarded to dedicated Aurora storage appliances in the same availability zone as the source compute node. Data on this appliance is stored within 10 GiB segments spread across three availability zones in a given region. Before the compute node responds to the application, Aurora will ensure that at least four of the six default segments have a replicated copy of the data to ensure durability should a data center be taken offline." (Source: sources/2026-04-21-planetscale-planetscale-vs-amazon-aurora-replication)

Why 6 × 3 × 4

The geometry is chosen to survive one entire AZ failure plus one additional segment failure:

  • 6 segments distributed 2-per-AZ across 3 AZs.
  • Lose 1 AZ → 4 segments remain → writes can still ack (quorum = 4).
  • Lose 1 AZ + 1 additional segment → 3 remain → writes block but no data loss (reads can reconstruct from any 3 of the original 6 via read quorum).

This is a classic durability-oriented quorum design: the read and write quorums are tuned so they always intersect (read quorum 3 + write quorum 4 > 6 copies), which guarantees every read quorum contains at least one copy carrying the latest acknowledged write. It's also the reason Aurora advertises its 3-AZ deployment as cross-AZ-safe out of the box — no customer configuration is required to get cross-AZ durability.

Consequences at the replica path

Read-only compute nodes in an Aurora cluster don't replay a binlog; they talk directly to the same storage segments as the writer. "Since data is replicated on the storage level, read-only compute nodes can be started at any time in an availability zone containing a copy of the data for that node to read." (Source: Morrison.) This is the canonical OLTP illustration of compute–storage separation — readers share the underlying storage and receive out-of-band page-update notifications from the writer to keep their buffer cache coherent. See patterns/storage-forwarded-redo-log-replication for the full pattern.
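A toy model of that shared-storage read path. Every class and method name here is hypothetical (Aurora's cache-coherence protocol is not public); it only illustrates the idea that readers fetch pages from the same storage the writer wrote, and stay coherent via notifications rather than binlog replay.

```python
class SharedStorage:
    """Stands in for the replicated segment store both node types see."""
    def __init__(self):
        self.pages = {}

class Reader:
    """Read-only compute node: no data copy of its own, just a cache."""
    def __init__(self, storage):
        self.storage, self.cache = storage, {}
    def invalidate(self, page_id):
        self.cache.pop(page_id, None)         # drop the stale cached page
    def read(self, page_id):
        if page_id not in self.cache:         # miss: fetch from shared storage
            self.cache[page_id] = self.storage.pages[page_id]
        return self.cache[page_id]

class Writer:
    def __init__(self, storage, readers):
        self.storage, self.readers = storage, readers
    def write(self, page_id, data):
        self.storage.pages[page_id] = data    # durable via quorum (not modeled)
        for r in self.readers:                # out-of-band notification keeps
            r.invalidate(page_id)             # reader buffer caches coherent

store = SharedStorage()
reader = Reader(store)
writer = Writer(store, [reader])
writer.write("p1", "v1")
assert reader.read("p1") == "v1"
writer.write("p1", "v2")                      # no binlog replay on the reader
assert reader.read("p1") == "v2"
```

Contrast with the binlog path in the table below: here the reader never applies a change stream; it only refetches pages the writer has invalidated.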

Trade-off vs traditional MySQL replication

| Axis | Aurora storage quorum | MySQL binlog replication |
| --- | --- | --- |
| Primary writes to | Storage appliances (six copies across 3 AZs) | Local disk; binlog tailed by replicas |
| Replica has own data copy | No (shares storage) | Yes (each replica holds a full copy) |
| Write-ack wait | Until 4 of 6 copies ack | Local binlog flush (async) or one replica ack (semi-sync) |
| Rolling upgrade via replacing replicas | Not possible (no separate data copies to upgrade) | Canonical pattern (see concepts/rolling-upgrade) |
| Horizontal sharding | Not supported for Aurora MySQL (as of 2024-01) | VTGate-native on Vitess/PlanetScale (see concepts/horizontal-sharding) |
| Replication lag | Still present for read-only compute nodes ("replication lag still needs to be considered") | Present; depends on replica apply rate |
