Active/active replication¶
Definition¶
Active/active replication is a database-cluster topology with multiple active writers — two or more nodes all accepting writes against the same logical dataset and replicating to one another. Every node is simultaneously a primary (from its own clients' perspective) and a replica (of every other node's writes).
It is the topology operators sometimes reach for when they want to "scale writes horizontally" with minimal app-tier changes. The canonical wiki position — from PlanetScale's first-party best-practices guidance — is that for MySQL (and most single-engine OLTP databases), active/active is a structural anti-pattern because MySQL has no native write-write conflict resolution, and sharding is the correct answer when one primary's write ceiling is reached.
The canonical framing — Morrison on PlanetScale¶
Brian Morrison II's 2023-11-15 best-practices post gives the canonical wiki framing against active/active MySQL:
"The alternate configuration is active/active, which means multiple servers are actively used to read and write data. Active/active might seem like a good idea since you have two servers to process write requests, though each server is processing the others query workload, making write distribution more of an illusion. The failover between servers can appear seamless though conflicts can easily occur as there is no native conflict resolution logic within MySQL. When conflicts do occur, neither node can be considered the source of truth for a rebuild without significant data loss." (Source: sources/2026-04-21-planetscale-mysql-replication-best-practices-and-considerations)
Three load-bearing observations Morrison makes:
- Write distribution is an illusion — each node in an active/active pair still processes the other's workload via replication, so the effective write capacity is bounded by the slowest node, not summed.
- Failover appears seamless but isn't — because both nodes accept writes, a network partition between them means both sides can accept conflicting writes, leaving operators with no authoritative source of truth for recovery.
- Recovery requires data loss — when conflicts do occur, there is no algorithmic way to pick a winner in vanilla MySQL; rebuilding consistency requires discarding one side's writes.
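Morrison's "illusion" point can be made concrete with a toy capacity model (my own sketch, not from the post): every node must apply the sum of all clients' writes, its own plus everything replicated in from peers, so the binding constraint is the slowest node, not the sum of nodes.

```python
# Back-of-envelope model (illustrative assumption, not from the source):
# in active/active, each node applies its own clients' writes PLUS the
# replicated writes of every peer, so per-node work equals the
# cluster-wide write rate.
def cluster_keeps_up(node_capacities, client_write_rates):
    """True if an active/active cluster can sustain the workload.

    node_capacities: writes/sec each node can apply.
    client_write_rates: writes/sec arriving at each node from its clients.
    Every node must apply the TOTAL write rate, so the slowest node binds.
    """
    total_writes = sum(client_write_rates)
    return all(capacity >= total_writes for capacity in node_capacities)

# Two nodes that can each apply 1000 writes/s do NOT give 2000 writes/s:
assert cluster_keeps_up([1000, 1000], [600, 600]) is False  # 1200 total
assert cluster_keeps_up([1000, 1000], [400, 400]) is True   # 800 total
```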
Where the anti-pattern breaks down¶
The "no native conflict resolution" argument is MySQL-specific. Some systems do support active/active through additional machinery:
- MySQL Group Replication and Galera Cluster — both layer group-communication and certification machinery on top of MySQL to serialise writes across the cluster. Correctness is achieved at a cost: write latency grows by a cross-cluster round trip, and throughput drops versus single-primary.
- CockroachDB / Spanner — designed ground-up around distributed consensus per range/shard; every write already goes through a Raft/Paxos decision, so active/active is the native topology.
- CRDTs (conflict-free replicated data types) — restrict the workload to operations that commute (counter increments, set unions) so concurrent writes provably converge to the same state regardless of arrival order.
- Last-write-wins (LWW) via timestamps — pick a winner by wall clock; works only if application semantics tolerate arbitrary write loss. See concepts/last-write-wins.
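To make the CRDT contrast with MySQL concrete, here is a minimal G-Counter sketch (illustrative Python, not anything MySQL ships): each node only increments its own slot, and merge takes the per-node max, so replicas converge no matter the delivery order.

```python
# Minimal grow-only counter (G-Counter) CRDT sketch -- an illustration of
# commutative merges, not a MySQL feature.
class GCounter:
    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}  # node_id -> increments observed from that node

    def increment(self, n=1):
        # A node only ever advances its own slot.
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + n

    def merge(self, other):
        # Per-node max is commutative, associative, and idempotent,
        # so merge order cannot change the converged state.
        for node, c in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), c)

    def value(self):
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)  # merged in either order...
assert a.value() == b.value() == 5  # ...both replicas converge
```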
None of these apply to a vanilla MySQL active/active configuration. Morrison's post is specifically about MySQL replication, so the anti-pattern framing is correct for that substrate.
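To see why LWW "works only if application semantics tolerate arbitrary write loss", a two-line resolver sketch (a hypothetical overlay, not a MySQL feature) is enough:

```python
# Hypothetical LWW resolver: the write with the larger wall-clock
# timestamp wins. The losing write vanishes silently, and clock skew
# between nodes can make the "winner" arbitrary.
def lww_merge(write_a, write_b):
    """Each write is (wall_clock_timestamp, value); later timestamp wins."""
    return write_a if write_a[0] >= write_b[0] else write_b

node_a = (1700000000.10, "balance=50")  # written on node A
node_b = (1700000000.05, "balance=70")  # concurrent write on node B
winner = lww_merge(node_a, node_b)
# node B's update is discarded; no error is surfaced to any client
assert winner == node_a
```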
Why sharding is the canonical alternative¶
Morrison's load-bearing alternative: "We always recommend using an active/passive configuration for replication, and sharding if you need more throughput from your database."
Sharding gives you multiple primaries writing at once — but each primary writes to a disjoint subset of the data, so there are no write-write conflicts. Each shard is internally active/passive; cross-shard transactions need 2PC or careful design (concepts/atomic-distributed-transaction, patterns/ordered-commit-without-2pc). This is the canonical PlanetScale/Vitess architecture: many active/passive shards, no active/active.
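The no-conflict property of sharding falls out of deterministic routing: every writer computes the same shard for a given key, so no two primaries ever accept writes for the same row. A minimal sketch (hash function and shard count are my own illustrative choices; Vitess's actual vindexes differ):

```python
import hashlib

# Illustrative routing sketch: a deterministic hash sends every key to
# exactly one shard primary, so concurrent writes to the same row always
# land on the same node and write-write conflicts cannot arise.
NUM_SHARDS = 4

def shard_for(key: str) -> int:
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Any client, via any node, routes user:42 to the same shard primary.
assert shard_for("user:42") == shard_for("user:42")
assert all(0 <= shard_for(f"user:{i}") < NUM_SHARDS for i in range(100))
```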
Failure modes in vanilla MySQL active/active¶
- Split-brain on partition — both nodes continue accepting writes on their side of the partition; merging is undefined.
- Row-level write-write conflicts — two clients update the same row on different nodes; the replay order decides whose write survives, with no visibility into the conflict.
- Auto-increment collisions — a classic MySQL hazard; mitigated by auto_increment_increment + auto_increment_offset, but a misconfigured offset on either node reintroduces the fundamental uniqueness gap.
- Failover ambiguity — neither node is authoritative, so choosing one for rebuild loses the other's writes.
All of these are absent from active/passive.
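The auto-increment mitigation is plain modular arithmetic, which a quick sketch makes visible (illustrative, mirroring MySQL's auto_increment_increment and auto_increment_offset semantics):

```python
# With increment=2, a node at offset=1 generates odd IDs and a node at
# offset=2 generates even IDs -- disjoint sequences. A duplicated offset
# collides on the very first ID.
def id_sequence(increment, offset, count):
    """IDs a node would hand out: offset, offset+increment, ..."""
    return [offset + i * increment for i in range(count)]

node1 = id_sequence(2, 1, 5)  # [1, 3, 5, 7, 9]
node2 = id_sequence(2, 2, 5)  # [2, 4, 6, 8, 10]
assert not set(node1) & set(node2)   # disjoint when configured correctly

bad_node2 = id_sequence(2, 1, 5)     # misconfigured: same offset as node1
assert set(node1) & set(bad_node2)   # immediate collisions
```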
Seen in¶
- sources/2026-04-21-planetscale-mysql-replication-best-practices-and-considerations — canonical wiki anti-pattern disclosure for active/active MySQL with the "no native conflict resolution" kill-shot argument.
- Implicitly contrasted across the MySQL/Postgres/Vitess corpus — every production topology canonicalised on the wiki is active/passive + sharded.
Related¶
- concepts/active-passive-replication — the canonical alternative.
- concepts/horizontal-sharding — the canonical answer for write-scale.
- concepts/conflict-free-replicated-data-type — the CRDT overlay that makes active/active correct at the cost of workload restrictions.
- concepts/split-brain — the dominant failure mode.
- concepts/last-write-wins — a conflict-resolution rule that can be layered on, with significant caveats.
- concepts/no-distributed-consensus — the context under which active/active is structurally unsafe.
- systems/mysql — the substrate this anti-pattern applies to.
- companies/planetscale — the canonical first-party voice against.