
Active/passive replication

Definition

Active/passive replication is a database-cluster topology with exactly one active writer (the primary / source / leader) and one or more read-only replicas (the passive nodes / secondaries / followers). Every write flows through the primary; the primary streams its change log to the replicas; the replicas apply those changes locally and serve read-only traffic.
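The write path above — primary appends to its change log, replicas apply entries in log order — can be sketched as a toy model. The class and field names here are invented for illustration; this is not the MySQL or Postgres replication API, just the shape of the data flow:

```python
# Toy sketch of active/passive log shipping (illustrative names only).
# The primary assigns every write a position in its change log; replicas
# apply entries in log order and converge toward the primary's state.

class Primary:
    def __init__(self):
        self.log = []          # stands in for the binlog / WAL
        self.data = {}

    def write(self, key, value):
        self.log.append((key, value))  # total order: one entry per write
        self.data[key] = value
        return len(self.log)           # log position, akin to a GTID/LSN

class Replica:
    def __init__(self):
        self.applied = 0       # log position applied so far
        self.data = {}

    def catch_up(self, primary):
        # Apply any log entries not yet seen, in log order.
        for key, value in primary.log[self.applied:]:
            self.data[key] = value
        self.applied = len(primary.log)

primary = Primary()
replica = Replica()
primary.write("a", 1)
print(replica.data.get("a"))   # None — replication lag: not yet applied
replica.catch_up(primary)
print(replica.data.get("a"))   # 1 — replica has converged
```

The gap between a write landing on the primary and `catch_up` running is exactly the replication lag discussed under Properties below.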

It is the canonical MySQL and Postgres replication topology, and the one both engines are designed around — their replication primitives (MySQL binlog, Postgres WAL, GTIDs, primary-replica protocol handshakes) all assume exactly one upstream source of truth per replica.

The canonical framing — Morrison on PlanetScale

Brian Morrison II's 2023-11-15 best-practices post gives the canonical wiki framing:

"When replicating with active/passive mode, one MySQL server acts as the source and all other servers are read-only replicas from that source. In this configuration, the replicas can be used to serve up read-only queries, but all writes must be sent to the source. This helps split the load across all replicas." (Source: sources/2026-04-21-planetscale-mysql-replication-best-practices-and-considerations)

And the load-bearing operational rule: "We always recommend using an active/passive configuration for replication, and sharding if you need more throughput from your database." When a single primary can no longer absorb write load, the answer is sharding (partition the data, one primary per shard), not active/active replication (multiple writers on the same data).
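The sharding answer preserves the single-writer invariant per shard: partition the keyspace so every key deterministically maps to exactly one shard, and give each shard its own primary. A minimal hash-routing sketch (shard names are hypothetical):

```python
# Hypothetical sketch of hash sharding: each shard has its own single
# primary, so the single-writer invariant holds per shard while total
# write throughput scales with the shard count.
import hashlib

SHARDS = ["shard-0-primary", "shard-1-primary", "shard-2-primary"]

def shard_for(key: str) -> str:
    # Stable hash of the key -> shard index; every client agrees on
    # the mapping, so all writes for a key hit the same primary.
    digest = hashlib.sha256(key.encode()).digest()
    return SHARDS[int.from_bytes(digest[:8], "big") % len(SHARDS)]

# The same key always routes to the same (single) primary.
assert shard_for("user:42") == shard_for("user:42")
```

Because no two primaries ever accept writes for the same key, there is still no need for conflict resolution — the property active/active gives up.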

Properties

  • Single source of truth — the primary's state is authoritative; any divergence on replicas is either replication lag (transient, bounded by network + apply throughput) or a bug.
  • Writes serialise on one node — all transactions get a total order at the primary's binlog/WAL, inheriting the primary's single-node ACID guarantees.
  • No conflict resolution needed — because only one node writes, there is no possibility of write-write conflicts on the same row.
  • Reads scale horizontally — adding replicas is the canonical lever for read-heavy workloads (patterns/read-replicas-for-read-scaling).
  • Writes do not scale horizontally — the single primary is the hard ceiling on write throughput.
  • Consistency is tunable per-read — reads can go to the primary (strong) or a replica (eventually consistent, lag-bounded).
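The last property — consistency tunable per-read — is in practice a routing decision made by the application or proxy. A toy router (not a real driver API; endpoint names and the `strong` flag are invented for illustration):

```python
# Toy read/write router for an active/passive cluster: writes and
# strong reads go to the primary; lag-tolerant reads round-robin
# across replicas. Endpoint names are placeholders.
import itertools

PRIMARY = "primary:3306"
REPLICAS = ["replica-1:3306", "replica-2:3306"]
_rr = itertools.cycle(REPLICAS)   # round-robin over read replicas

def route(query: str, *, strong: bool = False) -> str:
    """Pick an endpoint for a query. strong=True forces the primary,
    trading read scalability for freshness."""
    is_write = query.lstrip().upper().startswith(
        ("INSERT", "UPDATE", "DELETE"))
    if is_write or strong:
        return PRIMARY
    return next(_rr)

print(route("UPDATE users SET name = 'a' WHERE id = 1"))  # primary:3306
print(route("SELECT * FROM users"))                       # a replica
```

The `strong` flag is the per-read dial: flip it for read-your-writes paths, leave it off where lag-bounded staleness is acceptable.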

Trade-off against active/active

Morrison's framing on the alternative:

"Active/active might seem like a good idea since you have two servers to process write requests, though each server is processing the other's query workload, making write distribution more of an illusion. … conflicts can easily occur as there is no native conflict resolution logic within MySQL."

The canonical wiki rule: for MySQL (and Postgres), choose active/passive; for write-throughput scaling, shard the data rather than go multi-writer. Multi-writer protocols (Galera, MySQL Group Replication, CockroachDB, Spanner) exist but require either (a) distributed consensus on every write (high cost) or (b) CRDT-style conflict resolution with application-semantic awareness (restricted workload fit).

Replication mode is orthogonal

Active/passive is a topology choice; the synchronisation mode is a separate axis:

  • Asynchronous (async) — primary acks before replicas apply. Default MySQL posture.
  • Semi-synchronous (semi-sync) — primary waits for at least one replica to persist the change in its relay log before acking. PlanetScale's posture within a region.
  • Synchronous (fully-sync) — primary waits for every replica to apply. Rare in OLTP; prohibitive latency cost.

All three modes are compatible with active/passive. Morrison's post canonicalises PlanetScale's mixed posture: one semi-sync replica as guaranteed failover candidate + async replicas for extra read capacity.
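The three modes differ only in how many replica acknowledgements the primary waits for before acking the client's commit. A toy model of that one axis (function and mode names are illustrative, not the MySQL semi-sync plugin API):

```python
def acks_required(mode: str, replica_count: int) -> int:
    """How many replicas must confirm receipt of a change before the
    primary acknowledges the client's commit."""
    if mode == "async":
        return 0                      # ack immediately; replicas catch up later
    if mode == "semi-sync":
        return min(1, replica_count)  # at least one replica has persisted it
    if mode == "sync":
        return replica_count          # every replica; latency = slowest node
    raise ValueError(f"unknown mode: {mode}")

for mode in ("async", "semi-sync", "sync"):
    print(mode, acks_required(mode, replica_count=3))
# async 0, semi-sync 1, sync 3
```

PlanetScale's mixed posture corresponds to running the semi-sync rule against one designated replica while the remaining replicas stay async.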

Failover

Active/passive failover promotes one of the passive replicas to the new primary. Two paths:

  • Planned failover (software rollout, node maintenance) — use graceful leader demotion to drain in-flight work and cut over without application-visible errors. Vitess PRS is the canonical instance.
  • Unplanned failover (primary crash, network partition) — use the unplanned-failover playbook: fence the downed primary, promote the best-candidate replica (ideally the semi-sync-flagged one), re-point the application, re-point the remaining replicas.

Both paths preserve the single-writer invariant across the transition.
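The "promote the best-candidate replica" step in the unplanned playbook usually means picking the replica with the most advanced replication position, breaking ties toward the semi-sync-flagged one (which is guaranteed to hold every acked commit). A hypothetical sketch, with invented field names standing in for GTID-set / LSN comparisons:

```python
def pick_promotion_candidate(replicas):
    """replicas: list of dicts with 'name', 'applied_pos' (higher means
    more caught up — a stand-in for comparing GTID sets or WAL LSNs),
    and a 'semi_sync' flag. Prefer the most caught-up replica; break
    ties toward the semi-sync one."""
    return max(replicas, key=lambda r: (r["applied_pos"], r["semi_sync"]))

candidates = [
    {"name": "replica-1", "applied_pos": 100, "semi_sync": True},
    {"name": "replica-2", "applied_pos": 100, "semi_sync": False},
    {"name": "replica-3", "applied_pos": 95,  "semi_sync": True},
]
print(pick_promotion_candidate(candidates)["name"])  # replica-1
```

Fencing the old primary before this selection runs is what preserves the single-writer invariant: no promotion decision is safe while the downed node might still accept writes.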

Seen in

  • sources/2026-04-21-planetscale-mysql-replication-best-practices-and-considerations — canonical wiki datum for active/passive as PlanetScale's recommended MySQL topology; framed against active/active's conflict-resolution absence; sharding rather than multi-writer as the write-scale answer.
  • Implicitly across the entire MySQL / Postgres / Vitess / PlanetScale corpus — every primary + replica topology on the wiki is active/passive.