

Serializable isolation

Serializable is the strongest ANSI-SQL isolation level. It guarantees that concurrent transactions yield the same result as some serial execution of those transactions, so no interleaving-induced anomaly is ever observable. Correctness-wise it is the theoretical endpoint of the ACID isolation axis; operationally it is "generally considered to be impractical, even for a non-distributed database." (Source: sources/2026-04-21-planetscale-pitfalls-of-isolation-levels-in-distributed-databases.)

Why Serializable is impractical in production

Per Sugu Sougoumarane's PlanetScale post on isolation-level pitfalls, Serializable has two structural failure modes that make it unviable as a default:

  1. Excess locking duration / unnecessary coupling. A long-running transaction holds its read locks for its entire lifetime, so any other transaction wanting to write to those rows blocks — even when the reader doesn't require the value to remain stable after the read. In Sougoumarane's worked retail-order example, an order-creation transaction reads the exchange_rate row and then does long-running inventory and credit-check work, blocking the unrelated exchange-rate-updater process for the full duration. "This possibly unintended dependency may prevent the system from scaling."

  2. Frequent deadlocks. Two concurrent transactions each SELECT a row (acquiring a shared read lock) and then try to UPDATE the same row (upgrading to an exclusive lock). Each blocks on the other's read lock: a classic lock-upgrade deadlock. "A Serializable setting is also subject to frequent deadlocks."
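The upgrade deadlock in (2) can be modelled in a few lines of Python. This is a toy sketch, not database code: `RowLock` is a hypothetical shared/exclusive lock, and the `timeout` stands in for the engine's deadlock detector. (A real detector would abort one victim and let the other proceed; here, for determinism, neither side releases its read lock until both have given up, so both time out.)

```python
import threading
import time

class RowLock:
    """Toy shared/exclusive row lock (illustration only, not a real DB lock)."""
    def __init__(self):
        self.readers = 0
        self.cv = threading.Condition()

    def lock_shared(self):
        # "SELECT ..." under Serializable: take a shared read lock.
        with self.cv:
            self.readers += 1

    def upgrade(self, timeout):
        # "UPDATE ..." on the same row: upgrade shared -> exclusive.
        # Succeeds only once we are the sole remaining reader.
        with self.cv:
            deadline = time.monotonic() + timeout
            while self.readers > 1:          # another txn still holds its read lock
                remaining = deadline - time.monotonic()
                if remaining <= 0 or not self.cv.wait(remaining):
                    return False             # deadlock detector fires
            return True

    def unlock(self):
        with self.cv:
            self.readers -= 1
            self.cv.notify_all()

row = RowLock()
both_reading = threading.Barrier(2)   # both txns hold their shared lock first
both_resolved = threading.Barrier(2)  # neither releases while the other waits
victims = []

def txn(name):
    row.lock_shared()                 # SELECT balance FROM accounts WHERE id = 1
    both_reading.wait()
    if not row.upgrade(timeout=0.2):  # UPDATE accounts SET balance = ...
        victims.append(name)          # rolled back as a deadlock victim
    both_resolved.wait()
    row.unlock()

threads = [threading.Thread(target=txn, args=(n,)) for n in ("T1", "T2")]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(victims))  # ['T1', 'T2'] -- each blocked on the other's read lock
```

Each thread holds its shared lock while waiting to upgrade, so neither can ever reach `readers == 1`; the wait is structurally hopeless, which is exactly why the level, not the workload, produces the deadlock.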

Both failure modes stem from the same root: Serializable serializes transactions that did not need to be serialized, and in a distributed setting this serialization forces cross-shard coordination (centralized concurrency control or a globally consistent clock) that defeats the point of distribution. See concepts/distributed-isolation-coupling-cost.

Neither MySQL nor PostgreSQL defaults to Serializable:

"It is not a coincidence that all the existing popular databases like Postgres and MySQL recommend against it." (Source: same.)

Implementations

Lock-free "Serializable" systems have the same problems

"There are ways to provide Serializable consistency without locking data. However, such systems are subject to the same problems described above; conflicting transactions just end up failing differently. The root cause of the problem is in the isolation level itself, and no implementation can get you out of those constraints." Canonical framing: the problem is not in the implementation; it is in the level. (Source: sources/2026-04-21-planetscale-pitfalls-of-isolation-levels-in-distributed-databases.)
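A minimal sketch of how "failing differently" looks under validation-based (optimistic) concurrency control. All names here (`Store`, `read`, `commit`) are hypothetical, not any real database's API: each transaction records the version of every row it reads and validates at commit time; a conflicting transaction never blocks, it simply aborts.

```python
class Store:
    """Toy versioned store for optimistic concurrency control (illustration only)."""
    def __init__(self):
        self.data = {}  # key -> (value, version)

    def read(self, key):
        return self.data[key]  # returns (value, version)

    def commit(self, reads, writes):
        # Validation: abort if any row we read changed since we read it.
        for key, seen_version in reads.items():
            if self.data[key][1] != seen_version:
                return False  # conflict surfaces as failure, not blocking
        for key, value in writes.items():
            self.data[key] = (value, self.data.get(key, (None, 0))[1] + 1)
        return True

store = Store()
store.data["exchange_rate"] = (1.10, 1)

# T1 reads the rate, then does slow order-creation work;
# T2 (the rate updater) commits a new rate in the meantime.
rate, seen = store.read("exchange_rate")
t2_ok = store.commit({"exchange_rate": 1}, {"exchange_rate": 1.20})
t1_ok = store.commit({"exchange_rate": seen}, {"order_total": rate * 100})
print(t2_ok, t1_ok)  # True False -- same conflict as locking, different symptom
```

Under locking, T1 would have blocked T2; here T2 proceeds and T1 aborts at validation. The conflict has not gone away, only its failure mode has changed, which is the source's point.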

When to use it anyway

Three legitimate use cases:

  1. Correctness-critical short transactions where the cost of a single deadlock-retry is dwarfed by the cost of a correctness bug (financial ledgers, inventory reservations).
  2. Case-by-case via on-demand Serializable reads — run the bulk of the workload at a lower level and upgrade specific reads to Serializable using SELECT … FOR UPDATE / LOCK IN SHARE MODE. This is the load-bearing lever that makes the practical isolation hierarchy work.
  3. Applications exposed to the phantom reads and write skew that Snapshot Isolation permits — Serializable is the only level that structurally prevents both.
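Use case (2) can be demonstrated end to end with SQLite, under one stated substitution: SQLite has no SELECT … FOR UPDATE, so BEGIN IMMEDIATE stands in for upgrading a specific read to a locking read. `timeout=0` makes a blocked writer fail immediately instead of waiting, so the conflict is observable in-process.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "orders.db")
ledger = sqlite3.connect(path, timeout=0)
updater = sqlite3.connect(path, timeout=0)
ledger.execute("CREATE TABLE exchange_rate (cur TEXT PRIMARY KEY, rate REAL)")
ledger.execute("INSERT INTO exchange_rate VALUES ('EUR', 1.10)")
ledger.commit()

# The critical read: take the write lock up front so the rate stays stable
# for the rest of this transaction (stand-in for SELECT ... FOR UPDATE).
ledger.execute("BEGIN IMMEDIATE")
rate = ledger.execute(
    "SELECT rate FROM exchange_rate WHERE cur = 'EUR'"
).fetchone()[0]

try:
    updater.execute("BEGIN IMMEDIATE")  # concurrent writer is shut out ...
    blocked = False
except sqlite3.OperationalError:        # ... failing fast: "database is locked"
    blocked = True

ledger.commit()                         # lock released; the updater may proceed
updater.execute("BEGIN IMMEDIATE")
updater.execute("UPDATE exchange_rate SET rate = 1.20 WHERE cur = 'EUR'")
updater.commit()
print(blocked, rate)  # True 1.1
```

The rest of the workload runs without this lock; only the read that genuinely needs stability pays the serialization cost — which is the load-bearing lever described above.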
