PATTERN
Lowest isolation plus explicit locks¶
Pattern¶
Design the application to run at the lowest isolation level the database supports (typically `READ COMMITTED`) and acquire explicit locks on the specific rows where correctness requires stronger guarantees, via on-demand Serializable reads. Accept the remaining constraints as application-design discipline:
- Prefer `READ COMMITTED` as the default. Upgrade specific reads via locking.
- Avoid multi-statement transactions where possible.
- Avoid distributed transactions in sharded databases by keeping transactionally-related rows within the same shard.
Canonicalised from Sugu Sougoumarane's 2020 PlanetScale pedagogy post on isolation pitfalls.
The canonical prescription, verbatim¶
From the article's conclusion: (Source: sources/2026-04-21-planetscale-pitfalls-of-isolation-levels-in-distributed-databases)
To be scalable, an application should avoid relying on any advanced isolation features of a database. It should instead try to use as few of the guarantees as it can. If you can write an application to work with ReadCommitted isolation level, then moving to SnapshotRead should be discouraged. Serializable or RepeatableRead are almost always a bad idea.
It is also better to avoid multi-statement transactions if possible. However, as the application evolves, this need may become unavoidable. At that point, try mainly relying on the atomic guarantees of transactions, and stay at the lowest isolation level the database system supports.
If using a sharded database, avoid distributed transactions. This can be achieved by keeping related rows within the same shard.
Why this is correct despite looking risky¶
The immediate reaction to "use the lowest isolation level" is that correctness will suffer. The pattern answers this by observing that correctness is not a per-database-default property — it is a per-transaction property. The application knows which transactions need stronger guarantees and which do not:
- Reads that must not change mid-transaction (bank balance during withdrawal, inventory count during reservation) → `SELECT … FOR UPDATE` upgrades exactly these and only these.
- Reads that tolerate concurrent changes (exchange rate, reference-data lookup, dashboard counter) → a plain `SELECT` at `READ COMMITTED` pays no contention cost.
- Reads spanning multiple tables for reporting consistency → upgrade the transaction as a whole via `SET TRANSACTION ISOLATION LEVEL REPEATABLE READ` for the duration.
The database-default-isolation approach conflates all three cases at the highest required strength. The explicit-lock approach scopes the strength exactly.
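The per-read scoping can be sketched as three small helpers, one per case above. The SQL is MySQL-flavoured and the table, column, and function names are illustrative, assuming any DB-API-style cursor/connection:

```python
# Sketch: scope isolation strength per read, not per database default.
# Table/column names are illustrative; `cursor`/`conn` are DB-API objects.

def locking_read(cursor, account_id):
    # Case 1: value must not change mid-transaction -> lock this row, only here.
    cursor.execute(
        "SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (account_id,)
    )
    return cursor.fetchone()

def plain_read(cursor, currency):
    # Case 2: concurrent changes are tolerable -> plain READ COMMITTED read.
    cursor.execute("SELECT rate FROM fx_rates WHERE currency = %s", (currency,))
    return cursor.fetchone()

def consistent_report(cursor):
    # Case 3: multi-table snapshot -> upgrade this one transaction only.
    cursor.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
    cursor.execute("START TRANSACTION")
    cursor.execute("SELECT SUM(balance) FROM accounts")
    total = cursor.fetchone()
    cursor.execute("SELECT SUM(amount) FROM pending_transfers")
    pending = cursor.fetchone()
    cursor.execute("COMMIT")
    return total, pending
```

Only the withdrawal path pays the lock; the rate lookup and the report each pay exactly the strength they need.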
Why multi-statement transactions are the next axis¶
Even at `READ COMMITTED`, multi-statement transactions hold locks acquired by earlier statements for the duration of the transaction. A long-running multi-statement transaction blocks every other transaction that wants to touch the same rows, degrading throughput in proportion to how long the locks are held. Mitigations:
- Split the work. Do the `SELECT` in one transaction, the `UPDATE` in a second. Accept the small race window or close it via optimistic concurrency (a version column) rather than pessimistic locks.
- Move computation out. Read the values, compute outside the database, then write back in a short transaction.
- Use stored procedures / server-side logic to shrink the wall-clock duration of the lock-holding transaction.
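The optimistic-concurrency variant of the first mitigation can be sketched with a version column. A minimal runnable sketch against in-memory SQLite; table and column names are illustrative:

```python
import sqlite3

# Sketch: read in one short transaction, write in another, and let a
# version column detect lost updates instead of holding a row lock.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (id INTEGER PRIMARY KEY, qty INT, version INT)")
conn.execute("INSERT INTO inventory VALUES (1, 10, 0)")
conn.commit()

def reserve(conn, item_id, n):
    # Transaction 1: plain read; no locks are held after it returns.
    qty, version = conn.execute(
        "SELECT qty, version FROM inventory WHERE id = ?", (item_id,)
    ).fetchone()
    if qty < n:
        return False
    # (compute outside the database here)
    # Transaction 2: the write succeeds only if nobody bumped the version
    # in between; 0 rows updated means a concurrent writer won -> retry.
    cur = conn.execute(
        "UPDATE inventory SET qty = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (qty - n, item_id, version),
    )
    conn.commit()
    return cur.rowcount == 1

print(reserve(conn, 1, 3))  # True: 10 -> 7, version 0 -> 1
```

Each transaction is a single statement, so no lock outlives the statement that took it; the race window is closed by the `version` predicate rather than by pessimistic locking.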
Why distributed transactions are the final axis¶
Cross-shard 2PC (concepts/atomic-distributed-transaction) pays the coupling cost on every commit: transaction-metadata storage, a prepare round-trip, a commit round-trip, and recovery complexity on node crashes. As the source puts it, "the availability of the system is dictated by the worst-performing shard": one slow shard stalls all cross-shard transactions. Mitigations:
- Co-locate transactionally-related rows on the same shard. Pick a shard key that maps related entities together (user ID for all of a user's data, tenant ID for all of a tenant's data).
- Use weaker primitives than 2PC where acceptable. Vitess's Consistent Lookup Vindex commits an authoritative write + dependent lookup write in a deterministic order without 2PC — accepting brief visibility windows where the lookup and data tables disagree, in exchange for eliminating the 2PC round-trip cost.
- Compensate asynchronously. Write to the two shards independently and reconcile via a background job that detects and fixes discrepancies.
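The co-location discipline reduces to a routing rule: every table holding user-scoped rows routes on the same shard key, so a user-scoped transaction never crosses shards. A minimal sketch; the shard count and key choice are illustrative:

```python
import hashlib

NUM_SHARDS = 4  # illustrative

def shard_for(user_id: str) -> int:
    # Deterministic hash of the shard key. Orders, payments, and sessions
    # for one user all route on user_id, so related rows co-locate and a
    # transaction touching all three stays single-shard.
    digest = hashlib.sha256(user_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % NUM_SHARDS

# Stable routing: the same key always maps to the same shard.
assert shard_for("user-42") == shard_for("user-42")
```

The design choice is picking the key at the right granularity (user ID, tenant ID) so that the rows a transaction naturally touches together share it.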
Meta-principle: concurrency propagates¶
Canonical framing, verbatim: "These recommendations may conflict with the general advice of not prematurely optimizing your program, but this case is different. This is something that one must do from the beginning, because it is very hard to refactor a non-concurrent program to be concurrent." (Source: same.)
Concurrency constraints propagate through the application at design time and cannot be bolted on later. A sequential application that assumes reads are stable cannot be retrofitted to tolerate concurrent writes by adding `FOR UPDATE` everywhere: the composition of independently-correct locking reads is not itself correct (deadlocks compound, lock ordering matters, and holding many locks across RPCs starves everyone).
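The lock-ordering point is the textbook one: two transactions taking the same pair of row locks in opposite orders deadlock. A hedged in-process sketch of the discipline that avoids it, with `threading.Lock` objects standing in for row locks:

```python
import threading

# Stand-ins for row locks; acquiring two of them in opposite orders
# from two threads is the classic deadlock.
locks = {"row_a": threading.Lock(), "row_b": threading.Lock()}
done = []

def transfer(src, dst, body):
    # Discipline: always acquire locks in one global order (here,
    # sorted by key), regardless of argument order.
    first, second = sorted([src, dst])
    with locks[first], locks[second]:
        body()

t1 = threading.Thread(target=transfer, args=("row_a", "row_b", lambda: done.append(1)))
t2 = threading.Thread(target=transfer, args=("row_b", "row_a", lambda: done.append(2)))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(done))  # [1, 2]: both complete; no deadlock
```

The point of the sketch is the fragility it hides: the `sorted` convention must hold at every lock site in the program, which is exactly why the constraint has to be designed in from the beginning.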
This is the rare case where "design for scale from the beginning" is the correct advice over "don't prematurely optimise."
When the pattern does not apply¶
- Small applications that will never scale past a single database. The coordination cost of explicit locks at `READ COMMITTED` outweighs the benefit versus just running at `REPEATABLE READ` and letting the database handle it. The pattern's benefits compound with scale.
- Applications with no concurrent writers. Read-only warehousing / reporting workloads have no contention to optimise away; pick whichever level gives the cleanest consistency reasoning.
- Strictly-regulated workloads (financial clearing, healthcare) where auditors require Serializable by contract. Honor the contract; optimise elsewhere.
Composition with other patterns¶
- On its own: gives a scalable single-database baseline.
- Plus on-demand Serializable reads: this pattern's tactical mechanism for upgrading specific reads.
- Plus ordered commit without 2PC: Vitess's shard-topology mechanism that makes the "avoid distributed transactions" discipline practical even when data naturally spans shards.
Seen in¶
- sources/2026-04-21-planetscale-pitfalls-of-isolation-levels-in-distributed-databases — Sugu Sougoumarane's canonical 2020-era prescription (re-fetched 2026-04-21). Three-part structure. The meta-principle about concurrency propagation is uniquely load-bearing for deciding when to apply this pattern.
Related¶
- concepts/mysql-transaction-isolation-levels
- concepts/read-committed-isolation
- concepts/snapshot-isolation
- concepts/serializable-isolation
- concepts/distributed-isolation-coupling-cost
- concepts/atomic-distributed-transaction
- concepts/locking-read-mysql
- patterns/on-demand-serializable-read
- patterns/ordered-commit-without-2pc
- systems/vitess