Optimistic locking¶
Definition¶
Optimistic locking is a concurrency-control discipline in which a writer reads a row (including a version column or equivalent monotonic marker), performs arbitrary application-side computation, and then writes the update conditionally on the version column still matching what was observed at read time. If the version has advanced (some other writer committed in the interim), the conditional update affects zero rows; the application detects the conflict and retries from scratch (re-read, re-compute, re-attempt).
Canonical SQL shape:
-- 1. Read the row + version.
SELECT id, balance, version FROM accounts WHERE id = ?;
-- 2. Application computes new_balance based on observed row.
-- 3. Conditional update.
UPDATE accounts
SET balance = ?, version = version + 1
WHERE id = ?
  AND version = ?; -- the version observed in step 1
-- 4. If affected-rows == 0, a concurrent writer won the race.
-- Retry from step 1.
The optimistic framing: the discipline assumes conflicts are rare, so the common case pays nothing (no held lock, no held transaction, no coordination); only on conflict does the application retry.
Contrast with pessimistic locking (locking reads / SELECT … FOR UPDATE / row-level locks): hold an exclusive lock for the whole read-compute-write window, forcing other writers to wait. This serialises correctly, but it holds lock resources (and therefore the transaction) open for the full application-side compute, which is exactly the shape that breaks on platforms with hard transaction timeouts like PlanetScale's 20 s transaction cap.
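For contrast, a sketch of that pessimistic shape on the same illustrative accounts table (MySQL syntax assumed):
BEGIN;
-- Takes an exclusive row lock; other writers block here until COMMIT.
SELECT id, balance, version FROM accounts WHERE id = ? FOR UPDATE;
-- Application-side compute runs while the lock and the transaction
-- are held open; this is the window that can drift past 20 s.
UPDATE accounts SET balance = ? WHERE id = ?;
COMMIT;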
Why optimistic locking matters for platforms with short transaction windows¶
PlanetScale's remediation ladder for users who repeatedly bump into the 20 s transaction cap reads (Source: sources/2026-04-21-planetscale-supports-notes-from-the-field):
For simple workloads, consider using optimistic locking instead of transactions …
The structural reason: a pessimistic SELECT … FOR UPDATE → application-side compute → UPDATE → COMMIT sequence holds a transaction open for the duration of application work, not just database work. When the application work involves network calls, sorting, external-system reads, or anything else that can drift past 20 s, the transaction hits the tx-killer timeout and gets rolled back.
Optimistic locking collapses the transaction window to the single conditional UPDATE at the bottom of the sequence, which is millisecond-order. The application-side computation between the read and the update happens outside any transaction, so it can take as long as it needs without ever entering the 20 s envelope.
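A sketch of the collapsed window, assuming autocommit (so each statement runs as its own implicit transaction) and the same illustrative accounts table:
-- No explicit transaction anywhere in this flow.
SELECT id, balance, version FROM accounts WHERE id = ?;
-- ... arbitrary application-side work here: network calls, sorting,
-- external-system reads; no lock held, no transaction open ...
-- The only transaction is this single autocommit statement:
UPDATE accounts
SET balance = ?, version = version + 1
WHERE id = ? AND version = ?;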
Mechanisms¶
Version column¶
A dedicated monotonically-increasing integer column (often version or lock_version), incremented by every update. Every UPDATE matches the observed version in the WHERE clause and increments it in the SET clause. This is the most common implementation; ORMs like ActiveRecord, Hibernate, and the Django ORM ship first-class optimistic-locking support on a version column.
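A minimal sketch of the schema side, reusing the illustrative accounts table from above (MySQL syntax; the column name and type are assumptions):
-- One-time DDL: add the version marker.
ALTER TABLE accounts
ADD COLUMN version BIGINT UNSIGNED NOT NULL DEFAULT 0;
-- Every write then matches the observed version and bumps it.
UPDATE accounts
SET balance = ?, version = version + 1
WHERE id = ? AND version = ?;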
Timestamp column¶
Uses updated_at or similar as the version marker. Works in principle but is vulnerable to clock skew and coarse clock resolution (two updates in the same clock tick carry identical timestamps, so the conflict goes undetected). An integer version column is the safer default.
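A sketch of the timestamp variant; the updated_at column and MySQL's microsecond-precision NOW(6) are illustrative assumptions:
UPDATE accounts
SET balance = ?, updated_at = NOW(6)
WHERE id = ?
  AND updated_at = ?; -- the updated_at observed at read time
-- Two commits inside the same clock tick carry identical
-- timestamps, so one lost update can slip through undetected.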
Content-based comparison¶
WHERE balance = <observed_value> AND status = <observed_value> …
— no dedicated version column; the WHERE clause lists every field the application read. Works, but is fragile (forgetting a field means missed conflicts) and verbose.
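Spelled out on the illustrative accounts table (the status column is assumed for illustration), with every observed field repeated in the WHERE clause:
UPDATE accounts
SET balance = ?, status = ?
WHERE id = ?
  AND balance = ? -- balance observed at read time
  AND status = ?; -- status observed at read time
-- Omitting any observed field here silently reintroduces the
-- lost-update race for that field.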
Compare-and-set primitives¶
At the database layer: MySQL's UPDATE … WHERE version = ? with a ROW_COUNT() check. At the application layer: higher-level concurrency primitives like Java's AtomicInteger.compareAndSet apply the same discipline at the JVM level rather than in the database.
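A sketch of the database-layer check; MySQL's ROW_COUNT() reports the number of rows changed by the immediately preceding statement:
UPDATE accounts
SET balance = ?, version = version + 1
WHERE id = ? AND version = ?;
-- 1 means the compare-and-set won; 0 means a concurrent writer
-- advanced the version first, so retry from the read.
SELECT ROW_COUNT();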
Canonical cases¶
- Single-row state updates with bounded retries — account balance, counter, flag transitions. Rare conflict rate, small retry cost. This is the sweet spot.
- Simple workflow state machines — the use case cited in the source post: update "order status from PENDING to CONFIRMED, iff still PENDING" is exactly the shape optimistic locking is best at (see the sketch after this list).
- Cache-through writes — read from cache, compute, write to database conditional on version, invalidate cache on success. Avoids long-lived row locks during cache population.
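A sketch of that state-machine transition; the status column itself serves as the version marker, so no separate counter is needed (the orders table and its columns are assumptions):
-- Succeeds (1 row) only if the order is still PENDING; any
-- concurrent transition makes this a no-op (0 rows).
UPDATE orders
SET status = 'CONFIRMED'
WHERE id = ? AND status = 'PENDING';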
When optimistic locking stops working¶
- High conflict rate — if 10+ concurrent writers are contending on the same row, retry loops burn CPU without making progress; a pessimistic row-level lock is cheaper under that much contention.
- Multi-row invariants — updating account A and account B together in a money transfer is the canonical multi-row constraint; optimistic locking on each row separately does not serialise A+B as a unit. You need either a genuine transaction (if it fits within the 20 s cap) or a saga with compensations.
- Append-only + aggregate — slotted-counter patterns and event sourcing are sometimes better than optimistic-locking a single hot aggregate row; see the slotted-counter sketch below.
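A minimal sketch of the slotted-counter alternative, assuming a hypothetical counter_slots table: writers spread increments across a fixed number of slots instead of contending on one hot row, and readers sum the slots.
-- Illustrative schema: one row per (counter, slot).
CREATE TABLE counter_slots (
  counter_id BIGINT NOT NULL,
  slot       TINYINT NOT NULL,  -- each writer picks one at random, e.g. 0-15
  value      BIGINT NOT NULL DEFAULT 0,
  PRIMARY KEY (counter_id, slot)
);
-- Writers increment a random slot; no shared hot row, no retry loop.
INSERT INTO counter_slots (counter_id, slot, value)
VALUES (?, ?, 1)
ON DUPLICATE KEY UPDATE value = value + 1;
-- Readers aggregate across slots.
SELECT SUM(value) FROM counter_slots WHERE counter_id = ?;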
Relationship to isolation levels¶
Optimistic locking works at every isolation level because the correctness argument is the conditional update, not the isolation guarantee. Even under READ UNCOMMITTED the conditional update enforces the invariant: if the observed version is stale, the WHERE clause doesn't match and no row is updated. This is why ORM optimistic-locking implementations are portable across databases that disagree about isolation semantics.
Seen in¶
- sources/2026-04-21-planetscale-supports-notes-from-the-field — canonical wiki citation of optimistic locking as PlanetScale Support's second-ladder-rung recommendation when users run into the 20 s transaction cap. Framed as "for simple workloads, consider using optimistic locking instead of transactions" — i.e. as the primary tool for shortening the transaction boundary. Paired on the same ladder with sagas (for multi-step workflows) and ETL offload (Airbyte / Stitch) for the OLAP cases optimistic locking can't solve.