Predicate locking (Postgres)¶
Predicate locks are Postgres's
non-blocking locking primitive for tracking the row-sets
(not the individual rows) a transaction has touched, used
under SERIALIZABLE isolation to detect conflicts with
concurrent transactions without ever blocking access.
Definition¶
From Ben Dicken's PlanetScale walkthrough (Source:
sources/2026-02-22-planetscale-what-is-a-database-transaction):
"As transactions read and write rows, Postgres creates
predicate locks, which are 'locks' on sets of rows
specified by a predicate. For example, if a transaction
updates all rows with IDs 10–20, it will take a lock on the
predicate WHERE id BETWEEN 10 AND 20."
Unlike shared/exclusive row locks, predicate locks do not
block anyone's access to those rows. Instead they are
annotations — records of which row-sets each transaction
is interested in — that Postgres consults at commit time to
detect whether two concurrent transactions could have
violated SERIALIZABLE guarantees by operating on overlapping
predicates.
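The bookkeeping can be pictured with a toy model (purely illustrative, not Postgres internals — the `RangePredicate` type and its fields are invented for this sketch): treat a predicate like WHERE id BETWEEN 10 AND 20 as an inclusive range and test ranges for overlap.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RangePredicate:
    """A predicate of the form WHERE id BETWEEN lo AND hi (toy model)."""
    lo: int
    hi: int

    def overlaps(self, other: "RangePredicate") -> bool:
        # Two inclusive ranges overlap unless one ends before the other starts.
        return self.lo <= other.hi and other.lo <= self.hi


p1 = RangePredicate(10, 20)  # WHERE id BETWEEN 10 AND 20
p2 = RangePredicate(18, 25)  # overlapping range -> worth examining for conflict
p3 = RangePredicate(30, 40)  # disjoint range -> no possible conflict

print(p1.overlaps(p2))  # True
print(p1.overlaps(p3))  # False
```

Overlap between two transactions' predicates is only the trigger for further checking, not itself proof of a serializability violation.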
How it enables optimistic concurrency¶
Combined with multi-version row storage (see concepts/xmin-xmax-row-versioning), predicate locks let Postgres run optimistically:
- Transactions never wait for a lock.
- Transactions record which row-sets they read and which they wrote via predicate locks.
- At commit time, Postgres checks whether any concurrent transaction's reads overlapped with this transaction's writes (or vice versa) in a way that breaks serializability.
- If so, it kills one of the conflicting transactions, which must retry.
The result is called Serializable Snapshot Isolation (SSI) — formally strong enough to forbid write-skew anomalies (which plain snapshot isolation allows).
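The commit-time check can be sketched in miniature. This is a deliberately simplified and conservative model — the `Txn` class and `try_commit` function are invented here, and real SSI tracks rw-antidependency graphs and only aborts on dangerous structures, whereas this sketch aborts on any read/write predicate overlap:

```python
class SerializationFailure(Exception):
    """Stand-in for Postgres's 'could not serialize access' error."""


class Txn:
    def __init__(self, name: str):
        self.name = name
        self.reads = []   # (lo, hi) predicate ranges this txn read
        self.writes = []  # (lo, hi) predicate ranges this txn wrote


def overlaps(a, b):
    return a[0] <= b[1] and b[0] <= a[1]


def try_commit(txn, concurrent):
    """Abort txn if its writes overlap a concurrent txn's reads, or vice versa."""
    for other in concurrent:
        for w in txn.writes:
            if any(overlaps(w, r) for r in other.reads):
                raise SerializationFailure(f"{txn.name} conflicts with {other.name}")
        for r in txn.reads:
            if any(overlaps(r, w) for w in other.writes):
                raise SerializationFailure(f"{txn.name} conflicts with {other.name}")
    return "committed"


# Classic write-skew shape: each transaction reads the range the other writes.
t1 = Txn("t1"); t1.reads = [(10, 20)]; t1.writes = [(30, 40)]
t2 = Txn("t2"); t2.reads = [(30, 40)]; t2.writes = [(10, 20)]
try:
    try_commit(t1, [t2])
except SerializationFailure as e:
    print("killed:", e)  # one transaction must retry
```

Note that no lock was ever held: both transactions ran to completion, and the conflict surfaced only when one tried to commit.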
Contrast with MySQL locking¶
MySQL / InnoDB takes the opposite approach:
| Aspect | MySQL SERIALIZABLE | Postgres SERIALIZABLE |
|---|---|---|
| Locking strategy | Pessimistic — always X-lock on update | Optimistic — predicate-track, never block |
| Concurrent conflict | Waits on lock, deadlock possible | Never blocks, kills at commit |
| Escape hatch | Deadlock detector kills a transaction | Commit-time SSI check kills a transaction |
| Retry needed? | Yes | Yes |
Both approaches require application-side retry logic, but for different reasons: MySQL because lock contention can cause deadlocks that the engine breaks by killing a participant, Postgres because committing may hit an SSI violation and fail.
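The retry logic itself looks the same on both engines. A minimal sketch, with a hypothetical `run_transaction` standing in for real driver code (in practice you would catch the driver's serialization-failure error, e.g. SQLSTATE 40001 from Postgres, roll back, and re-run the transaction):

```python
class SerializationFailure(Exception):
    """Stand-in for SQLSTATE 40001 (Postgres) or a deadlock error (MySQL)."""


def run_transaction(attempt: int) -> str:
    # Hypothetical work function: fails on the first attempt, then succeeds,
    # simulating a transaction killed by the SSI check or deadlock detector.
    if attempt < 2:
        raise SerializationFailure("could not serialize access")
    return "committed"


def with_retries(fn, max_attempts: int = 5) -> str:
    for attempt in range(1, max_attempts + 1):
        try:
            return fn(attempt)
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            # In real code: roll back, optionally back off, then retry.
    raise AssertionError("unreachable")


print(with_retries(run_transaction))  # → committed (after one retry)
```

The retry must re-run the whole transaction from the top, since any values it read before being killed may be stale.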
Why it matters at scale¶
Predicate locking shines in read-mostly workloads with
occasional writes: readers never block, writers never block,
and the only cost is a commit-time check that usually passes
cleanly. Under heavy write contention on overlapping
predicates, Postgres may kill transactions frequently enough
that retry costs start to approach the cost of MySQL-style
blocking locks — which is why neither engine's SERIALIZABLE
mode is a free lunch.
Not the same as row locks¶
Postgres still takes conventional row-level locks for
SELECT ... FOR UPDATE / FOR SHARE and for ordinary write
operations. Predicate locks are specifically the SSI
bookkeeping layer for tracking what predicates transactions
have touched, distinct from the row-level locks that
enforce individual row write serialization. See
concepts/shared-lock-vs-exclusive-lock for the row-level
picture.
Seen in¶
- PlanetScale — What is a database transaction? — canonical explanation of the WHERE id BETWEEN 10 AND 20 predicate-lock example.
Related¶
- concepts/snapshot-isolation — the weaker model that predicate locks + SSI strengthen into true serializability.
- concepts/mysql-transaction-isolation-levels · concepts/shared-lock-vs-exclusive-lock · concepts/gap-locking
- concepts/deadlock-vs-lock-contention — the MySQL counterpart escape hatch.
- concepts/database-transaction
- systems/postgresql