CONCEPT Cited by 2 sources
Eventual consistency¶
Eventual consistency is a liveness guarantee: if no new updates are made to a shared value, all observers will eventually converge on the same read. No bound is given on how quickly convergence happens; the guarantee is only that it eventually does.
Contrast with concepts/strong-consistency (read-after-write: any read issued after a write must reflect that write). Eventual consistency trades the synchronous read-after-write guarantee for availability: readers always get some value during partitions or transitions, rather than an error.
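The convergence guarantee can be shown in miniature with a last-writer-wins register that replicas merge via anti-entropy. This is a generic sketch of the concept, not any system described in this note; all names and the timestamp scheme are illustrative.

```python
class LWWRegister:
    """Toy last-writer-wins register: replicas accept writes independently,
    then converge by merging state. Illustrative sketch only."""

    def __init__(self):
        self.value = None
        self.timestamp = 0.0  # logical time of the last accepted write

    def write(self, value, ts):
        if ts > self.timestamp:  # last writer wins
            self.value, self.timestamp = value, ts

    def merge(self, other):
        """Anti-entropy step: adopt the other replica's write if it is newer."""
        self.write(other.value, other.timestamp)

# Two replicas diverge (e.g., during a partition)...
a, b = LWWRegister(), LWWRegister()
a.write("x", ts=1.0)
b.write("y", ts=2.0)
# ...then exchange state; once updates stop, both converge on the later write.
a.merge(b); b.merge(a)
assert a.value == b.value == "y"
```

Between the divergent writes and the merge, a reader of `a` and a reader of `b` see different values; that window is exactly what the liveness-only guarantee permits.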
In auto-sharders¶
systems/dicer chose eventually-consistent Assignments: the Clerk (client library) and Slicelet (server library) maintain locally-cached Assignments and receive updates asynchronously from the Assigner. During transitions (split, merge, replication, move) Clerks and Slicelets may briefly disagree about which pod owns a given key.
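The cached-Assignment pattern can be sketched as a client-side routing table that serves lookups from its local copy and applies Assigner pushes asynchronously. The class and method names below are illustrative, not Dicer's actual API; the hash-mod routing is also an assumption for the sketch.

```python
import threading

class Clerk:
    """Sketch of a client library that routes keys via a locally-cached
    Assignment, in the spirit of Dicer's Clerk. Names are hypothetical."""

    NUM_SHARDS = 4  # assumed fixed shard count for the sketch

    def __init__(self):
        self._assignment = {}          # shard id -> pod, as last pushed
        self._lock = threading.Lock()

    def on_assignment_update(self, new_assignment):
        # Invoked asynchronously when the Assigner pushes a new Assignment.
        # Between pushes, lookups serve the cached (possibly stale) mapping,
        # so two Clerks can briefly route the same key to different pods.
        with self._lock:
            self._assignment = dict(new_assignment)

    def pod_for(self, key):
        with self._lock:
            return self._assignment.get(hash(key) % self.NUM_SHARDS)
```

Because `pod_for` never blocks on the Assigner, lookups stay available during transitions; the cost is the brief disagreement window described above.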
The trade-off, per the Databricks post (Source: sources/2026-01-13-databricks-open-sourcing-dicer-auto-sharder):
- Won: availability and fast recovery. Clients don't stall waiting for a coordinator; pods recover quickly after transitions.
- Given up: strong key-ownership. Two pods may briefly think they own a key; applications either tolerate this or add their own mutex.
This is the same design space where systems/slicer and systems/centrifuge made the opposite choice, using leases: stronger ownership, at the cost of lease-refresh overhead and more complex failure handling.
When eventual consistency is safe¶
- Cache workloads — transient stale reads are fine; final-state correctness comes from the backing store.
- Idempotent writes — double-owner transient state is absorbed by deduplication.
- Read-mostly serving with a snapshot-consistent storage layer (e.g., systems/unity-catalog).
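The idempotent-writes bullet can be made concrete: if every write carries a unique operation id and the store deduplicates on it, a write replayed by the second transient owner is simply ignored. A minimal sketch, with hypothetical names:

```python
class DedupStore:
    """Sketch: deduplicating on a client-supplied operation id makes writes
    idempotent, so a brief double-owner window causes at most a harmless
    replay. Illustrative only."""

    def __init__(self):
        self.data = {}
        self.applied = set()   # operation ids already applied

    def write(self, op_id, key, value):
        if op_id in self.applied:
            return False       # duplicate from the other transient owner
        self.applied.add(op_id)
        self.data[key] = value
        return True

store = DedupStore()
assert store.write("op-1", "k", 42) is True    # first owner's write lands
assert store.write("op-1", "k", 42) is False   # second owner's replay absorbed
assert store.data["k"] == 42
```

The operation-id set would need bounding (e.g., expiry) in practice; the point is only that deduplication converts double ownership from a correctness bug into a no-op.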
When it isn't¶
- Exclusive-lock style workloads ("only one worker processes this job") need leases or consensus. Dicer's "soft" leader election (concepts/soft-leader-election) names this limit explicitly.
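The lease-based alternative (the Slicer/Centrifuge-style choice contrasted above) can be sketched as a time-bounded ownership token: a worker may act as exclusive owner only while its lease is unexpired, and must refresh before expiry or stop. All names are illustrative; a real implementation also needs a coordinator granting leases and margins for clock skew.

```python
import time

class Lease:
    """Sketch of a time-bounded exclusive-ownership lease. This is the
    stronger guarantee eventually-consistent Assignments give up."""

    def __init__(self, holder, duration_s):
        self.holder = holder
        self.expires_at = time.monotonic() + duration_s

    def is_held_by(self, worker):
        # Exclusive ownership holds only while the lease is unexpired;
        # the refresh loop and expiry handling are the complexity cost.
        return self.holder == worker and time.monotonic() < self.expires_at

lease = Lease("worker-1", duration_s=10.0)
assert lease.is_held_by("worker-1")        # holder may process the job
assert not lease.is_held_by("worker-2")    # everyone else must not
```

An expired lease denies even its holder, which is what prevents two workers from both believing "only one worker processes this job" is satisfied.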
Historical CAP framing (NoSQL default)¶
Most early NoSQL databases shipped eventual consistency as the default, prioritising Availability + Partition tolerance (AP) over Consistency under the CAP theorem. For roughly a decade this defined the market's perception of NoSQL as a category — even for exceptions like MongoDB, which was designed CP (Consistency + Partition tolerance) from the start and "was often lumped in with the rest, leading to the imprecise label of having 'light consistency'" (Source: sources/2025-09-25-mongodb-from-niche-nosql-to-enterprise-powerhouse). The adoption argument MongoDB later made against this framing — 70%+ of the Fortune 500, 7 of the 10 largest banks on MongoDB — is pitched as the empirical correction of the lumped-in categorisation.
The perception-vs-reality gap mattered architecturally: during the decade when NoSQL meant eventual consistency, system-of-record workloads (banking ledgers, medical records, order checkout) stayed on relational databases regardless of what an individual NoSQL database could actually guarantee. Closing the gap required demonstrable capabilities, per-operation knobs (tunable consistency) plus multi-document ACID transactions, not a category-wide rebrand.
Seen in¶
- sources/2026-01-13-databricks-open-sourcing-dicer-auto-sharder — Dicer's explicit consistency choice; the prior-art systems (Slicer, Centrifuge) that chose differently.
- sources/2025-09-25-mongodb-from-niche-nosql-to-enterprise-powerhouse — MongoDB's own framing of eventual consistency as the industry-default NoSQL posture it spent 15 years differentiating from; the historical context for why tunable consistency + multi-doc ACID mattered for enterprise adoption.