
CONCEPT

Tunable consistency

Tunable consistency is the property that a database lets applications choose the consistency and durability level per operation, rather than fixing one level for the whole system. The canonical realization is MongoDB's read concerns and write concerns: each write declares how many replicas must acknowledge it and how deeply it must be persisted; each read declares whether it must see majority-committed state.

The one-size-fits-all problem

Traditional databases ship with a single consistency mode: SQL RDBMSs → ACID, most early NoSQL systems → eventually consistent. This creates an architectural mismatch: different data in the same application has different consistency needs, but a single-mode database forces the app to choose the worst-case level for all data — or to split across multiple databases.

MongoDB's framing:

"Not all data is created equal. Within a single application, some data — like a 'page view count' — might not have the same consistency requirements as an 'order checkout value'." — MongoDB (Source: sources/2025-09-25-mongodb-from-niche-nosql-to-enterprise-powerhouse)

Without tunable consistency you get the multi-database problem for consistency reasons alone: one DB for loose-consistency counters, another for strictly-consistent checkout — with all the synchronization pain that creates.

Shape of the knob (MongoDB)

The spectrum MongoDB exposes:

  • "Fire and forget" — write is sent, no acknowledgement waited for. Fastest, weakest durability. Appropriate for page-view counters / similar telemetry that can tolerate loss.
  • Acknowledged writes to the primary — single-node durability.
  • Journaled writes — write sits in the WAL before ack.
  • Majority writes — a majority of voting replica-set members have the write before ack; the write survives a single-replica loss and is visible under majority reads.
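The levels above differ mainly in how many member acknowledgements the client waits for before the driver reports success. A minimal sketch of that threshold (illustrative only — not pymongo's API; real drivers also handle j, wtimeout, and tag sets):

```python
def acks_required(w, n_voting_members):
    """How many acknowledgements a write waits for, per concern level.

    w=0          -> fire and forget (wait for none)
    w=1          -> primary only
    w="majority" -> floor(n/2) + 1 of the voting members
    """
    if w == 0:
        return 0
    if w == "majority":
        return n_voting_members // 2 + 1
    return int(w)

# In a 3-member replica set:
assert acks_required(0, 3) == 0           # fire and forget
assert acks_required(1, 3) == 1           # primary ack
assert acks_required("majority", 3) == 2  # survives loss of one member
assert acks_required("majority", 5) == 3  # 5-member set: majority is 3
```

The majority threshold is why a majority-acknowledged write survives failover: any two majorities of the voting members overlap in at least one node.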

Read concerns mirror this: local (whatever the node has applied, including data that may later roll back), majority (guaranteed-durable, survives failover), linearizable (reads reflect all majority-acknowledged writes that completed before the read began, in real time).
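The local/majority distinction can be modeled with a toy node that tracks two watermarks — what it has applied locally versus what is known to be majority-committed. This is a sketch of the semantics, not MongoDB's implementation:

```python
class Node:
    """Toy replica-set member (illustrative model, not MongoDB internals)."""

    def __init__(self):
        self.applied = []             # writes applied locally; may roll back
        self.majority_committed = []  # writes replicated to a majority; durable

    def read(self, concern):
        if concern == "local":
            return list(self.applied)             # fast, may see un-durable data
        if concern == "majority":
            return list(self.majority_committed)  # survives failover
        raise ValueError(f"unknown read concern: {concern}")

node = Node()
node.applied = ["w1", "w2"]          # w2 not yet replicated to a majority
node.majority_committed = ["w1"]

assert node.read("local") == ["w1", "w2"]   # sees w2, which could roll back
assert node.read("majority") == ["w1"]      # only durable state
```

If the primary fails before w2 reaches a majority, a local reader may have observed a write that never happened from the cluster's point of view — exactly the anomaly majority reads exclude.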

Why it unlocks single-database consolidation

With per-operation tuning, one logical database can host both the checkout and the page-view-counter workloads. The application picks:

  • Checkout write → w: "majority", j: true → fully ACKed, durable, majority-committed.
  • Page-view counter write → w: 0 or w: 1 → low-latency, tolerate loss.
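In driver terms, the per-workload choice reduces to attaching a different write-concern document to each operation. Field names (w, j) follow MongoDB's documented write-concern options; the workload names and chooser function are hypothetical:

```python
# Per-operation write concerns as they appear in a command document.
# {"w": ..., "j": ...} are MongoDB's documented write-concern fields;
# the workload map itself is an illustrative application convention.
WRITE_CONCERNS = {
    "checkout":  {"w": "majority", "j": True},  # durable, majority-committed
    "page_view": {"w": 0},                      # fire and forget, can lose writes
}

def write_concern_for(workload):
    """Pick the concern for a workload; one database, two guarantees."""
    return WRITE_CONCERNS[workload]

assert write_concern_for("checkout") == {"w": "majority", "j": True}
assert write_concern_for("page_view") == {"w": 0}
```

Centralizing the mapping like this also addresses the cognitive-load trade-off below: the consistency decision lives in one reviewed table rather than at every write site.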

This is the consistency-axis answer to the same three-database problem MongoDB's Vector Search addresses on the search-axis: reduce the number of specialised stores you operate by pushing the axis into the existing database.

Trade-offs

  • More cognitive load per write site — every write now has a consistency decision, and getting the wrong level ships a bug that's hard to detect in testing.
  • Visible to libraries / drivers — ORMs and drivers need to expose the knob; ad-hoc fire-and-forget defaults can silently lose writes.
  • Monitoring shifts — which write concern was used becomes a production-observability question (in-flight writes tagged by concern).

Seen in

  • sources/2025-09-25-mongodb-from-niche-nosql-to-enterprise-powerhouse — canonical MongoDB-side articulation: "This flexibility provides the best price/performance tradeoffs for modern applications." Named as one of four enterprise-grade improvements (HA replica sets, horizontal sharding, tunable consistency, multi-document ACID) that moved MongoDB from niche to system-of-record.