Strong Consistency (Read-after-Write)

Strong read-after-write consistency is the guarantee that once a write to a key completes, any subsequent read of that key observes the written value — with no "convergence" window of stale reads.

In distributed storage, strong consistency is expensive enough that many systems ship with eventual consistency (or strong for new keys and eventual for overwrites) and hand the reconciliation problem to the customer.
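To make the difference concrete, here is a minimal toy model (pure illustration, nothing to do with S3's implementation): an "eventual" store whose reads may be served by a lagging replica, next to a "strong" store where a completed write is immediately visible to every subsequent read.

```python
import random

class EventualStore:
    """Toy eventually consistent store: writes land on a primary and
    replicate lazily, so a read may observe stale data until the
    replica converges."""
    def __init__(self):
        self.primary = {}
        self.replica = {}          # lags behind the primary

    def put(self, key, value):
        self.primary[key] = value  # replication happens "later"

    def replicate(self):
        """Simulates the convergence window closing."""
        self.replica.update(self.primary)

    def get(self, key):
        # A read may be served by either copy -- this is the window
        # of stale reads that eventual consistency permits.
        source = random.choice([self.primary, self.replica])
        return source.get(key)

class StrongStore:
    """Toy strongly consistent store: a read issued after a completed
    write always observes that write -- no convergence window."""
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)
```

With `EventualStore`, a `get` right after a `put` may still return the old value until `replicate()` runs; with `StrongStore`, that situation cannot arise, which is exactly the guarantee being described.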

Why it's a simplicity feature, not a performance feature

From the 2025 S3 post, on S3's Dec 2020 move to strong read-after-write consistency:

"When we moved S3 to a strong consistency model, the customer reception was stronger than any of us expected (and I think we thought people would be pretty darned pleased!). We knew it would be popular, but in meeting after meeting, builders spoke about deleting code and simplifying their systems."

The customer-visible value is code deletion — all the retry loops, versioned-read patterns, and "wait then re-read" hacks that were the price of eventual consistency disappear. This is the same simplicity framing as patterns/conditional-write.

(Source: sources/2025-03-14-allthingsdistributed-s3-simplicity-is-table-stakes)

Scope of the S3 guarantee

S3 provides strong read-after-write consistency for:

  • New object PUTs (this was already strong pre-2020).
  • Overwrite PUTs (this is what changed in 2020).
  • LIST reflecting prior writes.
  • DELETE then read — subsequent reads fail as expected.

Critically, this applies across all S3 APIs, with no opt-in or per-bucket flag. It's "just how S3 works now" — a property, not a configuration.

Relationship to conditional operations (2024)

With strong consistency in place, S3 could layer conditional writes (compare-and-set against metadata / object version) on top without having to invent a separate locking primitive — the consistency guarantee is the foundation. See patterns/conditional-write.
