

Write-through cache

Definition

A write-through cache is a caching policy in which every write goes through the cache and the authoritative backing store in order, and only returns success once both have been updated. The cache is never a source of truth — it is always a copy — but it is always at least as fresh as the backing store on the write path.
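
In code, the write path is only a few lines. A minimal sketch, assuming hypothetical `store` and `cache` clients with `put`/`set` methods (placeholders for something like DynamoDB and ValKey, not a real client API):

```python
# Write-through write path: update the authoritative store, then the cache,
# and report success only after both have been updated.
# `store` and `cache` are hypothetical clients; `put`/`set` are assumed names.

def write_through(store, cache, key, value):
    store.put(key, value)   # authoritative backing store first
    cache.set(key, value)   # then the cache, so hits are never staler than the store
    return True             # success only once both writes have completed
```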

Contrasted with:

  • Cache-aside (read-through by application) — writes go to the backing store only; the cache is populated on the next read miss. Cache contents can be arbitrarily stale relative to the backing store until a miss triggers a refresh.
  • Write-back — writes go to the cache; the cache flushes to the backing store asynchronously (write path sketched after this list). Lowest write latency; highest risk of data loss if the cache fails before flush.
  • Write-around — writes skip the cache entirely, landing only in the backing store. Useful when written data is rarely re-read soon; bad for hot-write/hot-read patterns.
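
Of the three, write-back is the only one whose write path has real machinery; cache-aside and write-around both reduce to a bare backing-store write with no cache touch. A write-back sketch under the same hypothetical interfaces, with an explicit dirty set and flush step:

```python
# Write-back: acknowledge after the cache write, flush to the store later.
# `cache`/`store` and the `dirty` set are hypothetical; a real flusher would
# batch, retry, and persist its dirty-key bookkeeping.

dirty = set()

def write_back(cache, store, key, value):
    cache.set(key, value)      # acknowledged here: lowest write latency
    dirty.add(key)             # data exists only in the cache until flushed

def flush(cache, store):
    # Runs asynchronously (timer, size threshold, or eviction-triggered).
    # If the cache node dies before this runs, the dirty writes are lost.
    for key in list(dirty):
        store.put(key, cache.get(key))
        dirty.discard(key)
```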

Why choose write-through

  • Read-freshness invariant: a cache hit is never more stale than a miss, because every write already updated the cache.
  • Backing store stays authoritative: the cache can die, be flushed, or be partitioned, and the backing store still has the truth. Typical pairing: a cache like ValKey / Redis in front of a persistent store like DynamoDB or an RDBMS.
  • Eviction is safe: when the cache evicts a key (LRU, TTL), the next read falls through to the backing store and re-populates the cache. A write-through cache needs no eviction protocol beyond the ordinary cache miss; the read path is sketched after this list.
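
A read-path sketch of that miss-and-re-populate behavior, under the same hypothetical interfaces (`ttl_seconds` and the `set(..., ttl=...)` signature are illustrative assumptions, not a specific client API or recommended value):

```python
def read(cache, store, key, ttl_seconds=3600):
    value = cache.get(key)
    if value is not None:
        return value                            # hit: never staler than the write path left it
    value = store.get(key)                      # miss: cold key, LRU eviction, or TTL expiry
    if value is not None:
        cache.set(key, value, ttl=ttl_seconds)  # re-populate for subsequent reads
    return value
```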

Known failure modes

  • Partial-failure ambiguity. If the DynamoDB write succeeds and the ValKey update fails (or vice versa), the cache and store diverge. Production systems must choose: retry the cache write, fail the client, rely on the next write to repair, or accept bounded divergence (one option is sketched after this list). The production-hard part of "write-through" is not the happy path.
  • Write-amplification. Every write costs both a backing-store write and a cache write. For write-heavy workloads this roughly doubles write IOPS, and the extra writes land on infrastructure (the cache) that the write path wouldn't otherwise touch.
  • Latency addition. With the two writes issued in sequence (as in the definition), write latency is roughly the sum of the backing-store and cache latencies; even issued in parallel it is the max of the two, never the min. Write-through trades write latency for read freshness.
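
One way to resolve the partial-failure ambiguity from the first bullet, as a sketch: treat the backing-store write as the commit point, and on a cache failure fall back to deleting the key so the next read repairs the cache from the store. The clients and exception handling are hypothetical; a real system still has to decide what happens if the delete also fails.

```python
def write_through_with_repair(store, cache, key, value):
    store.put(key, value)          # authoritative write; if this fails, fail the client
    try:
        cache.set(key, value)
    except Exception:
        try:
            cache.delete(key)      # force a miss so the next read re-populates from the store
        except Exception:
            pass                   # bounded divergence until TTL expiry or the next write
    return True
```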

When to use it

  • Reads are many; writes are fewer (standard cache arithmetic) and
  • Stale reads have real cost — ranking quality, correctness, user experience — making cache-aside's after-the-next-miss model too weak.

Seen in

  • sources/2026-01-06-lyft-feature-store-architecture-optimization-and-evolution — Lyft's dsfeatures layers a ValKey write-through LRU cache on top of DynamoDB. DynamoDB is the persistent source of truth (with a GSI for GDPR deletion efficiency); ValKey holds the most-frequently-accessed (meta)data with a "generous TTL." The write-through shape is what lets the Feature Store guarantee "strongly consistent reads" across batch, streaming, and on-demand ingestion lanes — a cache-aside shape couldn't make that claim without an added invalidation protocol.