CONCEPT
LRU cache eviction¶
Definition¶
Least Recently Used (LRU) is the industry-standard cache replacement policy. When the cache is full and a new entry needs a slot, evict the entry that has gone unaccessed the longest. Every cache hit promotes that entry back to "most recently used." (Source: sources/2025-07-08-planetscale-caching.)
Ben Dicken's framing:
Unlike FIFO, LRU always evicts the item that has least-recently been requested, a sensible choice to maximize cache hits. This aligns well with temporal locality in real-world data access patterns.
Why it's the default¶
LRU directly exploits temporal locality — recent use predicts future use. For most application workloads (session caches, feed caches, row caches, config caches) recent-is-hot is close enough to truth that LRU delivers near-optimal hit rates with minimal bookkeeping.
It's the default for:
- InnoDB buffer pool (with a midpoint-insertion refinement; see below).
- Postgres `shared_buffers` (clock-sweep approximation).
- Redis `allkeys-lru` / `volatile-lru` policies.
- CloudFront / most CDN edge caches as the default tier policy.
- Linux page cache (multi-generational LRU since kernel 6.1).
- JVM `LinkedHashMap` with `accessOrder = true`; Guava / Caffeine default eviction (Caffeine actually ships W-TinyLFU but presents an LRU-ish API).
Implementation shape¶
Canonical structure: hash-map + doubly-linked list.
- Hit — `hash_map[key]` finds the node → unlink it from its current position → splice to head. O(1).
- Miss — fetch from the backing store → allocate a node → splice to head → if full, drop the tail node and remove it from the hash map. O(1).
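The hash-map + doubly-linked-list shape can be sketched in a few lines of Python, since `OrderedDict` bundles both structures into one (the class name `LRUCache` is illustrative, not from the source):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: OrderedDict is a hash map + doubly-linked list."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()  # front = least recently used

    def get(self, key):
        if key not in self._data:
            return None              # miss: caller fetches from backing store
        self._data.move_to_end(key)  # hit: splice to "most recently used", O(1)
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # full: drop the tail (LRU) entry
```

Both `move_to_end` and `popitem` are O(1), matching the hit/miss costs above.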
Concurrency is the implementation tax: per-hit mutation of the global ordering is a contention hotspot on multi-core systems. Production caches (Caffeine, libcache, etc.) trade exact LRU for approximations — segmented LRU, W-TinyLFU, Clock-Pro — to eliminate the lock.
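Clock (second-chance) is the simplest of those approximations to sketch: a hit only sets a reference bit instead of splicing a list node, which is exactly what removes the per-hit ordering mutation. A minimal single-threaded sketch (the name `ClockCache` is hypothetical):

```python
class ClockCache:
    """Pseudo-LRU via Clock: hits flip a bit; eviction sweeps a rotating hand."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = [None] * capacity   # (key, value) per slot, or None
        self.ref = [False] * capacity    # reference bit per slot
        self.index = {}                  # key -> slot number
        self.hand = 0                    # the clock hand

    def get(self, key):
        slot = self.index.get(key)
        if slot is None:
            return None
        self.ref[slot] = True            # hit: no list splice, just a bit set
        return self.slots[slot][1]

    def put(self, key, value):
        slot = self.index.get(key)
        if slot is not None:
            self.slots[slot] = (key, value)
            self.ref[slot] = True
            return
        # sweep: clear reference bits until an unreferenced victim is found
        while self.slots[self.hand] is not None and self.ref[self.hand]:
            self.ref[self.hand] = False  # second chance
            self.hand = (self.hand + 1) % self.capacity
        victim = self.slots[self.hand]
        if victim is not None:
            del self.index[victim[0]]
        self.slots[self.hand] = (key, value)
        self.ref[self.hand] = False
        self.index[key] = self.hand
        self.hand = (self.hand + 1) % self.capacity
```

Recently hit entries survive the sweep (their bit is set), so eviction order approximates LRU without any per-hit reordering.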
Midpoint-insertion LRU (InnoDB)¶
Classical LRU is vulnerable to scan pollution: a one-time sequential table scan pulls N new pages through the cache, each of which displaces a genuinely-hot page to become the new head — so after the scan, the cache is full of never-again-accessed pages.
InnoDB uses midpoint-insertion LRU to mitigate: new pages are inserted at the "3/8 from the tail" position, not the head. They only get promoted to the hot part of the list on a second access within a configurable window. This keeps single-pass scans from evicting the hot working set.
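The two-sublist mechanic can be sketched as a pair of LRU segments (a simplified illustration: real InnoDB also gates promotion on a time window, `innodb_old_blocks_time`, which this sketch omits; the class name is hypothetical):

```python
from collections import OrderedDict

class MidpointLRU:
    """Sketch of midpoint-insertion LRU: new entries land in an 'old' sublist;
    only a second access promotes them to the 'young' (hot) sublist."""

    def __init__(self, capacity: int, old_fraction: float = 3 / 8):
        self.old_cap = max(1, int(capacity * old_fraction))
        self.young_cap = capacity - self.old_cap
        self.old = OrderedDict()     # scan-resistant staging area (the 3/8 tail)
        self.young = OrderedDict()   # hot working set

    def get(self, key):
        if key in self.young:
            self.young.move_to_end(key)          # normal LRU promotion
            return self.young[key]
        if key in self.old:
            value = self.old.pop(key)            # second access: promote
            self.young[key] = value
            if len(self.young) > self.young_cap:
                k, v = self.young.popitem(last=False)
                self._insert_old(k, v)           # demote young's LRU entry
            return value
        return None

    def put(self, key, value):
        self._insert_old(key, value)  # new pages enter at the midpoint, not head

    def _insert_old(self, key, value):
        self.old[key] = value
        self.old.move_to_end(key)
        if len(self.old) > self.old_cap:
            self.old.popitem(last=False)  # scans evict only other scan pages
```

A one-pass scan churns only the old sublist: scanned pages evict each other while the young sublist (the hot working set) is untouched.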
See concepts/innodb-buffer-pool for the database-engine framing.
Known failure modes¶
- Scan pollution — above.
- Adversarial workload — a cyclic access pattern just larger than the cache evicts every entry right before its next use, collapsing the hit rate to near zero.
- Per-instance ordering — in a distributed cache where the same key can route to different nodes, cache locality must be preserved upstream or LRU works per-node on a random slice of the keyspace and each node rebuilds its own hot set.
- Concurrent LRU is expensive — strict LRU requires per-hit list mutation; at high QPS the list lock becomes the bottleneck. Approximations win.
- Can't distinguish "hot frequent" from "hot recent." A rare item accessed now looks identical to a repeatedly-accessed item accessed now. LFU and LFRU (concepts/lfru-cache-eviction) are answers.
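The adversarial case above is easy to reproduce: cycle over one more key than the cache holds and strict LRU's hit rate goes to exactly zero, while the same workload against an equal-or-larger cache hits almost every time (a self-contained demo; names are illustrative):

```python
from collections import OrderedDict

def cyclic_hit_rate(cache_size: int, num_keys: int, rounds: int) -> float:
    """Access keys 0..num_keys-1 round-robin against a strict LRU cache."""
    cache = OrderedDict()
    hits = total = 0
    for _ in range(rounds):
        for key in range(num_keys):
            total += 1
            if key in cache:
                hits += 1
                cache.move_to_end(key)           # hit: promote
            else:
                cache[key] = True                # miss: insert
                if len(cache) > cache_size:
                    cache.popitem(last=False)    # evict LRU entry
    return hits / total
```

With 101 keys cycling through a 100-slot cache, every key is evicted one access before it would be reused, so the hit rate is 0; with 100 keys it is a hit on every access after the first pass.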
Variants on the wiki¶
- concepts/time-aware-lru-cache-eviction — LRU + per-entry TTL that evicts regardless of access order.
- concepts/lfru-cache-eviction — LRU on a high-priority queue with a demotion path to a low-priority queue.
- Midpoint-insertion LRU (InnoDB variant, described above).
- W-TinyLFU (Caffeine) — frequency sketch + LRU on a small window cache; not directly in the Dicken post but the state-of-the-art production LRU-family policy.
- Clock / Clock-Pro — pseudo-LRU approximations that avoid per-hit list mutation.
Seen in¶
- sources/2025-07-08-planetscale-caching — Ben Dicken's canonical introduction to cache replacement policies positions LRU as the industry default over FIFO because it matches temporal locality.
Related¶
- concepts/fifo-cache-eviction — the simpler alternative LRU improves on.
- concepts/time-aware-lru-cache-eviction
- concepts/lfru-cache-eviction
- concepts/temporal-locality-recency-bias — the access-pattern property LRU assumes.
- concepts/cache-hit-rate
- concepts/innodb-buffer-pool
- concepts/postgres-shared-buffers-double-buffering
- concepts/cache-locality