CONCEPT Cited by 1 source

FIFO cache eviction

Definition

First In, First Out (FIFO) is the simplest cache replacement policy. The cache is a queue ordered by insertion time. New items are added at one end; when the cache is full, the item that's been there the longest — regardless of how often it's been read — is evicted. (Source: sources/2025-07-08-planetscale-caching.)

[ newest ← … ← oldest → evict ]

How it works

  • On write / insert — push the new entry onto the head of the queue. If the cache is full, pop the tail first.
  • On hit — return the value. Do not change the queue position. This is the defining property that separates FIFO from LRU.
  • On miss — fetch from the backing store, insert at head.
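
Those three rules fit in a few lines. A minimal sketch (assuming Python; `OrderedDict` preserves insertion order, and assigning to an existing key does not move it):

```python
from collections import OrderedDict

class FIFOCache:
    """Minimal FIFO cache: evicts in insertion order, ignores hits."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order == queue order

    def get(self, key):
        # On hit: return the value, do NOT touch the queue position.
        return self.data.get(key)

    def put(self, key, value):
        if key in self.data:
            self.data[key] = value  # overwrite keeps the original position
            return
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)  # evict the oldest insertion
        self.data[key] = value
```

Note that `get` never touches the ordering — adding a `move_to_end` call on hit is all it would take to turn this into LRU.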

Why it's simple

FIFO needs only a single queue plus a lookup table. The data structure is a standard deque or a linked list with a size counter, alongside a hash map from key to entry. No per-hit bookkeeping — write-path cost is O(1), and the read path does zero eviction-related work beyond the lookup itself.

Why it's usually wrong for application caches

FIFO ignores access pattern entirely:

While simple to implement, FIFO isn't optimal for most caching scenarios because it doesn't consider usage patterns.

sources/2025-07-08-planetscale-caching

A popular item inserted early — say, the homepage feature list, or a lookup for your user session — will be evicted on schedule, regardless of how many times it's been served from cache since. The next request re-fetches it, and it reinserts at the head, evicting something else. The net effect is thrashing: hot items oscillate between cache and backing store.
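
The thrashing shows up in a toy simulation (a hypothetical sketch; the single `promote_on_hit` flag is the only difference between FIFO and LRU here):

```python
from collections import OrderedDict

def hit_rate(promote_on_hit, requests, capacity):
    """Fraction of requests served from cache. promote_on_hit=False
    is FIFO; promote_on_hit=True is LRU. Eviction pops the queue head."""
    cache, hits = OrderedDict(), 0
    for key in requests:
        if key in cache:
            hits += 1
            if promote_on_hit:
                cache.move_to_end(key)  # LRU: refresh position on every hit
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)  # evict oldest insertion
            cache[key] = True
    return hits / len(requests)

# One hot key interleaved with a stream of one-shot cold keys:
workload = []
for i in range(1000):
    workload += ["hot", f"cold-{i}"]

fifo = hit_rate(False, workload, capacity=3)  # ~0.33: hot key thrashes
lru = hit_rate(True, workload, capacity=3)    # ~0.50: hot key stays put
```

With a cache of 3, FIFO's steady state evicts the hot key on every third cold insert, so it oscillates exactly as described; LRU keeps it resident forever.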

This is precisely the problem [[concepts/temporal-locality-recency-bias|temporal locality]] says most workloads have — and the reason LRU is the industry default.

When FIFO is actually the right choice

  • Uniformly random access. When there's no temporal locality to exploit, FIFO ≈ LRU in hit rate, and FIFO is cheaper.
  • Streaming / scan workloads. Each item is read once and never again — LRU's promote-on-hit work is wasted.
  • Specialized hardware caches where the extra state LRU requires is expensive. Some CPU caches use FIFO or pseudo-FIFO variants at the L3 tier for cost reasons.
  • Queues that happen to be in-memory stores. A Kafka consumer offset, a Redis list used as a job queue — these aren't "caches" in the replacement-policy sense, but the data structure is FIFO by construction.
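
The first bullet is easy to check empirically (a hypothetical sketch; with uniformly random keys there is no recency signal, so LRU's promote-on-hit work buys nothing):

```python
import random
from collections import OrderedDict

def hit_rate(promote_on_hit, requests, capacity):
    """promote_on_hit=False is FIFO, True is LRU; evict from queue head."""
    cache, hits = OrderedDict(), 0
    for key in requests:
        if key in cache:
            hits += 1
            if promote_on_hit:
                cache.move_to_end(key)  # LRU-only bookkeeping
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)
            cache[key] = True
    return hits / len(requests)

random.seed(0)
# 50k uniformly random keys from a universe of 100, cache of 10:
uniform = [random.randrange(100) for _ in range(50_000)]
fifo = hit_rate(False, uniform, capacity=10)
lru = hit_rate(True, uniform, capacity=10)
# Both land near capacity/universe = 0.10, within noise of each other.
```

Whatever 10 keys the cache holds, the next uniform request hits with probability 10/100 — the replacement policy can't change that.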

Variants

  • FIFO with a second-chance bit (the classic Clock algorithm; the newer SIEVE policy is a close FIFO-with-visited-bit cousin) — flip a "recently used" bit on each hit; on eviction, skip over items with the bit set (and clear it). Much better hit rate than plain FIFO, still very cheap.
  • Segmented FIFO — two-queue variants sometimes branded as FIFO-2Q; approximates LRU without the per-hit reordering cost.

These second-chance variants are usually what a production "FIFO cache" actually is — pure FIFO is rare in real systems.
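
A sketch of the second-chance mechanism (hypothetical and Clock-flavored; real implementations typically use a ring buffer with a clock hand rather than requeueing):

```python
from collections import OrderedDict

class SecondChanceCache:
    """FIFO queue plus one 'visited' bit per entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # key -> [value, visited]

    def get(self, key):
        entry = self.data.get(key)
        if entry is None:
            return None
        entry[1] = True  # mark visited; still no reordering on hit
        return entry[0]

    def put(self, key, value):
        if key in self.data:
            self.data[key][0] = value
            return
        while len(self.data) >= self.capacity:
            old_key, entry = self.data.popitem(last=False)
            if entry[1]:
                entry[1] = False            # clear the bit...
                self.data[old_key] = entry  # ...and requeue at the tail
            else:
                break                       # unvisited: actually evict
        self.data[key] = [value, False]
```

A hit now saves an entry from the next eviction pass: the visited bit buys exactly one extra trip through the queue, which is enough to keep genuinely hot items resident.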

Seen in

  • sources/2025-07-08-planetscale-caching — Ben Dicken's canonical pedagogical introduction to cache replacement policies; FIFO as the simplest baseline, explicitly critiqued for not considering usage patterns.