CONCEPT Cited by 1 source

Hot key

A hot key is a single key whose request rate is disproportionately higher than the rest of the keyspace's. Under any scheme that maps each key to exactly one node (concepts/static-sharding, consistent hashing), a hot key pins to a single pod, and that pod becomes the bottleneck. The bottleneck is structural: adding more pods to the fleet cannot fix it.

Why static sharding can't fix it

Static hashing is a pure function key → node. Adding pods changes the hash ring but doesn't change the fact that any given key maps to exactly one node. Popular tenants, celebrity users, frequently-read config keys, and autocomplete prefixes all manifest as hot keys and produce the same bottleneck regardless of fleet size.
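The one-key-one-node property can be seen directly in a minimal sketch (the `node_for` helper and the `tenant:celebrity` key are hypothetical; modulo hashing stands in for a consistent-hash ring, which has the same property):

```python
import hashlib

def node_for(key: str, num_pods: int) -> int:
    """Static sharding: a pure function key -> node.

    Modulo hashing is used here for brevity; a consistent-hash ring
    changes *which* node a key maps to, not *how many* (always one).
    """
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_pods

# Growing the fleet reshuffles assignments, but the hot key still
# lands on exactly one pod at every fleet size.
hot_key = "tenant:celebrity"  # hypothetical hot key
for fleet_size in (4, 16, 64):
    owner = node_for(hot_key, fleet_size)
    print(f"fleet={fleet_size:>2}  owner=pod-{owner}")
```

Every request for `hot_key` funnels to its single owner, so that pod's capacity, not the fleet's, bounds throughput for that key.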

Worst case: the hot-key pod saturates, tail latency grows, client retries amplify load, cascading failures ripple outward.
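The retry-amplification step can be made concrete with a little arithmetic: if each attempt against the saturated pod fails with probability p and clients retry up to n times, the offered load is multiplied by the partial geometric series 1 + p + p^2 + ... + p^n. A minimal sketch (`amplified_load` is a hypothetical helper; uniform, independent failure probability is an assumption):

```python
def amplified_load(offered_rps: float, failure_rate: float, max_retries: int) -> float:
    """Total attempt rate hitting the pod, assuming each failed attempt
    (probability `failure_rate`, independent per attempt) triggers one
    retry, up to `max_retries` retries per original request."""
    return offered_rps * sum(failure_rate ** i for i in range(max_retries + 1))

# At a 50% failure rate with up to 3 retries, clients nearly double
# the load on the already-saturated pod: 100 * (1 + .5 + .25 + .125).
print(amplified_load(100.0, 0.5, 3))  # → 187.5
```

This is the feedback loop behind the cascade: saturation raises the failure rate, which raises the attempt rate, which deepens saturation.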

Fixes

  • Slice-level isolation + replication (patterns/shard-replication-for-hot-keys) — break the hot key into its own slice, assign that slice to multiple pods, round-robin across replicas. This is systems/dicer's canonical answer (Source: sources/2026-01-13-databricks-open-sourcing-dicer-auto-sharder).
  • Request coalescing — merge concurrent reads for the same key at the fronting tier (singleflight-style).
  • Read-through caching layer in front of the shard — but this inherits the overread and network tax that systems/dicer was designed to avoid.
  • Client-side caching with TTL + negative-cache on miss — application-level, doesn't help writes.
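The first fix above, slice-level isolation with round-robin replication, can be sketched as a routing layer: hot slices get a dedicated replica set, cold keys keep their ordinary static mapping. (The `Router` class, its pod numbering, and the hot-key name are all hypothetical; this is an illustration of the technique, not Dicer's implementation.)

```python
import itertools

class Router:
    """Sketch of hot-slice replication: a hot key is assigned its own
    replica set and reads round-robin across it; every other key keeps
    its single statically-hashed owner."""

    def __init__(self, num_pods: int, hot_replicas: dict[str, list[int]]):
        self.num_pods = num_pods
        # One round-robin cursor per hot slice.
        self._cycles = {key: itertools.cycle(pods)
                        for key, pods in hot_replicas.items()}

    def route(self, key: str) -> int:
        cycle = self._cycles.get(key)
        if cycle is not None:
            return next(cycle)  # spread reads across the slice's replicas
        return hash(key) % self.num_pods  # cold keys: static mapping

router = Router(num_pods=8, hot_replicas={"tenant:celebrity": [1, 4, 6]})
# Successive reads of the hot key alternate across its dedicated replicas.
print([router.route("tenant:celebrity") for _ in range(4)])  # → [1, 4, 6, 1]
```

Note the contrast with the anti-pattern discussed below: the hot slice is copied onto a small, dedicated replica set, not smeared across the whole fleet.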

"Spreading the hot tenant"

Worth contrasting with a common anti-pattern seen in storage: trying to flatten hot tenants by widening their placement across the fleet. As the systems/aws-ebs retrospective observed, spreading a hot tenant across more nodes in a shared-resource multi-tenant system actually widens the noisy-neighbor blast radius (concepts/noisy-neighbor). Dicer's replication approach works because it replicates the hot slice onto dedicated replicas, not because it smears the tenant.
