CONCEPT
Lazy leaky bucket update¶
Definition¶
A lazy leaky bucket data structure avoids a dedicated background thread to drain buckets or evict idle buckets. Instead, both operations are amortised into the arrival path:
- Drain on read. On every bucket read (i.e. every admission decision), the bucket re-computes its current debt level from the stored last_update_timestamp, the stored_debt, and the configured drain_rate. No continuous draining happens between accesses.
- Evict on insert. When a new bucket must be allocated (because a query maps to a budget with no existing bucket in shared memory), the data structure evicts either any bucket already at zero debt, or (if all buckets are non-empty) the bucket expected to reach zero debt soonest.
Neither operation requires a separate sweeper goroutine, thread, or wake-up timer. The maintenance cost is borne by the query-arrival traffic itself.
Why it matters¶
For a rate limiter operating inside a microsecond-budget admission path (see Traffic Control), per-bucket maintenance cost matters. Spawning a dedicated sweeper would add:
- CPU overhead for idle polling.
- Lock contention against the arrival path.
- Latency pauses if the sweeper coalesces work into periodic bursts.
Lazy update avoids all three. The shared-memory footprint is tunable to a fixed cap; when the cap is reached, insertion triggers eviction.
Source framing¶
From sources/2026-04-21-planetscale-behind-the-scenes-how-database-traffic-control-works:
"There is no periodic task that drains debt from all buckets. Instead, each bucket is updated only when read. There is also no periodic task to evict buckets with a debt level of zero. Instead, adding a new bucket to the table evicts any that have already emptied, or whichever bucket is expected to become empty soonest."
Composition with reverse leaky bucket¶
Lazy update and the reverse (debt-accumulating) bucket topology compose naturally. Because a bucket at zero debt in the reverse topology is semantically equivalent to "no bucket allocated," eviction-at-zero is safe — the next arrival re-allocates a fresh zero-debt bucket with identical rate-limit semantics. The classical (credit-draining) topology can't do this: a bucket at max credits is semantically different from "no bucket allocated" for a newly-arriving entity.
"Empty-soonest" eviction¶
When every bucket is non-empty, the source says eviction picks "whichever bucket is expected to become empty soonest." This is a linear prediction: expected_empty_at = now + current_debt / drain_rate. The bucket with the smallest expected_empty_at is chosen. This biases eviction toward buckets that would have been naturally evictable within the smallest lookahead window — a reasonable approximation that preserves active workload state.
Caveats¶
- Pathological arrival patterns. A workload that rotates through many high-drain buckets can evict each bucket before the next arrival to that bucket, losing accumulated debt and effectively skipping rate-limit enforcement. In practice Traffic Control sizes the shared-memory pool to the typical active-set size; weakened enforcement is the accepted cost for workloads that exceed it.
- Clock dependence. Lazy drain computes debt_now = max(0, stored_debt - (now - last_update) × drain_rate). Worker-process clock skew across concurrent readers (if the hash table is sharded) must be tolerable.
- Observability cost. There is no global "total bucket state" snapshot; any observability tool asking "what's the current debt of bucket X?" pays a drain computation per read.
Seen in¶
- sources/2026-04-21-planetscale-behind-the-scenes-how-database-traffic-control-works — canonical wiki source. Reynolds canonicalises the pattern as the operational-simplicity answer to "how does Traffic Control's shared-memory footprint stay bounded without a sweeper thread?".