Optimise for common-case frequency asymmetry¶
Pattern¶
When a system has two classes of operations whose frequencies differ by orders of magnitude, optimise aggressively for the common class and accept higher per-operation cost on the rare class. The pattern generalises beyond consensus to any bifurcated workload where one path dominates the critical metric (latency, throughput, cost) and the other is a once-in-a-lifetime event for that path's resources.
Canonical statement — consensus requests vs elections¶
Sugu Sougoumarane's Part 3 framing:
"The most common operation performed by a consensus system is the completion of requests. In contrast, a leader election generally happens in two cases: taking nodes down for maintenance, or upon failure. Even in a dynamic cloud environment like Kubernetes, it would be surprising to see more than one election per day for a cluster, whereas such a system could be serving hundreds of requests per second. That amounts to many orders of magnitude in difference between a request being fulfilled and a leader election."
"This means that we must do whatever it takes to fine tune the part that executes requests, whereas leader elections can be more elaborate and slower."
(Source: sources/2026-04-21-planetscale-consensus-algorithms-at-scale-part-3-use-cases)
The same principle reappears in Part 4 with different phrasing:
"A typical cluster could be completing thousands of requests per second. In contrast, a software rollout is likely a daily event. In further contrast, a node failure may happen once a month or even less frequently. It is important that we optimize for the common case."
Mechanics¶
Identify the two paths¶
- Common path — runs at the workload's native rate (hundreds to thousands of events per second).
- Rare path — runs at an operational rate (daily, weekly, monthly).
The frequency ratio is typically 10⁴ or higher — the common path amortises far more engineering investment than the rare path.
Allocate engineering budget proportionally to criticality, not frequency¶
Somewhat counterintuitively, the per-operation time budget runs opposite to the frequency: the common path gets the tightest budget, the rare path the most generous:
- Common path: performance-sensitive. Invest in tail-latency tuning, zero-allocation code paths, lock-free primitives, tight cache layouts. A 100 µs regression on the common path costs orders of magnitude more than a 100 ms regression on the rare path.
- Rare path: correctness-sensitive. Invest in comprehensive coverage, broad node reach, redundant safety checks, multi-round confirmation. Slow and thorough beats fast and risky.
Don't let the rare path impose costs on the common path¶
The anti-pattern is forcing the common path to pay per-operation for a guarantee the rare path needs. Majority-quorum consensus does this: every write pays for majority-ack latency because the election needs majority-intersection for safety. Part 3's intersecting-quorum generalisation breaks the pairing — the common path can run with minimum durability (k = 2, or even k = 1), and the rare path absorbs the wider scan cost.
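The decoupling can be sketched numerically. This is a minimal illustration (the function names are ours, not from the source): with n replicas, a majority rule forces every write to wait for a majority of acks, while a k-ack durability rule lets the write path stop at k acks and pushes the intersection requirement onto the election's read set.

```python
def majority_acks(n: int) -> int:
    """Acks needed per write under classic majority-quorum consensus."""
    return n // 2 + 1

def election_reach_for_k_ack(n: int, k: int) -> int:
    """Nodes an election must contact so that its read set intersects
    any possible set of k ackers: n - k + 1 guarantees overlap."""
    return n - k + 1

n = 5
# Majority quorum couples the two paths: every write waits for 3 acks,
# and elections likewise contact 3 nodes.
assert majority_acks(n) == 3

# The intersecting-quorum generalisation decouples them: the common
# write path pays for only k acks; the rare election path absorbs the rest.
for k in (1, 2):
    reach = election_reach_for_k_ack(n, k)
    assert k + reach == n + 1  # any k ackers overlap any reach-sized scan
    print(f"k={k}: write waits for {k} ack(s), election scans {reach} nodes")
```

The invariant `k + reach = n + 1` is the whole trade: shrinking the common path's cost by one ack grows the rare path's scan by one node.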
Consensus-specific instances¶
Single-ack completion with wider election reach¶
YouTube's production choice: k = 1 on the request path (single ack = durable), wide election scan (election reaches "all possible nodes that could have acknowledged the last transaction"). See patterns/single-ack-completion-with-wider-election for the full pattern.
Graceful leader demotion (Vitess PRS)¶
Sugu's Part 4 application: software rollouts (daily, planned) should be zero-error-to-the-application via graceful demotion; crashes (monthly, unplanned) can tolerate the ERS-style fence-the-followers emergency path which loses in-flight transactions. Two different mechanisms for two different frequency classes. See patterns/graceful-leader-demotion.
Pluggable durability rules¶
The broader pattern Part 3 argues for: the durability predicate is set to whatever the common-case write path can afford, and the election path derives its reach from the predicate's intersection requirement — elections are expected to be rare, so they can afford to be thorough. See patterns/pluggable-durability-rules.
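A hedged sketch of what "pluggable" can mean in code (the predicate interface and names are illustrative, not Vitess's API): the durability rule is an arbitrary predicate over ack-sets, and a brute-force check confirms whether a proposed election read set intersects every ack-set the rule accepts, i.e. whether the election's reach is thorough enough for that rule.

```python
from itertools import combinations

def intersects_all_durable(all_nodes, read_set, durable):
    """True iff the election's read set intersects every ack-set the
    durability predicate accepts, so no durable write can be missed."""
    for r in range(1, len(all_nodes) + 1):
        for acks in combinations(sorted(all_nodes), r):
            if durable(frozenset(acks)) and not (set(acks) & set(read_set)):
                return False
    return True

nodes = frozenset({"a", "b", "c", "d", "e"})
single_ack = lambda acks: len(acks) >= 1   # YouTube-style k = 1 rule

# Under k = 1, any single node may hold the only copy of an acknowledged
# write, so only a scan of all five nodes is guaranteed to see it.
assert intersects_all_durable(nodes, set(nodes), single_ack)
assert not intersects_all_durable(nodes, {"a", "b", "c", "d"}, single_ack)
```

The brute-force scan is exponential in cluster size, which is exactly the point of the pattern: this cost sits on the election path, where it is paid roughly once a day, not per write.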
Cross-domain instances¶
Read-heavy databases with write-path overhead¶
Any OLTP database where reads outnumber writes by orders of magnitude invests in replica fan-out, query caches, materialised views — optimisations that amortise over reads. Writes pay per-operation for cache invalidation, but that's the rarer path.
Normal path vs exception path¶
Most HTTP server frameworks optimise the non-exception code path. Exception-handling, request logging under failure, structured crash reporting — all can afford to be slower and more thorough because they run rarely.
GC vs allocation¶
Most JVM tuning invests in allocation-path optimisation (bump-pointer, TLAB, generational assumptions). GC is comparatively rare and is allowed to pay for itself in STW pauses, concurrent tracing, cache pressure.
Cold-boot optimisation¶
AWS Lambda cold starts are rare relative to the steady stream of invocations a warm function serves. Cold-start optimisation does matter, but generally less than steady-state invocation latency.
When the pattern breaks¶
The pattern assumes the frequency asymmetry is genuinely large and predictable. Three failure modes:
- Frequency shifts under failure. A normally rare failover event can become the dominant path during an outage. Systems tuned purely for the common path can turn badly slow in exactly the window where users need them most. Mitigation: keep the rare path's cost bounded, not merely slower than the common path's.
- Adversarial workloads. An attacker who can trigger the rare path cheaply turns it into the common path. Mitigation: rate-limit the rare path, not just the common one.
- Common-path assumptions encode workload bugs. YouTube's k = 1 durability is fine until your workload has stricter durability requirements than you thought. The premise of the pattern is that the common-path expectations are right.
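The rate-limit mitigation from the adversarial-workloads bullet can be sketched as a token bucket on rare-path triggers (the class and parameter names are hypothetical, not from any named system):

```python
import time

class RarePathLimiter:
    """Token bucket capping how often the rare path (e.g. an election)
    may fire. Genuine failures arrive slowly enough that the bucket
    never empties; a flood of cheap triggers gets refused."""

    def __init__(self, max_per_hour: int = 6):
        self.capacity = max_per_hour
        self.tokens = float(max_per_hour)
        self.rate = max_per_hour / 3600.0   # tokens replenished per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limiter = RarePathLimiter(max_per_hour=6)
results = [limiter.allow() for _ in range(10)]
assert results[:6] == [True] * 6    # the first six triggers pass
assert results[6:] == [False] * 4   # the flood beyond the budget is refused
```

What a refused trigger should do instead (alert, back off, fall back to manual intervention) is a policy decision; the limiter only guarantees the rare path cannot be promoted to the common path.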
Budget heuristic¶
A rough way to calibrate: if the common path runs at rate R₁ and the rare path at rate R₂, the rare path can cost up to (R₁ / R₂) × (common-path cost) before its aggregate cost even matches the common path's. A 1 s elaborate election algorithm serving a once-per-day event on a cluster completing 1000 requests per second is effectively free: the cluster completes 86,400,000 requests per day, so the election amortises to roughly 12 ns per request.
Don't use this as a license to run the rare path unboundedly; the pattern still requires the rare path to terminate, to be correct, and to have a bounded worst case. It just tolerates cost the common path cannot.
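The heuristic's arithmetic can be checked directly, using the numbers from the example above (a sanity check, not a production model):

```python
SECONDS_PER_DAY = 86_400
request_rate = 1_000        # requests per second (common path, R1)
election_cost_s = 1.0       # one elaborate 1 s election per day (rare path)

requests_per_day = request_rate * SECONDS_PER_DAY
amortised_ns = election_cost_s / requests_per_day * 1e9

assert requests_per_day == 86_400_000
# The 1 s election spread over the day's requests: ~11.6 ns each,
# effectively free next to any realistic per-request latency budget.
assert 11 < amortised_ns < 12
```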
Seen in¶
- sources/2026-04-21-planetscale-consensus-algorithms-at-scale-part-3-use-cases — canonical wiki introduction; request / election orders-of-magnitude asymmetry framing; foundational argument for reducing durability-setting-on-write-path.
- sources/2026-04-21-planetscale-consensus-algorithms-at-scale-part-4-establishment-and-revocation — second canonical instance with the same principle applied to software rollout (daily, optimise for zero-error) vs crash (monthly, optimise for correctness).
Related¶
- concepts/tail-latency-at-scale — the concept the common-path optimisation bias is serving.
- concepts/durability-as-use-case-dependent — the stance that makes the common-path / rare-path asymmetry explicit in consensus.
- concepts/intersecting-quorums — the arithmetic that lets the two paths be tuned independently.
- patterns/pluggable-durability-rules — the named architectural pattern that applies this principle to consensus.
- patterns/single-ack-completion-with-wider-election — the extreme application: k = 1 on the request path, election reaches all.
- patterns/graceful-leader-demotion — the rollout-is-daily / crash-is-monthly application from Part 4.
- systems/vitess — canonical production instance; PRS is the common-case path, ERS the rare-case path.