PATTERN
Client-proximal leader pinning¶
Pattern¶
On a multi-region cluster, pin each topic's partition leaders to the region where that topic's producer/consumer clients are concentrated, so that the client-facing hop (produce to leader, consume from leader for consumers that don't use follower fetching) stays intra-region. The partition still has followers in other regions, so strong consistency and cross-region Raft quorum are preserved — only the client-facing RTT is optimised.
This is the write-side locality pattern on a stretch cluster, dual to follower fetching's read-side locality.
Canonical framing¶
Leader pinning is Redpanda's name for the mechanism; this pattern is what the mechanism produces at the deployment level:
"Leader pinning ensures a topic's partition leaders are geographically closer to clients. This helps decrease networking costs and guarantees lower latency by routing produce/consume requests to brokers located in specific regions."
(Source: sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters)
Latency math¶
Without the pattern (default leader placement):
client_to_leader_RTT ≈ cross_region_RTT (60-80 ms regional, 150+ ms transoceanic)
+ acks=all_quorum_RTT = cross_region_RTT (60-80 ms)
+ leader_persist + follower_persist
= O(cross_region_RTT × 2) per write
With the pattern (leader pinned to client's region):
client_to_leader_RTT ≈ intra_region_RTT (< 1-5 ms)
+ acks=all_quorum_RTT = cross_region_RTT (60-80 ms)
+ leader_persist + follower_persist
= O(cross_region_RTT × 1) per write
The pattern halves the cross-region cost on the write path: it eliminates the client-side hop while preserving the replication-side hop required for durability. Adding acks=1 on top eliminates the replication-side wait too, yielding intra-region-only write latency (at the cost of region-outage durability).
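The arithmetic above can be sketched numerically. This is a minimal model, not a benchmark: the RTT figures are the illustrative ones from the text, and leader/follower persist times are assumed negligible:

```python
# Illustrative write-latency model for a multi-region stretch cluster.
# RTTs are the example figures from the text; persist times assumed ~0.
INTRA_REGION_RTT_MS = 2    # client <-> broker in the same region
CROSS_REGION_RTT_MS = 70   # 60-80 ms regional spread

def write_latency_ms(leader_pinned: bool, acks_all: bool = True) -> int:
    """Client-facing produce latency under acks=all (or acks=1)."""
    client_hop = INTRA_REGION_RTT_MS if leader_pinned else CROSS_REGION_RTT_MS
    quorum_hop = CROSS_REGION_RTT_MS if acks_all else 0  # follower-ack wait
    return client_hop + quorum_hop

print(write_latency_ms(leader_pinned=False))                  # 140 -- ~2x cross-region RTT
print(write_latency_ms(leader_pinned=True))                   # 72  -- ~1x cross-region RTT
print(write_latency_ms(leader_pinned=True, acks_all=False))   # 2   -- intra-region only
```

The three printed values correspond to the three cases in the text: default placement, pinned leader with acks=all, and pinned leader with acks=1.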
Per-topic configuration¶
The pattern is applied per topic because different workloads have different geographic client clustering:
- Regionally-scoped topics: e.g., orders-us-east, orders-eu-west — leaders pinned to respective regions; writers and readers in each region get intra-region latency on their own topic.
- Multi-tenant topics with dominant-region clients: pin to the dominant region; pay cross-region cost only for the minority clients.
- Round-robin regional topics: several regional variants of the same topic, each pinned to its region, clients routed to the variant matching their region by app-layer logic.
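For the round-robin regional variant, the app-layer routing can be a simple region-to-topic lookup. A minimal sketch — the topic names, region labels, and fallback choice are hypothetical, not from the source:

```python
# App-layer routing of a client to the regional topic variant whose
# leaders are pinned to the client's own region.
# Topic names and regions are hypothetical examples.
REGIONAL_VARIANTS = {
    "us-east": "orders-us-east",
    "eu-west": "orders-eu-west",
}

def topic_for(client_region: str) -> str:
    # Clients outside any pinned region fall back to a designated
    # variant and pay the cross-region hop.
    return REGIONAL_VARIANTS.get(client_region, "orders-us-east")

print(topic_for("eu-west"))   # orders-eu-west  (intra-region)
print(topic_for("ap-south"))  # orders-us-east  (cross-region fallback)
```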
Interaction with failover¶
On loss of the pinned region, leader pinning yields to Raft election. The partition re-elects a leader from an in-sync follower in another region; produce/consume traffic routes to the new leader (no longer proximal). Once the original region recovers, leader pinning re-asserts (via preferred-leader-election on Redpanda) and the leader migrates back. This is the expected behaviour — the pattern is a locality preference, not a hard placement constraint.
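The "locality preference, not hard constraint" behaviour can be sketched as a leader-selection rule. This is a deliberate simplification — real Raft election also checks log completeness, and the preferred-leader mechanism is Redpanda's, not this function:

```python
# Sketch: leader pinning as a soft preference layered on election.
# Simplified -- real Raft election also compares log completeness.
def elect_leader(in_sync_replicas, pinned_region, healthy_regions):
    """Prefer an in-sync replica in the pinned region; otherwise fall
    back to any in-sync replica in a surviving region."""
    candidates = [r for r in in_sync_replicas if r["region"] in healthy_regions]
    preferred = [r for r in candidates if r["region"] == pinned_region]
    return (preferred or candidates)[0]["broker"]

isr = [{"broker": 1, "region": "us-east"},
       {"broker": 2, "region": "eu-west"},
       {"broker": 3, "region": "us-west"}]

# Normal operation: pinned region healthy -> proximal leader.
print(elect_leader(isr, "us-east", {"us-east", "eu-west", "us-west"}))  # 1
# Pinned region lost: election falls back to a follower elsewhere.
print(elect_leader(isr, "us-east", {"eu-west", "us-west"}))             # 2
# Region recovered: preferred-leader election migrates leadership back.
print(elect_leader(isr, "us-east", {"us-east", "eu-west", "us-west"}))  # 1
```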
Licensing caveat¶
Leader pinning is an enterprise feature on Redpanda:
"Leader pinning is an enterprise feature that requires a valid license. It's available for both Self-Managed Redpanda clusters and Redpanda Cloud."
On OSS Redpanda or upstream Kafka, the closest equivalent is manual preferred-replica-election + rack-aware replica placement, which approximates the pattern but does not expose a first-class per-topic leader-locality dial.
When not to use¶
- Transoceanic stretch: even with client-proximal leaders, acks=all writes still pay 150+ ms transoceanic RTT on the replication side; leader pinning helps the client-side hop but not the replication-side hop, which is usually the write-SLA-breaker on transoceanic deployments.
- Clients distributed uniformly across regions: leader pinning only pays off when a topic's clients are regionally clustered. For uniform-across-regions clients, leader pinning saves latency for the pinned region's share but costs the other regions the full cross-region hop.
- Small partition counts per topic: leader pinning assigns one region to each partition's preferred leader. With only a handful of partitions per topic, the load distribution across pinned brokers can become uneven.
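The uniform-clients caveat can be quantified as an expected client-hop latency, weighted by the fraction of a topic's clients that sit in the pinned region. RTT figures are the illustrative ones used earlier, not measurements:

```python
# Expected client-to-leader hop as a function of how concentrated a
# topic's clients are in the pinned region (illustrative RTTs).
INTRA_MS, CROSS_MS = 2, 70

def expected_client_hop_ms(fraction_in_pinned_region: float) -> float:
    return (fraction_in_pinned_region * INTRA_MS
            + (1 - fraction_in_pinned_region) * CROSS_MS)

print(expected_client_hop_ms(0.9))    # clustered clients: large win vs 70 ms
print(expected_client_hop_ms(1 / 3))  # uniform across 3 regions: modest win
```

With 90% of clients in the pinned region the expected hop drops to under 10 ms; with clients spread evenly across three regions it only falls to roughly two-thirds of the unpinned cost, which is the break-even intuition behind the caveat.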
Composes with¶
- patterns/multi-region-raft-quorum — leader pinning preserves the quorum property; it is a client-facing optimisation on top of the Raft-across-regions substrate.
- patterns/closest-replica-consume — the read-side analogue; the two patterns together yield intra-region client hops on both produce and consume paths.
- acks=1 — orthogonal; leader pinning preserves durability, acks=1 degrades it. The two compose freely.
Seen in¶
- sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters — canonical framing as the first-line latency mitigation on a stretch cluster; per-topic configuration; enterprise feature flag.
Related¶
- systems/redpanda, systems/kafka
- concepts/leader-pinning — the mechanism this pattern uses.
- concepts/multi-region-stretch-cluster
- concepts/leader-follower-replication
- concepts/acks-producer-durability — the orthogonal producer-side durability dial.
- concepts/cross-region-bandwidth-cost — the cost leader pinning reduces.
- patterns/multi-region-raft-quorum — the pattern this optimises on top of.
- patterns/closest-replica-consume — the read-side analogue.
- patterns/leader-based-partition-replication — the intra-cluster replication shape.