PATTERN
# Closest-replica consume

## Pattern
On a multi-region (or multi-AZ) replicated cluster, route each consumer's fetch requests to the closest replica — leader or follower — rather than always to the partition's leader. The consumer's rack / region identifier is sent to the broker, which directs the consumer at the replica on the matching rack/region. Writes continue to go to the leader; replication continues under the cluster's consensus protocol. Only the consumer read path is localised.
This is the read-side locality pattern on a stretch cluster, dual to leader pinning's write-side locality.
## Canonical framing
Follower fetching is the mechanism; this pattern is the deployment-level application.
"Follower fetching is a feature in Redpanda that allows consumers to fetch records from the closest replica of a topic partition, regardless of whether it's a leader or a follower. This feature is particularly beneficial for Redpanda clusters in multi-region or multi-AZ deployments."
"Follower fetching helps reduce latency and potential costs associated with multi-region deployments by allowing consumers to read from geographically closer followers. Follower fetching can also help reduce network transfer costs and lower end-to-end latency in multi-region deployments."
(Source: sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters)
## Upstream Kafka: KIP-392

The Kafka-API substrate is KIP-392: Allow consumers to fetch from closest replica (shipped in Kafka 2.4). The client sets client.rack to its own rack/region identifier; the broker looks up the replica set's rack assignments and serves the fetch from (or redirects the consumer to) the replica on the matching rack. Redpanda's follower fetching implements this Kafka-API contract.
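For Kafka-API clients the pattern reduces to a handful of settings. A sketch of the configuration, with illustrative rack names; on Apache Kafka the broker must also opt in via `replica.selector.class` (the rack-aware selector class path below is the one shipped with Kafka 2.4+):

```properties
# Broker (per-broker): declare the rack/region and enable rack-aware selection
broker.rack=us-east-1a
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector

# Consumer: declare the consumer's own rack so fetches route to a matching replica
client.rack=us-east-1a
```

On Redpanda the equivalent is, as I understand the docs, the node-level `rack` property plus the `enable_rack_awareness` cluster property; verify the exact names against your release.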
## Trade-offs introduced: staleness
The pattern introduces read staleness bounded by replica lag. A follower can be behind the leader by the partition's replication lag; consumers reading from followers see the log prefix up to the follower's current high watermark, not the leader's. Records produced but not yet replicated are invisible to the follower-fetching consumer.
On a well-behaved stretch cluster with synchronous replication under bounded lag (Raft typically single-digit ms intra-cluster, ~1 cross-region RTT in the worst case), this is usually negligible for stream-processing consumers, which are already asynchronously processing an append-only log.
## Read-your-writes hazard with acks=1 + follower fetching
When composed with acks=1:
- Producer writes to the leader in region A with acks=1.
- Leader acknowledges immediately (before replicating to the region-B follower).
- Consumer in region B fetches from region-B follower.
- Consumer does not see the just-acknowledged write until replication catches up.
The staleness window is bounded by cross-region replication latency (typically tens of ms). When the producer and consumer are the same client expecting read-your-writes (RYW) semantics, this window is visible. The Redpanda post does not walk through this hazard explicitly.
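The hazard can be made concrete with a toy model. This is a sketch, not any real client API: `Leader` and `Follower` are hypothetical classes, the follower reads the leader's log directly rather than keeping its own copy, and replication is invoked explicitly where a real cluster replicates asynchronously.

```python
class Leader:
    """Toy partition leader holding the authoritative log."""

    def __init__(self):
        self.log = []

    def produce_acks1(self, record):
        # acks=1: acknowledge as soon as the leader appends locally,
        # before any follower has replicated the record.
        self.log.append(record)
        return len(self.log) - 1  # acknowledged offset


class Follower:
    """Toy follower; serves fetches only up to its high watermark."""

    def __init__(self, leader):
        self.leader = leader
        self.high_watermark = 0  # offsets below this are replicated

    def replicate(self):
        # In reality this runs continuously and asynchronously;
        # here we invoke it explicitly to expose the staleness window.
        self.high_watermark = len(self.leader.log)

    def fetch(self, offset):
        # A follower-fetching consumer sees only the replicated prefix.
        return self.leader.log[offset:self.high_watermark]


leader = Leader()
follower = Follower(leader)

acked = leader.produce_acks1("order-42")      # producer in region A, acks=1
assert follower.fetch(acked) == []            # region-B consumer: not visible yet
follower.replicate()                          # cross-region replication catches up
assert follower.fetch(acked) == ["order-42"]  # now visible
```

The gap between the first and second `fetch` is exactly the RYW window the section describes: bounded by replication lag, invisible to acks=all producers reading from the leader, but visible to an acks=1 producer whose reads route to a lagging follower.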
## Composition with leader pinning
The two patterns are orthogonal and both target client-facing cross-region hops:
| | Write path | Read path |
|---|---|---|
| Baseline | Client → leader (cross-region possible) | Client → leader (cross-region possible) |
| Leader pinning (patterns/client-proximal-leader-pinning) | Client → intra-region leader | Client → leader (also intra-region if pinned correctly) |
| Closest-replica consume (patterns/closest-replica-consume) | Unchanged | Client → intra-region replica |
| Both | Client → intra-region leader | Client → intra-region replica |
Both patterns together yield a stretch-cluster deployment where client-facing hops are intra-region on both produce and consume paths, even though the partition's replicas span regions and the Raft quorum still pays cross-region RTT on commit.
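A Redpanda-flavoured sketch of the combined deployment, with region names and a consumer that declares its region as its rack. The property names (`enable_rack_awareness`, `default_leaders_preference`) are my reading of recent Redpanda docs and should be checked against your version; the rack values are illustrative:

```yaml
# Node config (per broker): declare the broker's region as its rack
rack: "us-east"

# Cluster properties
enable_rack_awareness: true                 # read-side: rack-matched follower fetching
default_leaders_preference: "racks:us-east" # write-side: pin leaders to us-east
```

A consumer in us-east then sets `client.rack=us-east`, and both its fetches and its region's produces stay intra-region, while the Raft quorum still spans regions.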
## Distinction from remote read replica
concepts/remote-read-replica-topic is a stronger form of read-path decoupling: reads go to a separate cluster backed by the origin's object storage, imposing zero load on the origin cluster's brokers. Closest-replica consume still loads the origin cluster (the follower is a broker in the same cluster), but at lower operational overhead (no separate cluster to run) and lower staleness (replica lag, not object-storage-upload lag).
## When not to use
- Read-after-write consistency required on a cross-region path: follower may lag; route these reads to the leader or use a session-pinning strategy.
- Single-AZ deployment: the intra-AZ latency is already single-digit ms; the complexity of rack-aware consumer configuration isn't worth the savings.
- Follower ISR membership unstable: if followers frequently fall in and out of the ISR (under replication lag pressure), a follower-fetching consumer may redirect frequently and experience inconsistent latency. Leader-only consume is more predictable under ISR churn.
## Composes with
- patterns/multi-region-raft-quorum — this pattern optimises the read path of a stretch cluster without weakening the quorum property on the write path.
- patterns/client-proximal-leader-pinning — the write-side analogue; the two together eliminate client-facing cross-region hops on both paths.
- concepts/remote-read-replica-topic — the cross-cluster analogue for workloads where even the same-cluster-follower load is too much for the origin.
## Seen in
- sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters — canonical framing as the consumer-side analogue of leader pinning on a stretch cluster.
## Related
- systems/redpanda, systems/kafka
- concepts/follower-fetching — the mechanism this pattern uses.
- concepts/multi-region-stretch-cluster
- concepts/leader-pinning
- concepts/leader-follower-replication — KIP-392 relaxed this shape's leader-only-reads assumption.
- concepts/in-sync-replica-set — a follower must be in the ISR to serve consistent reads.
- concepts/cross-region-bandwidth-cost — the cost this pattern reduces.
- concepts/remote-read-replica-topic — the cross-cluster alternative.
- patterns/multi-region-raft-quorum
- patterns/client-proximal-leader-pinning
- patterns/leader-based-partition-replication