CONCEPT
Follower fetching¶
Definition¶
Follower fetching is the mechanism by which a consumer reads records from the closest replica of a partition — leader or follower — rather than always reading from the leader. On a multi-region stretch cluster or a multi-AZ deployment, follower fetching eliminates cross-region / cross-AZ hops from the consumer read path by letting consumers read from the same-region/AZ replica they happen to be closest to.
This is the read-path locality optimisation on a stretch cluster, dual to leader pinning's write-path locality.
Canonical Redpanda framing¶
"Follower fetching is a feature in Redpanda that allows consumers to fetch records from the closest replica of a topic partition, regardless of whether it's a leader or a follower. This feature is particularly beneficial for Redpanda clusters in multi-region or multi-AZ deployments."
"Follower fetching helps reduce latency and potential costs associated with multi-region deployments by allowing consumers to read from geographically closer followers. Follower fetching can also help reduce network transfer costs and lower end-to-end latency in multi-region deployments."
(Source: sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters)
Why it exists — retiring the leader-only-reads assumption¶
Leader-follower replication historically treated the leader as the only replica serving reads — followers were write-only destinations kept in sync for failover purposes. On a stretch cluster, this forces every consumer in region A to either:
- Read from a replica in region A (impossible under leader-only reads when the leader is in region B), or
- Read from the leader in region B on every fetch (paying cross-region RTT and bandwidth on every read).
Follower fetching retires the leader-only-reads constraint: followers in region A can serve reads to consumers in region A directly, without the partition's leader needing to be in region A. The leader still handles writes — the replication topology is unchanged.
Upstream Kafka equivalence — KIP-392¶
Upstream Apache Kafka shipped follower fetching in Kafka 2.4 as KIP-392: Allow consumers to fetch from closest replica. The mechanism: the consumer sends its client.rack to the broker; the broker maps the rack to a list of replicas; the consumer is directed to the closest replica by rack-identity match. Redpanda's follower fetching is a direct productisation of this Kafka-API feature on the Redpanda substrate.
Kozlovski's Kafka 101 framing that leader-follower replication canonicalises also names this:
"Writes can only go to that leader, which then asynchronously replicates the data to the N-1 followers … Starting Kafka 2.4 (KIP-392), however, it is possible to configure consumers to read from the closest replica in the network topology (not just the leader)."
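Concretely, KIP-392 needs one client-side setting and one broker-side setting. A minimal sketch under stated assumptions: a librdkafka-based client (e.g. confluent_kafka) exposing `client.rack`, and the rack-aware replica selector that Apache Kafka ships; the addresses, rack names, and group id are illustrative:

```python
# Consumer side: declare which rack/AZ this client runs in.
# librdkafka-based clients (e.g. confluent_kafka) take this as "client.rack".
consumer_config = {
    "bootstrap.servers": "broker.us-east-1a.example:9092",  # illustrative address
    "group.id": "regional-consumers",                        # illustrative group
    "client.rack": "us-east-1a",  # should match the rack label of the local replica
}

# Broker side (Apache Kafka >= 2.4): opt in to rack-aware replica selection.
# Without a replica selector configured, the broker directs all reads to the leader.
broker_overrides = {
    "broker.rack": "us-east-1a",
    "replica.selector.class": "org.apache.kafka.common.replica.RackAwareReplicaSelector",
}
```

On Redpanda the broker side is cluster/node configuration rather than these Kafka broker properties (rack labels on each node plus rack awareness enabled), but the client-side `client.rack` contract is the same because the Kafka API is shared.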
Composition with leader pinning + acks¶
Follower fetching is orthogonal to both leader pinning and
producer acks:
- Leader pinning: write-path locality, preserves strong consistency.
- acks=1: write-path durability relaxation.
- Follower fetching: read-path locality, introduces read staleness bounded by replica lag.
The staleness dimension is load-bearing: a follower can be behind the leader by the partition's replication lag. A consumer reading from a follower sees the log prefix up to the follower's current high-water-mark; not-yet-replicated records are invisible. On a stretch cluster with bounded replica lag (Raft sync replication, lag typically ms), this is usually acceptable for stream-consumer semantics — consumers are already asynchronously processing the log, not synchronously read-after-writing it.
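The visibility rule can be sketched as a toy model: not Redpanda internals, just the offset arithmetic. A follower serves the log prefix up to its own high-water mark, so records the leader has accepted but not yet replicated are invisible to a consumer fetching from that follower:

```python
# Toy model of follower-fetch visibility (illustrative, not Redpanda internals).
leader_log = ["r0", "r1", "r2", "r3", "r4"]  # records accepted by the leader
follower_hwm = 3  # follower has replicated offsets 0..2; lagging by 2 records

def fetch_from_follower(offset: int, max_records: int = 10) -> list[str]:
    """A follower only serves the prefix below its high-water mark."""
    return leader_log[offset:min(follower_hwm, offset + max_records)]

visible = fetch_from_follower(0)
# visible == ["r0", "r1", "r2"]; "r3" and "r4" appear only after replication
```

Once the follower's high-water mark advances to 5, the same fetch returns the full log, which is why bounded replica lag translates directly into bounded read staleness.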
Composing with acks=1: the read-your-writes hazard¶
A producer using acks=1 on a leader in region A gets an acknowledgement as soon as the leader persists — followers in regions B and C may not yet have the record. A consumer in region B reading from the region-B follower will not see the write until replication catches up. If the producer and consumer are the same client (read-your-writes semantics), this window is visible. The Redpanda post does not walk this composition; the hazard is bounded by follower replication latency plus consumer fetch frequency.
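The window can be made concrete with a hedged event-ordering sketch (hypothetical class and method names, standing in for real clients against actual brokers): the acks=1 ack lands before the follower has the record, so an immediate follower read misses it, and a read after replication sees it.

```python
# Hypothetical timeline of the acks=1 + follower-fetch hazard (illustrative only).
class Partition:
    def __init__(self) -> None:
        self.leader_log: list[str] = []
        self.follower_log: list[str] = []

    def produce_acks1(self, record: str) -> None:
        # acks=1: the ack returns once the leader persists; replication is async.
        self.leader_log.append(record)

    def replicate(self) -> None:
        # Follower catches up to the leader (normally within ms on a stretch cluster).
        self.follower_log = list(self.leader_log)

    def read_from_follower(self) -> list[str]:
        return list(self.follower_log)

p = Partition()
p.produce_acks1("order-42")  # producer in region A gets its ack here
visible_before = "order-42" in p.read_from_follower()  # False: region-B follower lags
p.replicate()
visible_after = "order-42" in p.read_from_follower()   # True: window closed
```

A client that needs read-your-writes should either read from the leader for that key or wait out the replication window; follower fetching alone does not provide it under acks=1.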
Contrast with remote read replica topic¶
Follower fetching reads from a replica in the same cluster — the origin's own follower broker. Remote read replica topic reads from a separate cluster backed by the origin's tiered-storage bucket. Both optimise read paths but at different architectural granularity:
| | Follower fetching | Remote read replica |
|---|---|---|
| Scope | Same cluster | Separate cluster |
| Source | Origin follower broker | Object storage (S3/GCS) |
| Load on origin | Reduced (reads bypass leader) | Zero (reads bypass origin entirely) |
| Staleness | Replica lag (ms) | Object-storage segment-upload lag (seconds) |
| Scale-out | Limited by origin's replication factor | Independent — as many read clusters as needed |
Seen in¶
- sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters — canonical wiki definition and multi-region positioning; frames follower fetching as the consumer-side analogue of leader pinning.
Related¶
- systems/redpanda, systems/kafka
- concepts/multi-region-stretch-cluster — the shape follower fetching optimises.
- concepts/leader-pinning — the write-path analogue.
- concepts/leader-follower-replication — KIP-392 retired the leader-only-reads assumption of this shape.
- concepts/in-sync-replica-set — a follower must be in the ISR to serve consistent reads.
- concepts/remote-read-replica-topic — the object-storage-backed analogue.
- concepts/cross-region-bandwidth-cost — the cost follower fetching reduces.
- patterns/closest-replica-consume — the pattern this concept names.
- patterns/multi-region-raft-quorum — the stretch-cluster pattern this composes with.