CONCEPT Cited by 1 source
Cross-region bandwidth cost¶
Definition¶
Cross-region bandwidth cost is the per-byte cloud-provider
charge for transferring data between two geographic regions of the
same provider (e.g., AWS us-east-1 ↔ eu-west-1, GCP us-central1
↔ europe-west3). Unlike intra-AZ transfer (free on every major
cloud) and intra-region cross-AZ transfer (free or billed at a
much lower per-GB rate), cross-region transfer is billed on a
separate meter, typically at several cents per GB, and accumulates
quickly on replication-heavy workloads.
Cross-region bandwidth cost is the dominant operational hazard of running a multi-region stretch cluster: every byte of cross-region Raft replication is billed egress, and every byte of cross-region client fetch is billed egress.
Canonical Redpanda framing¶
"Transferring data between regions can incur significant bandwidth costs, especially in cloud environments where cross-region data transfer is billed separately. Some of these costs can be mitigated with Follower Fetching or Leadership Pinning."
Implications:
- "Higher operational costs for multi-region deployments."
- "Potential bottlenecks if bandwidth is limited."
(Source: sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters)
How the cost stacks up on a stretch cluster¶
On a stretch cluster with replication factor N spanning R
regions, every produced byte is replicated to N-1 other
replicas. If those replicas are in other regions, each cross-region
replica is a billed cross-region transfer.
Example arithmetic on a 3-region stretch cluster with RF=3, one replica per region:
- 1 GB of produce traffic from a client in region A
- Replicated to leader's follower in region B (1 GB cross-region)
- Replicated to leader's follower in region C (1 GB cross-region)
- Total cross-region egress: 2 GB per 1 GB of produce
At typical cloud cross-region rates ($0.02-$0.09/GB depending on provider and region pair), 1 TB/day of produce traffic implies ~2 TB/day of cross-region replication egress — $40-$180/day ($15k-$65k/year) on replication alone, before any client traffic.
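The arithmetic above can be captured in a small cost model. A minimal sketch (the function name and default rates are illustrative, not provider pricing):

```python
def stretch_cluster_egress_cost(
    produce_gb_per_day: float,
    cross_region_replicas: int = 2,   # RF=3, one replica per region -> 2 remote followers
    rate_usd_per_gb: float = 0.02,    # illustrative cross-region rate, not a quote
) -> dict:
    """Replication-side cross-region egress for a stretch cluster.

    Every produced byte is shipped to each cross-region follower,
    so billed egress = produce * cross_region_replicas.
    """
    egress_gb = produce_gb_per_day * cross_region_replicas
    daily_usd = egress_gb * rate_usd_per_gb
    return {
        "egress_gb_per_day": egress_gb,
        "usd_per_day": daily_usd,
        "usd_per_year": daily_usd * 365,
    }

# 1 TB/day of produce at the low and high ends of the illustrative range:
low = stretch_cluster_egress_cost(1000, rate_usd_per_gb=0.02)
high = stretch_cluster_egress_cost(1000, rate_usd_per_gb=0.09)
```

Running this reproduces the figures above: 2 TB/day of replication egress, roughly $40/day at the low rate and $180/day at the high one.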
Client-side traffic compounds this: a consumer in region B reading from a leader in region A pays cross-region egress on every fetch, on top of the replication egress. This is where leader pinning and follower fetching matter: they can cut the client-facing cross-region bytes to zero (provided a replica lives in each consumer's region) while leaving the replication-side bytes unchanged.
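Adding the consumer side to the same arithmetic shows what a local-replica read path saves. A toy model with hypothetical traffic numbers:

```python
def daily_cross_region_gb(produce_gb: float, consume_gb: float,
                          cross_region_replicas: int,
                          consumer_reads_local_replica: bool) -> float:
    """Total billed cross-region GB/day: replication plus consumer fetches.

    If the consumer can read a replica in its own region (follower
    fetching, or a leader pinned nearby), fetch bytes stay intra-region;
    otherwise every fetched byte crosses a region boundary.
    """
    replication = produce_gb * cross_region_replicas
    client = 0.0 if consumer_reads_local_replica else consume_gb
    return replication + client

# 1 TB/day produced, 1 TB/day consumed from region B, leader in region A:
remote_reads = daily_cross_region_gb(1000, 1000, 2, consumer_reads_local_replica=False)  # 3000.0
local_reads = daily_cross_region_gb(1000, 1000, 2, consumer_reads_local_replica=True)    # 2000.0
```

The 2000 GB/day replication floor is unchanged either way; only the extra 1000 GB/day of client fetches disappears.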
Four mitigations on a stretch cluster¶
From the Redpanda framing, four knobs reduce cross-region bytes on a stretch cluster:
- Leader pinning: bias leaders to client-proximal regions so client-to-leader traffic stays intra-region. Can reduce client-side cross-region bytes to zero for clients in the pinned regions.
- Follower fetching: consumers read from the closest replica rather than the leader. Reduces consumer-side cross-region bytes for workloads where the partition leader is not in the consumer's region.
- Remote read replica topic: spin up a separate read-only cluster in the consumer's region, backed by object storage; consumers fetch from that cluster, and the origin cluster serves zero read traffic to the consumer's region. Read-side cross-region bytes shrink to one segment upload per segment: the origin's object-storage upload is intra-region, and whether the remote cluster's pull from object storage crosses a region depends on where the bucket lives.
- Compression: Kafka/Redpanda producers can apply compression (gzip, snappy, lz4, zstd) before batching; batches are replicated already-compressed; receiving replicas store and replicate without re-encoding. Batch-level compression on replication egress can reduce bandwidth by 2-10× depending on payload shape.
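The compression lever is easy to sanity-check with stdlib gzip (one of the codecs both Kafka and Redpanda producers accept) on a repetitive, JSON-shaped batch; the payload below and its ratio are purely illustrative:

```python
import gzip
import json

# A batch of repetitive event records, the shape where batch compression pays off.
batch = json.dumps([
    {"user_id": i, "event": "page_view", "path": "/checkout", "region": "us-east-1"}
    for i in range(1000)
]).encode()

compressed = gzip.compress(batch, compresslevel=6)
ratio = len(batch) / len(compressed)

# Replicas ship the batch already-compressed, so cross-region replication
# egress (and its bill) scales with len(compressed), not len(batch).
print(f"raw={len(batch)}B compressed={len(compressed)}B ratio={ratio:.1f}x")
```

Payloads with less redundancy (pre-compressed media, encrypted blobs) compress far worse, which is why the 2-10× range is payload-dependent.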
The replication-side cross-region bytes (RF−1 cross-region copies per produce) are harder to eliminate — they are the price of multi-region RPO=0. Moving to async via MirrorMaker2 reduces the cross-region bytes to one replication stream per mirrored topic (MM2 only copies once per topic-partition across the WAN) at the cost of non-zero RPO.
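A minimal MirrorMaker 2 properties sketch of that async alternative (cluster aliases, bootstrap addresses, and the topic filter are placeholders):

```properties
# mm2.properties — one WAN replication stream per mirrored topic-partition
clusters = us, eu
us.bootstrap.servers = broker-us:9092
eu.bootstrap.servers = broker-eu:9092

# Mirror selected topics from us to eu only: a single cross-region copy
# per partition, instead of RF-1 copies per produce on a stretch cluster.
us->eu.enabled = true
us->eu.topics = orders.*
eu->us.enabled = false
```

Each partition crosses the WAN once regardless of the target cluster's replication factor; the target's intra-region replication is billed at intra-region rates.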
Cross-provider rate rough orders¶
| Provider | Same-continent cross-region | Transoceanic cross-region |
|---|---|---|
| AWS | $0.02-$0.04 / GB | $0.08-$0.09 / GB |
| GCP | $0.02-$0.05 / GB | $0.08-$0.12 / GB |
| Azure | $0.02-$0.05 / GB | $0.08-$0.10 / GB |
(Illustrative orders of magnitude — exact pricing varies by source/destination region pair, commitments, and time; check provider pricing pages.) The Redpanda post references a separate "calculate cloud data transfer costs" post for actual numbers.
Bandwidth is also a bottleneck, not just a cost¶
Cross-region throughput is also a potential bottleneck independent of billing. The post flags this verbatim: "Potential bottlenecks if bandwidth is limited." Cloud-provider cross-region links are not unlimited; sustained multi-GB/s cross-region replication can saturate inter-region capacity and cause replication lag even when the compute on both sides has headroom.
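A toy throughput model (all figures hypothetical) makes the failure mode concrete: when produce rate × (RF−1) exceeds inter-region link capacity, the backlog, visible as replication lag, grows linearly:

```python
def backlog_after(seconds: float, produce_gbps: float,
                  cross_region_replicas: int, link_gbps: float) -> float:
    """Replication backlog (in gigabits) queued on the inter-region link.

    Demand on the link is produce_gbps * cross_region_replicas; anything
    above link capacity accumulates as lag, however idle the brokers are.
    """
    demand_gbps = produce_gbps * cross_region_replicas
    return max(0.0, (demand_gbps - link_gbps) * seconds)

# 3 Gb/s of produce, two cross-region followers, a 5 Gb/s link:
# demand is 6 Gb/s against 5 Gb/s of capacity, so lag grows 1 Gb/s.
backlog = backlog_after(60, 3.0, 2, 5.0)  # 60.0 Gb after one minute
```

Below the capacity line the backlog stays at zero; above it, lag grows without bound until produce rate drops or capacity is added.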
Seen in¶
- sources/2025-02-11-redpanda-high-availability-deployment-multi-region-stretch-clusters — canonical wiki framing as one of five enumerated performance hazards of multi-region stretch clusters; names leader pinning and follower fetching as first-line mitigations.
Related¶
- systems/redpanda, systems/kafka
- concepts/multi-region-stretch-cluster — the shape this cost is paid on.
- concepts/leader-pinning, concepts/follower-fetching — canonical mitigations on the client-facing side.
- concepts/remote-read-replica-topic — object-storage-backed read-fan-out mitigation.
- concepts/rpo-rto — the replication-side cost is the price of RPO=0.
- concepts/replication-lag — bandwidth bottlenecks manifest as replication lag.
- patterns/async-replication-for-cross-region — MM2 alternative reduces cross-region bytes per partition.