CONCEPT
Ranked rack preferences¶
Definition¶
Ranked rack preferences is the Redpanda 26.1 extension to leader pinning that turns the leader-placement hint from a single preferred location into an ordered list of preferred locations. Instead of saying "pin leaders of topic T in region us-east-1", the operator says "prefer us-east-1, then us-east-2, then us-west-2": an explicit, deterministic fallback order used when the top-ranked region is unavailable or loses a replica.
The feature converts leader pinning from a best-effort hint (where the cluster re-elects somewhere arbitrary on failover) into a deterministic preference vector that survives re-elections.
Canonical Redpanda framing¶
"Redpanda 26.1 introduces ranked rack preferences for Leader Pinning, allowing you to deterministically list which regions and AZs should host your partition leaders. It turns leader placement from a game of chance into a strategic advantage. You can actively shape traffic by pinning leaders to the specific locations where your producers live, eliminating much or all of the ingress costs for these topics."
Mechanism shape¶
The cluster tries rack positions in order:
- Try `us-east-1a`: if a healthy replica is available there, elect it leader.
- Else try `us-east-1b`.
- Else try `us-east-1c`.
- Only after exhausting the ranked list, fall back to any other healthy replica.
On a leader re-election (node failure, node restart, topic reassignment), the same ranked list is re-consulted — leaders don't drift to arbitrary racks.
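The fallback walk above can be sketched in a few lines. This is an illustration of the selection logic only, not Redpanda's implementation; the function name, rack labels, and data shapes are all hypothetical:

```python
def elect_leader(ranked_racks: list[str],
                 healthy_replicas: dict[str, str]) -> str:
    """Pick a leader by walking the ranked rack list in order.

    healthy_replicas maps node_id -> rack label for in-sync replicas.
    Only after the ranked list is exhausted does the choice fall back
    to any healthy replica, so repeated re-elections with the same
    inputs land in the same rack rather than drifting arbitrarily.
    """
    for rack in ranked_racks:
        for node, node_rack in healthy_replicas.items():
            if node_rack == rack:
                return node  # first-rank-available wins
    # Ranked list exhausted: any healthy replica will do.
    return next(iter(healthy_replicas))

# Rank-1 (us-east-1a) has no healthy replica, so rank-2 wins:
replicas = {"n1": "us-east-1b", "n2": "us-east-1c"}
leader = elect_leader(["us-east-1a", "us-east-1b", "us-east-1c"], replicas)
# leader == "n1" -- the highest-ranked rack with a healthy replica
```

Re-running the same selection after a failover re-consults the same list, which is what makes the preference vector deterministic rather than a one-shot hint.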
Distinguishing from unranked rack awareness¶
| Axis | Rack awareness | Leader pinning (pre-26.1) | Ranked rack preferences (26.1) |
|---|---|---|---|
| Replica placement | Spread across racks | Spread across racks | Spread across racks |
| Leader selection on election | Any in-sync replica | Pinned-region-preferred (binary) | First-rank-available |
| Leader placement after failover | Arbitrary | Arbitrary (re-elect anywhere) | Deterministic fallback order |
| Operator intent expressed | "Tolerate rack loss" | "Prefer this one location" | "Prefer in this order" |
Rack awareness is an availability feature (spread replicas so no single rack loss causes data loss). Leader pinning is a locality feature. Ranked rack preferences are locality with deterministic fallback — they preserve the cost/latency benefit across failure modes.
Cost framing¶
The post's load-bearing claim: ranked rack preferences "eliminate much or all of the ingress costs for these topics."
Ingress cost on major clouds is charged per byte crossing AZ / region boundaries from producer to leader. Stable leaders in producer-colocated racks mean:
- Within AZ: zero ingress cost (intra-AZ transfer is free on AWS / GCP / Azure).
- Cross-AZ same-region: free on some clouds, cheap on others.
- Cross-region: expensive (concepts/cross-region-bandwidth-cost), so cross-region producer → leader is a material cost line item.
An unranked leader-pinning hint that drifts to a cross-region rack on failover silently re-introduces cross-region ingress cost until an operator manually re-pins. Ranked preferences eliminate this drift by specifying the fallback order up-front.
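The drift cost is easy to put numbers on. The rates below are illustrative assumptions, not quotes from any cloud's price sheet; check current pricing before using real figures:

```python
# Illustrative arithmetic: what a drifted leader costs per day.
# Rates are assumptions for the sketch, not actual cloud prices.
GB_PER_DAY = 1024            # producer -> leader traffic for the topic
INTRA_AZ_RATE = 0.00         # $/GB: intra-AZ transfer is typically free
CROSS_REGION_RATE = 0.02     # $/GB: a representative cross-region rate

pinned_cost = GB_PER_DAY * INTRA_AZ_RATE       # leader in producer's AZ
drifted_cost = GB_PER_DAY * CROSS_REGION_RATE  # leader drifted cross-region

print(f"pinned:  ${pinned_cost:.2f}/day")   # $0.00/day
print(f"drifted: ${drifted_cost:.2f}/day")  # $20.48/day
```

At that assumed rate, a single drifted topic pushing 1 TB/day silently adds roughly $600/month until someone notices and re-pins, which is the failure mode the ranked fallback order removes.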
Paired with Cross-Region RRR¶
The 26.1 launch post frames ranked rack preferences + Cross-Region Remote Read Replicas for AWS as a write-path / read-path pair:
- Write path: ranked rack preferences keep leaders close to producers → zero ingress cost from producer region.
- Read path: Cross-Region RRR (via S3) serves reads from consumer-local regions without hitting the production cluster.
Together they replace the older stretch cluster (expensive cross-region Raft quorum) and multi-cluster replication shapes for global-data-plane shaping.
Mechanism gaps (from the source)¶
The 26.1 launch post is high-altitude; the following remain undisclosed:
- Failover semantics when the top-ranked region is unavailable but recovering — does leadership bounce back to rank-1 on recovery, or hold at rank-2 to avoid thrashing?
- Cost of leader re-election on rank change — how is the re-election triggered and throttled?
- Compatibility with Raft quorum placement constraints — does a rank-1 region require RF/2+1 replicas there? How does the cluster handle rank-1 with RF=1?
- Rank granularity — AZ-level only, or sub-AZ rack labels?
Seen in¶
- sources/2026-03-31-redpanda-261-delivers-the-industrys-first-adaptable-streaming-engine — Redpanda 26.1 launch post. Canonical wiki source. Frames ranked rack preferences as the write-path complement to Cross-Region Remote Read Replicas for AWS. "Your data travels for business, not pleasure. Stop letting it run up expenses on the scenic route and fly direct."
Source¶
- Original: https://www.redpanda.com/blog/26-1-r1-cloud-topics
- Raw markdown:
raw/redpanda/2026-03-31-redpanda-261-delivers-the-industrys-first-adaptable-streamin-09255e05.md
Related¶
- concepts/leader-pinning — the parent primitive ranked rack preferences extends.
- concepts/multi-region-stretch-cluster — the shape ranked rack preferences displaces for global data planes.
- concepts/cross-region-bandwidth-cost — the cost axis ranked rack preferences attacks on the write path.
- systems/redpanda — the broker shipping ranked rack preferences in 26.1.
- patterns/client-proximal-leader-pinning — the named pattern ranked rack preferences is the 26.1 enhancement of.
- patterns/multi-region-raft-quorum — complementary stretch-cluster topology.