
Atlas Global Clusters

Overview

Atlas Global Clusters is MongoDB Atlas's geographic-sharding feature, built on top of the MongoDB server's core sharding system. Applications declare zone-sharding rules that map shard-key values to named zones, and each zone is physically pinned to a specific region, cloud provider, or jurisdiction. The sharded cluster then satisfies both data-residency and per-user low-latency requirements without an application rewrite.

MongoDB's framing:

"Global Clusters and Zone sharding let you describe simple rules so data stays where policy requires and users are served locally, e.g., A rule to map 'DE', 'FR', and 'ES' to the EU_Zone can guarantee that all European customer data and order history physically reside within European borders, satisfying strict GDPR requirements out of the box. Because Zone Sharding is built into the core sharding system, you can add or adjust placement without app rewrites."

(Source: sources/2025-09-25-mongodb-carrying-complexity-delivering-agility)

Why it matters architecturally

Zone sharding is the first-class answer to three different but correlated forces:

  1. Data residency / sovereignty. Regulations (GDPR, LGPD, PDPL, industry-specific healthcare / finance) require that certain customer data never physically leave a jurisdiction. Application-level routing is brittle; zone sharding makes the storage layer enforce it structurally.
  2. Global latency. Users in Europe served from an EU zone pay single-region RTT, not transatlantic RTT; the cluster is a single logical database but the shards are regionally pinned.
  3. Cross-cloud + cross-region portability. Since it is built on the core sharding system, zone definitions can move without reshaping the schema or the client application — the app continues to see "one cluster, one connection string."
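The "brittle vs. structural" contrast in force 1 can be sketched as follows; everything here (cluster names, field names) is an illustrative assumption, not Atlas code:

```python
# Contrast: application-level routing vs. shard-key-driven placement.
# All names are illustrative assumptions.

EU = {"DE", "FR", "ES"}

# Brittle: every code path that touches data must repeat the routing rule,
# and one missed path silently violates residency.
def pick_cluster(country: str) -> str:
    return "eu-cluster" if country in EU else "global-cluster"

# Structural: the app writes to one logical cluster and simply includes the
# shard-key prefix in each document; the zone ranges (enforced by the
# sharding layer, not the app) decide physical placement.
def order_document(country: str, order_id: str, payload: dict) -> dict:
    return {"location": country, "order_id": order_id, **payload}
```

The second style is why placement rules can change without an app rewrite: the only contract the application carries is "include the shard-key prefix."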

Relationship to sharding primitives on this wiki

Combined with cross-cloud replica sets

The 2025-09-25 post pairs Global Clusters with Atlas's single replica set spanning AWS + GCP + Azure (see concepts/cross-cloud-architecture): zone sharding handles geographic placement, cross-cloud replica sets handle provider-failure / vendor-lock-in concerns. Both ride on top of the MongoDB server's consensus and replication machinery — including logless reconfiguration — so the Atlas control plane can add or move members across clouds and regions without reconfigurations queuing behind oplog replication.

Caveats

  • Cross-zone queries still hop. A query that needs data from multiple zones is a scatter-gather across regions; the latency win is specifically for users whose data lives in their own zone.
  • Zone boundaries must align with access patterns. If users in DE routinely query ES data, zone sharding helps residency-compliance but not latency for that traffic.
  • Shard-key choice is load-bearing. Zone routing is driven by shard-key values; a bad shard-key choice reproduces hot-key problems at the zone level. This is the same trade-off shard-key design faces generally, with residency stakes on top.
  • The operational surface is not fully covered in the manifesto post; the details of how zone rules interact with chunk migration, per-zone backup snapshotting, and cross-zone oplog lag live in the Atlas docs, not in this source.
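The first caveat can be sketched with a toy routing model (zone and field names are assumptions; real targeting is done by mongos against the cluster's chunk metadata):

```python
# Toy model of query targeting under zone sharding: a filter that carries
# the shard-key prefix hits one zone; one that omits it fans out to all.
ZONES = {"EU_Zone": {"DE", "FR", "ES"}, "NA_Zone": {"US", "CA"}}

def zones_contacted(query: dict) -> set[str]:
    """Which zones the router must contact for this query (simplified)."""
    loc = query.get("location")
    if loc is None:
        return set(ZONES)  # scatter-gather: every zone, cross-region hops
    return {z for z, countries in ZONES.items() if loc in countries}

assert zones_contacted({"location": "DE"}) == {"EU_Zone"}             # targeted
assert zones_contacted({"status": "open"}) == {"EU_Zone", "NA_Zone"}  # scatter
```

The latency win the post claims is the first case; the second case still pays cross-region round trips, which is why zone boundaries should align with access patterns.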
