CONCEPT
AWS partition¶
Definition¶
An AWS partition is a logically isolated group of AWS Regions with its own set of resources, including IAM. Partitions exist to meet country- or use-case-specific compliance, regulatory, and sovereignty requirements: the same AWS APIs and services appear inside each partition, but each partition is a physically and logically separate operational universe.
The four named partitions (as of 2026):
| Partition | Partition name | Since | Purpose |
|---|---|---|---|
| Standard global AWS | aws | 2006 | commercial Regions worldwide |
| AWS GovCloud (US) | aws-us-gov | 2011 | US public-sector, FedRAMP / ITAR |
| AWS China Regions | aws-cn | ~2014 | Chinese data-sovereignty law, local partnership |
| AWS European Sovereign Cloud | aws-eusc | 2026 | EU digital-sovereignty / data-residency |
(Source: sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty;
for the aws-eusc partition name and first Region eusc-de-east-1 see
the 2026-01-16 GA launch announcement skip in log.md.)
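The partition name is not just an internal concept: it is the second colon-separated field of every ARN (`arn:<partition>:<service>:<region>:<account>:<resource>`), which is why hard-coded `arn:aws:...` strings break the moment a template moves to another partition. A minimal sketch:

```python
def arn_partition(arn: str) -> str:
    """Return the partition field of an ARN.

    ARN format: arn:<partition>:<service>:<region>:<account-id>:<resource>
    """
    parts = arn.split(":", 5)
    if len(parts) < 6 or parts[0] != "arn":
        raise ValueError(f"not an ARN: {arn!r}")
    return parts[1]

# The same logical role, expressed in three partitions:
print(arn_partition("arn:aws:iam::123456789012:role/app"))         # aws
print(arn_partition("arn:aws-us-gov:iam::123456789012:role/app"))  # aws-us-gov
print(arn_partition("arn:aws-cn:iam::123456789012:role/app"))      # aws-cn
```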
Why the partition boundary is hard¶
"Partitions act as hard boundaries. Credentials don't carry over, and services such as Amazon S3 and features like S3 Cross-Region Replication or AWS Transit Gateway inter-region peering cannot function across partitions. These limitations are intentional, providing operational isolation." (Source: sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty)
Three concrete consequences of the hard boundary:
- Identity doesn't cross. IAM users, roles, and short-lived credentials issued in partition A are not recognized in partition B. Cross-partition authentication is an explicit architecture problem solved by separate roles or federated identity.
- Cross-region primitives are partition-scoped. S3 Cross-Region Replication, Transit Gateway inter-region peering, Route 53 health checks against cross-partition endpoints, CloudFront origin failover, and most other built-in multi-region tooling presume a single partition.
- Service availability differs per partition. "Not all AWS services are available in every partition" — a workload that moves between partitions must be designed against the intersection of the partitions' service catalogs.
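The "cross-region primitives are partition-scoped" point can be made mechanical: given two Regions, a pairing tool should first check that they live in the same partition before offering S3 CRR or TGW peering between them. A sketch, hard-coding the Region-name prefixes implied by the table above (production code should consult the SDK's partition metadata instead; the `eusc-` prefix is inferred from the first Region name, `eusc-de-east-1`):

```python
# Assumption: partition membership can be inferred from the Region-name
# prefix. Real code should use the AWS SDK's partition metadata
# (botocore's endpoints.json) rather than these hard-coded rules.
PARTITION_PREFIXES = {
    "us-gov-": "aws-us-gov",
    "cn-": "aws-cn",
    "eusc-": "aws-eusc",  # inferred from eusc-de-east-1
}

def partition_for_region(region: str) -> str:
    for prefix, partition in PARTITION_PREFIXES.items():
        if region.startswith(prefix):
            return partition
    return "aws"  # commercial partition is the default

def same_partition(region_a: str, region_b: str) -> bool:
    """Built-in cross-region tooling (S3 CRR, TGW inter-region peering)
    presumes this returns True for the pair."""
    return partition_for_region(region_a) == partition_for_region(region_b)

print(same_partition("us-east-1", "eu-west-1"))      # True: both in aws
print(same_partition("us-east-1", "us-gov-west-1"))  # False: crosses the boundary
```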
Why partitions exist¶
Three reasons the post names:
- Compliance — country- and use-case-specific regulatory regimes (FedRAMP, ITAR, BSI C5, Chinese data-sovereignty law, EU digital sovereignty).
- Security — "complete isolation of resources" via physical, logical, and operational separation of the cloud infrastructure between partitions; valuable for sensitive workloads.
- Service availability scoping — each partition evolves its own service catalog; limits blast radius across regulatory regimes.
Relationship to Regions and Availability Zones¶
The AWS isolation-level stack, outermost to innermost:
- Partition (hard boundary; IAM / network / service catalog / billing isolated).
- Region within a partition.
- AZ within a Region.
A workload that wants resilience to sovereignty / geopolitical shifts picks the outermost level; a workload that only cares about datacenter failures picks the innermost. DR tier choice is orthogonal to which isolation level you pick.
Cross-partition architecture¶
Because the boundary is hard, cross-partition architectures look different from cross-region ones: "environments must be pre-provisioned and kept in sync through internal or external tooling. Without such an architecture, failover between partitions is impractical. Cross-partition architectures make failover possible but require duplicate infrastructure, separate identity systems, and custom data synchronization." (Source: sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty)
The four design surfaces the post calls out as partition-aware:
- Networking — Transit Gateway isolated per partition; connect partitions via internet-over-TLS, IPsec VPN, or Direct Connect (gateway to on-prem, or PoP-to-PoP partner).
- Authentication — separate IAM roles, STS regional endpoints, resource-based policies, federated identity.
- PKI — per-partition Private CA; cross-partition mTLS requires cross-signed root CAs.
- Organizations — separate AWS Organizations per partition (mandatory for European Sovereign Cloud, optional for GovCloud which supports inviting accounts into a commercial Organization).
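The authentication surface has a concrete endpoint-construction side: an STS call must target a regional endpoint inside the partition it authenticates against, and the DNS suffix itself differs per partition. A sketch; the suffixes for aws, aws-us-gov, and aws-cn are the documented ones, while `amazonaws.eu` for aws-eusc is an assumption not confirmed by the source:

```python
# Partition-scoped STS regional endpoints. Cross-partition code must pick
# the endpoint (and separate credentials) per partition.
DNS_SUFFIX = {
    "aws": "amazonaws.com",
    "aws-us-gov": "amazonaws.com",
    "aws-cn": "amazonaws.com.cn",
    "aws-eusc": "amazonaws.eu",  # assumption: not confirmed by the source
}

def sts_regional_endpoint(partition: str, region: str) -> str:
    return f"https://sts.{region}.{DNS_SUFFIX[partition]}"

print(sts_regional_endpoint("aws", "eu-west-1"))
# https://sts.eu-west-1.amazonaws.com
print(sts_regional_endpoint("aws-cn", "cn-north-1"))
# https://sts.cn-north-1.amazonaws.com.cn
```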
Vendor-independence framing¶
Partitions give you a path to vendor independence against geopolitical risk that doesn't require switching cloud providers. "Failing over to another AWS partition is simpler than switching cloud providers because you can reuse your infrastructure as code templates across partitions." (Source: sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty)
The claim is narrow but real: partition-to-partition reuses API shape, service semantics, IaC templates. It does not reuse IAM, network topology, PKI, or Organizations — the four items the post spends most of its architectural depth on.
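The IaC-reuse claim works only if templates avoid hard-coding the partition; CloudFormation exposes it as the `AWS::Partition` pseudo parameter for exactly this purpose. A hedged fragment (bucket name illustrative):

```yaml
# An IAM policy statement whose ARN resolves correctly in any partition
# because it substitutes the AWS::Partition pseudo parameter instead of
# hard-coding "aws". The bucket name is illustrative.
PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Effect: Allow
      Action: s3:GetObject
      Resource: !Sub "arn:${AWS::Partition}:s3:::example-bucket/*"
```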
Seen in¶
- sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty — defines partitions, names the four, enumerates the hard-boundary consequences (IAM / cross-region primitives / service availability), and prescribes the four partition-aware design surfaces.
Related¶
- concepts/digital-sovereignty — the demand curve
- concepts/disaster-recovery-tiers — DR ladder applied across partitions
- concepts/cross-partition-authentication — identity problem
- concepts/cross-signed-certificate-trust — PKI problem
- patterns/cross-partition-failover — overarching pattern
- systems/aws-iam, systems/aws-organizations, systems/aws-european-sovereign-cloud, systems/aws-govcloud