Amazon EVS (Elastic VMware Service)¶
Amazon Elastic VMware Service (Amazon EVS) is AWS's managed offering for running a VMware Cloud Foundation (VCF) stack natively inside a customer's Amazon VPC, on Amazon EC2 bare-metal instances. EVS targets lift-and-shift VMware migrations and data-center exits that need to keep the operational semantics of vSphere, NSX, and vSAN without refactoring the applications.
Shape¶
The customer creates an EVS VPC. Inside it:
- Bare-metal EC2 nodes (minimum 4× i4i instances per cluster per the 2025-11-26 post) form the VCF host cluster.
- EVS VLAN subnets are the underlay carrying host management, vMotion, vSAN, and the NSX uplink segments.
- NSX overlay networks run on top and are what the guest VMs actually sit on, e.g. `192.168.0.0/19` in the 2025-11-26 demo.
- Amazon VPC Route Server speaks BGP with the NSX edge and propagates the overlay CIDRs into AWS-native VPC route tables (the NSX uplink subnet RT and the EVS-VPC private subnet RT) so the rest of AWS-native networking can reach VMs on NSX overlay segments.
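The underlay/overlay split above implies a basic planning constraint: the NSX overlay CIDR must not overlap the EVS VPC's underlay addressing, because Route Server will install the overlay prefix into the same route tables. A minimal sketch of that check with Python's stdlib `ipaddress` (the overlay CIDR matches the 2025-11-26 demo; the VPC CIDR here is an illustrative assumption, not from the post):

```python
import ipaddress

# Illustrative underlay CIDR for the EVS VPC (VLAN subnets); assumption.
evs_vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
# NSX overlay CIDR from the 2025-11-26 demo; guest VMs live here.
nsx_overlay_cidr = ipaddress.ip_network("192.168.0.0/19")

# The overlay must not overlap the underlay: once Route Server propagates
# it into the VPC route tables, an overlapping prefix would shadow
# underlay routes (host management, vMotion, vSAN).
assert not evs_vpc_cidr.overlaps(nsx_overlay_cidr)

# A /19 yields 8,192 addresses for guest VMs on NSX overlay segments.
print(nsx_overlay_cidr.num_addresses)  # → 8192
```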
See the Concepts and components of Amazon EVS AWS doc for the full primitive list.
Integration points¶
EVS is a first-class citizen of AWS networking — the EVS VPC can be attached to AWS Transit Gateway like any other VPC, and once the NSX overlay routes are propagated by VPC Route Server, east-west traffic from NSX-hosted VMs is routable into / out of the broader AWS fabric.
The canonical 2025-11-26 topology for securing this:
- EVS VPC as one spoke of a hub-and-spoke TGW.
- AWS Network Firewall sitting behind the TGW as a centralised inspection point, reached via a TGW native attachment with Appliance Mode automatically enabled.
- Separate Workload VPCs (Workload / Ingress ALB / Egress NAT) and optional on-prem via Direct Connect all attach to the same TGW for unified policy.
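The hub-and-spoke inspection topology can be sketched as two layers of longest-prefix-match route tables: spoke TGW route tables default everything to the inspection attachment, while the firewall-side table knows the real spoke prefixes. A toy model (all attachment names and non-overlay CIDRs are illustrative assumptions, not the post's actual values):

```python
import ipaddress

# Spoke route tables send all traffic to the central inspection point.
SPOKE_RT = {"0.0.0.0/0": "attach-inspection"}

# The firewall-side route table returns traffic to the correct spoke.
FIREWALL_RT = {
    "192.168.0.0/19": "attach-evs",       # NSX overlay (EVS spoke)
    "10.1.0.0/16":    "attach-workload",  # workload VPC (assumed CIDR)
    "0.0.0.0/0":      "attach-egress",    # egress NAT VPC
}

def lookup(rt: dict, dst: str) -> str:
    """Longest-prefix match over a route table."""
    ip = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in rt if ip in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return rt[str(best)]

# An NSX-hosted VM reaching a workload-VPC host (10.1.2.3) hairpins
# through the firewall: spoke -> inspection, then firewall -> workload.
hop1 = lookup(SPOKE_RT, "10.1.2.3")     # → attach-inspection
hop2 = lookup(FIREWALL_RT, "10.1.2.3")  # → attach-workload
```

The same two-hop pattern covers the post's VM-to-on-prem and VM-to-internet flows; only the firewall-side prefix changes.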
Why this shape exists¶
EVS collapses the historical VMware-on-AWS integration work:
- VCF deployed natively into the customer VPC, not in a separate partner-managed account.
- NSX overlay is the VM network but integrates with AWS routing via Route Server BGP, not a hand-run BGP peering.
- Bare-metal EC2 under the hood lets NSX / vSAN / vMotion run at VCF-native performance characteristics.
The result is that VMware workloads migrate with zero application refactoring and participate fully in AWS-native networking, inspection, and IAM posture.
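The Route Server role described above can be sketched conceptually: prefixes the NSX edge advertises over BGP get installed into the two AWS-native route tables with the route-server endpoint as next hop. This is a toy model of the behavior, not the service's API; the ENI name and the `10.0.0.0/16` local route are assumptions:

```python
import ipaddress

def propagate(advertised: list[str], route_tables: dict, next_hop: str) -> None:
    """Install BGP-learned prefixes into every managed route table."""
    for prefix in advertised:
        ipaddress.ip_network(prefix)  # reject malformed prefixes up front
        for rt in route_tables.values():
            rt[prefix] = next_hop

# The two route tables the post names; local route CIDR is illustrative.
route_tables = {
    "nsx-uplink-subnet-rt": {"10.0.0.0/16": "local"},
    "evs-vpc-private-rt":   {"10.0.0.0/16": "local"},
}

# NSX edge advertises the overlay CIDR; Route Server installs it.
propagate(["192.168.0.0/19"], route_tables, "eni-routeserver")
print(route_tables["evs-vpc-private-rt"]["192.168.0.0/19"])  # → eni-routeserver
```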
Stub page¶
Expand on EVS operational characteristics (node-level scaling, vSAN capacity provisioning, failure domains, cost model) as further sources are ingested. Present scope is limited to the networking-integration surface covered in the 2025-11-26 Network Firewall post.
Seen in¶
- sources/2025-11-26-aws-secure-amazon-evs-with-aws-network-firewall — EVS VPC as one of four TGW attachments in a centralised Network-Firewall inspection topology; NSX overlay CIDR propagated into AWS-native route tables by VPC Route Server; VM-to-VM, VM-to-VPC, VM-to-on-prem, and VM-to-internet flows all inspected at the central firewall.
Related¶
- systems/aws-vpc-route-server — overlay-to-underlay routing bridge used by EVS's NSX deployment.
- systems/aws-network-firewall — canonical inspection device in front of EVS workloads.
- systems/aws-transit-gateway — fabric connecting EVS to other VPCs / on-prem.
- concepts/centralized-network-inspection — the defensive architectural class EVS drops into.
- companies/aws