Secure Amazon Elastic VMware Service (Amazon EVS) with AWS Network Firewall¶
Summary¶
AWS Architecture Blog reference-architecture post on how to deploy a centralized network inspection topology for Amazon EVS (AWS's managed VMware Cloud Foundation stack running on EC2 bare-metal) using AWS Network Firewall and AWS Transit Gateway. The load-bearing architectural ideas: (1) Network Firewall is deployed as a bump-in-the-wire middlebox — inserted into traffic paths by updating VPC and Transit Gateway route tables, not by application changes; (2) the native Transit Gateway ↔ Network Firewall integration (GA July 2025) auto-provisions the firewall's inspection-VPC subnets / route tables / endpoints and creates a TGW attachment of resource type Network Function with Appliance Mode automatically enabled, collapsing the historical setup overhead of this shape; (3) traffic flow through the firewall is implemented with a two-route-table split on the TGW — a pre-inspection route table holds VPC + Direct Connect Gateway attachments and has a default route pointing to the firewall attachment, and a post-inspection route table is associated only with the firewall attachment and holds return routes back to each VPC / on-prem CIDR (see [[patterns/pre-inspection-post-inspection-route-tables]]); (4) this single topology covers east-west (EVS↔VPC, VPC↔VPC), north-south (VPC↔on-prem via Direct Connect, VPC↔internet via dedicated egress VPC with NAT, internet→VPC via dedicated ingress VPC with ALB), and on-prem↔internet — all through the same centralised firewall.
The post is a step-by-step implementation walkthrough with demo CIDRs and sample rule groups; no production scale numbers (RPS / bandwidth / connection counts / cost) are disclosed. It earns Tier-1 treatment because the TGW-native-attachment shape is a new canonical answer to the classic "how do I centralise network inspection across many VPCs" question — one that previously required hand-building the inspection VPC's subnet + route-table + endpoint topology — and because EVS specifically introduces an NSX overlay + VPC Route Server BGP underlay that requires understanding how the overlay routes propagate into AWS-native route tables before centralised inspection works correctly for east-west traffic from VMware VMs.
Architecture at a glance¶
On-prem data center (10.0.0.0/8)
│ Direct Connect
▼
Direct Connect Gateway ──┐
│
EVS VPC (10.0.0.0/16) ──┤ Egress VPC (172.23.0.0/16)
NSX overlay │ IGW + NAT Gateway
192.168.0.0/19 ──► VPC Route Server ──┐ ▲
│ │
Workload VPC (172.21.0.0/16) ────────────┤ │
▼ │
┌───────────────────────┐ │
│ AWS Transit Gateway │ │
│ │ │
│ Pre-inspection RT │ │
│ (all VPC + DXGW │ │
│ attachments) │ │
│ 0.0.0.0/0 → │ │
│ Firewall attach │ │
│ │ │
│ Post-inspection RT │ │
│ (firewall attach) │ │
│ routes back to │ │
│ each VPC / on-prem│ │
│ │ │
│ Firewall attachment │ │
│ Appliance Mode: ON │ │
└──────────┬────────────┘ │
│ │
▼ │
AWS Network Firewall │
(inspection VPC, │
managed by AWS) │
│ │
▼ │
Inspect │
│ │
Permit/Drop │
│ │
▼ │
Ingress VPC (172.24.0.0/16) ◄──────────┘ │
IGW + Application Load Balancer │
▼
Internet
Key connectivity facts the topology depends on:
- TGW native integration with Network Firewall is the vehicle — auto-provisions the inspection VPC subnets, route tables, firewall endpoints, and adds a TGW attachment of resource type Network Function with Appliance Mode enabled automatically (Appliance Mode is what makes TGW keep a given flow pinned to the same AZ's firewall endpoint for the flow's life, preserving stateful-inspection session state).
- Default route table association and propagation are explicitly deselected on the TGW so the operator defines the two inspection route tables by hand.
- All VPC attachments and the DXGW attachment go on the pre-inspection RT. Its default route `0.0.0.0/0` points to the firewall attachment — this is what forces every cross-VPC / cross-environment packet into the firewall.
- The firewall attachment is associated with the post-inspection RT. After inspection + permit, that RT has return routes (10.0.0.0/16 → EVS, 172.21.0.0/16 → VPC01, 172.23.0.0/16 → Egress, 172.24.0.0/16 → Ingress, 10.0.0.0/8 → DXGW) to deliver the packet to its destination.
- EVS VPC route tables have a `0.0.0.0/0 → TGW` default route to send egress into the centralised inspection path; same for the workload VPC.
- Ingress / Egress VPCs hold RFC-1918 routes pointing back to the TGW (marked green in the post's Table 1), so return traffic from the internet hits the firewall on its way to the workload.
- EVS's NSX overlay segments (192.168.0.0/19) are propagated into the NSX uplink subnet + EVS-VPC private subnet route tables automatically via Amazon VPC Route Server (a BGP-speaking route server inside the VPC). Without Route Server, the AWS-native route tables would have no route to the NSX overlay CIDRs and east-west inspection of VM traffic would black-hole.
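The pre/post-inspection split above boils down to two longest-prefix-match lookups: the pre-inspection table draws everything into the firewall, the post-inspection table releases permitted packets to their destinations. A minimal sketch of that lookup behaviour, using the demo CIDRs from the post (the `lookup` helper and attachment names are illustrative, not an AWS API):

```python
from ipaddress import ip_address, ip_network

def lookup(route_table, dst_ip):
    """Longest-prefix-match lookup, mirroring how a TGW route table
    selects a next hop for a destination IP."""
    matches = [(net, hop) for net, hop in route_table
               if ip_address(dst_ip) in ip_network(net)]
    if not matches:
        return None  # no route: packet is dropped
    return max(matches, key=lambda m: ip_network(m[0]).prefixlen)[1]

# Pre-inspection RT: everything defaults to the firewall attachment.
pre_inspection = [("0.0.0.0/0", "firewall-attachment")]

# Post-inspection RT: per-destination return routes (demo CIDRs from the post).
post_inspection = [
    ("10.0.0.0/16",   "evs-vpc-attachment"),
    ("172.21.0.0/16", "workload-vpc-attachment"),
    ("172.23.0.0/16", "egress-vpc-attachment"),
    ("172.24.0.0/16", "ingress-vpc-attachment"),
    ("10.0.0.0/8",    "dx-gateway-attachment"),
]

# A packet toward an EVS-VPC address is drawn into the firewall first...
assert lookup(pre_inspection, "10.0.12.10") == "firewall-attachment"
# ...then, after inspection + permit, released via the post-inspection RT.
# Note 10.0.0.0/16 (EVS VPC) wins over 10.0.0.0/8 (on-prem) by longest prefix.
assert lookup(post_inspection, "10.0.12.10") == "evs-vpc-attachment"
assert lookup(post_inspection, "10.200.0.1") == "dx-gateway-attachment"
```

The longest-prefix detail matters here because the demo's EVS VPC CIDR (10.0.0.0/16) nests inside the on-prem summary (10.0.0.0/8), so the more-specific route is what keeps EVS-bound traffic off the DXGW attachment.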
Key takeaways¶
- AWS Network Firewall operates as a bump-in-the-wire — it is inserted into the traffic path by updating VPC or Transit Gateway route tables, allowing it to examine all packets without requiring any changes to the existing application flow patterns. (Source: sources/2025-11-26-aws-secure-amazon-evs-with-aws-network-firewall) This is the load-bearing architectural claim — no agents, no sidecar proxies, no application changes. The packet flow is re-routed through the firewall; application-layer code and network identity are unchanged.
- Native TGW integration collapses the inspection-VPC setup work. "With the Transit Gateway native integration enabled, a Transit Gateway attachment is automatically created for the AWS Network Firewall, with the resource type shown as Network Function. In addition, the Appliance Mode is automatically enabled for the firewall attachment to make sure the Transit Gateway continues to use the same Availability Zone (AZ) for the attachment over the lifetime of a flow." (Source: sources/2025-11-26-aws-secure-amazon-evs-with-aws-network-firewall) This removes the historical hand-crafted inspection-VPC topology (subnets + route tables + endpoints + Appliance Mode enablement) that previous centralised-inspection blueprints documented — it's all auto-provisioned by the native attachment.
- The two-route-table TGW split is the mechanism that forces traffic through the firewall. Pre-inspection RT (all VPC + DXGW attachments; `0.0.0.0/0` → firewall attachment) draws traffic in; post-inspection RT (firewall attachment; per-destination CIDR routes back to each VPC + DXGW) releases it after inspection. With default association + default propagation turned off on the TGW, this split is enforced structurally — a new VPC attached later that lands on the pre-inspection RT is inspected automatically. Canonical wiki reference: [[patterns/pre-inspection-post-inspection-route-tables]].
- One topology covers east-west, north-south, and on-prem↔internet. Named traffic flows inspected: (east-west) EVS↔Workload VPCs, Workload↔Workload VPCs; (north-south) EVS/Workload↔on-premises (via DXGW), EVS/Workload↔internet (via dedicated egress VPC with NAT), on-prem↔internet. The dedicated Ingress VPC (ALB) and Egress VPC (NAT) separation means return traffic for inbound connections and outbound flows from internal VMs both cross the firewall on their way to/from the workload — the classic centralised-inspection shape.
- EVS's NSX overlay requires VPC Route Server for AWS-native route-table awareness. The EVS deployment includes EVS VLAN subnets as the underlay, with NSX overlay networks (192.168.0.0/19 summarised) on top. [[systems/aws-vpc-route-server|Amazon VPC Route Server]] speaks BGP with the NSX uplink segments and propagates the overlay routes into AWS-native VPC route tables (NSX uplink subnet RT + EVS-VPC private subnet RT). Without this, the TGW would have no route to the VM CIDRs inside the EVS VPC and east-west inspection would silently fail for VM traffic. This is the EVS-specific twist on the otherwise generic centralised-inspection pattern.
- FQDN-based egress filtering is a stateful rule-group capability. The demo creates a stateful rule group with `Rule group format: Domain list`, `Domain names: .google.com`, `Source IPs: 192.168.0.0/19, 172.21.0.0/16`, `Protocols: HTTP & HTTPS`, `Action: Allow`. This is a sibling primitive to the SNI-based egress filtering the wiki already documents: Network Firewall's Domain-list rule type matches against the TLS ClientHello SNI for HTTPS and the HTTP Host header for cleartext HTTP, doing hostname-based allow/deny at the stateful-inspection layer. Blocked flows are logged to CloudWatch at `/aws/network-firewall/alert/`.
- East-west ICMP drop + ingress HTTP alert demonstrate 5-tuple-scoped stateful rules. The post's second rule-group example allows HTTP from the Ingress VPC ALB (172.24.0.0/16) to an EVS VM (192.168.12.10); the third drops ICMP from the workload VPC (172.21.0.0/16) to the NSX CIDR (192.168.0.0/19). Stateful rule groups combine protocol + 5-tuple + action, with optional Geo-IP filtering — closer to Suricata-style rules than AWS-native security-group shapes.
- Logging goes to CloudWatch (alert + flow separately). The demo configuration creates two log groups — `/anfw-centralized/anfw01/alert` for alert-action hits and `/anfw-centralized/anfw01/flow` for every inspected flow. Alternative destinations Network Firewall supports are S3 and Amazon Data Firehose; the post picks CloudWatch for walkthrough convenience. Two log streams is the canonical shape: the alert log has only rule-hit summaries (low volume, high signal for detection / IR workflows); the flow log has every inspected 5-tuple (high volume, for traffic-pattern forensics).
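The Domain-list matching described in the takeaways above can be sketched as a hostname check over the extracted SNI or Host value. This is an illustrative model of the rule type's documented semantics (a leading dot matches the domain and its subdomains), not Network Firewall's actual implementation; the `egress_verdict` helper is hypothetical:

```python
def hostname_matches(domain_rule, hostname):
    """Domain-list semantics: a leading dot matches the domain and any
    of its subdomains; otherwise the hostname must match exactly."""
    hostname = hostname.lower().rstrip(".")
    if domain_rule.startswith("."):
        base = domain_rule[1:]
        return hostname == base or hostname.endswith("." + base)
    return hostname == domain_rule

# The demo's allow-list rule: ".google.com" over HTTP and HTTPS.
allowed = [".google.com"]

def egress_verdict(hostname):
    # hostname comes from the TLS ClientHello SNI (HTTPS)
    # or the Host header (cleartext HTTP)
    return "allow" if any(hostname_matches(d, hostname) for d in allowed) else "alert/drop"

assert egress_verdict("www.google.com") == "allow"
assert egress_verdict("google.com") == "allow"       # dot prefix covers the apex
assert egress_verdict("example.com") == "alert/drop"
```

The `endswith("." + base)` check is what prevents a lookalike such as `notgoogle.com` from matching the `.google.com` rule.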
Operational numbers¶
No production scale numbers in this post — it is a reference-architecture walkthrough, not a retrospective. Named CIDRs / quantities from the demo, for orientation only:
- EVS VPC CIDR `10.0.0.0/16`; NSX Segments `192.168.0.0/19` (summarised); minimum 4× i4i bare-metal nodes per EVS cluster.
- Workload VPC `172.21.0.0/16`.
- Egress VPC `172.23.0.0/16` (IGW + NAT Gateway).
- Ingress VPC `172.24.0.0/16` (IGW + ALB).
- On-prem summary `10.0.0.0/8` via Direct Connect Gateway.
- Stateful rule-group demo hosts: EVS VM `192.168.12.10`, ALB `172.24.6.45`, workload-VPC EC2 `172.21.128.4`.
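One property of the demo addressing worth checking mechanically: most CIDRs are disjoint, but the EVS VPC (10.0.0.0/16) deliberately sits inside the on-prem summary (10.0.0.0/8), which is why the post-inspection return routes depend on longest-prefix match. A quick sketch verifying this with the standard-library `ipaddress` module:

```python
from ipaddress import ip_network
from itertools import combinations

# Demo CIDRs from the post, keyed by illustrative names.
cidrs = {
    "evs-vpc":      ip_network("10.0.0.0/16"),
    "nsx-overlay":  ip_network("192.168.0.0/19"),
    "workload-vpc": ip_network("172.21.0.0/16"),
    "egress-vpc":   ip_network("172.23.0.0/16"),
    "ingress-vpc":  ip_network("172.24.0.0/16"),
    "on-prem":      ip_network("10.0.0.0/8"),
}

overlaps = sorted(
    tuple(sorted((a, b)))
    for (a, na), (b, nb) in combinations(cidrs.items(), 2)
    if na.overlaps(nb)
)
# The only overlap: the EVS VPC nests inside the on-prem /8 summary, so the
# post-inspection RT must use the more-specific /16 to send 10.0.x.x to the
# EVS attachment rather than the DX gateway.
assert overlaps == [("evs-vpc", "on-prem")]
assert cidrs["evs-vpc"].subnet_of(cidrs["on-prem"])
```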
Systems introduced / extended¶
Introduced:
- systems/amazon-evs — managed VMware Cloud Foundation on EC2 bare-metal within a customer VPC.
- systems/aws-vpc-route-server — BGP-speaking route server primitive inside a VPC; bridges overlay networks (NSX) into AWS-native VPC route tables.
Extended:
- systems/aws-network-firewall — new native-TGW-attachment deployment shape; FQDN-based egress filtering via Domain-list stateful rule groups (sibling to the SNI-based egress case already documented); the dual alert/flow CloudWatch log-group convention.
- systems/aws-transit-gateway — new "hub for centralised Network-Firewall inspection via native attachment + two route tables + Appliance Mode" section covering the pre/post-inspection RT split; and the Network Function TGW attachment resource type.
- systems/aws-direct-connect — Direct Connect Gateway as a TGW attachment participating in the centralised-inspection pre-inspection route table (on-prem↔internet traffic also inspected).
Concepts and patterns introduced¶
- concepts/centralized-network-inspection — the hub-and-spoke architectural shape where a managed firewall / IDS/IPS sits between all traffic-producing VPCs and their destinations, rather than one firewall per VPC.
- concepts/bump-in-the-wire-middlebox — insertion into a traffic path by routing changes rather than endpoint changes; the application is unaware.
- concepts/tgw-appliance-mode — TGW-attachment property that pins a flow to a specific AZ's endpoint over the flow's life so stateful-inspection devices behind the attachment always see both directions on the same instance.
- patterns/pre-inspection-post-inspection-route-tables — TGW two-route-table design that forces all traffic through an inspection attachment and then releases it to its destination.
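The tgw-appliance-mode concept above can be made concrete with a toy model: hash the flow's 5-tuple in a direction-independent way, so forward and return packets select the same AZ's firewall endpoint. This is only an illustration of the guarantee Appliance Mode provides; the AZ names and hashing scheme are invented, not the actual TGW algorithm:

```python
import hashlib

AZS = ["use1-az1", "use1-az2"]  # hypothetical AZ identifiers

def pick_az(src_ip, src_port, dst_ip, dst_port, proto):
    """Direction-independent flow hash: canonicalise the 5-tuple so the
    forward and return packets of one flow hash identically, then map
    the digest onto an AZ."""
    ends = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    digest = hashlib.sha256(repr((ends, proto)).encode()).hexdigest()
    return AZS[int(digest, 16) % len(AZS)]

# Forward and return packets of one flow (demo hosts from the post) land on
# the same AZ's firewall endpoint, so the stateful engine sees both
# directions of the session.
fwd = pick_az("172.21.128.4", 49152, "192.168.12.10", 80, "tcp")
ret = pick_az("192.168.12.10", 80, "172.21.128.4", 49152, "tcp")
assert fwd == ret
```

Without this pinning, equal-cost routing could send the return half of a flow to a different AZ's endpoint, and the stateful engine there would see an unsolicited mid-stream packet.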
Caveats¶
- No production numbers. The post is a build-this walkthrough with demo CIDRs and sample rule groups; RPS, bandwidth, firewall endpoint scaling behaviour, and cost are not disclosed.
- Single-region only. The topology is described within one region; cross-region / cross-partition considerations (inter-region TGW peering, which doesn't extend across partitions per sources/2026-01-30-aws-sovereign-failover-design-digital-sovereignty) are out of scope.
- FQDN filtering caveats. Domain-list rules match SNI for HTTPS and Host header for HTTP — same fundamental caveats as SNI filtering (ECH / ESNI eventually breaks SNI matching; wildcards are coarse; Host spoofing isn't closed by the firewall alone).
- Rule-group format depth not covered. The post walks through one Domain-list and two Standard-Stateful rule-group examples but doesn't cover Suricata-IPS rule-group format, intrusion-prevention signature management, or rule-group-versioning operationals.
Source¶
- Original: https://aws.amazon.com/blogs/architecture/secure-amazon-elastic-vmware-service-amazon-evs-with-aws-network-firewall/
- Raw markdown: raw/aws/2025-11-26-secure-amazon-elastic-vmware-service-amazon-evs-with-aws-net-3dd25c98.md
Related¶
- systems/aws-network-firewall — the inspection primitive.
- systems/aws-transit-gateway — the fabric; the two-RT split is a property of TGW routing.
- systems/amazon-evs — the workload substrate; its NSX overlay is why VPC Route Server shows up.
- systems/aws-vpc-route-server — bridges NSX overlay routes into AWS-native VPC route tables.
- systems/aws-direct-connect — on-prem tail of the inspection domain.
- concepts/centralized-network-inspection — architectural class.
- concepts/bump-in-the-wire-middlebox — insertion mechanism.
- concepts/tgw-appliance-mode — the property that makes stateful inspection behind TGW work.
- concepts/egress-sni-filtering — sibling FQDN-based egress-filtering pattern; Network Firewall's Domain-list rule type supports both HTTPS SNI and HTTP Host header matching.
- patterns/pre-inspection-post-inspection-route-tables — the TGW route-table split used here.
- companies/aws