

AWS Network Firewall

AWS Network Firewall is a managed stateful network firewall deployable as a VPC endpoint: incoming and outgoing traffic routed through it is filtered against Suricata-compatible rules (5-tuple, stateful session tracking, and TLS SNI / SNI-FQDN matching for domain-based allow/deny). Stub page — expand on future Network Firewall internals sources.

Canonical architecture for outbound HTTPS filtering on EKS

The Generali / AWS topology referenced by the post is the idiomatic shape:

  EKS pods (private subnet, no IGW route)
     ↓
  Network Firewall endpoint (public subnet)
     ── filter rule: SNI ∈ allow-list
     ↓
  NAT Gateway (protected subnet)
     ↓
  Internet

Three subnet roles (note the non-standard but deliberate ordering):

  • Private — where workloads (EKS pods) live. No direct internet route.
  • Public — where the Firewall endpoints sit. "Public" only in the routing-table sense (it is the next hop on the workloads' internet-bound default route); the firewall is what decides whether traffic actually leaves.
  • Protected — where NAT Gateways sit, after the firewall has already permitted the flow. NAT happens on outbound-allowed traffic only, so compromised-pod egress to a non-allow-listed host never reaches NAT.
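
In route-table terms, the three-subnet chain looks roughly like this. A Terraform sketch; resource names and the endpoint ID are illustrative placeholders, not from the source:

```hcl
# Private subnet RT: workloads' default route points at the firewall
# endpoint (a Gateway Load Balancer-style VPC endpoint).
resource "aws_route" "private_to_firewall" {
  route_table_id         = aws_route_table.private.id
  destination_cidr_block = "0.0.0.0/0"
  vpc_endpoint_id        = "vpce-0123examplefirewall" # illustrative ID
}

# Public (firewall) subnet RT: traffic the firewall allows continues
# to the NAT Gateway in the protected subnet.
resource "aws_route" "firewall_to_nat" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  nat_gateway_id         = aws_nat_gateway.egress.id
}

# Protected (NAT) subnet RT: NATed traffic exits via the IGW.
resource "aws_route" "nat_to_igw" {
  route_table_id         = aws_route_table.protected.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.this.id
}
```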

The SNI-based allow-list property

Filter rules match against the Server Name Indication field of the TLS ClientHello — the hostname the client is requesting. This is the load-bearing design choice because hostnames are stable, IPs are not:

  • SaaS providers rotate IPs on an hours-to-days cadence; IP allow-lists bit-rot.
  • Hostnames (api.foo.com) are part of the customer contract; they rarely change.

See concepts/egress-sni-filtering for the broader pattern discussion including the known caveat (eSNI / ECH defeats this, but is not yet deployed at production scale across major SaaS).
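
The SNI match can be written as Suricata-compatible rules, the syntax Network Firewall stateful rule groups accept. A sketch, assuming strict rule-order evaluation (pass rules before the catch-all drop) and using api.foo.com as an illustrative hostname:

```
# Allow TLS flows whose ClientHello SNI is exactly api.foo.com
pass tls $HOME_NET any -> $EXTERNAL_NET 443 (tls.sni; content:"api.foo.com"; startswith; endswith; nocase; msg:"allow api.foo.com"; sid:100001; rev:1;)

# Drop every other outbound TLS flow
drop tls $HOME_NET any -> $EXTERNAL_NET 443 (msg:"deny non-allow-listed SNI"; sid:100002; rev:1;)
```

One pass rule per approved hostname; the drop rule is what makes this an allow-list rather than a deny-list.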

Observability outputs

Generali specifically called out CloudWatch alert logs of observed hostnames as a secondary value. Network Firewall emits a log line per filtered connection (hostname + verdict + timestamp); piped into CloudWatch Logs, this becomes:

  • Traffic pattern analysis — what SaaS dependencies does the app actually have? Often a surprise.
  • Compliance evidence — "applications can only access approved external services", backed by an auditable log.
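
A CloudWatch Logs Insights query over the alert log group surfaces both views at once. A sketch, assuming the EVE-style field layout of Network Firewall alert records (event.tls.sni, event.alert.action):

```
fields @timestamp, event.tls.sni, event.alert.action
| filter event.event_type = "alert" and ispresent(event.tls.sni)
| stats count(*) as hits by event.tls.sni, event.alert.action
| sort hits desc
```

The result is a per-hostname hit count with verdicts, which doubles as the dependency inventory and the compliance artifact.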

Centralised-inspection shape with TGW native attachment

The second canonical deployment shape, beyond per-VPC egress filtering, is centralised inspection behind an AWS Transit Gateway via the native TGW ↔ Network Firewall integration (GA July 2025). The 2025-11-26 Amazon-EVS post is the canonical wiki reference.

"AWS Network Firewall operates as a 'bump-in-the-wire' solution, which transparently inspects and filters network traffic across Amazon VPCs. It is inserted directly into the traffic path by updating VPC or Transit Gateway route tables, allowing it to examine all packets without requiring any changes to the existing application flow patterns." (Source: sources/2025-11-26-aws-secure-amazon-evs-with-aws-network-firewall)

What the native attachment handles automatically:

  • Inspection-VPC plumbing. Subnets, route tables, and firewall endpoints inside the AWS-managed inspection VPC are provisioned by the service.
  • TGW attachment appears on the TGW with resource-type Network Function.
  • Appliance Mode is enabled automatically on the attachment — required for stateful inspection correctness, historically a manual-config landmine.

What the operator still does:

  • Designs the TGW's [[patterns/pre-inspection-post-inspection-route-tables|two route-table split]] (pre-inspection RT for VPC + DXGW attachments; post-inspection RT for the firewall attachment).
  • Associates the firewall attachment with the post-inspection RT and populates it with return routes to each VPC / on-prem CIDR.
  • Populates the pre-inspection RT with a default route to the firewall attachment.
  • Updates VPC-side RTs (default route to TGW on workload subnets; RFC-1918 return routes on Ingress / Egress VPCs).
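
The two-RT split, sketched in Terraform. Resource names, the variable names, and the workload CIDR are illustrative; the attachment IDs would come from the real TGW attachments:

```hcl
resource "aws_ec2_transit_gateway_route_table" "pre_inspection" {
  transit_gateway_id = aws_ec2_transit_gateway.this.id
}

resource "aws_ec2_transit_gateway_route_table" "post_inspection" {
  transit_gateway_id = aws_ec2_transit_gateway.this.id
}

# Pre-inspection: everything from VPC / DXGW attachments goes to the firewall.
resource "aws_ec2_transit_gateway_route" "to_firewall" {
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.pre_inspection.id
  destination_cidr_block         = "0.0.0.0/0"
  transit_gateway_attachment_id  = var.firewall_attachment_id # the Network Function attachment
}

# Post-inspection: return route from the firewall back to a workload VPC
# (one such route per VPC / on-prem CIDR).
resource "aws_ec2_transit_gateway_route" "to_workload_vpc" {
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.post_inspection.id
  destination_cidr_block         = "10.1.0.0/16" # illustrative workload CIDR
  transit_gateway_attachment_id  = var.workload_vpc_attachment_id
}

# The firewall attachment consults the post-inspection RT.
resource "aws_ec2_transit_gateway_route_table_association" "firewall" {
  transit_gateway_attachment_id  = var.firewall_attachment_id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.post_inspection.id
}
```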

This setup inspects east-west (VPC↔VPC, VM↔VPC), north-south (VPC↔on-prem via DXGW, VPC↔internet via dedicated egress VPC), and on-prem↔internet — all through a single firewall with one policy and one log stream.

FQDN-based egress filtering (Domain-list rule groups)

Beyond the TLS-ClientHello SNI matching documented for the per-VPC deployment above, Network Firewall's Domain-list stateful rule groups match on hostnames for both HTTPS (SNI) and HTTP (Host header). 2025-11-26 demo shape:

Rule group format: Domain list
Domain names:      .google.com
Source IPs:        192.168.0.0/19, 172.21.0.0/16
Protocols:         HTTP & HTTPS
Action:            Allow
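
The same console configuration expressed as the RulesSourceList structure that Network Firewall's CreateRuleGroup API takes. A sketch; the rule-group name and capacity are illustrative, and the source IPs are scoped via the HOME_NET rule variable:

```json
{
  "RuleGroupName": "fqdn-egress-allow",
  "Type": "STATEFUL",
  "Capacity": 100,
  "RuleGroup": {
    "RulesSource": {
      "RulesSourceList": {
        "Targets": [".google.com"],
        "TargetTypes": ["TLS_SNI", "HTTP_HOST"],
        "GeneratedRulesType": "ALLOWLIST"
      }
    },
    "RuleVariables": {
      "IPSets": {
        "HOME_NET": { "Definition": ["192.168.0.0/19", "172.21.0.0/16"] }
      }
    }
  }
}
```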

Hits / denies are logged to an alert CloudWatch log group; flow logs (every inspected 5-tuple) go to a separate group. The Generali 2026-03-23 case used SNI allow-listing at VPC egress (canonical concepts/egress-sni-filtering reference); the 2025-11-26 EVS case uses the Domain-list variant for centralised FQDN allow-listing from VMware VMs — same underlying matching primitive, different deployment scale.

Logging conventions

Two log streams, separately configurable destinations (S3 / CloudWatch Logs / Amazon Data Firehose):

  • Alert log — only rule-hit summaries. Low volume, high signal — feeds detection / IR workflows, evidentiary traffic for compliance.
  • Flow log — every inspected 5-tuple. High volume — feeds traffic-pattern forensics, dependency discovery, baseline drift detection.

Demo convention in the 2025-11-26 post: one log group per log type (/anfw-centralized/anfw01/alert, /anfw-centralized/anfw01/flow).
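
Each alert record is a JSON object wrapping a Suricata EVE-style event. A minimal parsing sketch; the record below is fabricated, and the field paths (event.tls.sni, event.alert.action) follow the documented alert-log format:

```python
import json

# Fabricated Network Firewall alert-log record (EVE-style JSON);
# hostname and IPs are made up for illustration.
record = json.loads("""
{"firewall_name": "anfw01",
 "event_timestamp": "1764153600",
 "event": {"event_type": "alert",
           "src_ip": "10.0.1.12", "src_port": 49812,
           "dest_ip": "203.0.113.7", "dest_port": 443,
           "app_proto": "tls",
           "alert": {"action": "allowed", "signature_id": 100001},
           "tls": {"sni": "api.foo.com"}}}
""")

event = record["event"]
hostname = event.get("tls", {}).get("sni")  # SNI from the ClientHello
verdict = event["alert"]["action"]          # e.g. "allowed"
print(f"{hostname} -> {verdict}")           # prints: api.foo.com -> allowed
```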
