NAT router for static-IP ingress¶
NAT router for static-IP ingress is the composite AWS architecture for giving an external SaaS (or any external caller with an outbound firewall allowlist requirement) access to a private-VPC resource whose IP address may change — typically a database that can fail over.
The shape combines four primitives:
- Network Load Balancer in public subnets, with static Elastic IPs — the stable IP the external caller allowlists.
- EC2 NAT-router instances as NLB targets — each runs iptables NAT rules that forward traffic from the NLB to the private-subnet target.
- Connection pool / failover proxy (e.g. RDS Proxy) in front of the backend, providing IAM auth and transparent failover.
- Private-subnet backend (database or internal service) with a dynamic IP — Aurora writer, internal API endpoint.
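The NAT hop on each EC2 router reduces to a DNAT/SNAT rule pair. A minimal sketch that renders those rules as strings — the proxy IP, listener port, and PostgreSQL port below are hypothetical placeholders, not values from the source:

```python
# Sketch: render the iptables rules a NAT router in this pattern would apply.
# The proxy address (10.0.2.15) and ports are hypothetical placeholders.

def nat_rules(proxy_ip: str, listen_port: int = 443, dest_port: int = 5432) -> list[str]:
    """Return iptables commands that DNAT traffic arriving from the NLB
    to the RDS Proxy, and masquerade the return path so replies flow
    back through the router rather than bypassing it."""
    return [
        # Rewrite the destination of inbound NLB traffic to the proxy.
        f"iptables -t nat -A PREROUTING -p tcp --dport {listen_port} "
        f"-j DNAT --to-destination {proxy_ip}:{dest_port}",
        # Source-NAT so the proxy replies to the router, not to the original client.
        f"iptables -t nat -A POSTROUTING -p tcp -d {proxy_ip} "
        f"--dport {dest_port} -j MASQUERADE",
    ]

for rule in nat_rules("10.0.2.15"):
    print(rule)
```

On a real router these commands would run at boot and whenever the downstream address changes; here they are only rendered, so the logic is inspectable without root privileges.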
External SaaS ──TLS──▶ NLB (static EIPs, public subnet)
│
▼
EC2 routers (iptables NAT)
│
▼
RDS Proxy (private subnet)
│
▼
Aurora writer (private IP; may change on failover)
Why each tier exists¶
- NLB with static EIPs provides the stable external address the SaaS firewall encodes. Without it, every downstream IP change breaks the allowlist.
- EC2 NAT routers insulate the external IP from backend IP churn. When Aurora fails over, its private IP changes; iptables DNAT rules on the routers point at the new address (refreshed via the connection-pool proxy below), but the external-facing Elastic IP stays put.
- RDS Proxy adds the connection-pooling + IAM-auth + automatic-failover layer on top of the routers; burst traffic from CDC streaming doesn't exhaust Aurora's native connection budget.
- Private-subnet backend is never reachable from the public internet — the only path is through the chained security groups.
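The refresh step above hinges on one comparison: does the current DNAT target still match what the proxy endpoint's DNS name resolves to? A sketch of that check, with the wiring (cron job, endpoint name) assumed rather than taken from the source:

```python
# Sketch: decide whether a router's DNAT target is stale. The endpoint
# name and the periodic-job wiring are assumptions; the pattern only
# requires that the routers' rules track the backend's current address.
import socket

def resolve_ipv4(hostname: str) -> str:
    """Resolve a DNS name (e.g. the RDS Proxy endpoint) to one IPv4 address."""
    return socket.getaddrinfo(hostname, None, socket.AF_INET)[0][4][0]

def needs_refresh(current_dnat_ip: str, resolved_ip: str) -> bool:
    """True when the installed DNAT rule points at a stale address."""
    return current_dnat_ip != resolved_ip

# A periodic job would resolve the proxy endpoint and, on change, rewrite
# the DNAT rule -- while the NLB's Elastic IPs never move.
print(needs_refresh("10.0.2.15", resolve_ipv4("localhost")))
```

The key property is the asymmetry: only the router-local rule is rewritten on failover; nothing the external SaaS has allowlisted ever changes.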
Security-group chain¶
Defense-in-depth is explicit:
- NLB SG: inbound TCP/443 from the SaaS vendor's CIDR only.
- Router SG: inbound from NLB's SG only.
- RDS Proxy SG: inbound from Router SG only.
- Aurora SG: inbound from RDS Proxy SG only.
No inner tier is reachable from the public internet, even if a single hop's rules are misconfigured.
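The chain can be expressed as data: each tier's ingress names exactly one upstream source. A sketch in the dict shape that boto3's `authorize_security_group_ingress` accepts — the SG IDs, CIDR, and ports below are hypothetical placeholders:

```python
# Sketch: the SG chain as ingress permissions. IDs, CIDR, and ports are
# placeholders; only the chaining structure is taken from the pattern.

SAAS_CIDR = "203.0.113.0/24"  # placeholder for the SaaS vendor's range

def ingress_from_sg(source_sg_id: str, port: int) -> dict:
    """Allow TCP on `port` only from one other security group."""
    return {
        "IpProtocol": "tcp", "FromPort": port, "ToPort": port,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }

chain = {
    # Only the NLB admits an external range at all.
    "nlb-sg":    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                  "IpRanges": [{"CidrIp": SAAS_CIDR}]},
    "router-sg": ingress_from_sg("sg-nlb", 443),
    "proxy-sg":  ingress_from_sg("sg-router", 5432),
    "aurora-sg": ingress_from_sg("sg-proxy", 5432),
}
# No tier's ingress contains 0.0.0.0/0; each hop references one upstream SG.
print(sorted(chain))
```

Referencing the upstream SG by ID rather than by CIDR is what makes the chain self-healing: router instances can be replaced and re-IP'd without touching any downstream rule.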
Trade-offs vs alternatives¶
- vs Direct Connect — this pattern is faster to deploy and cheaper at low/moderate bandwidth. DX wins at sustained high bandwidth with fixed long-term cost.
- vs VPN — this pattern supports higher sustained throughput and avoids the tunnel management overhead, at the cost of public ingress exposure (mitigated by the SG chain).
- vs AWS PrivateLink / VPC peering — neither works when the external caller is a managed SaaS outside the customer's AWS accounts.
Operational burden¶
The EC2 NAT routers are the customer's responsibility — the post doesn't address:
- Patching / AMI lifecycle for the router fleet.
- HA — multiple routers in multiple AZs behind the NLB, but no disclosed details on health-check thresholds.
- Monitoring — iptables counters, conntrack table size, NAT port exhaustion under high connection rates.
- Scaling when the ingress rate grows.
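The conntrack concern is the most mechanical to monitor: NAT silently drops new connections once the table fills. A sketch of the health check — on a real router the inputs would come from `/proc/sys/net/netfilter/nf_conntrack_count` and `nf_conntrack_max`; here they are parameters (and the 80% threshold is an assumed default, not from the source):

```python
# Sketch: conntrack-pressure check for the router fleet. Values are
# passed in so the logic is testable; a real agent would read them from
# /proc/sys/net/netfilter/nf_conntrack_count and nf_conntrack_max.

def conntrack_pressure(count: int, maximum: int) -> float:
    """Fraction of the conntrack table in use; near 1.0 means NAT is
    about to start dropping new connections."""
    return count / maximum

def should_alarm(count: int, maximum: int, threshold: float = 0.8) -> bool:
    """Alert before the table fills (threshold is an assumed default)."""
    return conntrack_pressure(count, maximum) >= threshold

print(should_alarm(200_000, 262_144))  # healthy headroom
print(should_alarm(250_000, 262_144))  # nearing exhaustion
```

Under high connection rates the same idea applies to NAT port exhaustion: track ports in use per (protocol, destination) tuple against the ~64k ephemeral-port ceiling per source address.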
This is the structural trade-off of the pattern: two managed endpoints (Infor's SaaS and Aurora) with a self-managed NAT tier in the middle.
Canonical production instance¶
sources/2026-04-21-aws-oldcastle-infor-aurora-quicksight-real-time-analytics — Oldcastle APG uses this exact composition to bridge Infor Cloud ERP's Data Fabric Stream Pipelines (NDJSON CDC over HTTPS) into Aurora PostgreSQL for operational analytics. 50+ dashboards; 100+ concurrent users; "static Elastic IPs remain constant" across Aurora failovers.