PATTERN
Pre-create all network slots at boot¶
Shape: instead of creating network namespaces, tap/veth/bridge devices, and tunnels on demand during per-request handling, create all N slots at worker initialization — even if most will sit unused most of the time. Pay the expensive setup cost once at boot, at constant cost independent of load, and serve every subsequent request from the pre-created pool.
This is the network-substrate specialization of the broader warm-pool / zero-work create path pattern, with concepts/constant-work-pattern as its first-principles justification.
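The shape above can be reduced to a minimal sketch. Everything here is illustrative — the `Worker`/`NetworkSlot` names and the no-op `_create_slot` stand in for the real kernel work (namespace, tap, veth, bridge, tunnel); the point is that the expensive path runs only in `__init__`, and the per-request path is a pool pop:

```python
from dataclasses import dataclass

@dataclass
class NetworkSlot:
    """One pre-built network slot (stand-in for netns + tap + veth + bridge)."""
    slot_id: int

class Worker:
    """Hypothetical worker that pays all slot-creation cost once at boot."""

    def __init__(self, max_slots: int):
        # Constant-cost boot: build every slot up front, independent of load.
        self._free = [self._create_slot(i) for i in range(max_slots)]

    def _create_slot(self, slot_id: int) -> NetworkSlot:
        # A real implementation would do the expensive kernel device work
        # here. Modeled as a cheap constructor in this sketch.
        return NetworkSlot(slot_id)

    def acquire(self) -> NetworkSlot:
        # Per-request path does zero creation work: pop a pre-built slot.
        return self._free.pop()

    def release(self, slot: NetworkSlot) -> None:
        # Slot returns to the pool for the next micro-VM; nothing is torn down.
        self._free.append(slot)

worker = Worker(max_slots=4000)
slot = worker.acquire()   # O(1); no kernel device creation on this path
worker.release(slot)
```

The invariant worth noticing: request handling never touches the create path, so per-request latency is independent of both N and load.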
Canonical instance: AWS Lambda¶
Lambda's SnapStart topology originally created a per-slot isolated network (namespace + tap + veth + bridge + tunnel) on demand during invoke. At 4,000-slot density with constant micro-VM turnover, the on-demand cost had two failure modes:
- Linux device-creation cost grows with N — the kernel traverses existing device lists for each creation; the N+1-th is more expensive than the N-th. With constant VM turnover, the overhead "never stopped accumulating."
- The RTNL lock serialized parallel creation attempts. Setup that should have taken seconds stretched into minutes.
Fix: pre-create all 4,000 networks before the worker ever starts a request, at a cost of ~3 minutes of worker boot time. (Source: sources/2026-04-22-allthingsdistributed-invisible-engineering-behind-lambdas-network.)
Ravi Nagayach's explicit framing: "absorbing the cost once at boot rather than paying it continuously during operation."
The load-bearing number: Lambda workers cycle infrequently compared to micro-VMs. Because the substrate (worker) reboots at a much lower rate than the consumer (micro-VM slots), the amortization math works — you pay the 3-minute cost once per worker-lifetime, serving thousands of VMs across that lifetime.
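The amortization math can be made concrete. The boot cost (~3 minutes) and slot count (4,000) are from the source; the number of micro-VMs served per worker lifetime is an illustrative assumption, since the source only says workers cycle far less often than micro-VMs:

```python
boot_cost_s = 180.0              # ~3 minutes of worker boot time (source)
slots_per_worker = 4000          # slots pre-created per worker (source)
vms_per_lifetime = 100_000       # ASSUMED: micro-VMs served per worker lifetime

# Cost attributed to each slot when paid once at boot:
per_slot_ms = boot_cost_s / slots_per_worker * 1000      # 45.0 ms per slot

# Cost attributed to each micro-VM across the worker lifetime:
per_vm_ms = boot_cost_s / vms_per_lifetime * 1000        # 1.8 ms per VM

print(per_slot_ms, per_vm_ms)
```

The second number is the one that matters: as long as worker lifetime spans many slot reuses, the one-time boot cost rounds to noise per VM served.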
When it applies¶
- Creation cost is non-trivial (kernel device operations, IPC setup, filesystem mount) and ideally scales with N.
- Maximum N is bounded at boot (workers know they will host ≤ 4,000 VMs).
- Substrate churn ≪ consumer churn (worker lifetime ≫ VM lifetime).
When it doesn't¶
- N is unbounded — you can't pre-allocate ∞.
- The substrate reboots frequently — amortization collapses if the worker reboots before the pool is consumed.
- Memory cost of unused slots is prohibitive — for very large per-slot overhead, pooling becomes wasteful.
Orthogonal optimizations usually co-ship¶
The Lambda disclosure shows the pattern is almost never a single change. Alongside pre-creation:
- RTNL-lock-friendly ordering: pool namespaces first, create veth pairs inside the namespace before moving them to root, batch eBPF attachments.
- Stateless rather than stateful NAT — see concepts/stateless-nat-via-ebpf.
- Per-slot iptables rules in the slot's own namespace — see patterns/per-slot-iptables-in-namespace.
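The RTNL-lock-friendly ordering from the first bullet can be sketched as a dry run that only emits the command sequence. The ip(8) command forms are illustrative, not Lambda's actual tooling; the point is the ordering — do the expensive device creation inside the slot's own namespace, and touch the root namespace only for the cheap final move:

```python
def rtnl_friendly_setup(slot_id: int) -> list[str]:
    """Emit an illustrative ip(8) command sequence for one slot, ordered to
    minimize work done while contending on the root namespace (sketch only)."""
    ns = f"slot{slot_id}"
    return [
        # 1. Pool namespaces first; namespace creation doesn't walk the
        #    root namespace's device list.
        f"ip netns add {ns}",
        # 2. Create the veth pair *inside* the slot namespace, so device
        #    creation scales with that namespace's (tiny) device count.
        f"ip netns exec {ns} ip link add veth-host type veth peer name veth-slot",
        # 3. Only the cheap move of one end into the root namespace
        #    (here: the namespace of PID 1) touches the root device list.
        f"ip netns exec {ns} ip link set veth-host netns 1",
    ]

for cmd in rtnl_friendly_setup(0):
    print(cmd)
```

Batched eBPF attachment, the third part of the ordering, is omitted here; it follows the same principle of minimizing serialized root-namespace work.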
Together these produced: 200 → 4,000 slots per worker (20×), 3-min boot cost, no CPU drain during invokes, −1% fleet-wide CPU.
Seen in¶
- sources/2026-04-22-allthingsdistributed-invisible-engineering-behind-lambdas-network — canonical wiki instance of the pattern at AWS Lambda scale, explicitly tied to concepts/constant-work-pattern (Colm MacCárthaigh's Builders' Library article cited).