
Network namespace benchmarking

Definition

A technique for running reproducible network-stack experiments on a single Linux host by combining network namespaces (ip netns), virtual Ethernet pairs (veth), and the network emulator qdisc (tc qdisc ... netem). The host stands in for both client and server, with a controllable synthetic link between them that can be shaped for delay, jitter, packet loss, or bandwidth constraints.

This avoids the noise and cost of real distributed benchmarks while preserving the real Linux kernel network stack — unlike a loopback-only test, traffic actually traverses the TCP, IP, and (virtual) driver paths with genuine softirq processing.

Canonical recipe

# 1. Create veth pair
ip link add veth0 type veth peer name veth1

# 2. Create isolated namespace and move one end into it
ip netns add db
ip link set veth1 netns db

# 3. Assign addresses on each side
ip addr add 10.0.0.10/24 dev veth0
ip netns exec db ip addr add 10.0.0.1/24 dev veth1

# 4. Bring interfaces up
ip link set veth0 up
ip netns exec db ip link set veth1 up
ip netns exec db ip link set lo up

# 5. Add latency + jitter on the host-side egress
#    (shapes traffic leaving veth0 only; repeat on veth1 inside
#    the namespace if symmetric delay is wanted)
tc qdisc add dev veth0 root netem delay 1ms 0.1ms distribution normal

Server runs in the db namespace, client in the root namespace; traffic crosses the veth pair with the configured delay.
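A quick sanity check, assuming the recipe above has been applied as-is: ping across the veth pair, confirm the added delay shows up in the RTT, and tear everything down afterwards.

```shell
# Verify connectivity and the configured delay (run as root).
# With netem only on veth0's egress, RTT gains ~1 ms; with
# symmetric shaping it gains ~2 ms.
ping -c 5 10.0.0.1

# Inspect the installed qdisc
tc qdisc show dev veth0

# Teardown: deleting the namespace removes veth1, which
# destroys the whole veth pair, so the link delete may no-op.
ip netns del db
ip link del veth0 2>/dev/null || true
```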

Why it matters for benchmarking

  • Isolates network effects — link shape is controllable and reproducible, unlike real-world Kubernetes / cloud noise.
  • Exposes kernel-level effects — softirq processing and NAPI polling still run, so effects such as softirq placement on hyperthread (HT) siblings are visible.
  • Allows CPU-placement experiments — all components pinned via taskset or cpuset while observing real network stack behaviour.
  • Single-host, reproducible — no cloud billing, no neighbour noise, no DNS, fits on a laptop.
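A minimal CPU-placement sketch of the third point; the tools and CPU numbers here are illustrative, not from the canonical reference:

```shell
# Pin an iperf3 server to CPU 2 inside the db namespace
ip netns exec db taskset -c 2 iperf3 -s -B 10.0.0.1 &

# Pin the client to CPU 3 in the root namespace and drive a
# 10-second test across the shaped veth link
taskset -c 3 iperf3 -c 10.0.0.1 -t 10

# Watch where network softirqs land while the test runs
watch -n1 'grep -E "NET_RX|NET_TX" /proc/softirqs'
```

Moving the pinned CPUs onto or off HT siblings, while holding the shaped link constant, is exactly the kind of placement experiment the single-host setup makes repeatable.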

Caveats

  • veth is a virtual interface pair; it has no real NIC driver, no DMA, and no interrupt moderation, and its segmentation offloads (TSO/GRO/GSO) are software emulations. Effects that depend on actual NIC hardware (driver polling, interrupt coalescing, RSS flow steering) are absent or replaced by synthetic equivalents.
  • netem delay adds queuing delay at the qdisc layer; on its own it neither limits bandwidth nor reproduces the congestion dynamics of a real long-distance path. Combine it with netem rate or a tbf qdisc when bandwidth shaping matters.
  • Results establish shapes of effects, not absolute magnitudes. Production numbers still need production-class testbeds.
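The second caveat can be addressed in two ways; the numbers below are illustrative, not from the canonical reference:

```shell
# Option A: netem's built-in rate limiter alongside the delay
tc qdisc replace dev veth0 root netem delay 1ms 0.1ms rate 1gbit

# Option B: a token bucket filter chained under netem, which
# gives separate control over rate, burst, and queue latency
tc qdisc replace dev veth0 root handle 1: netem delay 1ms 0.1ms
tc qdisc add dev veth0 parent 1:1 handle 10: tbf \
    rate 1gbit burst 32kbit latency 400ms
```

With both delay and a finite rate in place, TCP sees a bandwidth-delay product and its congestion behaviour becomes closer to a real link, though still not identical to one.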

Seen in

  • systems/pgbouncer — the workload benchmarked via this recipe in the canonical reference.