PATTERN
eBPF map for local attribution¶
A userspace control-plane daemon writes identity state into an eBPF map; the kernel-resident data-plane program (attached at a tracepoint, kprobe, XDP, or LSM hook) reads the map in-kernel to resolve the subject (socket, packet, task) to an owning workload without a userspace round-trip.
Why this is the right shape for local attribution¶
- No RPC on the hot path. The BPF program reads the map directly in kernel context; no syscall back to userspace is needed to look up the owner.
- Low tail latency. A BPF_MAP_TYPE_HASH lookup is bounded in time and doesn't block.
- Clean control-plane / data-plane separation. The daemon is the only writer; the BPF program is the reader. Any mismatch between what the daemon knows and what the BPF program can attribute is observable.
- Small surface. The map shape is trivial: IP → workload-id (or (IP, port) → workload-id for NAT'd sockets, etc.).
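In libbpf C the map shape really is a few lines. A minimal kernel-side sketch, with illustrative names (`workload_ids`, `struct workload_id` are assumptions, not any real system's identifiers), shown for shape only since it needs a BPF toolchain and kernel to build and load:

```c
// Kernel-side sketch (libbpf BTF-defined map conventions).
// Illustrative names; requires bpf headers and a loader to actually run.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct workload_id {
    __u64 id;   /* owning workload identifier */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 4096);
    __type(key, __u32);                 /* local IPv4, network byte order */
    __type(value, struct workload_id);
} workload_ids SEC(".maps");

/* Data plane: resolve an IP to its workload without leaving the kernel. */
static __always_inline __u64 resolve(__u32 local_ip4)
{
    struct workload_id *w = bpf_map_lookup_elem(&workload_ids, &local_ip4);
    return w ? w->id : 0;   /* 0 = unattributed; misses are observable */
}
```

The miss path returning a sentinel rather than dropping the event is what makes the control-plane/data-plane mismatch observable.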
Canonical example — Netflix IPMan + FlowExporter¶
On Titus hosts, many container workloads run on one host with different identities. IPManAgent writes the per-container IP → workload-ID mapping into an eBPF map at container start and removes entries at stop. FlowExporter's BPF program reads the map on every TCP tracepoint event to resolve the socket's local IP to the owning workload identity, before emitting the flow record.
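The daemon side of that split is just map writes at container lifecycle events. A hedged userspace sketch using libbpf's map syscall wrappers (the function names and signature here are an assumption about how such a daemon would be structured, not IPManAgent's actual code; error handling and map pinning elided):

```c
// Userspace control plane (sketch): write identity state at container
// start, remove it at stop. Uses libbpf's bpf_map_update_elem /
// bpf_map_delete_elem syscall wrappers on an already-opened map fd.
#include <bpf/bpf.h>
#include <stdint.h>

/* map_fd: fd of the IP → workload-id hash map (e.g. via a pinned path). */
int on_container_start(int map_fd, uint32_t local_ip4, uint64_t workload_id)
{
    return bpf_map_update_elem(map_fd, &local_ip4, &workload_id, BPF_ANY);
}

int on_container_stop(int map_fd, uint32_t local_ip4)
{
    return bpf_map_delete_elem(map_fd, &local_ip4);
}
```

The daemon is the only writer, so there is no cross-writer coordination to get wrong; the kernel serializes individual element updates.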
A second map — written by Titus on each intercepted connect syscall — handles the NAT64-free IPv6-to-IPv4 case where many containers share a single host IPv4. Its key is (local IPv4 address, local port), not IP alone.
Generalising the pattern¶
The same shape applies beyond flow logs:
- Cgroup → rate-limit state (per-cgroup egress limits; BPF socket-send hooks).
- PID → task metadata (pid-keyed BPF maps in scheduler observability; compare the Netflix run-queue-latency monitor).
- IP set → allow/deny (dynamic firewalls).
What matters is the control-plane / data-plane split: a daemon that knows the current state writes it into a map; the BPF program reads the map at the kernel hook.
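Swapping the key and value types is the whole customization. For instance, the per-cgroup rate-limit variant might look like the following kernel-side sketch (map name, value fields, and sizes are illustrative assumptions, not a real system):

```c
// Same split, different key/value: cgroup id → per-cgroup egress budget.
// Kernel-side sketch (libbpf conventions); needs a BPF toolchain to build.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct egress_budget {
    __u64 bytes_per_sec;   /* written by the control-plane daemon */
    __u64 consumed;        /* updated in-kernel at the socket-send hook */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1024);
    __type(key, __u64);                  /* cgroup id, as returned by
                                            bpf_get_current_cgroup_id() */
    __type(value, struct egress_budget);
} egress_limits SEC(".maps");
```

The daemon still owns the policy field (`bytes_per_sec`); only the in-kernel counter is written on the hot path.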
Related¶
- systems/ebpf · systems/netflix-ipman · systems/netflix-flowexporter · systems/netflix-titus
- concepts/workload-identity
- patterns/sidecar-ebpf-flow-exporter
Seen in¶
- sources/2025-04-08-netflix-how-netflix-accurately-attributes-ebpf-flow-logs — canonical wiki instance (IPMan + NAT64 port-keyed map).
- sources/2024-09-11-netflix-noisy-neighbor-detection-with-ebpf — sibling instance at the scheduler layer (PID-keyed + cgroup-keyed maps in BPF for run-queue-latency attribution).