PATTERN
BPF filter for API event source¶
Intent¶
Manufacture a control-plane event that a kernel / SDK / library doesn't expose natively, by attaching a BPF filter to the data-plane stream and treating every filter match as the missing event.
Motivation¶
Many kernel subsystems expose rich configuration RPCs but zero "thing happened" event streams. The classical work-around is polling ("did the thing happen yet?"), which is expensive and coarse. A BPF packet filter is the opposite: near-zero cost on the non-matching stream, a signal only for the thing you care about, and event-timed rather than poll-timed latency.
Shape¶
- Identify the packet that corresponds 1:1 with the event you want. It must be:
- Observable on a network interface / socket your process can attach to.
- Uniquely identifiable by a cheap filter — ideally a small number of header-field or payload-byte comparisons.
- Write a BPF filter that matches exactly that packet class. Classical BPF via libpcap is enough for simple packet-type filtering; eBPF is required for richer stateful filters or for high-volume streams where matches must be dropped or aggregated in the kernel.
- Attach the filter to a packet socket (simple case) or a kernel hook (XDP, TC, socket filter, raw tracepoint — see systems/ebpf).
- Every filter match is the event. Feed matches into the control plane as if they arrived from a native subscription API.
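The attach step above can be sketched on Linux with a hand-assembled classic-BPF program and `SO_ATTACH_FILTER` (a sketch, not Fly.io's code: the bytecode assumes the socket delivers packets starting at the IPv4 header with no link-layer framing, the `SO_ATTACH_FILTER` constant is hard-coded because Python's `socket` module doesn't always export it, and `tcpdump -dd '<expr>'` is the authoritative way to generate bytecode for a given link type):

```python
import ctypes
import socket
import struct

# Linux setsockopt() number for SO_ATTACH_FILTER; defined here because it is
# not exported by the Python socket module in all versions.
SO_ATTACH_FILTER = 26

def pack_bpf(insns):
    """Pack (code, jt, jf, k) classic-BPF tuples into a sock_filter array."""
    return b"".join(struct.pack("HBBI", *i) for i in insns)

# Hand-assembled classic BPF for `udp and dst port 51820 and udp[8] = 1`,
# over packets that begin at the IPv4 header.
INSNS = [
    (0x30, 0, 0, 9),        # ldb [9]            ; IP protocol byte
    (0x15, 0, 8, 17),       # jeq #17 (UDP)      ; else jump to drop
    (0x28, 0, 0, 6),        # ldh [6]            ; flags + fragment offset
    (0x45, 6, 0, 0x1FFF),   # jset #0x1fff       ; non-first fragment -> drop
    (0xB1, 0, 0, 0),        # ldxb 4*([0]&0xf)   ; X = IP header length
    (0x48, 0, 0, 2),        # ldh [x+2]          ; UDP destination port
    (0x15, 0, 3, 51820),    # jeq #51820         ; else jump to drop
    (0x50, 0, 0, 8),        # ldb [x+8]          ; first UDP payload byte
    (0x15, 0, 1, 1),        # jeq #1             ; handshake initiation?
    (0x06, 0, 0, 0x40000),  # ret #0x40000       ; accept (snap length)
    (0x06, 0, 0, 0),        # ret #0             ; drop
]

def attach(sock: socket.socket, insns) -> None:
    """Attach the program; every subsequent recv() on the socket is a match."""
    blob = ctypes.create_string_buffer(pack_bpf(insns))
    # struct sock_fprog { unsigned short len; struct sock_filter *filter; }
    fprog = struct.pack("HL", len(insns), ctypes.addressof(blob))
    sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog)
```

Attaching requires an open packet socket (and usually root); the kernel copies the program at `setsockopt` time, so the buffer need not outlive the call.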
Canonical instance — Fly.io JIT WireGuard¶
"The Linux kernel's interface for configuring WireGuard is Netlink ... Note that there's no API call to subscribe for 'incoming connection attempt' events. That's OK! We can just make our own events. WireGuard connection requests are packets, and they're easily identifiable, so we can efficiently snatch them with a BPF filter and a packet socket." (Source: sources/2024-03-12-flyio-jit-wireguard-peers)
The filter combines three primitives:
- `udp` — transport protocol.
- `dst port 51820` — WireGuard's default UDP port.
- `udp[8] = 1` — handshake-initiation type byte. WireGuard's message-type field is the first cleartext byte of the UDP payload; in pcap syntax `udp[8]` counts from the start of the UDP header, so byte 8 is the first payload byte, and type 1 is a handshake initiation.
Every match is an "incoming WireGuard connection attempt" event, feeding Fly.io's JIT peer provisioning path.
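The same predicate can be mirrored in pure Python over a raw IPv4 packet — a sketch for illustration, not Fly.io's code; `make_udp_packet` is a test helper invented here, and checksums are left zeroed:

```python
import struct

WG_PORT = 51820
WG_HANDSHAKE_INITIATION = 1   # WireGuard message-type byte

def matches(pkt: bytes) -> bool:
    """Pure-Python equivalent of `udp and dst port 51820 and udp[8] = 1`,
    applied to a raw IPv4 packet with no link-layer header."""
    if len(pkt) < 20 or pkt[0] >> 4 != 4:
        return False                        # not IPv4
    ihl = (pkt[0] & 0x0F) * 4               # IP header length in bytes
    if pkt[9] != 17:                        # IP protocol 17 = UDP
        return False
    if len(pkt) < ihl + 9:                  # need UDP header byte 8 present
        return False
    dst_port = struct.unpack_from("!H", pkt, ihl + 2)[0]
    # udp[8] counts from the UDP header, i.e. the first payload byte.
    return dst_port == WG_PORT and pkt[ihl + 8] == WG_HANDSHAKE_INITIATION

def make_udp_packet(dst_port: int, payload: bytes) -> bytes:
    """Minimal IPv4+UDP packet for exercising the predicate (zero checksums)."""
    udp = struct.pack("!HHHH", 12345, dst_port, 8 + len(payload), 0) + payload
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(udp), 0, 0, 64, 17, 0,
                     b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
    return ip + udp
```

A handshake-initiation packet (first payload byte 1) to port 51820 matches; any other port or message type does not.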
WebSocket variant¶
Fly.io defaults customers to WireGuard-over-WebSockets because of end-to-end UDP difficulties. Raw UDP packets aren't on the wire in that case, but they are on the wire inside the Fly-owned WireSockets daemon: "We own the daemon code for that, and can just hook the packet receive function to snarf WireGuard packets." (Source: sources/2024-03-12-flyio-jit-wireguard-peers)
Different hook point (userspace function rather than kernel BPF), same event semantics.
Cost profile¶
- On non-matching packets: marginal — the kernel evaluates the filter once per packet; the program is JIT-compiled and reduces to a handful of cheap arithmetic comparisons.
- On matching packets: one user-space wakeup per match, plus whatever work the consumer does.
- On absence of events: zero.
Compare to polling Netlink for "any new peer activity?" — polling cost scales with the polling frequency rather than the event rate, so on sparse-event streams it is expensive relative to what it delivers.
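The polling comparison is simple arithmetic; the numbers below are assumptions for illustration, not figures from the source:

```python
# Polling wakes up on a fixed interval; the filter wakes up per event.
SECONDS_PER_DAY = 86_400
poll_interval_s = 1.0          # assumed 1 s polling loop
events_per_day = 50            # assumed sparse control-plane event rate

poll_wakeups = int(SECONDS_PER_DAY / poll_interval_s)   # fixed, regardless of events
event_wakeups = events_per_day                          # one wakeup per match
overhead_ratio = poll_wakeups / event_wakeups           # polling does ~1728x the work
```

The ratio grows without bound as the stream gets sparser, which is the cost argument for event-timed filters on rare events.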
Anti-patterns / failure modes¶
- Filter byte offsets that depend on L2/L3 framing. `udp[N]` in libpcap syntax is relative to the start of the UDP header, and libpcap compiles in the link-layer and variable IP-header offset arithmetic for you; hand-written BPF bytecode attached to a raw socket must do that arithmetic explicitly, because the packet it sees includes the IP header, whose length varies.
- Load-balanced receive queues without consistent hashing. If the NIC splits the packet stream across multiple receive queues and only one has the filter attached, events are missed. Attach per-queue or at a higher-level hook.
- Encrypted identification. If the event you care about is identifiable only post-decryption, a cleartext-header filter can't do the job alone — at best filter on envelope, do the identification in user-space. (Fly.io's Noise-unwrap step is the user-space follow-on to their BPF filter.)
- Rate pathologies. Retry-heavy protocols (like WireGuard handshakes) generate multiple matches per logical connection attempt. The control plane needs a rate-limited cache so N matches ⇒ at most 1 backend call per logical event.
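The last failure mode calls for a small deduplication window in front of the control plane. A hypothetical sketch (class name, TTL, and key scheme are invented here, not from the source) that collapses retries for the same peer into one backend call:

```python
import time

class DedupWindow:
    """Suppress repeated filter matches for the same logical event.

    N handshake retries keyed by the same peer within `ttl` seconds
    trigger at most one control-plane call.
    """

    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock            # injectable for testing
        self._last_fired = {}

    def should_fire(self, key) -> bool:
        now = self.clock()
        last = self._last_fired.get(key)
        if last is not None and now - last < self.ttl:
            return False              # suppressed retry
        self._last_fired[key] = now   # record this logical event
        return True
```

In a real deployment the dict would also need expiry/eviction so it does not grow with the number of distinct keys; that bookkeeping is omitted for brevity.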
Contrast with [in-kernel filtering](<../concepts/in-kernel-filtering.md>)¶
Both use BPF in kernel context, but the goal is opposite:
- In-kernel filtering drops high-volume events in the kernel so user-space only sees the important few. Example: Datadog FIM dropping ~94% of security-relevant syscalls.
- BPF filter for API event source extracts rare events from a stream that otherwise produces no signal at all. Example: Fly.io sniffing handshake initiations out of ordinary WireGuard traffic.
Both are variants of "use BPF to bridge user-space intent and kernel-context visibility"; the design axis differs.
Seen in¶
- sources/2024-03-12-flyio-jit-wireguard-peers — canonical wiki instance; BPF filter + WebSocket-daemon hook as the event source for JIT peer provisioning.
Related¶
- systems/ebpf.
- systems/wireguard.
- systems/linux-netlink — the API whose missing event surface this pattern fills in.
- concepts/packet-sniffing-as-event-source — the concept.
- concepts/jit-peer-provisioning — canonical consumer.
- concepts/in-kernel-filtering — adjacent, contrasting.
- patterns/jit-provisioning-on-first-packet.