# eBPF profiling
eBPF profiling is the practice of collecting profile samples (CPU stacks, memory allocations, latency buckets, off-CPU time, custom events) from production hosts by attaching small verifier-gated eBPF programs to kernel hooks — perf events, kprobes / uprobes, tracepoints, scheduler hooks — and streaming the samples through ring-buffer maps to a user-space consumer.
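As a minimal concrete illustration (a standard bpftrace one-liner, not taken from the source), the whole pipeline — perf-event hook, in-kernel aggregation, user-space readout — fits on one line:

```bash
# Sample kernel stacks on every CPU at 99 Hz; counts are aggregated
# in a kernel-side map and printed to the terminal on Ctrl-C.
sudo bpftrace -e 'profile:hz:99 { @samples[kstack] = count(); }'
```

99 Hz (rather than 100) is the conventional sampling rate, chosen to avoid lockstep with periodic kernel timers.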
It is distinct from traditional profiling mechanisms along three axes:
| Axis | perf / ptrace / DTrace | eBPF profiling |
|---|---|---|
| Safety | custom kernel modules risk the kernel | verifier-gated sandbox |
| Overhead | context-switch heavy | in-kernel aggregation, ring-buffer out |
| Customisation | ship-and-hope or wait for a distro | write a small program, attach it |
The load-bearing property for profiling specifically is the ability to run custom actions at sample time: an eBPF program can read thread-local storage, look up cgroup IDs, check PMU counter values, or filter and enrich samples before they cross into user space; see concepts/runtime-metadata-attach and concepts/in-kernel-filtering.
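A sketch of what "filter and enrich at sample time" looks like as a bpftrace script (the process name is hypothetical; the predicate, builtins, and map keys are standard bpftrace features, not from the source):

```bpftrace
// Sample only threads whose command name matches (filtering happens
// in the kernel, so non-matching samples never cross to user space),
// and enrich each surviving sample with pid and comm as map keys.
profile:hz:49
/comm == "webserver"/
{
  @stacks[pid, comm, ustack] = count();
}
```

Because both the predicate and the aggregation run inside the kernel, the per-sample cost stays flat no matter how noisy the host is; only the aggregated map crosses the kernel boundary.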
## Why hyperscalers adopted it
Quote from Meta's Strobelight post (2025-01-21):
"Strobelight's profilers are often, but not exclusively, built using eBPF... eBPF allows the safe injection of custom code into the kernel, which enables very low overhead collection of different types of data and unlocks so many possibilities in the observability space that it's hard to imagine how Strobelight would work without it."
(Source: sources/2025-03-07-meta-strobelight-a-profiling-service-built-on-open-source-technology)
eBPF is the kernel primitive that makes fleet-wide continuous profiling economically feasible — the cost per profiler-host-hour is low enough that patterns/default-continuous-profiling can run on every production machine without perturbing the workloads being profiled.
## Canonical instances
- systems/strobelight — Meta's fleet-wide profiling orchestrator. 42+ profilers, many eBPF-backed. Covers CPU cycles, memory (via systems/jemalloc), off-CPU time, service request latency, AI/GPU workloads, and event-based profilers for Python / Java / Erlang.
- Ad-hoc profilers via systems/bpftrace — engineers ship new eBPF-based profilers as bpftrace scripts; see concepts/ad-hoc-profiler and patterns/ad-hoc-bpftrace-profiler.
- systems/pyroscope-2 — the open-source ecosystem sibling, feeding the OTLP Profiles signal.
## Seen in
- sources/2025-03-07-meta-strobelight-a-profiling-service-built-on-open-source-technology — canonical Meta instance; eBPF-for-profiling joins eBPF-for-security (Datadog Workload Protection) and eBPF-for-networking (Cloudflare DDoS / Lambda data plane) as the third major production use-case family of eBPF on this wiki.
## Related
- systems/ebpf — the kernel primitive.
- systems/strobelight — the canonical hyperscale instance.
- systems/bpftrace — the ad-hoc-profiler DSL.
- concepts/stack-trace-sampling-profiling — the generic sampling technique eBPF accelerates.
- concepts/flamegraph-profiling — the dominant output format.
- concepts/continuous-profiling — the signal class eBPF makes affordable.
- concepts/runtime-metadata-attach — the "actions at sample time" capability eBPF uniquely provides.
- patterns/profiler-orchestrator — the platform pattern around eBPF-based profilers at scale.
- patterns/default-continuous-profiling — the operational posture eBPF economics unlock.