# Continuous profiling
Continuous profiling is the telemetry signal class that tells you which function, on which line, is burning CPU or memory in production — continuously, at overhead low enough to keep it enabled on every host, all the time.
It sits alongside the three established observability signals:
| Signal | Question it answers |
|---|---|
| Metrics | Is something wrong? (CPU usage is high.) |
| Logs | What happened? (This request was slow.) |
| Traces | Where is it slow? (Service X added 200 ms.) |
| Profiles | Why is it slow? (Function foo() line 42 burns 150 ms.) |
Quote from Grafana Labs' Pyroscope 2.0 launch post:
> "It's the only signal that tells you why your code is slow or expensive, not just that it is."
(Source: sources/2026-04-22-grafana-introducing-pyroscope-2-0)
## The three payoffs
The Pyroscope 2.0 post frames continuous profiling's business case as three concrete payoffs:
- Infrastructure cost reduction via targeted optimisation. When you can see exactly which functions burn CPU/memory across every service in prod, you optimise hot paths instead of overprovisioning. Replaces "add hardware" with "fix the regex that's re-compiling on every request."
- Faster incident root cause via profile diffing. Compare a profile from before and after a regression, diff them, see exactly which code paths changed. No staging repro, no ad-hoc logging, no guessing.
- Code-level latency attribution complementing traces. A trace shows which service added 200 ms; a profile shows which function/line inside that service. Especially useful for p99 tail-latency spikes that are hard to reproduce.
(Source: sources/2026-04-22-grafana-introducing-pyroscope-2-0)
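The second payoff, profile diffing, can be sketched concretely. A minimal illustration (not Pyroscope's actual diff engine): represent each profile as a mapping from call stacks to sample counts, subtract the "before" counts from the "after" counts, and sort by delta. The stack names and sample numbers below are made up for illustration.

```python
# Toy profile diff: a profile is simplified to a dict mapping call stacks
# (tuples of frames, leaf frame last) to sample counts. Real tools diff
# full flame-graph trees, but the principle is the same.

def diff_profiles(before, after):
    """Return (stack, sample delta) pairs, largest regression first."""
    deltas = {}
    for stack in set(before) | set(after):
        delta = after.get(stack, 0) - before.get(stack, 0)
        if delta:
            deltas[stack] = delta
    return sorted(deltas.items(), key=lambda kv: -kv[1])

# Hypothetical samples from before and after a regression shipped.
before = {
    ("main", "handle_request", "parse_json"): 120,
    ("main", "handle_request", "query_db"): 300,
}
after = {
    ("main", "handle_request", "parse_json"): 110,
    ("main", "handle_request", "compile_regex"): 450,  # new hot path
    ("main", "handle_request", "query_db"): 310,
}

for stack, delta in diff_profiles(before, after):
    print(" > ".join(stack), f"{delta:+d} samples")
```

The top entry of the diff points straight at the code path that appeared between the two profiles — no staging repro needed, exactly as the payoff describes.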
## Distinct storage/query traits
Continuous-profiling data differs from metrics and logs in three architecturally significant ways:
- Large payloads. A single profile is a full call-tree / flame-graph structure, orders of magnitude larger than a scalar metric sample.
- Heavy symbolic information. Function names, file paths, line numbers — the symbolic side-table often dominates storage.
- Bursty query patterns. Profiles are queried heavily during incidents, barely at all between them.
These drive bespoke storage design — a straight port of a metrics DB won't perform. Pyroscope 2.0 is the current OSS example of a DB built specifically for these traits.
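One standard response to the symbolic-side-table problem is string interning: store each function name or file path once in a shared table, and have every stack frame reference it by a small integer id. A minimal sketch under that assumption (the class and field names here are illustrative, not Pyroscope's schema):

```python
# Toy symbol table: each distinct string (function name, file path) is
# stored once; call stacks hold small integer ids instead of repeating
# long strings across thousands of samples.

class SymbolTable:
    def __init__(self):
        self._ids = {}      # string -> id
        self._strings = []  # id -> string

    def intern(self, s):
        if s not in self._ids:
            self._ids[s] = len(self._strings)
            self._strings.append(s)
        return self._ids[s]

    def lookup(self, sid):
        return self._strings[sid]

syms = SymbolTable()
# Two stacks sharing two frames: the shared frames share ids, so the
# strings "main" and "handle_request" are stored exactly once.
stack_a = [syms.intern(f) for f in ("main", "handle_request", "parse_json")]
stack_b = [syms.intern(f) for f in ("main", "handle_request", "query_db")]
```

Because production services emit the same few thousand frames millions of times, deduplicating the strings is where most of the storage win comes from — which is why a metrics DB, built around small numeric points, is a poor starting architecture.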
## Related signals
- OTLP Profiles signal — the OpenTelemetry Profiles signal reached alpha status concurrently with the Pyroscope 2.0 launch, legitimising profiling as a first-class observability signal with a standard wire format.
- Instrumented vs. sampling profile — the two main profile-collection techniques; sampling is the one compatible with "on by default, all the time" because overhead is tunable.
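The reason sampling is compatible with always-on operation can be shown with a toy sampler: a background loop wakes at a fixed interval, snapshots the target thread's stack, and increments a counter — overhead is set by the interval, not by how often the profiled code runs. This is a sketch of the technique only; production samplers use timer signals or eBPF, not a Python thread.

```python
# Toy sampling profiler: periodically snapshot the main thread's stack
# via sys._current_frames() and tally stacks. The sleep interval is the
# overhead knob -- sample less often, pay less.
import collections
import sys
import threading
import time

def sample(main_tid, counts, stop, interval=0.005):
    while not stop.is_set():
        frame = sys._current_frames().get(main_tid)
        stack = []
        while frame is not None:
            stack.append(frame.f_code.co_name)
            frame = frame.f_back
        counts[tuple(reversed(stack))] += 1
        time.sleep(interval)  # sampling interval = overhead knob

def busy():  # hypothetical workload to profile
    total = 0
    for i in range(2_000_000):
        total += i * i
    return total

counts = collections.Counter()
stop = threading.Event()
t = threading.Thread(target=sample,
                     args=(threading.main_thread().ident, counts, stop))
t.start()
busy()
stop.set()
t.join()
```

After the run, most sampled stacks end in `busy` — the hot function surfaces statistically, without touching the profiled code. An instrumented profiler would instead hook every call and return, which is why it cannot stay on by default.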
## Prior art
- Meta's Strobelight is the canonical hyperscaler-scale instance of default continuous profiling — every host profiled, always, as a "flight recorder." Pyroscope 2.0 aims to bring that posture to teams that aren't Meta.
## Related
- systems/pyroscope-2 — the OSS continuous-profiling DB rearchitected for scale.
- systems/strobelight — fleet-wide continuous profiling at Meta.
- concepts/otlp-profiles-signal
- concepts/diff-profile-regression-analysis
- concepts/symbolic-information-payload
- concepts/observability
- concepts/instrumented-vs-sampling-profile
- patterns/default-continuous-profiling