CONCEPT
Runtime metadata attach¶
Runtime metadata attach is the practice of attaching dynamic context (request IDs, service endpoint names, latency buckets, user IDs, tenant IDs, …) to profile samples at sample time by reading it from a source the running process maintains — typically thread-local storage — and recording it alongside the stack.
It turns a profile from "here is where the CPU went" into "here is where the CPU went for p99-latency requests on endpoint /search only". The result is request-context-aware profiling without a post-hoc join to trace data.
The Strobelight "Strobemeta" canonical form¶
"Strobemeta is another mechanism, which utilizes thread local storage, to attach bits of dynamic metadata at runtime to call stacks that we gather in the event profiler (and others). This is one of the biggest advantages of building profilers using eBPF: complex and customized actions taken at sample time. Collected Strobemeta is used to attribute call stacks to specific service endpoints, or request latency metrics, or request identifiers. Again, this allows engineers and tools to do more complex filtering to focus the vast amounts of data that Strobelight profilers produce."
— Meta Engineering, 2025-01-21 Strobelight post (Source: sources/2025-03-07-meta-strobelight-a-profiling-service-built-on-open-source-technology)
Why eBPF is the enabling substrate¶
Traditional perf-event profilers capture (pc, stack) and not much else. eBPF programs can execute custom code at sample time: read TLS, dereference task_struct fields, look up cgroup IDs, format request identifiers. That custom-action-at-sample-time capability is the load-bearing property for metadata attach:
- The process writes request metadata into a known TLS slot at request entry.
- The eBPF program reads that slot at sample time via the kernel's user-memory-read helpers (e.g. bpf_probe_read_user).
- The resulting record is (pc, stack, request_id, endpoint, latency_bucket), ready for query-time filtering.
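The three steps above can be sketched in user-space Python as a simulation. The real mechanism is an eBPF program reading a process's TLS slot from kernel context at sample time; here both sides run in one process, and the slot is a `threading.local` — the function names and record shape are illustrative assumptions, not Strobelight's API.

```python
import threading

# Simulated TLS slot. In the real system, the service writes request
# metadata into a known thread-local slot at request entry, and the eBPF
# program reads that slot from kernel context when a sample fires.
_tls = threading.local()

def on_request_entry(request_id, endpoint):
    """Service-side write: stash request context in the TLS slot."""
    _tls.meta = {"request_id": request_id, "endpoint": endpoint}

def take_sample(stack):
    """Profiler-side read: attach whatever is in the TLS slot to the
    sampled stack, producing a record that is already self-joined."""
    meta = getattr(_tls, "meta", {})
    return {"stack": stack, **meta}

on_request_entry("req-42", "/search")
sample = take_sample(["main", "handle", "rank_results"])
# sample now carries both the stack and the request context:
# {'stack': [...], 'request_id': 'req-42', 'endpoint': '/search'}
```

The key property the sketch preserves: the profiler never asks the application for context at query time; the context is captured in the same instant as the stack.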
See concepts/ebpf-profiling for the parent pattern.
Use-case examples (from the Meta post)¶
- Per-endpoint profiling. Attribute CPU spend to the service endpoint handling each request. Answers "which endpoint burns most CPU?" without instrumenting every caller.
- Tail-latency profiling. Filter stacks to latency_bucket == p99 and ask "what is this service doing on its slowest 1% of requests?"
- Request-tracing / correlation. Ship the request ID with the sample so a profile can be joined post-hoc to trace or log data for the same request.
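Given samples enriched at capture time, the first two use cases reduce to plain filters and group-bys over the records. A minimal sketch, where the sample rows and bucket labels are made-up illustrations:

```python
from collections import Counter

# Hypothetical enriched samples, as produced by sample-time attach.
samples = [
    {"stack": ("main", "handle", "parse"), "endpoint": "/search", "latency_bucket": "p50"},
    {"stack": ("main", "handle", "rank"),  "endpoint": "/search", "latency_bucket": "p99"},
    {"stack": ("main", "handle", "rank"),  "endpoint": "/search", "latency_bucket": "p99"},
    {"stack": ("main", "serve", "encode"), "endpoint": "/feed",   "latency_bucket": "p50"},
]

# Per-endpoint profiling: attribute samples (a proxy for CPU spend)
# to the endpoint that was handling each request.
by_endpoint = Counter(s["endpoint"] for s in samples)

# Tail-latency profiling: keep only stacks sampled during the
# slowest requests, then see which frames dominate.
p99_stacks = Counter(s["stack"] for s in samples if s["latency_bucket"] == "p99")
```

No instrumentation of callers and no join against trace data is needed; the questions are answerable from the profile store alone.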
Why this matters architecturally¶
Before runtime metadata attach, answering those questions required three steps:
1. Capture a distributed trace and a profile.
2. Ship both to a common store.
3. Post-hoc join on request ID / timestamp.
Metadata attach collapses steps (1)–(3) into a single sample-time write. The resulting record is already self-joined.
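For contrast, here is a sketch of the post-hoc pipeline being collapsed: profile samples carry only a timestamp and a stack, trace spans carry the request context, and correlating them takes a time-window join. All rows and field names are illustrative assumptions.

```python
# Pre-attach world: two separate data sets that must be joined after the fact.
profile_rows = [{"ts": 105, "stack": ("main", "handle")},
                {"ts": 250, "stack": ("main", "serve")}]
trace_spans = [{"start": 100, "end": 200,
                "request_id": "req-42", "endpoint": "/search"}]

def join(sample):
    # Attribute the sample to whichever trace span was active at its timestamp.
    for span in trace_spans:
        if span["start"] <= sample["ts"] <= span["end"]:
            return {**sample, "request_id": span["request_id"],
                    "endpoint": span["endpoint"]}
    return sample  # unmatched samples keep only their stack

joined = [join(s) for s in profile_rows]
# With runtime metadata attach, every sample arrives already in the
# joined shape, and unmatched samples like the second one cannot occur
# for any request that wrote its TLS slot.
```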
Contrast with stack tag enrichment¶
| Property | Runtime metadata attach | concepts/stack-tag-enrichment |
|---|---|---|
| When applied | Sample time | Query time |
| Adds new data | Yes (request ID, endpoint) | No |
| Reshapes existing data | No | Yes |
| Requires kernel-hook capability | Yes (eBPF + TLS read) | No |
They are complementary: attach the context at capture, reshape the view at query.
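The complementarity in the table can be shown on a single record: attach supplied new fields (endpoint, request ID) at capture time, while enrichment only reshapes frames the record already contains into coarser tags at query time. The sample and the frame-to-tag mapping below are illustrative assumptions.

```python
# One record, two stages. Runtime metadata attach added the request
# context at capture time; stack tag enrichment reshapes the view later.
sample = {
    "stack": ("main", "tcp_sendmsg", "rank_results"),
    "endpoint": "/search",
    "request_id": "req-42",
}

# Query-time reshaping: map existing frames to coarser tags. Note this
# derives everything from data already in the record; no new fields.
FRAME_TAGS = {"tcp_sendmsg": "network", "rank_results": "ranking"}
tags = sorted({FRAME_TAGS[f] for f in sample["stack"] if f in FRAME_TAGS})

# The final view combines attached context (new data, sample time) with
# enrichment-derived tags (reshaped data, query time).
view = {"endpoint": sample["endpoint"], "tags": tags}
```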
Seen in¶
- sources/2025-03-07-meta-strobelight-a-profiling-service-built-on-open-source-technology — canonical Meta instance; Strobemeta is the TLS-based runtime metadata pipeline used with the Strobelight event profiler. Makes endpoint + latency + request-ID filtering possible at profile-query time.
Related¶
- systems/strobelight — the canonical production instance.
- systems/ebpf — the enabling kernel substrate.
- concepts/ebpf-profiling — the parent class.
- concepts/stack-trace-sampling-profiling — the capture primitive this concept augments.
- concepts/stack-tag-enrichment — the query-time sibling.
- patterns/profiler-orchestrator — the system shape that carries captured metadata through to the user.