CONCEPT
Context propagation (OpenTelemetry)¶
Context propagation is the OpenTelemetry mechanism that carries trace/span identifiers across service boundaries — so a request that hops through many services produces a single narrative trace rather than per-service fragments.
From Fly.io's token-system retrospective:
"From the moment a request hits our API server through the moment tkdb responds to it, oTel context propagation gives us a single narrative about what's happening." (Source: sources/2025-03-27-flyio-operationalizing-macaroons.)
How it works (informal)¶
- The entry-point service starts a root span; the OpenTelemetry SDK injects the trace-id, span-id, and trace state into outgoing request headers (e.g., the W3C `traceparent` and `tracestate` headers).
- Downstream services extract those headers, continue the same trace, and attach their own spans.
- The observability backend (Honeycomb, Tempo, Datadog APM, etc.) stitches everything into one view keyed by trace-id.
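The inject/extract mechanics above can be sketched without the OpenTelemetry SDK. This is a minimal illustration of the W3C `traceparent` header format (`version-traceid-spanid-flags`), assuming hypothetical helper names `make_traceparent` and `extract`; the real SDK does this via its propagator API.

```python
import secrets

def make_traceparent(trace_id=None, parent_span_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    # Entry point mints a fresh trace-id; downstream hops reuse the incoming one.
    trace_id = trace_id or secrets.token_hex(16)   # 16 bytes -> 32 hex chars
    # Each hop puts its own new span-id in the header; parent_span_id would
    # become that new span's parent link inside the span record itself.
    span_id = secrets.token_hex(8)
    flags = "01" if sampled else "00"              # sampled bit rides along
    return f"00-{trace_id}-{span_id}-{flags}"

def extract(headers):
    """Parse trace-id / parent-span-id / sampled flag from incoming headers."""
    version, trace_id, span_id, flags = headers["traceparent"].split("-")
    return {"trace_id": trace_id, "parent_span_id": span_id,
            "sampled": flags == "01"}

# Entry-point service starts the trace...
outgoing = {"traceparent": make_traceparent()}
ctx = extract(outgoing)
# ...and a downstream service continues it under the same trace-id.
next_hop = {"traceparent": make_traceparent(ctx["trace_id"], ctx["parent_span_id"])}
assert extract(next_hop)["trace_id"] == ctx["trace_id"]  # one narrative trace
```

The key property is that only the trace-id is constant across hops; every service contributes a new span-id while the backend links them via parent references.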
Architectural value¶
- Cross-service debugging is tractable. A single query by trace-id returns the whole request chain instead of N separate log searches.
- Latency attribution is precise. The full call tree with span durations shows exactly which hop dominated.
- Sampling decisions propagate. The head sampler's decision rides along, so samples are consistent end-to-end.
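The first two points above reduce to a single grouping operation on the backend. A sketch, with invented span records and names (`trace_view`, `tkdb.check`) purely for illustration:

```python
# Hypothetical span records, roughly as an observability backend stores them.
spans = [
    {"trace_id": "abc", "name": "api-server",     "duration_ms": 480},
    {"trace_id": "abc", "name": "tkdb.check",     "duration_ms": 45},
    {"trace_id": "abc", "name": "postgres.query", "duration_ms": 12},
    {"trace_id": "zzz", "name": "other-request",  "duration_ms": 30},
]

def trace_view(spans, trace_id):
    """One query by trace-id returns the whole request chain."""
    return [s for s in spans if s["trace_id"] == trace_id]

chain = trace_view(spans, "abc")
# Latency attribution: span durations show which hop dominated.
dominant = max(chain, key=lambda s: s["duration_ms"])
```

Without propagation, the same question requires N separate per-service log searches with no shared key to join them on.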
Fly.io's endorsement¶
Ptacek's retraction of earlier skepticism:
"I was a skeptic about oTel. ... Once, I was an '80% of the value of tracing, we can get from logs and metrics' person. But I was wrong."
The tkdb-specific win: token-system errors are rare, and when they do occur, context propagation makes them fall straight out of the trace. "The tkdb code is remarkably stable and there hasn't been an incident intervention with our token system in over a year."
Seen in¶
- sources/2025-03-27-flyio-operationalizing-macaroons — canonical wiki instance; OTel context propagation named as load-bearing for the token system's operational posture.