CONCEPT
Real User Measurement¶
Real User Measurement (RUM) is performance data collected from real end-user browsers / devices in production traffic, as opposed to synthetic measurements from dedicated test infrastructure. A small piece of JavaScript (or native instrumentation) executes in the user's environment, performs one or more measurements, and reports back.
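A minimal sketch of that report-back loop, with the payload-building logic factored out so it is testable anywhere. The endpoint path, field names, and `buildRumPayload` helper are illustrative assumptions, not any particular vendor's schema; the entry shape follows the browser's Navigation Timing attributes.

```javascript
// Hypothetical RUM beacon sketch. In a real page the entry would come from
// performance.getEntriesByType("navigation") and be sent via
// navigator.sendBeacon; here only the payload derivation is shown.
function buildRumPayload(navEntry, page) {
  // Derive the classic phase timings from a NavigationTiming-shaped entry.
  return {
    page,
    dns: navEntry.domainLookupEnd - navEntry.domainLookupStart,
    connect: navEntry.connectEnd - navEntry.connectStart,
    ttfb: navEntry.responseStart - navEntry.requestStart,
    total: navEntry.loadEventEnd - navEntry.startTime,
  };
}

// Browser-side usage (illustrative, not executed here):
// const [nav] = performance.getEntriesByType("navigation");
// navigator.sendBeacon("/rum", JSON.stringify(buildRumPayload(nav, location.pathname)));
```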
Cloudflare's framing (Source: sources/2026-04-17-cloudflare-agents-week-network-performance-update):
"This gives us performance data directly from the user's browser, in their real-world network conditions. It's the difference between testing a car's top speed on a track versus watching how people actually drive on the highway."
Why RUM instead of synthetic¶
- Real network paths. A synthetic probe from a cloud VM in us-east-1 doesn't traverse residential ISP last-mile, flaky home Wi-Fi, mobile carrier NATs, or the specific BGP paths a user's packets take. RUM does.
- Real client populations. User devices, OS versions, browser versions, and their actual TLS-stack behaviours are reflected in the data; a synthetic probe uses one fixed client stack.
- Real time-of-day / congestion distributions. RUM samples follow the traffic curve; synthetic probes miss peak-hour congestion in the right proportion.
- Cost-scale advantage. Millions of free samples a day at the cost of a JS bundle and some beacon bandwidth; synthetic-equivalent coverage across tens of thousands of ASNs would be infrastructure-prohibitive.
What RUM is bad at¶
- Cold-start / less-visited paths. A route nobody takes generates zero RUM samples. Synthetic is the only way to cover those.
- Controlled comparison. You can't hold conditions fixed: the user's network may have changed between sample A and sample B. Synthetic probes are reproducible; RUM is statistical.
- Measurement-bias gremlins. Which users get the measurement, when they get it, what else is in their browser all shape the distribution. See concepts/benchmark-methodology-bias.
- Privacy surface. RUM telemetry sees the user's IP, User-Agent, and timing, so collection must be scoped and retention handled carefully.
Comparative RUM: measuring competitors from your users' browsers¶
Cloudflare's method (Source: sources/2026-04-17-cloudflare-agents-week-network-performance-update):
"When users encounter a Cloudflare-branded error page, a small speed test runs silently in the background. The browser retrieves small files from multiple providers including Cloudflare, Amazon CloudFront, Google, Fastly, and Akamai and records how long each exchange takes."
This is a comparative-RUM pattern (see patterns/comparative-rum-benchmarking). The user's own browser becomes the probe, and all five CDNs see the same client on the same network at the same moment — which controls for the exact variables that make single-provider benchmarks unfair. The RUM probe runs only on error pages (so the test never delays a successful request).
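The probe loop can be sketched as below. The provider map, function names, and the use of an injected fetch function are assumptions for illustration, not Cloudflare's actual implementation; the essential property is that every provider is measured from the same browser in the same pass.

```javascript
// Comparative-RUM sketch: time a small-object fetch against each provider
// from the same client, so network, device, and moment are held constant
// across providers. fetchFn is injected to keep the loop testable.
async function timeProviders(providers, fetchFn) {
  const results = {};
  for (const [name, url] of Object.entries(providers)) {
    const start = Date.now();
    await fetchFn(url); // retrieve a small test object from this provider
    results[name] = Date.now() - start; // elapsed ms for this exchange
  }
  return results;
}
```

Sequential (rather than parallel) fetches avoid the providers contending with each other for the client's bandwidth during the test.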
RUM cohort biases to design around¶
- Which pages carry the probe? Cloudflare's probe runs on error pages, so the cohort skews toward users whose requests already failed (or were blocked). That's not the same as the median successful-request user. Trade-off: non-intrusive (no instrumentation on the success path) vs. potentially biased cohort.
- Which browsers execute it? Script-blocked / bot traffic / privacy-extension users drop out. The surviving population is browser-normal-JS users.
- When does it run? Cloudflare's implementation runs on error-page load — implicitly weighting sample frequency by error-page volume, which correlates with traffic volume but not perfectly.
- Warm-cache vs. cold-connection state. If the user's browser has already connected to one of the five providers in this session (connection pooling, HTTP/2 stream reuse), that provider's measured connection time will be biased low. Careful implementations force a fresh connection or measure from a cold state.
- Geographical coverage follows your own footprint. Measurements of "who's fastest in ASN X" are only as good as your own traffic coverage of ASN X.
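On the warm-cache point above: one common mitigation is to make each probe URL unique so the browser cannot serve it from cache. A sketch, assuming a query parameter named `cachebust` (the name is arbitrary); note this defeats caching but not connection reuse — forcing a genuinely cold TCP/TLS connection additionally needs something like a per-test hostname or a server that closes the connection.

```javascript
// Append a unique nonce so each sample fetches a distinct URL and cannot
// be answered from the browser's HTTP cache.
function cacheBustUrl(baseUrl, nonce) {
  const u = new URL(baseUrl);
  u.searchParams.set("cachebust", nonce);
  return u.toString();
}
```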
Aggregation¶
Raw per-sample RUM is noisy; aggregating to a trimean per (provider, network, day) is Cloudflare's pipeline shape. See concepts/trimean-aggregation for the choice of robust-central-tendency statistic and the tails it discards.
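The trimean itself is a one-liner over the quartiles: (Q1 + 2·median + Q3) / 4. A sketch, using linear-interpolation quantiles (one of several quartile conventions; the source doesn't specify which Cloudflare uses):

```javascript
// Linear-interpolation quantile over an already-sorted array.
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos), hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// Tukey's trimean: weights the median double, discounts the tails,
// so a handful of pathological samples barely move the result.
function trimean(samples) {
  const s = [...samples].sort((a, b) => a - b);
  return (quantile(s, 0.25) + 2 * quantile(s, 0.5) + quantile(s, 0.75)) / 4;
}
```

Running this per (provider, network, day) bucket gives one robust number per cell; note that `trimean([1, 2, 3, 4, 1000])` equals `trimean([1, 2, 3, 4, 5])` — the outlier in the upper tail is discarded entirely.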
Related telemetry concepts¶
- concepts/customer-driven-metrics — metrics that track customer behaviour (QPS, clients, data volume); RUM is the browser-side analog for client-perceived latency telemetry.
- concepts/monitoring-paradox — the probe is itself subject to the condition it's measuring (a failing browser emits fewer samples).
Seen in¶
- sources/2026-04-17-cloudflare-agents-week-network-performance-update — canonical wiki instance. Cloudflare's comparative RUM on Cloudflare-branded error pages feeds the trimean-of-connection-time ranking across five CDN providers (Cloudflare, Amazon CloudFront, Google, Fastly, Akamai) across APNIC's top-1,000-by-population networks; produces the "60% fastest in top networks, 6 ms faster than next-fastest" headline numbers.