
CLOUDFLARE 2026-04-17 Tier 1

Agents Week: network performance update

Summary

Cloudflare's Agents Week 2026 performance update reports that between September 2025 and December 2025 Cloudflare moved from being the fastest provider in 40 % of the top-1,000 networks by population to 60 %: +40 countries, +261 networks, and +54 US ASNs where it is now #1. Averaged across December, Cloudflare's connection time was 6 ms faster than the next-fastest competitor. The post explains both the measurement methodology (APNIC-top-1,000-by-population denominator, Real User Measurement from Cloudflare-branded error pages, trimean of connection time against Cloudflare, Amazon CloudFront, Google, Fastly, Akamai) and the two orthogonal improvement axes driving the gain: (1) new points of presence closer to users (Constantine/Algeria, Malang/Indonesia, Wroclaw/Poland — Wroclaw free-tier RTT dropped 19 → 12 ms, −40 %; Malang enterprise 39 → 37 ms, −5 %), and (2) software efficiency in the connection-handling hot path: HTTP/3 adoption, congestion-window tuning, and CPU/memory wins in SSL/TLS termination, traffic management, and the core proxy. The post's toll-booth analogy frames the two axes cleanly — adding booths (PoPs) vs. making existing booths faster (software).
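The RUM methodology can be sketched in miniature: probe each provider in parallel and time the handshake. This is an illustrative sketch only, not Cloudflare's actual test — the real measurement is a silent JavaScript fetch of small files from inside a browser, and the hostnames and raw-TCP-connect timing here are stand-in assumptions.

```python
import socket
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative hostnames, NOT the actual test endpoints the post uses.
PROVIDERS = {
    "cloudflare": "www.cloudflare.com",
    "cloudfront": "aws.amazon.com",
    "google": "www.google.com",
    "fastly": "www.fastly.com",
    "akamai": "www.akamai.com",
}

def connect_time_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a single TCP connect — a rough proxy for 'connection time'."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def measure_all() -> dict[str, float]:
    # Probe all providers in parallel, as the browser-side test does,
    # so one slow provider doesn't inflate the others' numbers.
    with ThreadPoolExecutor(max_workers=len(PROVIDERS)) as pool:
        futures = {name: pool.submit(connect_time_ms, host)
                   for name, host in PROVIDERS.items()}
    return {name: f.result() for name, f in futures.items()}
```

A real RUM harness would also repeat the probes, record per-sample durations, and aggregate (the post uses the trimean) rather than trusting a single connect.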

Key takeaways

  • Headline shift: 40 % → 60 % fastest in top-1,000 networks in ~3 months (Sept Birthday Week → Dec 2025). "Between September and December, we became the fastest in 40 additional countries and in 261 additional networks. We saw the biggest increase in the United States, where we are the fastest in 54 more ASNs."
  • Denominator: top-1,000 networks globally by estimated population per APNIC. Population-weighted so "real users in nearly every geography"; not throughput-weighted or traffic-weighted.
  • Metric: connection time — the time for an end-user device to complete the handshake with the endpoint. Chosen as "closest to what users actually perceive as 'Internet speed'"; it captures real-world congestion and distance but is more precise to measure than throughput.
  • Aggregate: trimean — a weighted average of Q1, median, Q3. "Smooths out noise and outliers, giving us a cleaner signal about the typical user experience rather than an extreme case."
  • Data source: Real User Measurement from Cloudflare-branded error pages. A silent JS speed test retrieves small files from Cloudflare, Amazon CloudFront, Google, Fastly, Akamai in parallel and records each exchange's duration — "the difference between testing a car's top speed on a track versus watching how people actually drive on the highway."
  • December 2025 average gap: 6 ms faster than next-fastest provider. "The line representing Cloudflare's latency, or connection time, is consistently lower throughout December than the next fastest provider."
  • Improvement axis 1 — new points of presence. Three recent deployments cited:
      • Wroclaw, Poland — free-tier users 19 ms → 12 ms RTT (−40 %)
      • Malang, Indonesia — Enterprise traffic 39 ms → 37 ms RTT (−5 %)
      • Constantine, Algeria — new location (no specific RTT reported)
    Cloudflare's frame: "adding new locations alone doesn't fully explain how we went from being #1 in 40 % of networks to #1 in 60 % of networks."
  • Improvement axis 2 — software efficiency on the connection-handling hot path. The post enumerates the load-bearing code sites: "software that handles fundamental actions like establishing connections, SSL/TLS termination, traffic management, and the core proxy that all requests flow through." Named wins: protocol upgrades (HTTP/3), congestion-window management, CPU + memory efficiency across the fleet. All of this is the Pingora / FL2 proxy world from adjacent posts, though this post doesn't name the frameworks.
  • Toll-booth analogy: "Lines can build up at toll booths if there aren't enough toll booths, or if the booths themselves aren't efficient at processing cars going through them. We've been constantly working to improve not only how our toll booths process incoming cars (the software improvements in connection handling), but also at improving how we send cars between available booths so that we can keep lines short and latency low." Clean two-axis framing that separates capacity/distance (more booths, closer booths) from throughput-per-booth (software efficiency).
  • Stated posture: 60 % is not the ceiling. "There are still networks where we're number two, sometimes by the smallest of margins. We see those gaps clearly, and we're working on them. We're committed to being the fastest provider across every network in the world."
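The trimean aggregate in the takeaways above is simple to compute — a minimal sketch using Python's statistics.quantiles (default exclusive method):

```python
import statistics

def trimean(samples: list[float]) -> float:
    """Trimean: weighted average of the quartiles, (Q1 + 2*median + Q3) / 4."""
    q1, median, q3 = statistics.quantiles(samples, n=4)
    return (q1 + 2 * median + q3) / 4

# The median gets double weight; extreme values barely move the result:
trimean([1, 2, 3, 4, 5, 6, 7])     # → 4.0
trimean([1, 2, 3, 4, 5, 6, 1000])  # → 4.0 (the outlier is discounted)
```

That outlier-resistance is exactly the "smooths out noise and outliers" property the post cites — and, as the caveats below note, also why the trimean can hide a bad tail.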

Numbers extracted

Metric                         | Sept 2025 | Dec 2025 | Delta
Fastest in top-1,000 networks  | 40 %      | 60 %     | +20 pp
Additional countries #1        | —         | —        | +40
Additional networks #1         | —         | —        | +261
Additional US ASNs #1          | —         | —        | +54
Avg gap to next-fastest (Dec)  | —         | —        | 6 ms faster
Wroclaw free-tier RTT          | 19 ms     | 12 ms    | −7 ms / −40 %
Malang Enterprise RTT          | 39 ms     | 37 ms    | −2 ms / −5 %

Systems / concepts / patterns extracted

  • Real User Measurement (RUM) via a silent JS test on Cloudflare-branded error pages, vs. synthetic "track" testing
  • Connection time (handshake completion) as the user-perceived latency metric
  • Trimean (weighted average of Q1, median, Q3) as a noise-robust aggregate
  • Population-weighted denominator: APNIC top-1,000 networks by estimated population
  • Two-axis performance framing: PoP densification (more, closer booths) vs. connection-handling software efficiency (faster booths)

Caveats

  • No absolute latency numbers beyond the PoP anecdotes — the headline "60 % fastest" is a relative ranking (who won each network on trimean connection time), not an absolute latency distribution. The real user experience depends on per-region absolute latency; "fastest of the five tested providers" could still be slow in absolute terms.
  • RUM bias — Cloudflare-branded error pages. The measurement runs in browsers that loaded a Cloudflare error page, so the population is Cloudflare-customer users hitting an error path. Cloudflare has written elsewhere (linked from the post) about why they prefer this over synthetic tests, but the cohort is not identically distributed with typical user browsing. The error-page cohort may over-represent users whose requests already hit a failure mode.
  • Top-5 provider comparison only — Cloudflare, Amazon CloudFront, Google, Fastly, Akamai. Other regional / specialised networks are not in the comparison.
  • Methodology choice of trimean — the trimean specifically discounts extreme values. A provider whose tail latency (p95/p99) is much worse but whose Q1/median/Q3 are competitive will look equal on trimean; the wiki tail-latency-at-scale page argues tails matter a lot at scale, so trimean-based ranking and tail-based ranking are not interchangeable.
  • No breakdown of the 40 %→60 % gain between the two axes. The post explicitly says software improvements are a major driver alongside new PoPs but doesn't quantify how many of the +261 networks flipped because of each axis. "Adding new locations alone doesn't fully explain" is qualitative.
  • Software wins named, not quantified. HTTP/3, congestion-window tuning, CPU/memory efficiency in SSL/TLS / traffic management / core proxy are listed as the code sites, but the post doesn't publish before/after CPU cost-per-connection or µs/handshake numbers — those live in separate hot-path performance posts (e.g. trie-hard).
  • Scope filter note: borderline on the wiki's architectural-depth bar — this is a performance-update / marketing-adjacent post. Ingested because the methodology surface (RUM, trimean, connection-time-as-metric, population-weighted APNIC denominator) and the two-axis framing (PoP densification vs. connection-handling software) are reusable systems-design primitives that appear repeatedly in CDN / edge literature. The numbers themselves are the less interesting part.
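The trimean-vs-tail caveat can be demonstrated with made-up numbers: two latency samples with identical quartiles (hence identical trimean) but very different p99.

```python
import statistics

def trimean(xs: list[float]) -> float:
    # Weighted average of the quartiles: (Q1 + 2*median + Q3) / 4.
    q1, med, q3 = statistics.quantiles(xs, n=4)
    return (q1 + 2 * med + q3) / 4

def p99(xs: list[float]) -> float:
    # 99th percentile via the same exclusive quantile method.
    return statistics.quantiles(xs, n=100)[98]

# Illustrative, fabricated distributions — same body, very different tail.
steady = [10.0] * 99 + [12.0]
spiky  = [10.0] * 99 + [500.0]

assert trimean(steady) == trimean(spiky) == 10.0  # trimean can't tell them apart
# ...but the tails differ by more than an order of magnitude:
print(p99(steady), p99(spiky))
```

A trimean-based ranking would score these two "providers" as equal, which is exactly why trimean-based and tail-based rankings are not interchangeable.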
