PATTERN
PoP densification¶
Problem¶
A global edge / CDN / DNS / DDoS network wins on latency by being close to users — specifically, close in Internet topology to the last-mile network each user is on. But users are spread across roughly 200 countries and territories and tens of thousands of ASNs, and the latency you can achieve is floored by the physical RTT to the nearest PoP you operate. Adding PoPs is the infrastructure-axis lever on user-perceived latency.
Pattern¶
Continuously deploy new points of presence into regions / metros / countries that today lack nearby presence, measured by the RTT improvement for users whose nearest-PoP shifts to the new site. Pair each deployment with local peering / Internet-exchange membership so the new PoP is actually topologically near the nearby users, not just geographically near.
Mechanism¶
- Identify underserved regions via comparative RUM data — places where trimean connection time from real users is high relative to the fleet average, or where a competitor consistently wins the ranking.
- Site selection: commercial data-centre / co-lo space near Internet exchanges or major ISP core facilities so local peering is viable.
- Deploy the standard fleet stack: CDN / proxy / DNS / DDoS mitigation. In an anycast-first architecture, the new PoP advertises the same IPs as every other PoP — BGP path selection automatically routes nearby users to it once it's announcing (see concepts/anycast).
- Peer locally with the major ISPs and the local Internet exchange. PoP proximity without local peering still routes users through a distant transit provider; the RTT win only materialises once local peering is in place.
- Measure the before/after RTT for users whose nearest-PoP shifted to the new site. Publish the anecdote.
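The before/after measurement step above can be sketched as a small cohort analysis. This is an illustrative sketch, not a real Cloudflare API: the function names and RTT samples are invented, and median stands in for whatever robust statistic (e.g. trimean) the fleet actually uses.

```python
from statistics import median

def cohort_delta(before_ms: list[float], after_ms: list[float]) -> tuple[float, float]:
    """Median RTT shift for a user cohort whose nearest PoP changed.

    Returns (absolute delta in ms, relative delta). Negative = improvement.
    """
    b, a = median(before_ms), median(after_ms)
    return a - b, (a - b) / b

# Hypothetical RUM samples (ms) from users in the new PoP's metro,
# collected before and after the site started announcing.
before_ms = [18.0, 19.5, 21.0, 19.0]
after_ms = [11.5, 12.0, 13.0, 12.5]

delta_ms, delta_pct = cohort_delta(before_ms, after_ms)
print(f"Δ = {delta_ms:+.1f} ms ({delta_pct:+.0%})")
```

The key design point is restricting the cohort to users whose nearest-PoP actually shifted — fleet-wide averages would dilute the signal from one new site to near zero.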
Shape of the wins¶
From Cloudflare's 2026-04-17 performance post (Source: sources/2026-04-17-cloudflare-agents-week-network-performance-update):
| New PoP | User cohort | Before RTT | After RTT | Δ |
|---|---|---|---|---|
| Wroclaw, Poland | Free-tier users | 19 ms | 12 ms | −7 ms / −40 % |
| Malang, Indonesia | Enterprise traffic | 39 ms | 37 ms | −2 ms / −5 % |
| Constantine, Algeria | (no RTT disclosed) | — | — | — |
Notes:
- Wroclaw's −40 % is a big improvement because the previous nearest PoP was far (presumably Frankfurt or Warsaw); adding local presence captures a large geographic share.
- Malang's −5 % is small because there was already reasonable presence in Jakarta / Singapore; the new PoP is a finer-grained improvement.
- General rule: diminishing returns. The first PoP in a country is a big latency step; the 10th PoP is a smaller one. The size of each step depends on how far away the previous nearest PoP was.
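The diminishing-returns note falls out of simple physics: light in fibre travels at roughly two-thirds of c, so distance sets a hard RTT floor. A back-of-envelope sketch (distances are illustrative, not the actual Wroclaw/Malang geography):

```python
# Light in fibre covers roughly 200 km per millisecond (~2/3 of c).
C_FIBRE_KM_PER_MS = 200

def rtt_floor_ms(distance_km: float) -> float:
    """Minimum round-trip time to a PoP distance_km away, ignoring queuing/serialisation."""
    return 2 * distance_km / C_FIBRE_KM_PER_MS

# First in-country PoP: nearest site drops from ~1,500 km to ~50 km away.
first_pop_gain = rtt_floor_ms(1500) - rtt_floor_ms(50)
# Tenth PoP: nearest site drops from ~300 km to ~50 km.
tenth_pop_gain = rtt_floor_ms(300) - rtt_floor_ms(50)

print(first_pop_gain, tenth_pop_gain)  # 14.5 ms vs 2.5 ms
```

Same capex per site, very different RTT win — which is why comparative RUM data, not a map, should drive site selection.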
Two-axis framing with software efficiency¶
Cloudflare's own framing: PoP densification is only one of two orthogonal axes on latency. The other is software efficiency in the connection-handling hot path (HTTP/3, congestion-window tuning, CPU/memory wins in SSL/TLS, traffic management, and core proxy — see systems/pingora, systems/cloudflare-fl2-proxy).
The 2026-04-17 post is explicit:
"Adding new locations alone doesn't fully explain how we went from being #1 in 40 % of networks to #1 in 60 % of networks."
The two axes compose multiplicatively on user-perceived page load:
- Infrastructure axis (PoP densification) → reduces per-RTT wall-clock time
- Software axis (HTTP/3, cwnd, hot-path code) → reduces the number of RTTs per transaction
A 1.5× improvement on each axis yields a 2.25× improvement on some fraction of user flows.
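The multiplicative composition can be made concrete with a toy page-load model: wall-clock time ≈ RTTs per transaction × ms per RTT. All numbers below are illustrative.

```python
rtts_per_txn = 6    # e.g. DNS + TCP + TLS + request round trips
ms_per_rtt = 30.0   # physical RTT to the nearest PoP

baseline = rtts_per_txn * ms_per_rtt                  # 180 ms
# Infrastructure axis: PoP densification makes each RTT 1.5x faster.
infra_only = rtts_per_txn * (ms_per_rtt / 1.5)        # 120 ms
# Software axis on top: HTTP/3 / cwnd tuning needs 1.5x fewer RTTs.
both_axes = (rtts_per_txn / 1.5) * (ms_per_rtt / 1.5)  # 80 ms

print(baseline / both_axes)  # 2.25x overall
```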
Cost / trade-offs¶
- Capex + ops cost per PoP. Each new PoP is rack-space + hardware + ops staff + commercial peering agreements. At fleet scale, the marginal PoP is a decision of where the next marginal dollar wins the most user-RTT.
- Global-rollout risk surface grows. More PoPs → more change-management surface, more chances for a bad config to land on one PoP and misbehave.
- Latent-misconfiguration / topology risk. The 1.1.1.1 2025-07-14 incident (Source: sources/2025-07-16-cloudflare-1111-incident-on-july-14-2025) showed how a service topology misconfig can interact with the PoP fleet to produce a global outage. More PoPs under the same config surface → bigger blast radius for a single config bug. Progressive-config-rollout discipline matters.
- DDoS defence composes. Under anycast, more PoPs = more per-PoP capacity to absorb a geographic share of a flood (see the 7.3 Tbps attack at 477 DCs / 293 locations — sources/2025-06-20-cloudflare-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos-attack). Latency and defence both benefit from the same densification; capex is amortised.
- Peering is the load-bearing detail. A PoP without local peering barely helps; one well-peered PoP helps more than two badly-peered ones.
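The DDoS-composition point above is simple arithmetic on the cited attack figures: under anycast each location only has to absorb its geographic share of the flood.

```python
# Figures from the cited 7.3 Tbps attack writeup (293 anycast locations).
attack_tbps = 7.3
locations = 293

# Average share per location if the flood's sources were spread evenly.
# Real attacks are geographically lumpy, so hot locations see more.
avg_share_gbps = attack_tbps * 1000 / locations
print(f"~{avg_share_gbps:.1f} Gbps per location on average")
```

Each additional PoP shrinks that per-location share further — the same capex that lowers RTT also raises flood-absorption headroom.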
When to reach for it¶
- You operate an anycast edge / CDN / DNS / DDoS network and see measurable RTT floor from geographic distance in specific regions.
- You have comparative RUM data showing a competitor consistently wins a specific geography, and software improvements alone won't close the physical-RTT gap.
- You want DDoS-defence-capacity and latency improvements from the same capex.
When not to reach for it¶
- Your workload is server-side compute bound, not network bound — a new PoP with the same slow backend doesn't help user-perceived latency.
- Your win is from software efficiency: if the connection-handling hot path dominates your p50 handshake time, shrinking that beats a new PoP for less cost.
- You don't have the operational / peering maturity to light up new PoPs well; a badly-peered new PoP can be slower than routing to a well-peered farther one.
Seen in¶
- sources/2026-04-17-cloudflare-agents-week-network-performance-update — canonical wiki instance. Three recent Cloudflare PoP deployments (Wroclaw / Malang / Constantine) reported as part of the Sept → Dec 2025 ranking shift (40 % → 60 % fastest in top 1,000 networks, APNIC population-weighted). Wroclaw −40 % RTT / Malang −5 % RTT. Explicitly framed as one of two axes, with software efficiency as the orthogonal lever.
- sources/2025-06-20-cloudflare-how-cloudflare-blocked-a-monumental-7-3-tbps-ddos-attack — densification-as-defence instance: the 477 data centres / 293 locations each absorbed a geographic share of the 7.3 Tbps flood. Same fleet, dual-use capex.
Related¶
- concepts/point-of-presence
- concepts/anycast
- concepts/connection-time
- concepts/network-round-trip-cost
- concepts/hot-path
- concepts/service-topology
- concepts/http-3
- concepts/congestion-window
- patterns/comparative-rum-benchmarking
- systems/pingora
- systems/cloudflare-fl2-proxy
- systems/magic-transit
- companies/cloudflare