
Cloudflare WebSockets

Cloudflare WebSockets is the long-lived, bidirectional connection primitive exposed to Workers and, more commonly for real-time fan-out, to Durable Objects inside Cloudflare's global network. WebSocket connections terminate at a Cloudflare POP, so the client-to-edge leg benefits from the full "Cloudflare global network ... within ~50ms of 95% of the world's internet-connected population". (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive.)

Role

WebSockets is the canonical fast-notification channel in the authoritative-vs-fast-notification split — best-effort push delivery of state changes to every connected client, with a durable transactional store (e.g. Postgres) as the adjacent authoritative layer.

The 2026-02-19 PlanetScale × Hyperdrive demo names the contract explicitly: "WebSocket messages can be delayed or dropped due to normal network behavior, but database writes are still durable. So the contract for clients should be: 'updates are immediate most of the time, and eventually correct all of the time.'"
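That contract ("immediate most of the time, eventually correct all of the time") can be made concrete on the client side. A minimal sketch, with illustrative names (`ClientCache`, `applyPush`, `reconcile` are not a Cloudflare API) and assuming the authoritative store tags each row with a monotonically increasing version:

```typescript
// Sketch: a client-side cache that applies best-effort WebSocket pushes
// immediately, but lets an authoritative fetch win whenever it is newer.
// Versioning means a dropped or reordered push can never leave the client
// permanently wrong -- the next reconcile repairs it.

type Update = { key: string; value: unknown; version: number };

class ClientCache {
  private state = new Map<string, { value: unknown; version: number }>();

  // Best-effort path: apply a WebSocket push only if it is newer than
  // what we already hold (drops stale and duplicate pushes).
  applyPush(u: Update): boolean {
    const cur = this.state.get(u.key);
    if (cur && cur.version >= u.version) return false;
    this.state.set(u.key, { value: u.value, version: u.version });
    return true;
  }

  // Authoritative path: a periodic poll of the database snapshot goes
  // through the same version check, repairing any missed pushes.
  reconcile(snapshot: Update[]): void {
    for (const u of snapshot) this.applyPush(u);
  }

  get(key: string): unknown | undefined {
    return this.state.get(key)?.value;
  }
}
```

The same version check serves both paths, so push delivery order never matters for correctness, only for latency.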

Canonical architectural shape

WebSockets pair with Durable Objects for coordinated fan-out:

  • Durable Object holds the subscriber list. One DO instance maintains the set of open WebSocket connections for all currently-connected clients of a given "room" / "market" / "channel". The DO's single-writer, single-location property is an asset here (one connection list, no coordination) even though it is a liability on the write path (patterns/single-region-do-fanout-from-distributed-writers).
  • Workers write, DO broadcasts. Write-path traffic goes Worker → Hyperdrive → Postgres (bypassing the DO). After commit, the Worker "pings the Durable Object via the WebSocket connection to fan out that update to all other connected browsers."
  • Each DO is one location. Globally distributed clients all open WebSocket connections routed to the one DO instance; scaling is horizontal via sharding by key (per-market DO, per-room DO).
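The DO-side broadcast logic can be sketched independently of the Cloudflare runtime. A real Durable Object would hold actual WebSocket objects (e.g. obtained through the hibernation API), but the fan-out shape is the same; `Socket` and `Room` below are illustrative stand-ins, not Cloudflare types:

```typescript
// Sketch of the DO-side fan-out: one instance owns the subscriber set and
// broadcasts each committed update to every connected client. send() may
// throw on a dead connection, in which case the subscriber is pruned.

interface Socket {
  send(data: string): void;
}

class Room {
  private subscribers = new Set<Socket>();

  connect(ws: Socket): void {
    this.subscribers.add(ws);
  }

  // Called when the Worker pings the DO after the Postgres write commits.
  // Returns the number of clients actually delivered to.
  broadcast(update: unknown): number {
    const msg = JSON.stringify(update);
    let delivered = 0;
    for (const ws of this.subscribers) {
      try {
        ws.send(msg);
        delivered++;
      } catch {
        this.subscribers.delete(ws); // drop dead connections
      }
    }
    return delivered;
  }
}
```

Because a single instance owns `subscribers`, no cross-node coordination is needed; that is exactly the single-location property the pattern exploits.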

Scaling caveats

The 2026-04-21 post is explicit about the single-DO bottleneck: "I'm still relying on a single Durable Object to send WebSocket updates to all users. Cloudflare's own documentation gives guidance that you can scale Durable Objects horizontally — sharded with a key — should you come close to exhausting their allocated resources."

Each DO is CPU-bound on connection count × broadcast rate. When a single DO runs out of budget, the sharding axis is caller-identified: e.g. one DO per prediction market, per document, per chat room.
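The caller-identified shard key is just the DO name the Worker derives from the request. A sketch of that routing, where `DONamespace` mimics the shape of a Durable Object namespace binding (`idFromName` / `get`) so it can run outside workerd; in a real Worker the namespace would be a binding such as `env.ROOM`:

```typescript
// Sketch of caller-identified sharding: the Worker maps each market (or
// document, or chat room) to its own DO instance by name, so hot markets
// scale out across independent DOs while clients of the same market still
// converge on one instance.

interface DOStub {
  fetch(req: Request): Promise<Response>;
}

interface DONamespace {
  idFromName(name: string): string;
  get(id: string): DOStub;
}

// One DO per prediction market. The "market:" prefix is an illustrative
// convention to keep shard keys from colliding across entity types.
function stubForMarket(ns: DONamespace, marketId: string): DOStub {
  const id = ns.idFromName(`market:${marketId}`);
  return ns.get(id);
}
```

The same function with a different prefix (`doc:`, `room:`) covers the other sharding axes named above.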

Production-hardening gaps

The fast-notification channel is best-effort by default. The PlanetScale demo post names three hardening moves that upgrade reliability without changing the channel type:

  • Replay on reconnect — client reconnects with a cursor, DO replays events since that cursor. Requires the DO to retain recent events in embedded SQLite.
  • Queue-backed fanout — a Cloudflare Queue sits between "write committed" and "DO broadcast" for durability + retries. "Cloudflare has a Queueing service, too!"
  • Polling reconciliation — periodic poll against the authoritative store catches missed WebSocket updates.
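Replay on reconnect reduces to a cursor-tagged event log inside the DO. A minimal in-memory sketch (a production DO would persist the log in its embedded SQLite storage, as the source notes; `EventLog` and its methods are illustrative names):

```typescript
// Sketch of replay-on-reconnect: the DO keeps a bounded log of recent
// events, each tagged with a monotonically increasing cursor. A
// reconnecting client sends its last cursor and receives everything after
// it; if the cursor has aged out of the log, the client must fall back to
// a full fetch from the authoritative store.

type LoggedEvent = { cursor: number; payload: unknown };

class EventLog {
  private events: LoggedEvent[] = [];
  private nextCursor = 1;

  constructor(private capacity: number) {}

  append(payload: unknown): number {
    const cursor = this.nextCursor++;
    this.events.push({ cursor, payload });
    if (this.events.length > this.capacity) this.events.shift(); // bound memory
    return cursor;
  }

  // Replay for a reconnecting client holding `cursor`.
  since(cursor: number): LoggedEvent[] | "too-old" {
    // If the event right after the client's cursor was already evicted,
    // the gap is unrecoverable from the log alone.
    if (this.events.length && this.events[0].cursor > cursor + 1) return "too-old";
    return this.events.filter((e) => e.cursor > cursor);
  }
}
```

The `"too-old"` branch is what keeps the channel honest: replay upgrades reliability, but the database remains the only place a client can always recover from.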

All three leave the DB as the authority and add reliability to the notification layer.
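The queue-backed variant moves the "ping the DO" step behind a queue consumer so a transient DO failure triggers a redelivery instead of a silent drop. A sketch, where `Message` loosely mirrors the shape of a Cloudflare Queues consumer message (`body` / `ack` / `retry`) but is mocked here so the logic runs anywhere; `deliverToDO` stands in for fetching into the room's DO stub:

```typescript
// Sketch of queue-backed fan-out: after the Postgres commit, the Worker
// enqueues a broadcast job instead of pinging the DO directly. The
// consumer acks on successful delivery and retries on failure, adding
// durability to the notification layer without changing the channel type.

interface Message<T> {
  body: T;
  ack(): void;
  retry(): void;
}

type BroadcastJob = { room: string; update: unknown };

async function consumeBatch(
  batch: Message<BroadcastJob>[],
  deliverToDO: (job: BroadcastJob) => Promise<void>,
): Promise<void> {
  for (const msg of batch) {
    try {
      await deliverToDO(msg.body); // e.g. stub.fetch() into the room's DO
      msg.ack(); // delivered: remove from the queue
    } catch {
      msg.retry(); // redeliver later instead of dropping the update
    }
  }
}
```

Note retries make delivery at-least-once, which is why the version check on the client side (dropping stale or duplicate pushes) still matters.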

Non-uniformity of delivery

A known artefact of the DO-ping-after-commit path: "the user that sends the transaction will get their feedback faster than users who are only listening. It's a small trade-off I'm considering acceptable in this demo." The writer's UI updates on the transaction ack; listeners wait for the DO-broadcast round-trip. Whether that gap is acceptable is workload-dependent.
