PATTERN
Single-region DO fan-out from distributed writers¶
Intent¶
Separate the write path from the real-time broadcast path: let globally-distributed stateless compute write to a central transactional DB, then have a single, co-located Durable Object coordinate the fan-out of the resulting event to all currently-connected WebSocket clients. The DO holds no authoritative state — only the list of open subscriber connections.
Context¶
A naïve first-draft real-time architecture puts the Durable Object on the write path: the client sends the transaction to the DO, the DO applies it, and the DO then broadcasts. This is wrong at scale, and the 2026-04-21 PlanetScale × Hyperdrive post names the trade-off directly (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive):
"If we're going to use a Durable Object to broadcast updates over WebSockets, it may be tempting to make the Durable Object the write path to the database. However, this negatively impacts performance. Durable Objects are single-threaded and hosted in a single location, making them a bad candidate for the write path. Instead the Workers will send transactions to the database via the Hyperdrive connection."
Two properties of a DO that are good for broadcast coordination are simultaneously bad for the write path:
- Single-threaded → serialised subscriber list updates are free (good for broadcast); all writes queue behind each other (bad for write throughput).
- Single-location → one authoritative connection list to manage (good); every write pays a cross-region RTT from the user-adjacent Worker to the DO's POP (bad).
Relevant when:
- Clients are globally distributed.
- Writes come from a stateless compute tier that can scale horizontally (Workers, Lambdas, containerised servers).
- There's a central transactional DB already serving the authoritative state.
- Real-time broadcast is needed, but the broadcast set is bounded (e.g. one DO per market / room / document).
Solution¶
- Writers bypass the DO.
    - Stateless compute tier (e.g. Workers) receives client requests where they land (user-adjacent by default).
    - Writes go Worker → pooled DB connection → Postgres (via Hyperdrive or equivalent).
- After-commit ping.
    - When the transaction commits, the Worker sends a small notification message to the DO via the same WebSocket hub or direct DO RPC.
    - The payload is a state-change description, not the authoritative state.
- DO broadcasts only.
    - The DO holds the set of open WebSocket connections.
    - On receiving the Worker's ping, the DO pushes the notification payload to every connected subscriber.
    - The DO never applies a write to the DB; it is not on the authoritative path.
- Subscription side.
    - Clients open WebSockets directly to the DO (Cloudflare routes them to the correct DO instance via its ID).
    - Disconnections trim the subscriber list; reconnects add back with no DO state rebuild needed.
- Horizontal scaling via key-sharded DOs.
    - One DO per market / room / document; the DO ID is the sharding key.
    - Writes still go through the central DB; the DO the Worker pings is the one whose subscribers care about that write.
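The flow above can be sketched in plain TypeScript with the Cloudflare runtime abstracted away. `BroadcastHub` models the DO's only job — holding the one authoritative subscriber set and fanning out — while `handleWrite` models the Worker's commit-then-ping sequence. The `db.commit` stand-in, the hub interface, and the message shape are illustrative assumptions, not code from the post.

```typescript
// Models the DO's broadcast role: one authoritative subscriber list, no DB writes.
type Send = (msg: string) => void;

class BroadcastHub {
  private subscribers = new Set<Send>();

  // Register an open WebSocket; returns a cleanup for disconnection.
  subscribe(send: Send): () => void {
    this.subscribers.add(send);
    // Disconnection just trims the set; no DO state rebuild on reconnect.
    return () => this.subscribers.delete(send);
  }

  // Fan the notification out to every currently-connected subscriber.
  broadcast(payload: object): number {
    const msg = JSON.stringify(payload);
    for (const send of this.subscribers) send(msg);
    return this.subscribers.size;
  }
}

// Models the Worker's write path: commit to the central DB first, then ping
// the hub. `db.commit` stands in for a pooled Postgres write via Hyperdrive.
async function handleWrite(
  db: { commit: (tx: object) => Promise<void> },
  hub: BroadcastHub,
  tx: { market: string; delta: number },
): Promise<void> {
  await db.commit(tx);                      // authoritative step: must succeed
  hub.broadcast({ type: "update", ...tx }); // best-effort notification step
}
```

Note the ordering encoded in `handleWrite`: the broadcast carries a state-change description, and only the commit is required for correctness — a lost ping degrades freshness, not truth.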
Why this specific split¶
The 2026-04-21 post frames it as a matter of picking the right workload for the right substrate:
- The write path needs parallelism + locality: many writers, writes scale horizontally, each write goes to the DB tier (which already handles concurrency at scale).
- The broadcast path needs serialization + shared state: the subscriber list is one thing that must be consistent; a single-writer actor is the natural fit.
Attempting to make the DO the write path forces:
- Every write serialised through one thread (DO isolation guarantees).
- Every write paying a cross-region RTT to the DO's single POP.
- The DB becoming a write-behind sink rather than the authority, undoing the authoritative-vs-fast-notification split.
Consequences¶
Positive
- Writes scale with the stateless compute tier — no single-writer bottleneck.
- DO CPU spent on broadcast, not on write-side business logic.
- DB remains the source of truth; broadcast is purely a notification layer.
- Clear sharding axis — when one DO runs out of budget, split by a business-level key.
Negative
- Two network hops after the write: Worker → DB (commit), Worker → DO (ping). Both must succeed for the broadcast to happen, though only the first must succeed for correctness.
- DO is still a single choke point per shard; broadcast rate × subscriber count is bounded by one DO's CPU.
- Writer-observes-before-listener: the writing client gets their ack directly from the Worker; listeners wait for the DO-broadcast round-trip. A known, accepted UX asymmetry.
- If the post-commit ping to the DO is lost, listeners miss the update until a reconcile / poll — the broadcast layer is best-effort. Mitigated by the hardening moves in patterns/db-authoritative-with-websocket-notify (replay-on-reconnect, queue-backed fanout, polling reconciliation).
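The last point is commonly hardened with a per-shard sequence number: each notification carries its position in the commit order, and a client that observes a gap falls back to a reconcile read against the authoritative DB. A minimal client-side sketch of the gap detection, where the `seq` field and `reconcile` hook are illustrative assumptions rather than anything specified in the post:

```typescript
// Client-side gap detection: notifications carry a monotonically increasing
// per-shard sequence number; a skipped number means a lost ping.
type Notification = { seq: number; payload: unknown };

class GapDetector {
  private lastSeq: number;

  constructor(
    initialSeq: number,
    // Invoked on a detected gap; in practice this would trigger a poll of
    // the authoritative DB state (or a replay request on reconnect).
    private reconcile: (fromSeq: number) => void,
  ) {
    this.lastSeq = initialSeq;
  }

  onNotification(n: Notification): void {
    if (n.seq <= this.lastSeq) return;  // duplicate or stale: ignore
    if (n.seq > this.lastSeq + 1) {
      this.reconcile(this.lastSeq + 1); // lost ping(s): fall back to the DB
    }
    this.lastSeq = n.seq;
  }
}
```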
Known uses¶
- Cloudflare + PlanetScale prediction-market demo (2026-02-19) — canonical wiki instance. Globally-distributed Workers take purchase requests user-adjacent, write to PlanetScale Postgres via Hyperdrive, and ping the single DO, which fans out price updates over WebSockets to all currently-connected browsers. The post explicitly names the "DO-not-on-write-path" decision as an early architectural choice and the "single DO is the scaling ceiling, shard by key to grow" caveat as the known horizontal-scale lever. (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive.)
Relationship to adjacent patterns¶
- patterns/db-authoritative-with-websocket-notify — the higher-level architectural pattern that this one specialises to the Cloudflare stack.
- patterns/kafka-broadcast-for-shared-state — parallel shape at the service-fleet scale: instead of one DO fanning out to N WebSocket clients, one Kafka topic fans out shared state to N service instances. Same "one place holds the subscriber list" primitive, different transport.
- patterns/caching-proxy-tier — the Hyperdrive-shaped connection plumbing that makes the write path fast enough that this split is worth doing.
Related¶
- systems/cloudflare-durable-objects
- systems/cloudflare-workers
- systems/cloudflare-websockets
- systems/hyperdrive
- concepts/single-writer-assumption — the property of the DO that is a positive for the broadcast role.
- concepts/authoritative-vs-fast-notification — the correctness framing.
- patterns/db-authoritative-with-websocket-notify