PATTERN
DB-authoritative with WebSocket notify¶
Intent¶
Serve real-time UIs by splitting correctness and latency into two channels: a durable transactional database is the source of truth for every state change; a best-effort WebSocket notification channel pushes the change to connected clients within milliseconds. Clients treat the WebSocket push as a hint and the DB as authority, resulting in the contract: "immediate most of the time, eventually correct all of the time."
Context¶
Real-time UI workloads (prediction markets, live auctions, collaborative documents, ticket-sale inventory, ride-hail surge pricing) face a tension:
- Users expect sub-second visibility of state changes made by other users.
- The application's correctness requires durable, transactional commits — dropped messages cannot silently corrupt state.
Single-channel designs cannot satisfy both:
- Poll the DB — durable, but latency is bounded by the polling interval.
- WebSocket IS the state — fast, but lossy under normal network behaviour; dropped messages produce incorrect UI.
Relevant when:
- The application has a central, relational source of truth (Postgres / MySQL / similar).
- Clients need multi-update-per-second visibility of state changes.
- Concurrent writers exist (other users, trading bots, background jobs).
- Dropped notifications are acceptable short-term as long as the underlying state is consistent.
Solution¶
Wire two channels with a clear authority hierarchy:
1. Authoritative write path.
   - Client issues a write (carrying stale-quote validation data if applicable).
   - Request hits a stateless compute tier (Worker / server) which opens (or reuses) a pooled DB connection.
   - DB validates + commits atomically. On failure, the client is notified via the request response — no WebSocket involvement.
2. Fast-notification path.
   - On successful commit, the compute tier emits a notification to a fan-out coordinator (e.g. a Durable Object, Redis Pub/Sub instance, NATS subject).
   - Coordinator broadcasts the change over WebSocket to all currently-connected clients.
   - Notification payload is enough to drive a UI update but is not authoritative — clients re-read from the DB if they want to double-check.
3. Client contract.
   - Apply WebSocket updates immediately for UX.
   - Treat the DB as ground truth for any action derived from observed state — submit the observation back with the action, let the DB reject if stale.
   - On reconnect / at periodic intervals, re-read from the DB to reconcile any dropped messages.
4. Production hardening layers (optional, ordered by typical priority).
   - Replay on reconnect — client reconnects with a known cursor; coordinator replays missed events from a short-retention event log.
   - Queue-backed fanout — a durable queue sits between "write committed" and "coordinator broadcasts" for retry + durability.
   - Polling reconciliation — clients periodically re-read a digest (row count, last-modified cursor) to detect and catch up on missed updates.
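The write path and notification path above can be sketched with in-memory stand-ins. This is an illustrative sketch, not a real API: `MarketDb`, `Notifier`, and `handleWrite` are hypothetical names, the versioned map stands in for a transactional database, and the listener list stands in for the WebSocket tier. The essential ordering it demonstrates is: validate + commit first, fan out only after the commit succeeds, and reject stale writes via the request response rather than the notify channel.

```typescript
type Update = { key: string; value: number; version: number };

// Stand-in for the authoritative store: versioned rows, stale writes rejected.
class MarketDb {
  private rows = new Map<string, { value: number; version: number }>();

  read(key: string) {
    return this.rows.get(key) ?? { value: 0, version: 0 };
  }

  // Client submits the version it observed; commit only if still current.
  write(key: string, value: number, observedVersion: number):
      { ok: true; version: number } | { ok: false; reason: string } {
    const current = this.read(key);
    if (current.version !== observedVersion) {
      return { ok: false, reason: "stale quote" }; // surfaced via request response
    }
    const next = { value, version: current.version + 1 };
    this.rows.set(key, next);
    return { ok: true, version: next.version };
  }
}

// Stand-in for the best-effort notification channel: a hint, never the authority.
class Notifier {
  private listeners: Array<(u: Update) => void> = [];
  subscribe(fn: (u: Update) => void) { this.listeners.push(fn); }
  broadcast(update: Update) {
    for (const fn of this.listeners) {
      try { fn(update); } catch { /* a dropped push is tolerated */ }
    }
  }
}

// Write path: commit first, notify only after the commit succeeds.
function handleWrite(db: MarketDb, notifier: Notifier,
                     key: string, value: number, observedVersion: number) {
  const result = db.write(key, value, observedVersion);
  if (result.ok) notifier.broadcast({ key, value, version: result.version });
  return result;
}

// --- usage ---
const db = new MarketDb();
const notifier = new Notifier();
const seen: number[] = [];
notifier.subscribe(u => seen.push(u.version));

handleWrite(db, notifier, "YES", 42, 0);              // commits, fans out v1
const stale = handleWrite(db, notifier, "YES", 7, 0); // version 0 is stale now
console.log(seen);                 // [1] — only the committed write broadcast
console.log(stale.ok);             // false
console.log(db.read("YES").value); // 42 — the DB remains the authority
```

Note that the failed write never touches the notifier: rejection flows back on the request/response channel, so connected clients only ever see updates that are already durable.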
Canonical wiring (Cloudflare + PlanetScale)¶
From the 2026-02-19 demo (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive):
- DB / authority: PlanetScale Postgres Metal (smallest cluster, $50/month for the demo).
- Connection tier: Cloudflare Hyperdrive — pooled + cached connections shield Workers from the per-query handshake + RTT tax.
- Compute tier (write path): Cloudflare Workers — user-adjacent placement; transaction goes Worker → Hyperdrive → Postgres.
- Fan-out coordinator: a single Durable Object, chosen precisely because it is single-threaded + single-location (good for holding the connection list) even though that same property makes it a bad write-path choice.
- Notification transport: Cloudflare WebSockets between clients and the DO.
Write flow: "Worker sends the transaction to the database via the Hyperdrive connection. When that transaction completes it pings the Durable Object via the WebSocket connection to fan out that update to all other connected browsers."
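The coordinator side of that flow, plus the replay-on-reconnect hardening layer, can be sketched as a plain class. This is a simplified stand-in for the Durable Object, not Cloudflare's API: `FanoutCoordinator`, the callback-based "sockets", and the fixed-size replay log are all illustrative assumptions. The interesting behaviour is that a client reconnecting with a cursor catches up on events it missed while disconnected.

```typescript
type Event = { seq: number; payload: string };

// Stand-in for the single fan-out coordinator (the Durable Object in the demo).
class FanoutCoordinator {
  private sockets = new Map<string, (e: Event) => void>(); // connected clients
  private log: Event[] = [];                               // short-retention replay log
  private seq = 0;
  private readonly retention = 100;                        // keep last N events

  // Client connects with the last sequence number it saw; missed events replay.
  connect(id: string, lastSeen: number, deliver: (e: Event) => void) {
    this.sockets.set(id, deliver);
    for (const e of this.log) if (e.seq > lastSeen) deliver(e);
  }

  disconnect(id: string) { this.sockets.delete(id); }

  // Called by the write path after a successful DB commit.
  broadcast(payload: string): number {
    const event = { seq: ++this.seq, payload };
    this.log.push(event);
    if (this.log.length > this.retention) this.log.shift();
    for (const deliver of this.sockets.values()) deliver(event);
    return event.seq;
  }
}

// --- usage: a client that missed two events catches up on reconnect ---
const hub = new FanoutCoordinator();
const received: number[] = [];

hub.connect("a", 0, e => received.push(e.seq));
hub.broadcast("price YES 0.42");
hub.disconnect("a");                            // client drops
hub.broadcast("price YES 0.45");                // missed
hub.broadcast("price YES 0.47");                // missed
hub.connect("a", 1, e => received.push(e.seq)); // reconnect with cursor 1
console.log(received); // [1, 2, 3]
```

A single-threaded, single-location coordinator makes this trivial: the connection list and the sequence counter live in one place, so there is no cross-instance coordination — which is exactly why the same shape is a poor fit for the write path.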
Consequences¶
Positive
- Durable state under all failure modes; dropped WebSocket messages cannot corrupt data.
- Sub-second end-to-end latency when the notification channel is healthy.
- Concurrent writers are trivially supported — all writes serialise through the DB, notifications fan out after the fact.
- Hardening upgrades are incremental (add replay, then a queue, then polling) without changing the base architecture.
- Pairs with stale-quote rejection to preserve correctness even when the WebSocket tier is actively pushing incorrect (outdated) prices.
Negative
- Two code paths to maintain + observe (write path + notify path).
- Fan-out coordinator is usually a single-region / single-instance choke point — sharding by key is the horizontal-scale lever (patterns/single-region-do-fanout-from-distributed-writers).
- Writer-observes-its-own-write faster than other clients — a known UX asymmetry that may or may not matter for the workload.
- Without production hardening, "eventually correct" is aspirational; real-world drop rates will push clients out of sync until polling / replay is added.
- Operators must decide what counts as authoritative at design time; retrofitting the contract later is painful.
Known uses¶
- Cloudflare + PlanetScale prediction-market demo (2026-02-19) — canonical wiki instance. Smallest PlanetScale Metal cluster + Workers + Hyperdrive + single DO + WebSockets. Each option purchase carries expected price + slippage; DB rejects stale quotes; successful writes ping the DO which broadcasts to all connected browsers. Three hardening layers explicitly deferred. (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive.)
Relationship to adjacent patterns¶
- patterns/single-region-do-fanout-from-distributed-writers — the write-path vs fan-out-path split this pattern codifies on Cloudflare. Distributed Workers write; a single DO fans out.
- patterns/caching-proxy-tier — what Hyperdrive contributes to the write side; collapses DB round-trips under repeat reads.
- patterns/explicit-placement-hint — NOT used in the canonical demo; the author keeps Workers user-adjacent and lets Hyperdrive absorb the DB RTT cost. The demo is a datapoint for "when not to place".
- patterns/partner-managed-service-as-native-binding — how the authoritative DB (PlanetScale Postgres) is wired into the Workers platform.
Related¶
- concepts/authoritative-vs-fast-notification — the concept this pattern operationalises.
- concepts/stale-quote-rejection — the client-side correctness discipline that goes with it.
- systems/cloudflare-websockets
- systems/cloudflare-durable-objects
- systems/hyperdrive
- systems/planetscale-metal
- patterns/single-region-do-fanout-from-distributed-writers