PATTERN

DB-authoritative with WebSocket notify

Intent

Serve real-time UIs by splitting correctness and latency into two channels: a durable transactional database is the source of truth for every state change; a best-effort WebSocket notification channel pushes the change to connected clients within milliseconds. Clients treat the WebSocket push as a hint and the DB as authority, resulting in the contract: "immediate most of the time, eventually correct all of the time."

Context

Real-time UI workloads (prediction markets, live auctions, collaborative documents, ticket-sale inventory, ride-hail surge pricing) face a tension:

  • Users expect sub-second visibility of state changes made by other users.
  • The application's correctness requires durable, transactional commits — dropped messages cannot silently corrupt state.

Single-channel designs cannot satisfy both:

  • Poll the DB — durable, but latency is bounded by the polling interval.
  • WebSocket IS the state — fast, but lossy under normal network behaviour; dropped messages produce incorrect UI.

Relevant when:

  • The application has a central, relational source of truth (Postgres / MySQL / similar).
  • Clients need multi-update-per-second visibility of state changes.
  • Concurrent writers exist (other users, trading bots, background jobs).
  • Dropped notifications are acceptable short-term as long as the underlying state is consistent.

Solution

Wire two channels with a clear authority hierarchy:

  1. Authoritative write path.
       • Client issues a write (carrying stale-quote validation data if applicable).
       • The request hits a stateless compute tier (Worker / server), which opens (or reuses) a pooled DB connection.
       • The DB validates and commits atomically. On failure, the client is notified via the request response — no WebSocket involvement.

  2. Fast-notification path.
       • On successful commit, the compute tier emits a notification to a fan-out coordinator (e.g. a Durable Object, a Redis Pub/Sub instance, a NATS subject).
       • The coordinator broadcasts the change over WebSocket to all currently connected clients.
       • The notification payload is enough to drive the UI update but is not authoritative; clients re-read from the DB if they want to double-check.

  3. Client contract.
       • Apply WebSocket updates immediately for UX.
       • Treat the DB as ground truth for any action derived from observed state: submit the observation back with the action and let the DB reject it if stale.
       • On reconnect and at periodic intervals, re-read from the DB to reconcile any dropped messages.

  4. Production hardening layers (optional, ordered by typical priority).
       • Replay on reconnect: the client reconnects with a known cursor; the coordinator replays missed events from a short-retention event log.
       • Queue-backed fan-out: a durable queue sits between "write committed" and "coordinator broadcasts" for retry and durability.
       • Polling reconciliation: clients periodically re-read a digest (row count, last-modified cursor) to detect and catch up on missed updates.
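The client contract above can be sketched as a small state holder that applies pushes as hints but keeps a cursor for reconciliation. This is a minimal sketch with hypothetical names (`NotifyEvent`, `HintedView`, `fetchSince`), not the pattern's required API:

```typescript
// Each notification carries a monotonically increasing cursor so the
// client can tell what it has missed.
type NotifyEvent = { cursor: number; key: string; value: number };

class HintedView {
  state = new Map<string, number>();
  lastCursor = 0;

  // Fast path: apply a WebSocket push immediately, as a hint.
  onPush(ev: NotifyEvent): void {
    if (ev.cursor <= this.lastCursor) return; // stale or duplicate hint
    this.state.set(ev.key, ev.value);
    this.lastCursor = ev.cursor;
  }

  // Slow path: on reconnect (or a timer), re-read events newer than our
  // cursor from the authoritative DB and replay them in order.
  reconcile(fetchSince: (cursor: number) => NotifyEvent[]): void {
    for (const ev of fetchSince(this.lastCursor)) this.onPush(ev);
  }
}
```

Because `onPush` and `reconcile` share the same idempotent apply step, a dropped WebSocket message only delays the update until the next reconcile; it can never leave the view permanently wrong.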

Canonical wiring (Cloudflare + PlanetScale)

From the 2026-02-19 demo (Source: sources/2026-04-21-planetscale-faster-planetscale-postgres-connections-with-cloudflare-hyperdrive):

  • DB / authority: PlanetScale Postgres Metal (smallest cluster, $50/month for the demo).
  • Connection tier: Cloudflare Hyperdrive — pooled + cached connections shield Workers from the per-query handshake + RTT tax.
  • Compute tier (write path): Cloudflare Workers — user-adjacent placement; transaction goes Worker → Hyperdrive → Postgres.
  • Fan-out coordinator: a single Durable Object, chosen precisely because it is single-threaded + single-location (good for holding the connection list) even though that same property makes it a bad write-path choice.
  • Notification transport: Cloudflare WebSockets between clients and the DO.

Write flow: "Worker sends the transaction to the database via the Hyperdrive connection. When that transaction completes it pings the Durable Object via the WebSocket connection to fan out that update to all other connected browsers."
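The ordering in that quote — commit first, ping second — is the load-bearing detail: the broadcast is fire-and-forget and must never change the client-visible result of the write. A hedged sketch of that handler shape (the function and parameter names here are hypothetical, not Cloudflare APIs):

```typescript
type Commit = { ok: boolean; cursor: number };

// Commit is authoritative (e.g. Worker -> Hyperdrive -> Postgres);
// notify is best-effort (e.g. pinging the Durable Object to fan out).
async function commitAndNotify(
  commit: () => Promise<Commit>,
  notify: (cursor: number) => Promise<void>,
): Promise<Commit> {
  const result = await commit(); // source of truth
  if (result.ok) {
    // Fire-and-forget: a failed broadcast only delays visibility,
    // because clients reconcile from the DB later.
    notify(result.cursor).catch(() => {});
  }
  return result;
}
```

Note the asymmetry: a failed commit is reported to the caller via the return value, while a failed notify is swallowed — exactly the authority hierarchy the pattern names.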

Consequences

Positive

  • Durable state under all failure modes; dropped WebSocket messages cannot corrupt data.
  • Sub-second end-to-end latency when the notification channel is healthy.
  • Concurrent writers are trivially supported — all writes serialise through the DB, notifications fan out after the fact.
  • Hardening upgrades are incremental (add replay, then a queue, then polling) without changing the base architecture.
  • Pairs with stale-quote rejection to preserve correctness even when the WebSocket tier is actively pushing incorrect (outdated) prices.
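Stale-quote rejection, mentioned above, works because the write carries the state the client observed and the DB compares it inside the transaction. A minimal in-memory sketch of that compare-and-set check (in a real system the comparison would run in SQL, e.g. an `UPDATE ... WHERE version = $observed` whose row count is checked; `placeOrder` and `Row` are illustrative names):

```typescript
type Row = { price: number; version: number };

// The client submits the version it observed (from a WebSocket push);
// the authoritative row wins if they disagree.
function placeOrder(row: Row, observedVersion: number): "filled" | "stale" {
  if (row.version !== observedVersion) return "stale"; // quote moved; reject
  row.version += 1; // commit bumps the version so later stale quotes fail too
  return "filled";
}
```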

Negative

  • Two code paths to maintain + observe (write path + notify path).
  • Fan-out coordinator is usually a single-region / single-instance choke point — sharding by key is the horizontal-scale lever (patterns/single-region-do-fanout-from-distributed-writers).
  • Writer-observes-its-own-write faster than other clients — a known UX asymmetry that may or may not matter for the workload.
  • Without production hardening, "eventually correct" is aspirational; real-world drop rates will push clients out of sync until polling / replay is added.
  • Operators must decide what counts as authoritative at design time; retrofitting the contract later is painful.

Known uses

Relationship to adjacent patterns
