CONCEPT

Single-writer assumption

Definition

An architectural posture in which a system assumes exactly one writer is active against a given piece of state at any moment, and refuses to attempt conflict resolution between concurrent writers. The assumption is usually enforced by (a) social / operational discipline, (b) a lease / lock primitive (e.g. CASAAS), or (c) the shape of the consumer (single-process-owns-the-database).

SQLite itself is a canonical single-writer system (one writer per database). Distributed SQLite products that preserve the assumption (LiteFS, Litestream) inherit it; products that relax it (multi-writer distributed SQLite, Turso embedded replicas with write-forwarding) pay coordination costs to do so.
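SQLite's local enforcement of the rule is easy to observe directly. The sketch below is plain Python `sqlite3`, not Litestream code: two connections open the same database file, and the second connection is refused the write lock with SQLITE_BUSY ("database is locked").

```python
import os
import sqlite3
import tempfile

# Demonstration of the local single-writer rule (plain SQLite, no Litestream):
# a second connection that tries to become a writer while the first holds the
# write lock is refused with SQLITE_BUSY.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0)   # timeout=0: fail fast, don't wait
b = sqlite3.connect(path, timeout=0)
a.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
a.commit()

a.execute("BEGIN IMMEDIATE")           # connection a takes the write lock
a.execute("INSERT INTO kv VALUES ('x', '1')")

refused = None
try:
    b.execute("BEGIN IMMEDIATE")       # connection b may not also be a writer
except sqlite3.OperationalError as e:
    refused = str(e)                   # "database is locked"
a.commit()
print(refused)
```

With a nonzero `timeout`, SQLite would retry for that long before raising; `timeout=0` just makes the refusal immediate and visible.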

Canonical wiki statement

Ben Johnson's 2026-01-29 shipping post for Litestream writable VFS:

"In write mode, we don't allow multiple writers, because multiple-writer distributed SQLite databases are the Lament Configuration and we are not explorers over great vistas of pain. So the VFS in write-mode disables polling. We assume a single writer, and no additional backups to watch."

(Source: sources/2026-01-29-flyio-litestream-writable-vfs)

The "Lament Configuration" framing — a Hellraiser reference — is the wiki's most load-bearing warning about multi-writer distributed SQLite: the complexity cost is so high the design is treated as categorically the wrong choice rather than a design trade-off.

Why it's load-bearing for Litestream

The writable-VFS mode specifically disables polling to enforce the assumption:

  • Read-only VFS mode polls L0 for remote writers' LTX files — near-realtime replica behaviour (patterns/near-realtime-replica-via-l0-polling).
  • Write-mode VFS does not poll — there is no remote writer emitting LTX files to observe.
  • Running two write-mode VFS instances against the same object-store URL would corrupt history: each writer emits LTX files in its own TXID sequence, and no reconciliation mechanism exists.

The single-writer assumption is what makes the write-buffer-with-async-sync mechanism sound. Relax it and the pattern collapses.
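Why dual writers corrupt history can be sketched abstractly. The toy model below is illustrative only (the key layout and conflict check are invented, not the real LTX format): each writer assigns TXIDs from its own local counter, so the shared store ends up with two files claiming the same TXID and no component that knows how to reconcile them.

```python
# Illustrative sketch only, not the real LTX format: two write-mode
# instances each assign TXIDs locally, so both eventually emit a file
# claiming the same TXID.
def emit_ltx(store, writer, txid, pages):
    key = f"ltx/{txid:016d}"           # hypothetical object-key layout
    if key in store and store[key][0] != writer:
        return f"conflict at {key}: {store[key][0]} vs {writer}"
    store[key] = (writer, pages)
    return "ok"

store = {}
assert emit_ltx(store, "writer-a", 1, b"pages A1") == "ok"
assert emit_ltx(store, "writer-a", 2, b"pages A2") == "ok"

# writer-b, unaware of writer-a, starts its own sequence at TXID 1:
clash = emit_ltx(store, "writer-b", 1, b"pages B1")
print(clash)  # conflict at ltx/0000000000000001: writer-a vs writer-b
```

The real system has no such conflict check, which is the point: the second writer would silently overwrite or interleave with the first.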

How Litestream (the Unix program) enforces it

Litestream itself uses CASAAS — S3's conditional writes (If-None-Match) — to implement a time-based single-writer lease at the object-store tier. At most one Litestream process holds the lease; others automatically defer. This is the operational enforcement layer the writable VFS does not (yet?) compose with.
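A minimal sketch of that lease follows, with an in-memory dict standing in for S3's conditional create (the shape of `If-None-Match: *` on PUT). The key name and TTL handling are assumptions, and a real reclaim path would need a conditional overwrite (e.g. `If-Match` on the ETag) rather than the delete-then-create shown here.

```python
class FakeStore:
    """In-memory stand-in for an object store with conditional create,
    i.e. the shape of S3's `If-None-Match: *` on PUT."""
    def __init__(self):
        self.objects = {}

    def put_if_absent(self, key, value):
        if key in self.objects:
            return False               # precondition failed: key exists
        self.objects[key] = value
        return True

LEASE_KEY = "litestream/lease"         # hypothetical key name

def acquire_lease(store, owner, ttl, now):
    if store.put_if_absent(LEASE_KEY, {"owner": owner, "expires": now + ttl}):
        return True
    lease = store.objects[LEASE_KEY]
    if lease["expires"] < now:
        # Expired: reclaim. Real code would use a conditional overwrite so
        # two reclaimers can't both win; elided in this sketch.
        del store.objects[LEASE_KEY]
        return store.put_if_absent(LEASE_KEY, {"owner": owner, "expires": now + ttl})
    return False                       # another process is the single writer

store = FakeStore()
assert acquire_lease(store, "proc-1", ttl=30, now=0) is True
assert acquire_lease(store, "proc-2", ttl=30, now=10) is False  # defers
assert acquire_lease(store, "proc-2", ttl=30, now=45) is True   # expired
```

The key property is that the object store itself arbitrates: at most one conditional create succeeds, so at most one process believes it is the writer.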

The VFS-side assumption is therefore stricter: even if you're running a lease-acquiring Litestream sidecar, you must not point a second writable-VFS-mode SQLite at the same object-store URL — the VFS doesn't acquire the lease itself, it just assumes it's the only writer.

Contrast: distributed-SQLite that tries multi-writer

Several projects in the distributed-SQLite space look multi-writer from the outside, but each reduces to a single writer at any given moment:

  • rqlite — Raft-replicated SQLite; all writes go through a leader, so there is still one writer at any moment, just with consensus-backed failover.
  • Turso / libSQL embedded replicas — write-forwarding from replicas to a designated primary; still one-writer.
  • Cloudflare Durable Objects — each object is its own single-writer partition, horizontal scale via partition-count.

Genuinely multi-writer SQLite — where two replicas simultaneously commit and later merge — requires either CRDT-structured SQLite (systems/cr-sqlite) or application-level conflict resolution. Both shift the consistency burden up the stack.
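As a sketch of what "shifting the burden up the stack" means, here is a hypothetical application-level last-writer-wins merge (not cr-sqlite's actual algorithm): each row carries a logical timestamp, and a deterministic rule resolves divergent commits in application code rather than in SQLite.

```python
# Hypothetical application-level conflict resolution, not cr-sqlite's
# algorithm: each row maps to a (timestamp, value) pair, and the higher
# timestamp wins when two replicas committed concurrently.
def lww_merge(rows_a, rows_b):
    merged = {}
    for key, stamped in list(rows_a.items()) + list(rows_b.items()):
        # Compare (timestamp, value) tuples so ties break deterministically
        # and both replicas converge on the same answer.
        if key not in merged or stamped > merged[key]:
            merged[key] = stamped
    return merged

replica_a = {"user:1": (5, "alice@old.example")}
replica_b = {"user:1": (7, "alice@new.example"), "user:2": (3, "bob@example")}

# Merge order must not matter; this is the convergence property that
# CRDTs formalize and that application code must otherwise guarantee:
assert lww_merge(replica_a, replica_b) == lww_merge(replica_b, replica_a)
print(lww_merge(replica_a, replica_b)["user:1"])
```

Even this toy shows the cost: the application now owns timestamping, tie-breaking, and the semantics of "lost" writes, none of which plain SQLite has to think about under single-writer.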

Why it composes with eventual durability

Single-writer + eventual durability is a natural pairing:

  • Single writer → no cross-writer conflicts to reconcile even at sync time.
  • Eventual durability → the writer's in-flight writes are locally visible immediately; durability is a separate concern from visibility.
  • Object storage is the authoritative tier; the writer's pending buffer is the version of truth until sync.

This is the shape of the Sprite block-map: one writer per Sprite (the Sprite's own JuiceFS stack), eventual durability to object storage, no multi-writer coordination ever needed.
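The pairing can be sketched as a write buffer with asynchronous sync. This is an assumed design for illustration, not Litestream's or JuiceFS's implementation: writes become visible locally the moment they land in the buffer, and a background thread makes them durable to the authoritative tier later.

```python
import queue
import threading

# Sketch of single-writer eventual durability (assumed design, for
# illustration): local visibility is immediate, durability is deferred.
class BufferedWriter:
    def __init__(self, durable_store):
        self.local = {}                  # immediately-visible local state
        self.pending = queue.Queue()     # writes awaiting durability
        self.durable = durable_store     # authoritative tier (object storage)
        threading.Thread(target=self._sync, daemon=True).start()

    def write(self, key, value):
        self.local[key] = value          # visible now (read-your-writes)
        self.pending.put((key, value))   # durable eventually

    def _sync(self):
        while True:
            key, value = self.pending.get()
            self.durable[key] = value    # e.g. a PUT to object storage
            self.pending.task_done()

remote = {}
w = BufferedWriter(remote)
w.write("page:1", b"hello")
assert w.local["page:1"] == b"hello"     # visible before it is durable
w.pending.join()                          # wait for the sync in this demo
assert remote["page:1"] == b"hello"
```

Because there is exactly one writer, the buffer never has to merge anyone else's pending writes; relaxing that is precisely where the pattern collapses.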

When the assumption breaks

  • Accidental dual-writer. Operator runs two writer processes against the same store — silent corruption ensues.
  • Failover without lease coordination. Old primary comes back up after a network partition; both it and the new primary are writers.
  • Bug in the application that opens two writable-mode VFS instances in the same process. For a local file-based database, SQLite itself would reject the second writer with SQLITE_BUSY; the VFS path may not.

The operational discipline of "enforce single-writer externally" (via CASAAS, via deployment invariant, via single-tenant substrate like Sprites) is essential.
