
Writable VFS with buffered sync

Pattern

Extend a read-side object-storage-backed VFS (patterns/vfs-range-get-from-object-store) to serve writes too, by:

  1. Accepting writes into a local temporary write buffer.
  2. Disabling any polling loop that watches remote writers (single-writer assumption).
  3. Periodically syncing the write buffer to object storage (cadence ~1s, plus on clean shutdown) as new files the reader half can index.
  4. Accepting an eventual-durability contract: writes are not truly durable until the next sync completes.
Application (unmodified SQLite)
    │  Write(page N, bytes)
VFS (write-enabled)
    │  1. Append to local write buffer
    │  2. Every ~1s: sync buffer → object storage as LTX file(s)
    │  3. L0 polling DISABLED (no remote writer to observe)
Object storage (authoritative durability tier)
    │  Reads on same VFS: Range GETs against object storage
    │  for pages not in write buffer; LRU-cached
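The buffer-then-sync mechanics above can be sketched in a few dozen lines. This is an illustrative model, not Litestream's implementation: the class, key scheme, and object-store interface (`put`, `range_get`) are assumptions; the real VFS emits LTX files and serves page-granular Range GETs.

```python
import threading


class BufferedSyncVFS:
    """Sketch of the write path: page writes land in a local buffer,
    which is flushed to object storage every sync_interval seconds
    and once more on clean shutdown. Illustrative names throughout."""

    def __init__(self, store, sync_interval=1.0):
        self.store = store              # needs .put(key, batch) and .range_get(pgno)
        self.sync_interval = sync_interval
        self.buffer = {}                # page number -> latest bytes (pending, NOT durable)
        self.lock = threading.Lock()
        self.seq = 0                    # stand-in for an LTX TXID
        self.closed = threading.Event()
        threading.Thread(target=self._sync_loop, daemon=True).start()

    def write_page(self, pgno, data):
        with self.lock:                 # writes only reach the buffer here;
            self.buffer[pgno] = data    # durability waits for the next sync

    def read_page(self, pgno):
        with self.lock:                 # read-your-writes within this process
            if pgno in self.buffer:
                return self.buffer[pgno]
        return self.store.range_get(pgno)   # otherwise: Range GET tier

    def sync(self):
        with self.lock:
            if not self.buffer:
                return None
            batch, self.buffer = self.buffer, {}
            self.seq += 1
        key = f"ltx/{self.seq:016x}"        # illustrative key scheme
        self.store.put(key, batch)          # one object per sync in this sketch
        return key

    def _sync_loop(self):
        # No polling of remote writers happens here: single-writer assumption.
        while not self.closed.wait(self.sync_interval):
            self.sync()

    def close(self):
        self.closed.set()
        self.sync()                         # clean shutdown forces a final sync
```

Note where the eventual-durability contract appears: between `write_page` returning and `sync` completing, the page exists only in process memory.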

Canonical instance: Litestream Writable VFS

From the 2026-01-29 shipping post:

"the VFS in write-mode disables polling. We assume a single writer, and no additional backups to watch. Next, we buffer. Writes go to a local temporary buffer ('the write buffer'). Every second or so (or on clean shutdown), we sync the write buffer with object storage. Nothing written through the VFS is truly durable until that sync happens."

(Source: sources/2026-01-29-flyio-litestream-writable-vfs)

Activation: LITESTREAM_WRITE_ENABLED=true alongside the read-side vfs=litestream URI parameter.
(Write mode changes the durability contract, so it is opt-in rather than a default.)

Why single-writer is load-bearing

Without a single-writer constraint, the pattern collapses into multi-writer distributed SQLite — a shape Ben Johnson calls "the Lament Configuration" that the writable VFS explicitly refuses to explore. Reasons:

  • Conflict resolution on SQLite pages is brain surgery. A cross-writer conflict on the same page is not reconcilable at the page-byte layer without application-semantic knowledge.
  • Multiple writers emitting LTX files concurrently breaks the compaction ladder's sortedness invariant.
  • The write buffer is per-process local. There's no cross-process visibility into another writer's uncommitted writes.

Single-writer is enforced socially ("we assume"), not cryptographically — an application running two write-enabled VFS instances against the same object-store URL will corrupt its own history. CASAAS could in principle gate this, but the writable VFS doesn't compose it in this shipping post.
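For contrast, a single-writer gate *could* be built from one conditional PUT on a well-known key. The sketch below is hypothetical: the shipping post explicitly does not do this, and `put_if_absent` stands in for an object-store compare-and-swap primitive (e.g. an S3 `If-None-Match: *` conditional PUT).

```python
import time
import uuid


class LeaseError(Exception):
    pass


def acquire_writer_lease(store, key="writer-lease", ttl=30.0):
    """Hypothetical single-writer gate: atomically claim one object-store
    key before enabling write mode. Litestream enforces single-writer
    socially, not like this. `store.put_if_absent(key, value)` must
    return False if the key already exists."""
    token = uuid.uuid4().hex
    deadline = time.time() + ttl
    if store.put_if_absent(key, f"{token}:{deadline}".encode()):
        return token                     # this process is the writer
    raise LeaseError("another write-enabled VFS instance holds the lease")
```

Without something like this, the failure mode in the text stands: two write-enabled instances against the same URL silently corrupt their shared history.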

Eventual durability is the price

The buffered-sync cadence defines the crash-loss window. Concretely:

  • Sync cadence ~1s → up to 1s of writes lost on crash.
  • Clean shutdown forces a final sync → zero loss on graceful stop.
  • Process crash / kernel panic / network partition mid-sync → partial loss (last sync's work discarded or reconstituted on recovery, mechanism not disclosed).

This is the concepts/eventual-durability contract — fine for workloads already operating under it (e.g. Fly.io Sprites' storage stack shares "this eventual durability property"), not fine for workloads that expect POSIX-ish fsync-to-the-wire semantics.
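The loss window scales linearly with both write rate and cadence, which is worth making explicit when tuning. A back-of-envelope sketch (the write rates are assumptions for illustration, not figures from the post):

```python
def worst_case_loss(commits_per_s, sync_interval_s):
    """Upper bound on committed-but-unsynced transactions lost if the
    process dies just before the next sync fires. Tightening the
    interval shrinks this bound but multiplies the object-store PUT
    rate (and cost) by the same factor."""
    return commits_per_s * sync_interval_s
```

At an assumed 200 commits/s with the post's ~1s cadence, up to ~200 transactions are exposed; a 100ms cadence cuts exposure 10x at 10x the PUT rate.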

Pairs with: background hydration

Writable-VFS mode solves the write path; it doesn't make steady-state reads fast (still Range GETs against object storage until the LRU cache warms). Pair with patterns/background-hydration-to-local-file — pull the full database into a local file in the background, transitioning reads over once it's complete — to eliminate steady-state read-side Range GETs. The two patterns are complementary features on the same writable VFS instance.
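The combined read-path precedence implied by pairing the two patterns can be sketched as a three-tier lookup. The ordering is inferred from the text (pending buffer first, hydrated local file once complete, Range GET as the cold path); the function and parameter names are assumptions, not Litestream internals.

```python
def read_page(pgno, write_buffer, hydrated_file, store):
    """Illustrative read precedence when writable-VFS mode is paired
    with background hydration."""
    if pgno in write_buffer:                 # this process's pending writes win
        return write_buffer[pgno]
    if hydrated_file is not None and hydrated_file.complete:
        return hydrated_file.read(pgno)      # local file: no network round trip
    return store.range_get(pgno)             # cold path: Range GET (LRU-cached)
```

Once hydration completes, the middle tier absorbs all steady-state reads and the Range GET tier goes quiet.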

Trade-offs

  • No multi-writer. Fundamental design limit; applications with write fan-out must use a different mechanism.
  • Durability is cadence-bounded. Faster sync → tighter loss window → higher object-store PUT rate + cost.
  • Write buffer is an in-process resource. Sizing, eviction, and back-pressure under write bursts need implementation-specific handling (the post doesn't disclose Litestream VFS's policy).
  • No read-your-writes guarantee across processes. A second VFS reader (read-only) opened against the same object-store URL won't see the writer's pending buffer until sync completes.
  • PITR interaction is unclear. PRAGMA litestream_time in write mode (if it's permitted) has no documented semantics. The single-writer-no-polling posture suggests the two modes are mutually exclusive.
  • Crash recovery semantics undocumented. What the writer observes on restart after an unclean shutdown (where does the next LTX TXID start, does the buffer survive, etc.) is not disclosed.

When it's the wrong shape

  • Write-heavy OLTP with strong durability needs. 1-second loss windows aren't acceptable for financial ledgers or anything with audit-trail obligations.
  • Multi-writer architectures. Any horizontal write scale-out needs a consensus or conditional-write lease layer — not this pattern.
  • Applications that never bounce. The cold-boot speed win of writable-VFS-plus-hydration matters for ephemeral-server deployments (Sprites, FaaS, short-lived sandboxes); a long-running writer-owns-the-database process doesn't benefit — regular Litestream-as-sidecar with local SQLite is strictly better.
  • Environments where FUSE / LiteFS is an option. LiteFS's writable-VFS surface is more mature; reach for it if your deployment tolerates FUSE.

Seen in

  • sources/2026-01-29-flyio-litestream-writable-vfs — canonical wiki instance. Writable mode activated via LITESTREAM_WRITE_ENABLED=true; writes buffered locally and synced every ~1s; L0 polling disabled; durability class matches Fly.io Sprites' eventual-durability envelope. Motivating consumer: Sprites' JuiceFS-lineage block-map metadata store that must serve writes milliseconds after a Sprite boots.