Eventual durability¶
Definition¶
A durability class in which a write is acknowledged to the application before it reaches the authoritative durability tier (typically object storage). A periodic sync moves recent writes from a local buffer to durable storage; writes between the last sync and a crash are lost.
The loss window is bounded by the sync cadence (e.g. ~1 second for Fly.io's Litestream writable VFS); it is never zero.
Contrast with strict durability (traditional fsync-to-the-wire / WAL-flush-before-ack): the write is acknowledged only after it is persistent. Contrast also with best-effort no-durability (in-memory only; a process restart loses everything).
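The buffered-write-then-periodic-sync mechanism the definition describes can be sketched as a toy in a few lines. This is illustrative only; the class name, method names, and list-backed "object store" are assumptions for the sketch, not Litestream's actual implementation.

```python
import threading


class EventuallyDurableLog:
    """Toy eventual-durability write path: ack from a local buffer,
    flush to the authoritative tier on a fixed cadence."""

    def __init__(self, object_store, cadence_s=1.0):
        self.object_store = object_store  # durable tier (here: a list)
        self.buffer = []                  # local write buffer; lost on crash
        self.cadence_s = cadence_s        # loss-window bound
        self.lock = threading.Lock()

    def write(self, record):
        # Acknowledged as soon as it lands in the local buffer --
        # BEFORE it reaches the durable tier.
        with self.lock:
            self.buffer.append(record)
        return "ack"

    def sync(self):
        # Runs every cadence_s seconds (or on clean shutdown):
        # move buffered writes to the authoritative tier.
        with self.lock:
            self.object_store.extend(self.buffer)
            self.buffer.clear()


store = []
log = EventuallyDurableLog(store)
log.write("row 1")
log.write("row 2")
# A crash at this point loses both rows: acked, but never synced.
log.sync()
# After the sync they are as durable as the object store itself.
print(store)  # ['row 1', 'row 2']
```

The crucial property is the ordering: `write()` returns before `sync()` runs, so the acknowledgment never implies persistence.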
Canonical wiki statement¶
Ben Johnson's 2026-01-29 shipping post for Litestream VFS writable mode:
"Writes go to a local temporary buffer ('the write buffer'). Every second or so (or on clean shutdown), we sync the write buffer with object storage. Nothing written through the VFS is truly durable until that sync happens. […] All storage on a Sprite shares this 'eventual durability' property, so the terms of the VFS write make sense here. They probably don't make sense for your application."
The Sprite substrate shares this property¶
The Sprite storage stack is eventual-durability end to end:
- Data chunks live on object storage. Writes to Sprite storage flow through a JuiceFS-lineage stack where chunks are buffered locally before being uploaded.
- Metadata lives in a SQLite database made durable by Litestream. Writes to the metadata DB land in local SQLite; Litestream ships LTX files to object storage on a sub-second cadence.
- Sparse NVMe cache sitting in front of the object-store root is "not a durability tier" — worker loss is a cache-miss event.
Every tier of the stack operates under the same durability class. The writable-VFS mode for Litestream is the mechanism by which that class is preserved when the Litestream VFS is used as the metadata-DB substrate itself.
Why it's an acceptable class for Sprites¶
Three things make eventual durability the right choice here:
- Sprites are single-tenant developer / agent sandboxes, not multi-tenant production services with audit-trail obligations. Losing the last second of work on a crash is acceptable if the cold-boot-to-first-write budget is the binding constraint.
- Object storage is 11-nines durable. Once a write reaches object storage, it is as durable as anything on the Internet. The loss window is ≤1 second; post-sync durability is enterprise-grade.
- Checkpoint / restore gives users an orthogonal recovery mechanism. Application state corrupted by a partial-sync crash can be rolled back to a checkpoint, not just patched forward.
Why it's not acceptable for many workloads¶
The writable-VFS post flags this directly: "They probably don't make sense for your application." Classes of application where eventual durability breaks:
- Financial ledgers, payment systems, billing. Losing the last second of writes is a compliance violation; write-ahead-logging to strictly-durable storage is non-negotiable.
- Audit trails. Missing records around a crash undermine the trail's completeness claim.
- Strong read-after-write across processes. A second process reading the object-store tier won't see the writer's pending buffer until the next sync; eventual durability ⇒ eventual visibility.
- Workloads assuming POSIX fsync semantics. Any app written under the assumption that fsync() means "on the disk" will over-trust the durability tier.
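The "eventual durability ⇒ eventual visibility" failure mode above is easy to demonstrate: a second process reading only the durable tier cannot see the writer's pending buffer. A minimal sketch, with hypothetical names; the dicts stand in for the object store and the writer-local buffer.

```python
durable = {}   # stands in for the object-store tier
pending = {}   # writer-local buffer, invisible to other processes


def writer_put(key, value):
    pending[key] = value  # acked immediately, not yet durable


def sync():
    # The periodic flush: pending writes become durable and visible.
    durable.update(pending)
    pending.clear()


def other_process_get(key):
    # A separate process reads the durable tier only; it has no
    # view of the writer's buffer.
    return durable.get(key)


writer_put("balance", 100)
print(other_process_get("balance"))  # None: acked but invisible
sync()
print(other_process_get("balance"))  # 100: visible only after the sync
```

This is why cross-process read-after-write consistency and eventual durability don't compose: the acknowledgment and the cross-process visibility event are separated by up to one sync interval.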
Sync cadence as the loss-window knob¶
The cadence is the design parameter of the class. Faster cadence:
- tighter loss window (less data at risk on crash);
- higher PUT rate on object storage (cost);
- more LTX files in the L0 layer before compaction.
Litestream VFS's choice of ~1 second matches its L0 compaction-level cadence — the write path and the near-realtime-replica read path both operate on the same time granularity.
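The knob's tradeoff is simple arithmetic. A back-of-envelope sketch, assuming one PUT per sync and a per-request price that is purely illustrative (not a quote for any provider):

```python
# Assumed price for illustration: ~$5 per million PUT requests.
PUT_PRICE_USD = 0.000005


def cadence_tradeoff(cadence_s):
    """Loss window, daily PUT volume, and rough monthly PUT cost
    for a given sync cadence (one PUT per sync, always-on writer)."""
    loss_window_s = cadence_s              # worst-case unsynced data
    puts_per_day = 86_400 / cadence_s      # seconds per day / cadence
    cost_per_month = puts_per_day * 30 * PUT_PRICE_USD
    return loss_window_s, puts_per_day, cost_per_month


for cadence in (0.1, 1.0, 10.0):
    w, p, c = cadence_tradeoff(cadence)
    print(f"cadence {cadence:>4}s: loss window {w}s, "
          f"{p:,.0f} PUTs/day, ~${c:.2f}/month")
```

Tightening the cadence 10x multiplies the PUT volume (and the L0 file count ahead of compaction) by the same factor, which is why the cadence is chosen to match the compaction granularity rather than pushed toward zero.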
Relationship to strict durability¶
| Class | Ack-before-durable? | Loss window | Example |
|---|---|---|---|
| Strict durability | No | 0 | Postgres synchronous_commit = on + synchronous_standby_names |
| Eventual durability | Yes | sync-cadence-bounded | Litestream VFS writable mode, Sprite storage |
| Best-effort | Yes | process lifetime | PRAGMA synchronous = OFF + no replication |
Strict durability trades latency for a zero-loss guarantee. Eventual durability trades a bounded loss window for cold-boot latency and write-path simplicity. They are not substitutable; workload characteristics pick one.
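The difference between the first two rows of the table reduces to ack ordering. A stripped-down sketch; `persist` stands in for whatever makes a write authoritative (fsync, replicated WAL flush, object-store PUT), and all names are illustrative.

```python
durable = []   # authoritative tier
pending = []   # eventual-durability write buffer


def persist(records):
    # Pretend this blocks until the records are truly persistent.
    durable.extend(records)


def strict_write(record):
    persist([record])  # persist FIRST; latency includes the flush
    return "ack"       # zero loss window


def eventual_write(record):
    pending.append(record)  # ack FIRST; flush happens at the next sync
    return "ack"            # loss window = time until that sync


strict_write("a")
eventual_write("b")
assert "a" in durable and "b" not in durable  # 'b' is at risk until sync
persist(pending)
pending.clear()
assert "b" in durable
```

Same API, same return value, entirely different guarantee behind the ack: that is why the classes are not substitutable.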
Seen in¶
- sources/2026-01-29-flyio-litestream-writable-vfs — canonical wiki statement of the term. Writable VFS mode buffers writes locally + syncs to object storage ~every second; "truly durable" is used to distinguish this class from strict durability. Explicitly flagged as matching Sprite substrate + not general-purpose.
- sources/2026-01-14-flyio-the-design-implementation-of-sprites — Sprite substrate's durability class implicit in the JuiceFS-on-object-storage + Litestream-for-metadata stack design; the 2026-01-29 post names it explicitly for the first time.
Related¶
- concepts/object-storage-as-disk-root — the authoritative-tier-is-object-store posture eventual durability composes with.
- concepts/single-writer-assumption — typically paired; multi-writer + eventual durability is usually untenable.
- concepts/read-through-nvme-cache — the sibling "local-is-cache-not-durability" posture on the read side.
- systems/litestream-vfs — the canonical implementation.
- systems/fly-sprites — the canonical substrate.
- patterns/writable-vfs-with-buffered-sync — the mechanism.
- companies/flyio.