

SQLite Virtual Filesystem (VFS)

Definition

A SQLite Virtual Filesystem (VFS) is a plugin layer inside SQLite that abstracts the operations SQLite performs on the underlying OS — open, read, write, lock, sync, size, truncate, delete — into a set of function pointers an extension can override. See the upstream docs: sqlite.org/vfs.html.

Where SQLite normally writes to a real file via the OS VFS, a custom VFS can transparently substitute any storage backend without the application knowing — in-memory, networked, object storage, another SQLite instance, etc.
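
The substitution idea can be sketched in Python. This is an illustrative analogy only: SQLite's real VFS is a C struct of function pointers (`sqlite3_vfs` / `sqlite3_io_methods`), and the interface and class names below are invented for this sketch.

```python
# Illustrative analogy of the VFS idea: the application talks to one storage
# interface, and the backend behind it can be swapped without the caller
# knowing. Names here are invented; SQLite's real API is C function pointers.
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """A few of the operations a VFS must supply (read, write, size, ...)."""
    @abstractmethod
    def read(self, offset: int, length: int) -> bytes: ...
    @abstractmethod
    def write(self, offset: int, data: bytes) -> None: ...
    @abstractmethod
    def size(self) -> int: ...

class InMemoryBackend(StorageBackend):
    """One substitutable backend; could just as well be networked or S3."""
    def __init__(self):
        self.buf = bytearray()
    def read(self, offset, length):
        return bytes(self.buf[offset:offset + length])
    def write(self, offset, data):
        end = offset + len(data)
        if len(self.buf) < end:                 # grow the "file" on demand
            self.buf.extend(b"\x00" * (end - len(self.buf)))
        self.buf[offset:end] = data
    def size(self):
        return len(self.buf)

db = InMemoryBackend()
db.write(0, b"SQLite format 3\x00")  # real SQLite files start with this header
print(db.read(0, 15))                # b'SQLite format 3'
```

The caller never branches on which backend it holds; that indirection is exactly what lets a custom VFS substitute in-memory, networked, or object storage transparently.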

Why it's the FUSE alternative

Both LiteFS and (post-2025-05-20) Litestream need to intercept SQLite's I/O to do their job — LiteFS to capture transactions for replication, Litestream to serve pages from object storage in a read-replica. Two integration points exist:

  1. FUSE filesystem. LiteFS's primary approach — present a fake filesystem at the OS level; SQLite's normal file operations go through the FUSE layer, which captures them. The payoff, per the post: "enough that users could use SQLite replicas just like any other database." But: "installing and running a whole filesystem (even a fake one) is a lot to ask of users." (Source: sources/2025-05-20-flyio-litestream-revamped)

  2. SQLite VFS. Load an extension into the application that overrides SQLite's storage layer directly — no FUSE, no kernel module, no root. LiteFS ships LiteVFS for this exact use case: "LiteFS can function without the FUSE filesystem if you load an extension into your application code, LiteVFS. LiteVFS is a SQLite Virtual Filesystem (VFS). It works in a variety of environments, including some where FUSE can't, like in-browser WASM builds."

The trade is surface area — FUSE intercepts every I/O transparently, while the VFS route requires the application to link the extension — but the deployment story is dramatically simpler.

Use in revamped Litestream

From the 2025-05-20 post:

"What we're doing next is taking the same trick and using it on Litestream. We're building a VFS-based read-replica layer. It will be able to fetch and cache pages directly from S3-compatible object storage."

Operationally: the application links the Litestream-VFS extension; SQLite reads go through the extension; pages come from a local cache, or on miss from Tigris or S3. No FUSE mount, no separate replica process, no local WAL handling on the replica side.

Caveat the post explicitly names: "this approach isn't as efficient as a local SQLite database. That kind of efficiency, where you don't even need to think about N+1 queries because there's no network round-trip, is part of the point of using SQLite."

2025-12-11: Litestream VFS ships

The proof-of-concept teased in 2025-05-20 and explicitly flagged as "not yet shipped" in the 2025-10-02 v0.5.0 post is now live as Litestream VFS (Source: sources/2025-12-11-flyio-litestream-vfs). Loadable via the standard SQLite extension mechanism:

sqlite> .load litestream.so
sqlite> .open file:///my.db?vfs=litestream

Concrete disclosures the shipping post adds to the VFS concept:

  1. Only the read side is overridden — "Litestream VFS handles only the read side of SQLite. Litestream itself, running as a normal Unix program, still handles the 'write' side." The existing write path is untouched.
  2. Page lookup via LTX EOF index trailer — "LTX trailers include a small index tracking the offset of each page in the file. By fetching only these index trailers from the LTX files we're working with (each occupies about 1% of its LTX file), we can build a lookup table of every page in the database." See concepts/ltx-index-trailer.
  3. Range GET against S3-compatible storage — page reads are resolved by HTTP byte-range GETs against the object store. Canonical instance of patterns/vfs-range-get-from-object-store.
  4. LRU cache of hot pages — "Most databases have a small set of 'hot' pages — inner branch pages or the leftmost leaf pages for tables with an auto-incrementing ID field." The 2025-05-20 "isn't as efficient as a local SQLite database" caveat is partially mitigated via the LRU cache.
  5. SQL-level PITR — PRAGMA litestream_time = '5 minutes ago'; redirects reads in the current session to the database state at the chosen timestamp. See concepts/pragma-based-pitr.
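
Points 2–4 compose into one read path: trailer indexes locate pages, byte-range requests fetch them, and an LRU keeps the hot ones local. The sketch below is hedged throughout — the trailer is modeled as a plain list of (page, offset) pairs and the class and method names are invented; real LTX trailers are a binary structure.

```python
# Hedged sketch of the composed read path: build a page->offset lookup table
# from per-file index trailers, resolve reads as byte-range requests against
# object storage, and front it with a small LRU of hot pages.
from collections import OrderedDict

PAGE_SIZE = 4096

class LtxReplica:
    def __init__(self, range_get, cache_pages=64):
        self.range_get = range_get   # (ltx_file, offset, length) -> bytes
        self.index = {}              # pgno -> (ltx_file, offset)
        self.lru = OrderedDict()     # pgno -> page bytes, LRU order
        self.cache_pages = cache_pages

    def load_trailer(self, ltx_file, trailer):
        # trailer: iterable of (pgno, offset). Fetching only these trailers
        # (~1% of each LTX file) locates every page without downloading
        # the files themselves.
        for pgno, offset in trailer:
            self.index[pgno] = (ltx_file, offset)  # later files supersede

    def read_page(self, pgno):
        if pgno in self.lru:                       # hot page: no network
            self.lru.move_to_end(pgno)
            return self.lru[pgno]
        ltx_file, offset = self.index[pgno]
        page = self.range_get(ltx_file, offset, PAGE_SIZE)  # one Range GET
        self.lru[pgno] = page
        if len(self.lru) > self.cache_pages:
            self.lru.popitem(last=False)           # evict least recently used
        return page
```

Loading trailers newest-last means a page rewritten in a later LTX file shadows the older copy, which is what lets the lookup table describe the current database state from a stack of incremental files.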

Environments where VFS is the only option

  • In-browser WASM SQLite builds — no FUSE available in the browser; VFS is the only surface.
  • Restricted FaaS / PaaS environments — no kernel module loading, no privileged filesystem mount, but user code can link an extension.
  • Platforms with tight sandboxing — same reasoning: you control the process, not the kernel.

Seen in

  • sources/2025-05-20-flyio-litestream-revamped — canonical wiki introduction; VFS as the replacement for FUSE in the Litestream read-replica design; LiteVFS as the precedent from LiteFS.
  • sources/2025-12-11-flyio-litestream-vfs — shipping disclosure. Litestream VFS loads as a standard SQLite extension (.load litestream.so), overrides only the read side, resolves page reads via HTTP Range GETs against LTX files in object storage using the ~1%-sized EOF index trailer per file, fronted by an LRU cache of SQLite's hot B-tree pages. Adds SQL-level PITR (PRAGMA litestream_time) and L0-polling-based near-realtime replica behaviour. Canonical wiki instance of VFS-range-GET-from-object-store composition.