
PATTERN

Mountable persistent storage

Mountable persistent storage is the pattern of presenting a durable external store (object storage, network filesystem, blob service) as a local filesystem partition inside an ephemeral-compute environment (container, sandbox, execution environment), so application code that uses read() / write() / open() / readdir() keeps working without a storage-API rewrite.

It's the ergonomic answer to concepts/container-ephemerality and similar shapes in serverless or sandbox tiers: the platform owns durability; the application owns "what does my working directory look like".

Mechanics (typical)

  • A platform SDK call or runtime config associates a bucket/volume with a path inside the compute environment at start time.
  • Reads and writes through the path are translated by the platform into object-store PUT/GET/DELETE (or network-fs operations), with semantic fidelity ranging from leaked S3 object semantics through to full POSIX filesystem semantics (concepts/file-vs-object-semantics).
  • The lifecycle of the mount is decoupled from the lifecycle of the compute instance — scheduling, scaling, eviction, replacement all preserve the data.
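The path-to-object translation in the second bullet can be sketched with a toy in-memory "bucket"; every name here is hypothetical, standing in for whatever SDK the platform actually exposes:

```python
import errno
import posixpath

class ObjectBackedMount:
    """Toy mount: translates path-style reads/writes into object-store
    PUT/GET/DELETE against a dict standing in for a bucket."""

    def __init__(self, bucket: dict, mount_point: str = "/data"):
        self.bucket = bucket          # key -> bytes; stands in for the store
        self.mount_point = mount_point

    def _key(self, path: str) -> str:
        # /data/logs/a.txt -> logs/a.txt
        return posixpath.relpath(path, self.mount_point)

    def write(self, path: str, data: bytes) -> None:
        self.bucket[self._key(path)] = data          # PUT

    def read(self, path: str) -> bytes:
        try:
            return self.bucket[self._key(path)]      # GET
        except KeyError:
            raise FileNotFoundError(errno.ENOENT, path)

    def unlink(self, path: str) -> None:
        self.bucket.pop(self._key(path), None)       # DELETE

    def readdir(self, path: str) -> list[str]:
        # Object stores have no real directories: a listing is a prefix scan,
        # which is exactly why listing consistency varies by backend.
        prefix = self._key(path).rstrip("/") + "/"
        if prefix == "./":  # mount root
            prefix = ""
        return sorted({k[len(prefix):].split("/")[0]
                       for k in self.bucket if k.startswith(prefix)})
```

A real implementation sits below the filesystem layer (e.g. a FUSE driver), so application code never sees these calls; the sketch only shows the shape of the translation.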

Why this pattern over "call the storage API directly"

  • Unmodified existing applications — containerised agents, Docker-packaged services, off-the-shelf tools assume /data/... exists and survives restarts. Rewriting them against a new object-store client is often infeasible, or at least politically expensive.
  • Keeps the ergonomic simplicity of local files — command-line tools like tar, rsync, ffmpeg, and sqlite work out of the box.
  • Platform owns durability — applications don't get to corrupt their own state by misusing a storage client.
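The payoff of the first two bullets is that ordinary file code needs no changes at all. A sketch, using a temp directory to stand in for a platform-provided mount point (the real path would come from the platform's mount config):

```python
import pathlib
import sqlite3
import tempfile

# Stand-in for a platform-provided mount point such as /data.
mount = pathlib.Path(tempfile.mkdtemp())

# Unmodified application code: plain filesystem calls, no storage client.
state = mount / "app-state.json"
state.write_text('{"runs": 1}')
assert state.read_text() == '{"runs": 1}'

# sqlite works through the mount path like any other file.
db = sqlite3.connect(mount / "jobs.db")
db.execute("CREATE TABLE IF NOT EXISTS jobs (id INTEGER)")
db.execute("INSERT INTO jobs VALUES (1)")
db.commit()
assert db.execute("SELECT count(*) FROM jobs").fetchone()[0] == 1
```

With a real mount, both files would survive container replacement: the platform, not the application, provides the durability. (Note the caveats below — sqlite in particular depends on locking semantics that not every backend honours.)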

Caveats

  • Not a drop-in for all file semantics: atomic renames, fsync guarantees, random-write performance, directory listing consistency, and locking behaviour vary widely by backend. Agents that rely on POSIX-strict semantics can still hit surprises.
  • Cost and latency: every read/write traverses the backing store. Hot paths may want an in-memory cache or a dedicated ephemeral scratch area on top.
  • Visibility across mounts: concurrent writers sharing the same bucket may see stale reads depending on the backing store's consistency model.

