PATTERN
Mountable persistent storage¶
Mountable persistent storage is the pattern of presenting a durable external store (object storage, network filesystem, blob service) as a local filesystem partition inside an ephemeral-compute environment (container, sandbox, execution environment), so application code that uses `read()` / `write()` / `open()` / `readdir()` keeps working without a storage-API rewrite.
It's the ergonomic answer to concepts/container-ephemerality and similar shapes in serverless or sandbox tiers: the platform owns durability; the application owns "what does my working directory look like".
Mechanics (typical)¶
- A platform SDK call or runtime config associates a bucket/volume with a path inside the compute environment at start-time.
- Reads and writes through the path are translated by the platform into object-store PUT/GET/DELETE (or network-fs operations), with semantic fidelity ranging from raw S3 object semantics leaking through, up to full POSIX filesystem semantics (concepts/file-vs-object-semantics).
- The lifecycle of the mount is decoupled from the lifecycle of the compute instance — scheduling, scaling, eviction, replacement all preserve the data.
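The translation step above can be sketched as a toy in-process layer. This is purely illustrative and assumes nothing about any real platform: paths under the mount point map one-to-one to object keys, writes become PUTs, reads become GETs, and directory listing becomes a key-prefix scan over a stand-in in-memory "bucket".

```python
# Toy sketch of a mount translation layer (illustrative, not any platform's code).
# Paths under MOUNT_POINT map 1:1 to object keys; write -> PUT, read -> GET.
from posixpath import relpath

MOUNT_POINT = "/data"
object_store: dict[str, bytes] = {}  # stand-in for a bucket


def _key(path: str) -> str:
    # "/data/a/b.txt" -> "a/b.txt"
    return relpath(path, MOUNT_POINT)


def write(path: str, data: bytes) -> None:
    object_store[_key(path)] = data  # PUT


def read(path: str) -> bytes:
    return object_store[_key(path)]  # GET


def readdir(path: str) -> list[str]:
    prefix = _key(path).rstrip("/") + "/"
    if prefix == "./":  # mount root
        prefix = ""
    # A directory listing is a key-prefix scan over the store, which is one
    # reason listing consistency varies by backend.
    return sorted({k[len(prefix):].split("/")[0]
                   for k in object_store if k.startswith(prefix)})
```

Real implementations sit below the syscall layer (FUSE, NFS client, or a platform hook) rather than replacing `open()`/`read()` calls, but the path-to-key mapping is the same idea.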
Why this pattern over "call the storage API directly"¶
- Unmodified existing applications — containerised agents, Docker-packaged services, off-the-shelf tools assume `/data/...` exists and survives restarts. Re-writing them against a new object-store client is infeasible or politically expensive.
- Keeps the ergonomic simplicity of local files — shell tools like `tar`, `rsync`, `ffmpeg`, `sqlite` work out of the box.
- Platform owns durability — applications don't get to corrupt their own state by misusing a storage client.
Caveats¶
- Not a drop-in for all file semantics: atomic renames, fsync guarantees, random-write performance, directory listing consistency, and locking behaviour vary widely by backend. Agents that rely on POSIX-strict semantics can still hit surprises.
- Cost and latency: every read/write traverses the backing store. Hot paths may want an in-memory cache or a dedicated ephemeral scratch area on top.
- Visibility across mounts: concurrent writers sharing the same bucket may see stale reads depending on the backing store's consistency model.
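One concrete instance of the semantics caveat is the write-to-temp-then-rename idiom that many applications rely on for crash-safe updates. The sketch below shows the idiom itself, run against a local directory; on a POSIX filesystem `os.replace` is atomic, whereas an object-backed mount may decompose the rename into copy-plus-delete, losing exactly that guarantee.

```python
import os
import tempfile


def atomic_write(path: str, data: bytes) -> None:
    """Write-to-temp-then-rename: readers see either the old or the new
    contents, never a partial file -- on a POSIX filesystem. An
    object-backed mount may turn the rename into copy+delete with a
    visible intermediate state."""
    d = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # fsync guarantees also vary by backend
        os.replace(tmp, path)     # atomic on POSIX; maybe not on a mount
    except BaseException:
        os.unlink(tmp)
        raise
```

Agents that depend on this idiom (SQLite journals, lockfiles, checkpoint files) are the ones most likely to hit the surprises described above.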
Instances¶
- Cloudflare Sandbox SDK — `sandbox.mountBucket()` presents an R2 bucket as a filesystem partition inside a Cloudflare Container. The canonical wiki instance; see sources/2026-01-29-cloudflare-moltworker-self-hosted-ai-agent.
- AWS S3 Files — mounts an S3 bucket as an NFS filesystem on EC2 / ECS — same pattern at a different layer (sources/2026-04-07-aws-s3-files-mount-any-s3-bucket-as-a-nfs-file-system-on-ec2-ecs).
Seen in¶
- sources/2026-01-29-cloudflare-moltworker-self-hosted-ai-agent — canonical wiki instance. Moltworker uses `sandbox.mountBucket()` so Moltbot's ephemeral Cloudflare Container has a durable working directory for session memory, conversations, and agent-generated assets — zero Moltbot code changes required.
Related¶
- concepts/container-ephemerality — the problem shape this pattern addresses.
- systems/cloudflare-sandbox-sdk — the specific SDK exposing `mountBucket()`.
- systems/cloudflare-r2 — typical durable backing store.
- systems/s3-files — parallel instance at AWS's layer.
- concepts/file-vs-object-semantics — the semantic-gap caveat around filesystem-over-object-storage.