Transparent cluster code distribution¶
The runtime property where code (module definitions, function bodies, local bindings) written on one node of a cluster becomes callable on every other node of the cluster without an explicit build, package, or deploy step. The user writes code in one place; it executes anywhere.
Definition¶
In the canonical BEAM realisation, any module defined on a connected node is shipped across the cluster lazily, on first reference. A Livebook cell that defines a new Elixir module can be called from a FLAME.call block running on a remote executor without deploying that module to the executor first.
"Livebook will automatically synchronize your notebook dependencies as well as any module or code defined in your notebook across nodes. That means any code we write in our notebook can be dispatched transparently out to arbitrarily many compute nodes, without ceremony."
— (Source: sources/2024-09-24-flyio-ai-gpu-clusters-from-your-laptop-with-livebook)
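The load-bearing mechanism can be sketched in a few lines: BEAM modules compile to a bytecode binary that any node can load at runtime, and shipping code is just shipping that binary. This is a minimal single-node sketch; the `Greeter` module is invented for illustration, and the `:erpc` line in the comment shows how the same binary could reach a peer node, not how Livebook actually does it.

```elixir
# Compile a module at runtime; the result includes its raw BEAM bytecode.
# (On a real cluster, that binary could be handed to a peer with
# :erpc.call(peer, :code, :load_binary, [module, ~c"greeter.beam", binary]).)
[{module, binary}] =
  Code.compile_string("""
  defmodule Greeter do
    def hello(name), do: "hello, " <> name
  end
  """)

# Simulate the receiving node: load the module from its raw bytecode.
{:module, Greeter} = :code.load_binary(module, ~c"greeter.beam", binary)

IO.puts(Greeter.hello("cluster"))
```

No build artifact, no code path, no file on disk: the binary in memory is the unit of distribution.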
A second property falls out of the same primitive: remote introspection for auto-completion. When Livebook attaches to a running remote Elixir application, completion results come from the modules defined on the remote node, exposed through the BEAM's introspection APIs. The notebook becomes a thin UI over the cluster's existing introspection surface.
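Completion-by-introspection can be sketched with `:erpc`: ask a node which modules it has loaded and what a given module exports, which is exactly the data a completion UI needs. The target here is the local node so the snippet runs standalone; in the Livebook scenario it would be the remote application's node name. This is an illustrative sketch, not Livebook's implementation.

```elixir
# "Remote" introspection against the local node for demonstration.
target = node()

# All loaded modules on the target node, and one module's exported
# {function, arity} pairs -- the raw material for auto-completion.
loaded = :erpc.call(target, :code, :all_loaded, [])
exports = :erpc.call(target, Enum, :module_info, [:exports])

IO.puts("#{length(loaded)} modules loaded on #{inspect(target)}")
IO.inspect(Enum.take(exports, 5))
```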
What this requires¶
- Runtime-level code shipping, not just message passing. Many distributed runtimes can pass data across nodes but require code to be statically built into every node's image. Transparent code distribution requires the runtime itself to ship module bytecode on demand.
- A single, homogeneous runtime across the cluster. Code shipped from node A only executes on node B if B runs the same VM; mixing runtimes breaks the property.
- A cluster-identity story. Some form of node-addressing + authentication so nodes accept code from the "right" peers. In BEAM's case this is distributed Erlang cookies + node names.
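The cluster-identity piece looks roughly like the following sketch. It is not runnable standalone (it needs epmd and a reachable peer), and every node name and the cookie value are made up for illustration.

```elixir
# Give this VM a node name, set the shared-secret cookie, and connect.
# Node.connect/1 only returns true if the peer's cookie matches ours.
Node.start(:"notebook@127.0.0.1")
Node.set_cookie(:made_up_shared_secret)
true = Node.connect(:"executor@10.0.0.7")
Node.list()
```

The cookie is the entire authentication story in stock distributed Erlang, which is why the "right peers" framing above matters: code distribution is only as safe as the cluster membership.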
Why it matters (wiki framing)¶
The 2024-09-24 Fly.io post makes the claim explicit: this property is what makes notebook-driven elastic compute feel like local code. If every FLAME.call had to ship a deployment artifact instead of just calling a locally-defined function, the illusion that the notebook is the program, and that adding GPU nodes is just a resource knob, would collapse.
The delivery-timeline evidence Fly.io cites (Livebook/FLAME shipped in four months of part-time work; remote dataframes and distributed GC in "a weekend") is Fly.io's framing of what comes for free when you build on a runtime that already has this primitive, versus building it yourself.
Seen in¶
- sources/2024-09-24-flyio-ai-gpu-clusters-from-your-laptop-with-livebook — canonical wiki instance; Livebook exposes BEAM's code distribution for notebook-driven elastic GPU workflows.
Related¶
- systems/erlang-vm — BEAM's distributed-Erlang primitive is the load-bearing instance of this property.
- systems/livebook — notebook client that surfaces the primitive to end users.
- systems/flame-elixir — framework that consumes the primitive to dispatch arbitrary code to a pool of executors.
- concepts/actor-model — the concurrency model BEAM implements; code distribution is a separate dimension but they compose.
- patterns/notebook-driven-elastic-compute — the end-user pattern this concept enables.