FLAME (Elixir)¶
FLAME is an Elixir framework — introduced in Fly.io's Rethinking
Serverless with FLAME
post — that lets an application treat itself as elastically scalable
without being rewritten into a serverless architecture. The
framework manages a pool of executor nodes; any block of
code wrapped in FLAME.call runs on an executor from that pool. The
rest of the program is written as if everything were local.
Key architectural properties¶
- FLAME.call as the unit of remote execution. Mark off any section of code with FLAME.call; FLAME dispatches it to an executor in the configured pool and returns the result inline. "It's the upside of serverless without committing yourself to blowing your app apart into tiny, intricately connected pieces." (Source: sources/2024-09-24-flyio-ai-gpu-clusters-from-your-laptop-with-livebook)
- Pool configured by min/max/concurrency. The user declares minimum and maximum executor instance counts and per-instance concurrency; FLAME handles the rest — provisioning, placing work, shutting down idle executors.
- Idle-timeout shutdown per node, full termination on disconnect. Each executor shuts down after an idle timeout; if the controlling Livebook runtime disconnects, the whole cluster terminates. This is what gives notebook-driven workflows scale-to-zero economics.
- Substrate-agnostic. The original implementation targets Fly Machines; as of Livebook v0.14.1, Livebook + FLAME also runs on Kubernetes (Michael Ruoss's contribution). The pattern itself is generic — patterns/framework-managed-executor-pool — and the substrate is pluggable.
- Composes natively with Livebook code distribution. Because FLAME executors run in the same BEAM cluster, a module defined in a Livebook cell is callable from a FLAME.call block on a remote executor without any deploy step — see concepts/transparent-cluster-code-distribution.
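The properties above map onto a small API surface. A rough sketch of a pool declaration and a remote call follows; the pool name MyApp.FFmpegRunner and the option values are illustrative assumptions, not taken from the source, though min/max/max_concurrency/idle_shutdown_after are the pool knobs the properties describe:

```elixir
# In the application's supervision tree: declare a named executor pool.
# Name and numbers are illustrative, not from the source.
children = [
  {FLAME.Pool,
   name: MyApp.FFmpegRunner,     # handle later passed to FLAME.call
   min: 0,                       # scale to zero when idle
   max: 10,                      # upper bound on executor instances
   max_concurrency: 5,           # concurrent calls per executor
   idle_shutdown_after: 30_000}  # ms of idleness before an executor exits
]

# Anywhere in the app: wrap ordinary code in FLAME.call.
# The block runs on an executor from the pool; the result returns inline.
result =
  FLAME.call(MyApp.FFmpegRunner, fn ->
    System.cmd("ffmpeg", ["-i", "input.mp4", "frames/frame_%03d.png"])
  end)
```

With min: 0 the pool holds no executors until the first call arrives, which is what makes the idle-timeout and disconnect-termination behaviour add up to scale-to-zero.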
Use-case shape¶
FLAME's own motivating example was inline ffmpeg calls written as
normal code but dispatched to remote executors. The 2024-09-24 post
extends the pattern into AI workflows:
- Llama-over-video-stills batch. ffmpeg extracts stills; each still is sent to Llama on a GPU Fly Machine via FLAME.call; results stream back. As nodes finish their per-video work, the framework dispatches the next video until the bucket is drained.
- 64-node BERT hyperparameter tuning. FLAME provisions 64 L40S Fly Machines; each compiles a different BERT variant and fine-tunes it on the same patent corpus; loss curves stream back to the driving Livebook in real time.
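The stills fan-out in the first use case can be sketched as below; MyApp.LlamaRunner, describe_still/1, and the concurrency figure are hypothetical stand-ins for the GPU pool and the model call, not identifiers from the source:

```elixir
# Hypothetical sketch: dispatch each extracted still to a GPU executor.
# MyApp.LlamaRunner is an assumed FLAME.Pool name; describe_still/1 is a
# placeholder for the actual Llama inference call on the executor.
stills = Path.wildcard("frames/*.png")

captions =
  stills
  |> Task.async_stream(
    fn still ->
      FLAME.call(MyApp.LlamaRunner, fn -> describe_still(still) end)
    end,
    max_concurrency: 8,     # in-flight calls from the driver side
    timeout: :infinity      # GPU inference can be slow; don't kill tasks
  )
  |> Enum.map(fn {:ok, caption} -> caption end)
```

Task.async_stream gives back-pressure on the driver while FLAME's own max_concurrency bounds work per executor, so the pool drains the batch at whatever rate the provisioned machines sustain.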
Seen in¶
- sources/2024-09-24-flyio-ai-gpu-clusters-from-your-laptop-with-livebook — canonical wiki instance; covers both the video-processing and GPU-cluster use cases and the framework's pool semantics.
Related¶
- systems/livebook — the typical driver of FLAME pools in notebook workflows.
- systems/erlang-vm — FLAME is possible because BEAM already handles clustering, code distribution, and messaging.
- systems/fly-machines — original executor substrate.
- systems/kubernetes — alternative substrate (Livebook v0.14.1+).
- systems/nx-elixir — AI workloads commonly dispatched through FLAME to GPU executors.
- concepts/scale-to-zero — FLAME's pool termination on disconnect is a clean scale-to-zero realisation.
- patterns/framework-managed-executor-pool — the general architectural pattern.
- patterns/notebook-driven-elastic-compute — the end-to-end shape Livebook + FLAME + Fly Machines instantiate.