PATTERN Cited by 1 source
Multi-runtime function platform¶
Multi-runtime function platform is the platform-design pattern of shipping multiple native language runtimes within a single function product, where the customer selects the runtime per deployment via config — without emulation or compatibility shims.
The pattern¶
One deployment artefact (typically a codebase + package.json or equivalent manifest) can run on runtime A or runtime B on the same platform. The platform:
- Runs each runtime natively — no emulation layer; "code runs exactly as it does locally". Runtime-specific APIs are not polyfilled into a unified surface.
- Exposes runtime selection as a config axis — a field in a manifest file (e.g. vercel.json's bunVersion) picks the runtime. Absence of the field falls back to a default.
- Integrates observability, logging, and monitoring uniformly across runtimes — customers don't need separate pipelines for each runtime's output.
- Provides the same substrate under each runtime — billing model, concurrency handling, egress controls apply regardless of the runtime choice.
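The config-axis bullet implies a tiny resolution rule: field present selects the alternate runtime, field absent falls back to the default. A minimal sketch in TypeScript, using the bunVersion field name from the Vercel instance; the resolver itself is hypothetical, not any platform's actual code:

```typescript
// Hypothetical sketch: resolve the runtime for a deployment from its
// manifest. The bunVersion field name follows the Vercel launch; the
// resolver logic is illustrative only.

type Runtime = "node" | "bun";

interface Manifest {
  bunVersion?: string; // presence opts the deployment into Bun
}

// Absence of the field falls back to the platform default (Node.js).
function resolveRuntime(manifest: Manifest): Runtime {
  return manifest.bunVersion !== undefined ? "bun" : "node";
}
```

Keeping the selection to a single optional field is what enables the test-and-revert workflow: deleting the field reverts to the default runtime with no code change.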
Canonical instance (2026-04-21 Vercel)¶
Vercel Functions ships Node.js (default) and Bun (public beta) natively on the same Fluid compute substrate.
Switching runtime is a single-line change to vercel.json:
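The snippet itself is not reproduced here; a minimal sketch of the manifest change, with the bunVersion field name from the launch and an assumed semver-style value:

```json
{
  "bunVersion": "1.x"
}
```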
No code changes required. Vercel's verbatim framing:
Because Vercel runs native Node.js and Bun runtimes, code runs exactly as it does locally. No emulation or compatibility layers, just full access to each runtime's native capabilities. Switching between them is a configuration change in vercel.json.
(Source: sources/2026-04-21-vercel-bun-runtime-on-vercel-functions)
Contrast with peer approaches¶
- Single-runtime-per-platform (classical AWS Lambda circa 2015, classical Cloudflare Workers) — one runtime baked in; custom runtimes are possible but an unsupported edge case.
- Emulation / compatibility shim (Cloudflare Workers' nodejs_compat) — one native runtime (V8 isolate) plus a shim layer making Node.js APIs available. Preserves portability but pays an emulation tax; not all Node APIs work (Cloudflare's 2026-01-29 measurement: 98.5% compatibility on the top 1,000 npm packages, after filtering).
- Per-function runtime lock (AWS Lambda's per-function Runtime setting) — a multi-runtime platform, but the runtime is fixed per function at deployment, not switchable by config within the same function. No performance-axis test-and-revert workflow.
Multi-runtime function platform differs from all three by: (a) running both runtimes natively; (b) making selection a per-project config decision, not a per-function deployment decision.
Requirements on the platform¶
- Each runtime must be packaged for the platform — separate binaries / bootstrap paths. Not trivial; runtime maintenance cost is superlinear in N runtimes.
- Billing model must not privilege one runtime — Active CPU pricing is runtime-agnostic and aligns cost with the measurable runtime-dependent variable (CPU time).
- Observability integration must be uniform — customers switching runtimes shouldn't also need to switch telemetry pipelines.
- Framework support must be tracked per runtime — Vercel's 2026-04-21 launch supports Next.js, Express, Hono, Nitro on Bun; "support for additional frameworks coming soon".
Why it's a non-trivial pattern¶
- Runtime-maintenance cost. Each runtime gets its own upgrade cadence, security-patch schedule, and dependency-graph surface. N runtimes = N parallel tracks.
- Testing surface. Customer's dependencies must be tested under each runtime; API-compatibility drift between runtimes is a real cost surface (concepts/runtime-choice-per-workload). Vercel's explicit advice: "Test your dependencies under Bun before migrating production traffic."
- Customer-education cost. Customers need to know which runtime wins for which workload shape; rules of thumb need documentation and worked examples. See patterns/workload-aware-runtime-selection.
When to apply¶
- Platform has workloads with divergent performance profiles (CPU-bound SSR + I/O-bound APIs + cold-start-sensitive low-volume endpoints).
- At least one runtime has materially different performance characteristics on a subset of workloads (Bun: 28% time-to-last-byte advantage on CPU-bound Next.js SSR vs Node.js).
- Platform's billing model rewards performance differences linearly (active-CPU or CPU-time billing; not pure wall-clock).
- Customer base is large enough that the per-runtime maintenance cost amortises across enough workloads.
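The linear-reward criterion can be made concrete with a small worked example. Only the 28% figure comes from the launch; the CPU and I/O-wait seconds are assumed for illustration, and the gain is treated as a pure CPU-time reduction:

```typescript
// Illustrative arithmetic (all inputs assumed except the 28% reduction):
// under active-CPU billing a runtime that cuts CPU time cuts the bill
// proportionally; under wall-clock billing, idle I/O wait dilutes the saving.

const cpuSecondsNode = 100; // assumed CPU time per 1k requests on Node.js
const cpuSecondsBun = cpuSecondsNode * (1 - 0.28); // 28% less CPU work
const ioWaitSeconds = 200; // assumed I/O wait, identical on both runtimes

// Active-CPU billing: charge proportional to CPU seconds only.
const activeCpuSaving = 1 - cpuSecondsBun / cpuSecondsNode; // 0.28

// Wall-clock billing: charge proportional to CPU + idle wait.
const wallClockSaving =
  1 - (cpuSecondsBun + ioWaitSeconds) / (cpuSecondsNode + ioWaitSeconds); // ~0.093
```

Under active-CPU billing the full 28% shows up on the invoice; under wall-clock billing the same runtime win shrinks to under 10%, which weakens the customer's incentive to exercise the runtime-choice axis at all.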
When not to apply¶
- Single dominant workload shape (pure API gateway, pure webhook receiver) — runtime choice doesn't differentiate.
- Small customer base — maintenance cost of multiple runtimes outweighs customer benefit.
- Strict compatibility requirement — a single canonical runtime keeps the support matrix manageable.
Seen in¶
- sources/2026-04-21-vercel-bun-runtime-on-vercel-functions — canonical wiki instance. Vercel Functions adds Bun as a second runtime alongside Node.js, switchable per project via bunVersion in vercel.json, both running natively on Fluid compute.
Related¶
- systems/vercel-functions — the product instantiating the pattern.
- systems/bun — second runtime in the launch.
- systems/nodejs — default runtime.
- systems/vercel-fluid-compute — substrate shared by both runtimes.
- concepts/active-cpu-pricing — billing model that rewards runtime-choice performance differences.
- concepts/runtime-choice-per-workload — the design axis this pattern opens for customers.
- patterns/workload-aware-runtime-selection — the customer-side pattern.