PATTERN
Workload-aware runtime selection¶
Workload-aware runtime selection is the customer-side pattern that pairs with multi-runtime function platforms: choose the runtime for each workload based on its dominant cost axis, not a blanket platform-wide runtime commitment.
The pattern¶
- Classify workloads by dominant cost axis:
  - CPU-bound rendering (SSR frameworks, template evaluation, JSON transforms, crypto)
  - I/O-bound APIs (database reads, downstream HTTP calls)
  - Cold-start-sensitive endpoints (low-traffic, strict p99 latency SLA)
  - Compatibility-locked (workload depends on a specific runtime's native APIs or a mature library's runtime assumptions)
- Match runtime to workload:
  - CPU-bound → the fastest runtime on the platform (e.g. Bun, which per Vercel's 2026-04-21 data delivers a 28 % TTLB win on Next.js SSR).
  - I/O-bound → either runtime (the choice doesn't differentiate; both spend their time waiting).
  - Cold-start-sensitive → the mature runtime (per Vercel's 2026-04-21 data, Node.js has better cold-start latency than Bun).
  - Compatibility-locked → the runtime supporting the dependency surface (typically Node.js, given its ecosystem depth).
- Test dependencies under target runtime before cutover — "Test your dependencies under Bun before migrating production traffic to confirm expected behavior. While Bun implements Node.js APIs, some edge cases may behave differently."
- Measure in production — benchmark numbers (1 vCPU / 2 GB, same-region client) don't fully capture customer traffic shape. TTLB on real workload is the ground truth.
- Switch via config — keep the rollback path open; a bad runtime choice should be revertible by a single-line config change, not a redeploy.
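The classify-and-match steps above can be sketched as a small decision table. A minimal sketch — the type and function names are illustrative, not a Vercel or Bun API:

```typescript
// Workload categories mirror the "dominant cost axis" bullets in this
// note; recommendRuntime encodes the matching rules.
type CostAxis =
  | "cpu-bound"             // SSR rendering, JSON transforms, crypto
  | "io-bound"              // DB reads, downstream HTTP calls
  | "cold-start-sensitive"  // low traffic, strict p99 latency SLA
  | "compat-locked";        // depends on runtime-specific native APIs

type Runtime = "bun" | "nodejs" | "either";

function recommendRuntime(axis: CostAxis): Runtime {
  switch (axis) {
    case "cpu-bound":
      return "bun";      // fastest runtime on CPU-heavy rendering
    case "io-bound":
      return "either";   // both runtimes mostly wait on I/O
    case "cold-start-sensitive":
      return "nodejs";   // better cold-start latency per the source
    case "compat-locked":
      return "nodejs";   // deepest ecosystem compatibility
  }
}
```

The point of making this a function rather than tribal knowledge: the mapping is reviewable, testable, and cheap to revise when new benchmark data lands.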
Canonical rules of thumb (per Vercel, 2026-04-21)¶
| Workload shape | Recommended runtime | Rationale |
|---|---|---|
| CPU-bound Next.js SSR | Bun | 28 % TTLB improvement |
| Streaming SSR with Web-Streams transforms | Bun | Avoids Web Streams transform bottleneck |
| Low-traffic webhook receiver | Node.js | Cold-start wins matter more than peak CPU |
| Legacy Node-native dependency | Node.js | Compatibility risk |
| New edge-rendered API, CPU-modest | Either | Choice is noise-level |
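Since TTLB on the real workload is the ground truth, a probe has to drain the full response stream before stopping the clock — stopping at the first byte would measure TTFB instead. A minimal sketch, assuming a deployed endpoint URL and Node 18+ globals (`fetch`, `performance`); the helper names are illustrative:

```typescript
// Measure time-to-last-byte: the timer stops only after the last
// chunk of the streamed body has been read.
async function measureTTLB(url: string): Promise<number> {
  const start = performance.now();
  const res = await fetch(url);
  const reader = res.body!.getReader();
  while (!(await reader.read()).done) { /* discard chunks */ }
  return performance.now() - start; // milliseconds
}

// Summarize repeated samples with a percentile, not a single run.
function percentile(samples: number[], q: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor(sorted.length * q));
  return sorted[idx];
}

async function p95TTLB(url: string, runs = 20): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) samples.push(await measureTTLB(url));
  return percentile(samples, 0.95);
}
```

Run it against the real endpoint shapes from production traffic, not synthetic payloads, per the benchmark caveat above.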
Economic reasoning (Active CPU)¶
Under Active CPU pricing, runtime choice has direct billing consequence, not just throughput consequence. A 28 % latency reduction on CPU-bound workloads yields ~28 % billed-active-CPU reduction. On wall-clock-billed platforms (classical Lambda), the reasoning is more indirect (throughput per dollar). The economic lens is one input to the decision; user-facing latency is the other.
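The active-CPU arithmetic can be made concrete. A sketch with made-up request volumes and a placeholder hourly rate (not Vercel's actual pricing): because billing tracks active CPU time, a 28 % cut in CPU per request flows through to a 28 % cut in the bill.

```typescript
// Illustrative Active CPU cost model: cost scales linearly with
// billed CPU time, so a latency cut on CPU-bound work translates
// roughly 1:1 into a billed-cost cut.
function monthlyCpuCost(
  requestsPerMonth: number,
  activeCpuMsPerRequest: number,
  dollarsPerCpuHour: number,
): number {
  const cpuHours = (requestsPerMonth * activeCpuMsPerRequest) / 3_600_000;
  return cpuHours * dollarsPerCpuHour;
}

const before = monthlyCpuCost(10_000_000, 100, 0.128); // 100 ms CPU/request
const after = monthlyCpuCost(10_000_000, 72, 0.128);   // 28 % less active CPU
// (before - after) / before ≈ 0.28: savings fraction equals the CPU reduction
```

On a wall-clock-billed platform the same model would need a duration term that includes I/O wait, which is why the reasoning there is only indirect.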
Test-before-migrate discipline¶
Vercel's explicit advice is the operational discipline the pattern requires:
> Test your dependencies under Bun before migrating production traffic to confirm expected behavior.
In practice:
- Runtime-compatibility CI — CI matrix that runs tests under each target runtime for the project.
- Canary deploy — 1 %, 5 %, 25 %, 100 % traffic shifts with active-error-rate monitoring between stages.
- Production-parity benchmark — a repeated TTLB measurement against real endpoint shapes, not just synthetic workloads.
- Rollback rehearsal — ensure the config change to revert runtimes has been exercised.
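The canary stage-gate from the checklist above can be sketched as a simple advance-or-rollback rule. The stage fractions come from this note; the error budget and function names are illustrative and should be set per SLO:

```typescript
// Canary progression: advance one traffic stage while the observed
// error rate stays under budget; otherwise roll back to 0 % (the
// single-line config revert the pattern requires).
const STAGES = [0.01, 0.05, 0.25, 1.0]; // 1 %, 5 %, 25 %, 100 %

function nextStage(
  current: number,
  errorRate: number,
  errorBudget = 0.001, // 0.1 % — pick per SLO
): number {
  if (errorRate > errorBudget) return 0; // roll back immediately
  const i = STAGES.indexOf(current);
  return i >= 0 && i < STAGES.length - 1 ? STAGES[i + 1] : current;
}
```

Holding at 100 % once reached, and rolling back to zero rather than one stage down, keeps the state machine trivial to reason about during an incident.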
Anti-patterns¶
- Blanket migration on hype. Switching an entire fleet because a benchmark looked good on one workload — the 28 % Next.js win doesn't apply to SvelteKit, React SSR, or vanilla JS workloads.
- Ignoring cold-start tail. A faster runtime on hot paths can regress overall availability if cold-start latency blows p99 budgets on sporadic traffic.
- Testing only happy-path deps. Edge-case API differences bite in error paths; runtime compatibility tests need negative-path coverage.
- Using benchmark TTFB to decide runtime. TTFB is the wrong metric for streaming SSR — see concepts/ttfb-vs-ttlb-ssr-measurement.
Seen in¶
- sources/2026-04-21-vercel-bun-runtime-on-vercel-functions — canonical wiki introduction. Vercel's launch post pairs the multi-runtime platform with explicit test-before-migrate guidance and workload-specific performance framing.
Related¶
- concepts/runtime-choice-per-workload — the design axis this pattern operationalises.
- systems/bun — the performance-axis runtime.
- systems/nodejs — the compatibility + cold-start-axis runtime.
- systems/vercel-functions — the platform enabling the per-workload choice.
- patterns/multi-runtime-function-platform — the platform-side design pattern this customer-side pattern pairs with.