Promise allocation overhead¶
Promise allocation overhead is the per-call CPU and memory
cost of creating, resolving, and garbage-collecting Promise
objects in a JavaScript hot path. For a single call the
cost is invisible (~nanoseconds). In streaming pipelines
handling thousands of requests per second, it can become the
dominant fraction of runtime.
Where the cost comes from¶
Each Promise involves:
- Object allocation — the `Promise` instance itself, plus its `[[PromiseFulfillReactions]]`/`[[PromiseRejectReactions]]` internal slots.
- Callback closure allocation — each `.then`/`await` creates a new closure capturing the continuation.
- Microtask queue entry — every resolution schedules a microtask, which is not free.
- Short-lived object churn — result objects like `{ value, done }` are created and discarded per `read()`.
- Internal coordination promises — spec-mandated hidden promises for queue management, `pull()` coordination, and backpressure signaling in Web streams.
All five deposit short-lived objects in V8's young generation, which is scavenged (copying GC) frequently. High scavenge rates in a hot request loop show up as measurable GC CPU time.
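The per-read churn is visible directly in a reader loop. A minimal sketch, assuming Node 18+ (where `ReadableStream` is a global); all names here are illustrative:

```javascript
// Sketch: each reader.read() resolves a fresh promise and returns a newly
// allocated { value, done } result object. N chunks therefore cost at least
// N promises plus N result objects, even though every chunk is already
// buffered and no actual waiting happens.
async function countReadAllocations(chunkCount) {
  const stream = new ReadableStream({
    start(controller) {
      for (let i = 0; i < chunkCount; i++) {
        controller.enqueue(new Uint8Array(1024)); // 1 KB chunk, pre-buffered
      }
      controller.close();
    },
  });

  const reader = stream.getReader();
  const results = [];
  for (;;) {
    const result = await reader.read(); // new promise + new result object
    if (result.done) break;
    results.push(result);
  }
  // Every { value, done } object is distinct -- none are reused across reads.
  const allDistinct = new Set(results).size === results.length;
  return { reads: results.length, allDistinct };
}
```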
The cost compounds¶
Per-chunk, per-transform, per-pipeline-stage costs multiply. A streaming SSR pipeline rendering 100 small HTML fragments through 3 transforms creates ~300 enqueues, 300 reads, and their associated coordination promises — all on every request.
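The shape described above is easy to reproduce. A hedged sketch of a 100-fragment source through 3 passthrough transforms (Node 18+ assumed; function names are illustrative):

```javascript
// Sketch: a 100-chunk source piped through 3 passthrough TransformStreams.
// Each stage performs an awaited read and write per chunk, so the per-chunk
// promise cost multiplies by the stage count -- and the sink alone still
// pays one promise per chunk.
function passthrough() {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk); // identity transform, no real work
    },
  });
}

async function runPipeline(chunkCount, stageCount) {
  let stream = new ReadableStream({
    start(controller) {
      for (let i = 0; i < chunkCount; i++) {
        controller.enqueue(`<li>item ${i}</li>`); // small HTML fragment
      }
      controller.close();
    },
  });
  for (let i = 0; i < stageCount; i++) {
    stream = stream.pipeThrough(passthrough());
  }
  let sinkReads = 0;
  for await (const _ of stream) sinkReads++; // one awaited read per chunk
  return sinkReads;
}
```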
The Vercel measurement¶
Malte Ubl (Vercel), in
"We Ralph Wiggum'd WebStreams",
benchmarked ReadableStream.pipeThrough() at ~630 MB/s for
1 KB chunks vs Node.js stream.pipeline() with the same
passthrough transform at ~7,900 MB/s — a 12× gap, which he
attributed to promise and object-allocation overhead:
"Each chunk passes through a full Promise chain: read, write, check backpressure, repeat. An
{ value, done }result object is allocated per read. Error propagation creates additional Promise branches. […] That is a 12× gap, and the difference is almost entirely Promise and object allocation overhead."
The Cloudflare Workers measurement¶
Snell's internal fix to a Cloudflare Workers data pipeline "reduced the number of JavaScript promises created in certain application scenarios by up to 200×. The result is several orders of magnitude improvement in performance in those applications." (Source: sources/2026-02-27-cloudflare-a-better-streams-api-is-possible-for-javascript).
The GC pressure consequence¶
In the 2025-10 Cloudflare benchmark (sources/2025-10-14-cloudflare-unpacking-cloudflare-workers-cpu-performance-benchmarks), GC accounted for 10-25% of total request processing time in streaming SSR through React + Next.js + OpenNext on Workers. The 2026-02-27 post pushes the upper bound further:
"I've seen SSR workloads where garbage collection accounts for a substantial portion (up to and beyond 50%) of total CPU time per request. That's time that could be spent actually rendering content."
Mitigations¶
Four orthogonal responses, in increasing order of radicality:
- Amortize with batching. Yield a `Uint8Array[]` per iteration instead of one chunk per iteration: one promise per batch of N, not N promises. (The systems/new-streams default shape.)
- Remove unnecessary promises from hot paths. Vercel's proposed Node.js `fast-webstreams` rework eliminates promises on read paths when data is immediately available.
- Synchronous fast paths. When source and transforms are all sync, skip the promise machinery entirely. `Stream.pullSync`/`Stream.textSync` in systems/new-streams complete pipelines "in a single call stack. No promises are created, no microtask queue scheduling occurs, and no GC pressure from short-lived async machinery."
- Change the API's mandate. The spec's ordering constraints on promise resolution prevent many optimizations. "A well-designed streaming API should be efficient by default, not require each runtime to invent its own escape hatches."
When promises are still right¶
Promises are the correct primitive for actual waiting — I/O boundaries where execution must pause until external state changes. The critique targets specifically the cost of promises used where no waiting actually happens (synchronous transforms, immediately available buffers, pure CPU work).
"Promises are fantastic for cases in which waiting is actually necessary, but they aren't always necessary." — 2026-02-27 Cloudflare post
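That division of labor can be expressed as a small pattern: return a value synchronously when it is already buffered, and allocate a promise only when a consumer must genuinely wait. A hypothetical sketch, not an API from the post:

```javascript
// Sketch: a queue whose pull() only allocates a promise at a real wait.
class SyncFirstQueue {
  #buffer = [];
  #waiters = [];
  push(value) {
    const waiter = this.#waiters.shift();
    if (waiter) waiter(value);     // someone is actually waiting: resolve them
    else this.#buffer.push(value); // nobody waiting: no promise needed
  }
  pull() {
    if (this.#buffer.length > 0) {
      return this.#buffer.shift(); // sync fast path: no promise, no microtask
    }
    // Genuine wait: only here does promise machinery get involved.
    return new Promise((resolve) => this.#waiters.push(resolve));
  }
}
```

A consumer can branch on the return (`const v = q.pull(); const chunk = v instanceof Promise ? await v : v;`) so the `await`, and its microtask, only happens at a real I/O boundary.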
Seen in¶
- sources/2026-02-27-cloudflare-a-better-streams-api-is-possible-for-javascript — canonical wiki instance: promise overhead pinned as the root cause of Web streams' 10-25× performance gap; Vercel 12× measurement + Workers 200× internal fix + 50%+ SSR GC all cited.
- sources/2025-10-14-cloudflare-unpacking-cloudflare-workers-cpu-performance-benchmarks — sibling instance: 10-25% of request CPU in GC on OpenNext + Next.js + React, attributed to `Buffer` allocation + `pipeThrough()` allocations + the value-oriented `ReadableStream` default of `highWaterMark: 1`.
Related¶
- systems/v8-javascript-engine — the engine whose young-generation scavenger experiences the churn.
- systems/web-streams-api — the canonical wiki instance of promise-heavy streaming API design.
- systems/new-streams — the POC response with batched chunks + synchronous fast paths.
- concepts/hot-path — why per-call allocation cost matters at per-request scale.
- concepts/async-iteration — the JS-language primitive that makes iteration cost visible; batching amortizes it.
- concepts/garbage-collection — the downstream consequence of allocation churn.
- concepts/v8-young-generation — where short-lived promise objects live and get scavenged.