Synchronous fast path streaming¶
Synchronous fast path streaming is the streaming-API
optimisation technique where an implementation returns
an already-resolved Promise (or equivalent) when data is
already in the buffer, rather than scheduling resolution
through the event loop. The API surface — "read() returns
a Promise" — is preserved, but the cost of the event-loop
hop is eliminated for the common hot case.
What the default looks like¶
A naive Web Streams implementation, responding to
reader.read():
- Allocate a `ReadableStreamDefaultReadRequest` with three callback slots.
- Enqueue the request into the stream's internal queue.
- Allocate and return a new Promise.
- When data arrives (or is already present), resolve via the microtask queue.
Four allocations and a microtask hop for data that might already be sitting in the buffer.
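The default path can be sketched as a minimal model (illustrative only; the class and field names below are hypothetical, not the spec's actual algorithm or any real implementation):

```js
// Hypothetical minimal model of the naive path: every read() allocates
// a request record and a pending Promise, even when data is buffered.
class NaiveReader {
  constructor() {
    this.buffer = [];        // chunks already received
    this.readRequests = [];  // pending read-request records
  }
  read() {
    // Allocate a Promise plus a request record with three callback slots.
    return new Promise((resolve, reject) => {
      this.readRequests.push({
        chunkSteps: (chunk) => resolve({ value: chunk, done: false }),
        closeSteps: () => resolve({ value: undefined, done: true }),
        errorSteps: reject,
      });
      this._pump(); // even buffered data resolves via the microtask queue
    });
  }
  _pump() {
    while (this.buffer.length > 0 && this.readRequests.length > 0) {
      this.readRequests.shift().chunkSteps(this.buffer.shift());
    }
  }
  enqueue(chunk) {
    this.buffer.push(chunk);
    this._pump();
  }
}
```

Even when `enqueue()` ran before `read()`, the caller still pays for a fresh request record, a fresh pending Promise, and a microtask hop.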
The optimisation¶
If buffered data is available at read time, skip steps
1-2 entirely and return Promise.resolve({value, done})
directly. The caller awaits the resolved Promise; V8
still schedules the callback on the microtask queue to
preserve Promise semantics, but no new request object,
no pending-Promise allocation, no listener registration.
Canonical instance inside fast-webstreams:
```js
// Inside ReadableStreamDefaultReader.read():
const chunk = nodeReadable.read();
if (chunk !== null) {
  return Promise.resolve({ value: chunk, done: false });
}
// Buffer empty — register listener, return pending Promise.
```
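A fuller sketch of both branches, assuming the underlying source is a Node `stream.Readable` (the event names are real Node APIs; the surrounding structure is illustrative, not fast-webstreams' actual internals):

```js
// Illustrative read() with both paths over a Node.js stream.Readable.
function read(nodeReadable) {
  const chunk = nodeReadable.read();
  if (chunk !== null) {
    // Fast path: data buffered — return an already-resolved Promise.
    return Promise.resolve({ value: chunk, done: false });
  }
  // Slow path: buffer empty — register listeners, return pending Promise.
  return new Promise((resolve, reject) => {
    const cleanup = () => {
      nodeReadable.off('readable', onReadable);
      nodeReadable.off('end', onEnd);
      nodeReadable.off('error', onError);
    };
    const onReadable = () => {
      cleanup();
      const c = nodeReadable.read();
      resolve(c === null ? { value: undefined, done: true }
                         : { value: c, done: false });
    };
    const onEnd = () => { cleanup(); resolve({ value: undefined, done: true }); };
    const onError = (err) => { cleanup(); reject(err); };
    nodeReadable.once('readable', onReadable);
    nodeReadable.once('end', onEnd);
    nodeReadable.once('error', onError);
  });
}
```

The hot buffered case touches neither the listener machinery nor a pending Promise; only the cold case pays for both.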
Measured throughput on a read loop (1 KB chunks, Node v22): ~12,400 MB/s vs native ~3,300 MB/s — 3.7× improvement. (sources/2026-04-21-vercel-we-ralph-wiggumed-webstreams-to-make-them-10x-faster)
Why it's spec-compliant¶
The WHATWG Streams spec requires read() to return a
Promise. It does not require the Promise to be
pending. A pre-resolved Promise satisfies the spec —
it just resolves faster than a round-trip through the
event loop.
This is a canonical instance of concepts/spec-compliant-optimization: preserving observable behaviour (returning a Promise) while eliminating allocation and scheduling that the spec permits but doesn't require.
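The observable semantics survive because the engine still defers the continuation of a pre-resolved Promise to the microtask queue, so code ordering is unchanged. A quick standalone demonstration:

```js
// Awaiting a pre-resolved Promise still defers the continuation to the
// microtask queue: synchronous code after the .then() runs first.
const order = [];
Promise.resolve({ value: 'chunk', done: false }).then(() => order.push('then'));
order.push('sync');
queueMicrotask(() => console.log(order)); // 'sync' before 'then'
```

No pending state ever existed, yet a caller cannot observe the difference in ordering.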
Upstream landing¶
Matteo Collina's Node.js PR nodejs/node#61807 — "stream: add fast paths for webstreams read and pipeTo" — applies this exact idea directly to Node's native WebStreams implementation. Measured: ~17-20 % faster buffered reads, available to every Node.js user for free. This is the spec-compliance → upstream landing path of patterns/upstream-contribution-parallel-to-in-house-integration.
When this pattern doesn't apply¶
- Data not buffered. Empty buffer must return a pending Promise and register a listener. The fast path is specifically for the hot, in-buffer case.
- Side-effecting thenable on result. Because `Promise.resolve(obj)` always checks for a `.then` property on `obj`, the implementation must be careful not to hand out synthetic objects that would trigger thenable interception unexpectedly — WPT explicitly tests this behaviour.
- Cancellation-in-flight edge cases. The spec's `ReadableStreamDefaultReadRequest` machinery exists because cancellation during reads, locked-stream error identity, and thenable interception are real edge cases. The fast path must preserve these behaviours; `pipeline()` does not, which is why Vercel's library only falls through to `pipeline()` when the whole chain is fast-stream.
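The thenable hazard is easy to demonstrate in isolation (a standalone example, not fast-webstreams code): any object with a callable `.then` is intercepted by `Promise.resolve`, so the fast path must only wrap plain `{value, done}` objects it constructed itself.

```js
// Promise.resolve(obj) treats any object with a callable .then as a
// thenable and invokes it — the chunk value itself must therefore be
// carried inside a plain result object, never resolved directly.
const calls = [];
const thenable = {
  then(resolve) {
    calls.push('then called');
    resolve('intercepted');
  },
};
Promise.resolve(thenable).then((v) => console.log(calls, v));
```

Wrapping as `Promise.resolve({ value: thenable, done: false })` avoids the interception, because the lookup happens on the wrapper, not the chunk.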
Compounding with batching¶
In combination with batch reads —
pipeTo() draining multiple items without per-chunk
request objects — synchronous fast paths compound: one
buffered batch → zero per-chunk Promises → one
resolved Promise at the batch boundary. This is the
basis of the ~11 % pipeTo improvement in Node PR
61807.
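The compounding can be sketched as a hypothetical helper over a Node `stream.Readable` source (`drainBatchSync` is an illustrative name, not an actual fast-webstreams or Node API):

```js
// Illustrative batch drain: pull every buffered chunk in one synchronous
// pass, then settle a single Promise at the batch boundary instead of
// allocating one Promise per chunk.
function drainBatchSync(nodeReadable) {
  const batch = [];
  let chunk;
  // stream.Readable.read() returns null when the buffer is empty, so
  // this loop never blocks and never allocates pending Promises.
  while ((chunk = nodeReadable.read()) !== null) {
    batch.push(chunk);
  }
  // One resolved Promise for the whole batch.
  return Promise.resolve(batch);
}
```

Each loop iteration is a synchronous buffer sample; the only Promise allocated is the already-resolved one at the batch boundary.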
Cross-runtime generality¶
Synchronous fast paths for streaming are a general optimisation target, not specific to Web Streams:
- systems/new-streams (Cloudflare POC) ships `Stream.pullSync`/`Stream.textSync` — "complete pipelines in a single call stack. No promises are created, no microtask queue scheduling occurs, and no GC pressure from short-lived async machinery."
- Node.js's older `stream.*` API already uses synchronous `read()` when buffered — which is what fast-webstreams bridges to.
- Rust's `Iterator::next()` (always synchronous) is the cross-language analogue; async iteration bolts the Promise layer on.
The Web Streams-specific version of this concept is the
one that surfaced here because read() returning a
Promise — even when data is immediately available — was
the per-call cost driving the 12.5× pipeThrough gap.
Seen in¶
- sources/2026-04-21-vercel-we-ralph-wiggumed-webstreams-to-make-them-10x-faster — canonical wiki instance. Fast path returns `Promise.resolve({value, done})` when data is buffered. 3.7× read-loop throughput vs native. Ideas landed upstream in Node PR #61807 with ~17-20 % buffered read improvement.
Related¶
- systems/fast-webstreams — the library where the pattern is canonicalised.
- systems/web-streams-api — the spec surface the optimisation preserves.
- systems/nodejs — the runtime whose `stream.Readable.read()` already exposes the synchronous `null` return shape the fast path exploits.
- concepts/promise-allocation-overhead — the parent cost this technique reduces.
- concepts/microtask-hop-cost — the per-read scheduling cost the fast path avoids.
- concepts/spec-compliant-optimization — the design discipline.
- concepts/pull-vs-push-streams — the axis along which synchronous-when-buffered most cleanly lives (pull semantics allow the consumer to sample the buffer state).