
Synchronous fast path streaming

Synchronous fast path streaming is the streaming-API optimisation technique where an implementation returns an already-resolved Promise (or equivalent) when data is already in the buffer, rather than scheduling resolution through the event loop. The API surface — "read() returns a Promise" — is preserved, but the cost of the event-loop hop is eliminated for the common hot case.

What the default looks like

A naive Web Streams implementation, responding to reader.read():

  1. Allocate a ReadableStreamDefaultReadRequest with three callback slots.
  2. Enqueue the request into the stream's internal queue.
  3. Allocate and return a new Promise.
  4. When data arrives (or is already present), resolve via the microtask queue.

Four allocations and a microtask hop for data that might already be sitting in the buffer.
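The four steps above can be sketched in plain JavaScript. This is a simplified illustration of the allocation pattern, not the spec's exact algorithm; the stand-in stream object and helper names are invented for the sketch:

```javascript
// Simplified sketch of the default (slow) read path. The shape of the
// read-request record and the stand-in stream are assumptions for
// illustration, not the actual spec machinery.
function slowRead(stream) {
  // 1. Allocate a read-request record with three callback slots.
  const readRequest = {
    chunkSteps: null,   // called with the chunk
    closeSteps: null,   // called on stream close
    errorSteps: null,   // called on stream error
  };
  // 3. Allocate a new pending Promise and wire its settlers into the slots.
  const promise = new Promise((resolve, reject) => {
    readRequest.chunkSteps = (chunk) => resolve({ value: chunk, done: false });
    readRequest.closeSteps = () => resolve({ value: undefined, done: true });
    readRequest.errorSteps = reject;
  });
  // 2. Enqueue the request; 4. resolution happens via the microtask queue.
  stream.readRequests.push(readRequest);
  stream.pump();
  return promise;
}

// Minimal stand-in stream with one chunk already buffered.
const stream = {
  buffer: ['hello'],
  readRequests: [],
  pump() {
    // Drain buffered chunks into pending read requests.
    while (this.buffer.length && this.readRequests.length) {
      this.readRequests.shift().chunkSteps(this.buffer.shift());
    }
  },
};

slowRead(stream).then(({ value, done }) => {
  console.log(value, done); // 'hello' false — only after a microtask hop
});
```

Even though the chunk was sitting in the buffer when slowRead() was called, the caller still pays for the request record, the pending Promise, and a trip through the microtask queue.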

The optimisation

If buffered data is available at read time, skip steps 1-2 entirely and return Promise.resolve({value, done}) directly. The caller awaits the resolved Promise; V8 still schedules the callback on the microtask queue to preserve Promise semantics, but no new request object, no pending-Promise allocation, no listener registration.

Canonical instance inside fast-webstreams:

// Inside ReadableStreamDefaultReader.read():
const chunk = nodeReadable.read();
if (chunk !== null) {
  return Promise.resolve({ value: chunk, done: false });
}
// Buffer empty — register listener, return pending Promise.

Measured throughput on a read loop (1 KB chunks, Node v22): ~12,400 MB/s vs native ~3,300 MB/s, a 3.7× improvement. (sources/2026-04-21-vercel-we-ralph-wiggumed-webstreams-to-make-them-10x-faster)

Why it's spec-compliant

The WHATWG Streams spec requires read() to return a Promise. It does not require the Promise to be pending. A pre-resolved Promise satisfies the spec — it just resolves faster than a round trip through the event loop.
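The key observable guarantee is ordering: awaiting a pre-resolved Promise still defers the continuation to the microtask queue, so callers cannot tell the fast path apart from a pending Promise that resolved immediately. A small demonstration:

```javascript
// Awaiting a pre-resolved Promise still yields to the microtask queue,
// so Promise ordering semantics are preserved; only the extra pending
// Promise and listener registration are eliminated.
const order = [];

async function consume() {
  order.push('before await');
  const result = await Promise.resolve({ value: 'chunk', done: false });
  order.push(`after await: ${result.value}`);
}

consume();
order.push('sync code after call');

queueMicrotask(() => {
  console.log(order);
  // ['before await', 'sync code after call', 'after await: chunk']
});
```

The continuation after the await runs only once the synchronous call stack has unwound, exactly as it would with a pending Promise — which is why the optimisation is unobservable to spec-conformant callers.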

This is a canonical instance of concepts/spec-compliant-optimization: preserving observable behaviour (returning a Promise) while eliminating allocation and scheduling that the spec permits but doesn't require.

Upstream landing

Matteo Collina's Node.js PR nodejs/node#61807, "stream: add fast paths for webstreams read and pipeTo", applies this exact idea directly to Node's native WebStreams implementation. Measured: ~17-20 % faster buffered reads, available to every Node.js user for free. This is the spec-compliance → upstream-landing path of patterns/upstream-contribution-parallel-to-in-house-integration.

When this pattern doesn't apply

  • Data not buffered. Empty buffer must return a pending Promise and register a listener. The fast path is specifically for the hot, in-buffer case.
  • Side-effecting thenable on result. Because Promise.resolve(obj) always checks for a .then property on obj, the implementation must be careful not to hand back synthetic result objects that would trigger thenable interception unexpectedly — WPT explicitly tests this behaviour.
  • Cancellation-in-flight edge cases. The spec's ReadableStreamDefaultReadRequest machinery exists because cancellation during reads, locked-stream error identity, and thenable interception are real edge cases. The fast path must preserve these behaviours; pipeline() does not, which is why Vercel's library only falls through to pipeline() when the whole chain is fast-stream.
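The thenable-interception caveat in the second bullet is easy to demonstrate: Promise.resolve(obj) adopts any object whose .then is callable instead of passing it through as the settled value.

```javascript
// Promise.resolve(obj) inspects obj.then; a callable `then` makes the
// object a thenable, and it gets unwrapped rather than delivered as-is.
const spy = { then: (resolve) => resolve('intercepted') };

Promise.resolve(spy).then((v) => {
  console.log(v); // 'intercepted' — the thenable was adopted, not returned
});

// Safe: a plain { value, done } result object with no `then` property
// passes through intact, which is why the fast path must always build
// its result objects itself rather than forwarding user-controlled ones.
Promise.resolve({ value: 'chunk', done: false }).then((r) => {
  console.log(r.value, r.done); // 'chunk' false
});
```

A user-controlled .then on the result object could run arbitrary code inside the read machinery, which is exactly the behaviour the WPT suite probes for.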

Compounding with batching

In combination with batch reads — pipeTo() draining multiple items without per-chunk request objects — synchronous fast paths compound: one buffered batch → zero per-chunk Promises → one resolved Promise at the batch boundary. This is the basis of the ~11 % pipeTo improvement in Node PR #61807.

Cross-runtime generality

Synchronous fast paths for streaming are a general optimisation target, not specific to Web Streams:

  • systems/new-streams (Cloudflare POC) ships Stream.pullSync / Stream.textSync: "complete pipelines in a single call stack. No promises are created, no microtask queue scheduling occurs, and no GC pressure from short-lived async machinery."
  • Node.js's older stream.* API already uses synchronous read() when buffered — which is what fast-webstreams bridges to.
  • Rust's Iterator::next() (always synchronous) is the cross-language analogue; async iteration bolts the Promise layer on.

The Web Streams-specific version of this concept is the one that surfaced here because read() returning a Promise — even when data is immediately available — was the per-call cost driving the 12.5× pipeThrough gap.
