
new-streams

new-streams (github.com/jasnell/new-streams) is James Snell's proof-of-concept alternative to the WHATWG Streams Standard (Web Streams API), published 2026-02-27 alongside the Cloudflare blog post "We deserve a better streams API for JavaScript". It is not a finished standard, not a production-ready library, and not necessarily even a concrete proposal for something new; the stated purpose is to demonstrate that the fundamental problems with Web streams are not inherent to streaming but are consequences of specific design choices made in 2014-2016, before async iteration landed in ES2018.

Design foundations

The POC is built around six foundations, each one a deliberate inversion of a Web-streams design choice:

  1. Readable streams are just AsyncIterable<Uint8Array[]>. No custom ReadableStream class, no getReader() / locks / controllers. Consume with for await…of; stop consuming by stopping iteration.

  2. Pull-based, lazy evaluation (patterns/lazy-pull-pipeline). Transforms don't execute until the consumer iterates. No eager background pumping; no cascading intermediate-buffer fill.

  3. Explicit backpressure policies chosen at stream-creation time — strict (default), block, drop-oldest, drop-newest. Required, not advisory.

  4. Batched chunks. Streams yield Uint8Array[] (arrays of chunks) per iteration step rather than one chunk at a time, amortizing async overhead across multiple chunks.

  5. Structural writers. Any object implementing { write, end, abort } (plus optional sync variants writeSync, endSync, abortSync) is a writer. No base class, no UnderlyingSink protocol, no controller coordination.

  6. Parallel synchronous APIs. Stream.pullSync, Stream.bytesSync, Stream.textSync, Stream.fromSync skip promise creation entirely when the source and all transforms are synchronous. The Web streams API has no such escape hatch: you pay for microtask scheduling even when the data is already in hand.
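Several of these foundations can be sketched independently of the library itself. The toy below (illustrative only, assuming nothing about new-streams internals) combines foundations 1, 3, 4, and 5: the readable side is a plain async iterable that yields batches, the writer is any object with { write, end, abort }, and the backpressure policy is fixed at creation time. The 'block' policy is omitted for brevity, and 'strict' is assumed to reject writes past the high-water mark.

```javascript
// Toy push-created stream: structural writer + async-iterable readable.
// Not the actual new-streams implementation.
function makePush({ highWaterMark = 10, backpressure = 'strict' } = {}) {
  const queue = [];
  let done = false;
  let wake = null;
  const notify = () => { if (wake) { const w = wake; wake = null; w(); } };

  const writer = {
    write(chunk) {
      if (queue.length >= highWaterMark) {
        if (backpressure === 'strict') throw new Error('high-water mark exceeded');
        if (backpressure === 'drop-newest') return;        // discard the incoming chunk
        if (backpressure === 'drop-oldest') queue.shift(); // make room at the front
      }
      queue.push(chunk);
      notify();
    },
    end() { done = true; notify(); },
    abort() { queue.length = 0; done = true; notify(); },
  };

  const readable = {
    async *[Symbol.asyncIterator]() {
      for (;;) {
        if (queue.length) { yield queue.splice(0); continue; } // one batch per step
        if (done) return;
        await new Promise((resolve) => { wake = resolve; });   // park until notified
      }
    },
  };

  return { writer, readable };
}
```

Consumption is then plain for await iteration, and each step yields an array of chunks, matching the batched-chunks foundation.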

Core API surface

import { Stream } from 'new-streams';

// Create (push side)
const { writer, readable } = Stream.push({
  highWaterMark: 10,
  backpressure: 'strict',  // or 'block' | 'drop-oldest' | 'drop-newest'
});

// Consume (iterate)
for await (const chunks of readable) {   // chunks is Uint8Array[]
  for (const chunk of chunks) {
    process(chunk);
  }
}

// Convenience: collect entire stream
const text = await Stream.text(readable);
const bytes = await Stream.bytes(readable);

// Pull transforms — lazy, on-demand
const output = Stream.pull(source, compress, encrypt);

// Synchronous parallel
const result = Stream.bytesSync(
  Stream.pullSync(
    Stream.fromSync([inputBuffer]),
    zlibCompressSync,
    aesEncryptSync,
  ),
);

// Multi-consumer (replaces tee()'s unbounded buffering)
const shared = Stream.share(source, {
  highWaterMark: 100,
  backpressure: 'strict',
});
const consumer1 = shared.pull();
const consumer2 = shared.pull(decompress);
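The lazy semantics behind Stream.pull can be illustrated with plain async generators, which is all a pull transform needs to be under the async-iterable model. The names source and double below are illustrative, not library API:

```javascript
// Lazy pull in miniature: composing async generators defers all work
// until the consumer iterates. No eager pumping, no intermediate buffers.
let transformRan = false;

async function* source() {
  yield [1, 2, 3]; // one batch, per the batched-chunks foundation
}

async function* double(input) {
  transformRan = true; // runs only on the consumer's first pull
  for await (const batch of input) yield batch.map((x) => x * 2);
}

const pipeline = double(source());
const lazyAtCreation = !transformRan; // true: nothing has executed yet

async function collect() {
  const out = [];
  for await (const batch of pipeline) out.push(...batch);
  return out; // iteration is what finally drives the transform
}
```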

Measured performance

Benchmark results from the reference implementation (Node.js v24.x, Apple M1 Pro, 10-run average), comparing the pure-TypeScript new-streams against native Web streams implementations:

Scenario                      new-streams   Web streams   Ratio
1 KB × 5000                   ~13 GB/s      ~4 GB/s       ~3×
100 B × 10 000                ~4 GB/s       ~450 MB/s     ~8×
Async iteration, 8 KB × 1000  ~530 GB/s     ~35 GB/s      ~15×
Chained 3× transforms         ~275 GB/s     ~3 GB/s       ~80-90×
64 B × 20 000                 ~7.5 GB/s     ~280 MB/s     ~25×

The chained-transform result is the headline: pull-through semantics eliminate the intermediate buffering that plagues Web streams pipelines. Async iteration in Chrome/Blink shows the largest gap (~40-100×). "The new API's reference implementation has had no performance optimization work; the gains come entirely from the design."
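The mechanism behind the chained-transform gap can be seen in a toy contrast (illustrative code, not from the benchmark suite): an eager pipeline materializes a complete intermediate buffer at every stage, while a pull chain moves each item through all stages before touching the next, holding no intermediate arrays at all.

```javascript
// Eager: each stage produces a full intermediate array before the
// next stage starts -- N stages means N full buffers.
function eagerChain(stages, data) {
  let intermediates = 0;
  let current = data;
  for (const stage of stages) {
    current = current.map(stage); // whole buffer materialized per stage
    intermediates++;
  }
  return { result: current, intermediates };
}

// Pull: each item flows end-to-end through every stage on demand.
function* pullChain(stages, data) {
  for (const item of data) {
    let v = item;
    for (const stage of stages) v = stage(v);
    yield v;
  }
}
```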

Bridging to Web streams

The async-iterable foundation bridges naturally to existing Web streams:

// From a ReadableStream (bytes) → new-streams pipeline
const input = Stream.pull(readable, transform1, transform2);

// Back to a ReadableStream
async function* adapt(input) {
  for await (const chunks of input) {
    for (const chunk of chunks) yield chunk;
  }
}
const webStream = ReadableStream.from(adapt(input));

No reader acquisition, no lock management, no { value, done } protocol — the async-iteration contract on both ends makes the adapter layer ~5 lines.
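For the reverse direction without ReadableStream.from (which is comparatively new), the same flattening adapter can feed a ReadableStream through the standard underlying-source pull() protocol. A sketch, assuming only a runtime with the global ReadableStream (Node.js 18+, browsers); toWebStream is an illustrative name, not library API:

```javascript
// Flatten a batch-yielding async iterable back into a per-chunk
// ReadableStream via the underlying-source pull() protocol.
function toWebStream(batched) {
  const it = (async function* () {
    for await (const chunks of batched) yield* chunks; // batches -> single chunks
  })();
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await it.next();
      if (done) controller.close();
      else controller.enqueue(value);
    },
    cancel() {
      it.return?.(); // stop the source when the consumer cancels
    },
  });
}
```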

Posture and limitations

  • POC, not product. Snell is explicit that it is not a proposal, not production-ready, and not necessarily the right answer.
  • Pure-JS implementation. The measured gains come entirely from design choices; a native implementation should go further. Conversely, Node.js has not yet invested significantly in Web streams performance, so the Node-side gap will likely narrow.
  • Single-author POC. One core maintainer's opinion on what JS streaming "should" look like; depends on a broader community conversation ("I'm publishing this to start a conversation").
