CONCEPT

Backpressure

Backpressure is the control-plane primitive by which a slow consumer in a streaming pipeline signals a fast producer to slow down. Without it, a pipeline whose producer outruns its consumer either (a) accumulates data unboundedly in memory, (b) drops data, or (c) blocks the producer. Backpressure is the mechanism for choosing among these explicitly, per stream.

Why it matters

A streaming pipeline is not correct unless all three of these questions have answers:

  1. What happens when the producer is faster than the consumer?
  2. What happens when the consumer stops reading (crash / cancellation / network drop)?
  3. What happens at each intermediate stage in a multi-hop pipeline?

Without backpressure, the default answer to (1) is "buffer until OOM", to (2) is "leak the producer forever", and to (3) is "each stage quietly accumulates independently".

The four responses

When a bounded buffer fills, there are only four possible responses. Any other answer is either a variation of these or domain-specific logic that doesn't belong in a general streaming primitive:

Response      Behaviour                                      Use case
Reject        Refuse to accept more data (throw / error)     Catches fire-and-forget bugs; "I wrote .write(x) without await"
Block (wait)  Writer awaits until space is available         Normal producer-consumer with a trusted producer
Drop oldest   Evict the oldest buffered item to make room    Live feeds where stale data loses value
Drop newest   Discard the incoming write                     Rate-limiting; "process what you have"

This taxonomy is explicit in the 2026-02-27 Cloudflare post (Source: sources/2026-02-27-cloudflare-a-better-streams-api-is-possible-for-javascript), which argues the API design should require the choice at stream-creation time (patterns/explicit-backpressure-policy), not leave it implicit.
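The four policies can be sketched as a bounded queue whose overflow behaviour must be chosen at construction time. This is a hypothetical illustration of the taxonomy, not the systems/new-streams API; all names are invented:

```typescript
// Illustrative sketch: a bounded queue that forces the caller to pick an
// overflow policy up front, rather than buffering without limit by default.
type OverflowPolicy = "reject" | "block" | "drop-oldest" | "drop-newest";

class BoundedQueue<T> {
  private buf: T[] = [];
  private waiters: Array<() => void> = []; // writers blocked on "block" policy

  constructor(private capacity: number, private policy: OverflowPolicy) {}

  async push(item: T): Promise<void> {
    while (this.buf.length >= this.capacity) {
      switch (this.policy) {
        case "reject":
          // Misuse is loud: a thrown error, not silent memory growth.
          throw new Error("queue full");
        case "drop-oldest":
          this.buf.shift(); // evict stale data, keep the live edge
          continue;
        case "drop-newest":
          return; // discard the incoming write entirely
        case "block":
          // Wait until a pop() frees a slot, then re-check capacity.
          await new Promise<void>((resolve) => this.waiters.push(resolve));
      }
    }
    this.buf.push(item);
  }

  pop(): T | undefined {
    const item = this.buf.shift();
    this.waiters.shift()?.(); // wake one blocked writer, if any
    return item;
  }
}
```

With capacity 2 and "drop-oldest", pushing 1, 2, 3 evicts 1, so subsequent pops yield 2 then 3; with "reject", the third push throws instead.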

Advisory vs enforced

Two fundamentally different designs:

  • Advisory backpressure exposes a signal (e.g., desiredSize going negative, a ready promise) that the producer should consult but is not required to. This is the Web Streams design. controller.enqueue() always succeeds; writer.ready exists but is routinely ignored.

    "Stream implementations can and do ignore backpressure; and some spec-defined features explicitly break backpressure." — 2026-02-27 Cloudflare post

  • Enforced backpressure — the API fails the write or blocks the producer when the buffer is full. There is no way to ignore it; misuse surfaces as a thrown error, not as silent memory growth. systems/new-streams' default strict policy is the canonical realization.

The advisory model fails by default on exactly the path that matters most: inexperienced developers and fire-and-forget writer.write(x) calls issued without await. Enforcement is the failure-safe choice whenever the cost of silently unbounded buffering is a crash under load.
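The advisory signal can be seen directly in the standard Web Streams API (globals available in browsers and Node 18+). In this sketch the cooperative producer consults writer.ready before each write; nothing in the type system or runtime would stop the fire-and-forget variant described in the comment:

```typescript
// Advisory backpressure in Web Streams: the signal (writer.ready,
// writer.desiredSize) exists, but consulting it is entirely opt-in.
async function produce(): Promise<number> {
  let overflowed = 0; // times desiredSize was observed negative
  const stream = new WritableStream<number>(
    {
      async write() {
        await new Promise((r) => setTimeout(r, 1)); // slow consumer: 1 ms/chunk
      },
    },
    new CountQueuingStrategy({ highWaterMark: 4 })
  );
  const writer = stream.getWriter();

  // Fire-and-forget would be `writer.write(i)` with no await: it always
  // "succeeds", the internal queue grows, and desiredSize goes negative.

  // Cooperative producer: wait for writer.ready before each write.
  for (let i = 0; i < 20; i++) {
    await writer.ready;
    void writer.write(i);
    if ((writer.desiredSize ?? 0) < 0) overflowed++;
  }
  await writer.close();
  return overflowed; // stays 0 while the producer honours the signal
}
```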

Where backpressure breaks in real systems

  • tee() unbounded buffering. When the two branches of a tee'd ReadableStream read at different speeds, chunks already consumed by the faster branch are retained for the slower one until it catches up, and the spec mandates no buffer limit. Cloudflare Workers' implementation diverges to avoid this memory cliff: it signals backpressure based on the slowest consumer rather than the fastest.

  • TransformStream push semantics. The transform() function runs when a chunk is written, not when one is read. A synchronous always-enqueue transform never applies backpressure to its writable side, so a 3-stage pipeline may fill six internal queues (a writable and a readable per stage) before the final consumer starts pulling.

  • Intermediate layers that aren't backpressure-aware. Nginx's proxy_buffering is on by default; compression middleware buffers until a size or time threshold; gRPC streaming ⇆ Kafka hops interpose their own queues. Each layer silently defeats end-to-end backpressure unless explicitly configured. See concepts/head-of-line-buffering for the streaming-SSR instance.
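The tee() hazard above is easy to reproduce against the standard API (Node 18+ or browser globals): drain one branch to completion while the other reads nothing, and every chunk is retained for the idle branch with no highWaterMark capping that queue.

```typescript
// Demonstrates spec-default tee() buffering: the idle branch accumulates
// every chunk the drained branch pulls through the shared source.
async function teeDemo(): Promise<number> {
  let pulled = 0;
  const source = new ReadableStream<number>({
    pull(controller) {
      if (pulled < 1000) controller.enqueue(pulled++);
      else controller.close();
    },
  });

  const [fast, slow] = source.tee(); // `slow` is never read below
  const reader = fast.getReader();
  let count = 0;
  for (;;) {
    const { done } = await reader.read();
    if (done) break;
    count++;
  }
  // All 1000 chunks were pulled from the source to satisfy `fast`, and all
  // 1000 now sit buffered inside `slow`'s internal queue, uncapped.
  return count;
}
```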

Pull semantics as implicit backpressure

Pull-based streams (for await…of) have implicit backpressure as a side effect of the consumer-drives-execution model: if the consumer stops iterating, the producer stops producing. No signal to forget to check.
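The consumer-drives-execution point can be made concrete with an async generator (an illustrative sketch): the producer body is suspended at yield between pulls, so even an infinite producer runs exactly as many steps as the consumer takes.

```typescript
// Pull-based implicit backpressure: stopping the loop stops the producer.
async function consumeThree(): Promise<{ produced: number; taken: number }> {
  let produced = 0;

  // An unbounded producer: nothing here throttles it explicitly.
  async function* ticks(): AsyncGenerator<number> {
    for (;;) {
      yield produced++; // suspended here between consumer pulls
    }
  }

  let taken = 0;
  for await (const _ of ticks()) {
    if (++taken === 3) break; // break also runs the generator's cleanup
  }
  return { produced, taken }; // the infinite producer ran exactly 3 steps
}
```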

Unix pipes are the canonical instance: "Data flows left to right. Each stage reads input, does its work, writes output. If a downstream stage is slow, upstream stages naturally slow down. Backpressure is implicit in the model, not a separate mechanism to learn (or ignore)."

Push-based APIs (e.g., a Web stream whose underlying source calls controller.enqueue() on a timer) produce regardless of consumer state, requiring an explicit backpressure mechanism in addition to the data-flow mechanism — and that separation is where advisory designs leak under load.
