CONCEPT Cited by 2 sources
Backpressure¶
Backpressure is the control-plane primitive by which a slow consumer in a streaming pipeline signals a fast producer to slow down. Without it, a pipeline whose producer outruns its consumer must either (a) accumulate data unboundedly in memory, (b) drop data, or (c) block the producer. Backpressure is the mechanism for choosing which of the three happens, explicitly and per-stream.
Why it matters¶
A streaming pipeline is not correct unless all three of these questions have answers:
1. What happens when the producer is faster than the consumer?
2. What happens when the consumer stops reading (crash / cancellation / network drop)?
3. What happens at each intermediate stage in a multi-hop pipeline?

Without backpressure, the default answer to (1) is "buffer until OOM", to (2) is "leak the producer forever", and to (3) is "each stage quietly accumulates independently".
The four responses¶
When a bounded buffer fills, there are only four possible responses. Any other answer is either a variation of these or domain-specific logic that doesn't belong in a general streaming primitive:
| Response | Behaviour | Use case |
|---|---|---|
| Reject | Refuse to accept more data (throw / error) | Catches fire-and-forget bugs; "I wrote .write(x) without await" |
| Block (wait) | Writer awaits until space is available | Normal producer-consumer with trusted producer |
| Drop oldest | Evict buffered-oldest to make room | Live feeds where stale data loses value |
| Drop newest | Discard incoming writes | Rate-limit; "process what you have" |
This taxonomy is explicit in the 2026-02-27 Cloudflare post (Source: sources/2026-02-27-cloudflare-a-better-streams-api-is-possible-for-javascript), which argues the API design should require the choice at stream-creation time (patterns/explicit-backpressure-policy), not leave it implicit.
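As a sketch of the taxonomy, the four responses can be expressed as a policy parameter chosen at construction time of a bounded queue. The `BoundedQueue` and `Policy` names below are illustrative, not from any of the cited APIs; the "block" case is reduced to a synchronous "not yet" signal, where a real async API would have the writer await free space:

```typescript
type Policy = "reject" | "block" | "drop-oldest" | "drop-newest";

class BoundedQueue<T> {
  private buf: T[] = [];
  constructor(private capacity: number, private policy: Policy) {}

  // Returns true if the value is now buffered, false if it was refused/deferred.
  push(value: T): boolean {
    if (this.buf.length < this.capacity) {
      this.buf.push(value);
      return true;
    }
    switch (this.policy) {
      case "reject":
        // Fail loudly: catches fire-and-forget writes the moment load arrives.
        throw new Error("queue full");
      case "drop-oldest":
        this.buf.shift();        // evict stale data; live feeds
        this.buf.push(value);
        return true;
      case "drop-newest":
        return false;            // discard the incoming write; rate-limit
      case "block":
        // In an async API the writer would await free space here; a
        // synchronous sketch can only report "no space yet".
        return false;
    }
  }

  shift(): T | undefined {
    return this.buf.shift();
  }
}
```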
Advisory vs enforced¶
Two fundamentally different designs:
- Advisory backpressure exposes a signal (e.g., `desiredSize` going negative, a `ready` promise) that the producer should consult but is not required to. This is the Web Streams design: `controller.enqueue()` always succeeds; `writer.ready` exists but is routinely ignored. "Stream implementations can and do ignore backpressure; and some spec-defined features explicitly break backpressure." — 2026-02-27 Cloudflare post
- Enforced backpressure fails the write or blocks the producer when the buffer is full. There is no way to ignore it; misuse is a thrown error, not silent memory growth. systems/new-streams' default `strict` policy is the canonical realization.
The advisory model fails by default on the path that matters
most: inexperienced developers and fire-and-forget
writer.write(x) (without await) patterns. Enforcement is the
failure-safe choice when the cost of silent unbounded buffering
is a crash under load.
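The advisory contract is visible in a few lines against the standard Web Streams API. In the sketch below, nothing fails if the `await writer.ready` line is deleted; the loop still looks correct, and the buffering just becomes unbounded — which is exactly the failure mode described above:

```typescript
// A cooperating producer under advisory backpressure: it voluntarily
// consults writer.ready before each write.
async function produce(ws: WritableStream<number>, n: number): Promise<void> {
  const writer = ws.getWriter();
  for (let i = 0; i < n; i++) {
    await writer.ready;   // advisory: nothing in the API forces this line to exist
    void writer.write(i); // without the await above, this still "succeeds"
  }
  await writer.close();   // close() waits for queued writes to drain
}
```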
Where backpressure breaks in real systems¶
- `tee()` unbounded buffering. Branching a single ReadableStream into two branches reading at different speeds buffers the faster branch's data until the slower branch catches up; the spec does not mandate buffer limits. Cloudflare Workers' implementation diverges: it signals backpressure based on the slowest consumer rather than the fastest, to avoid the implementation-default memory cliff.
- `TransformStream` push semantics. The `transform()` function runs on write, not on read. A synchronous always-enqueue transform never applies backpressure to the writable side, so a 3-stage pipeline may fill six internal buffers before the final consumer starts pulling.
- Intermediate layers that aren't backpressure-aware. Nginx's `proxy_buffering` is on by default; compression middleware buffers until a size/time threshold; gRPC streaming ⇆ Kafka hops. Each layer silently defeats end-to-end backpressure unless explicitly configured. See concepts/head-of-line-buffering for the streaming-SSR instance.
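The push semantics of `TransformStream` can be observed directly: when the readable side is uncongested, `transform()` runs as each chunk is written, before any reader has pulled. A small sketch against the standard API; the readable-side `highWaterMark` of 4 is chosen here only to keep the writable side from waiting:

```typescript
let transforms = 0;
const ts = new TransformStream<number, number>(
  {
    transform(chunk, controller) {
      transforms++;                 // driven by the writer, not the reader
      controller.enqueue(chunk * 2);
    },
  },
  undefined,                        // default writable strategy
  { highWaterMark: 4 },             // readable side uncongested: no backpressure
);

const writer = ts.writable.getWriter();
await writer.write(1);
await writer.write(2);
// Both transforms have already run, although no reader has pulled yet;
// the doubled chunks sit in the readable side's internal queue.
```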
Pull semantics as implicit backpressure¶
Pull-based streams (`for await…of`) have implicit backpressure as a side effect of the consumer-drives-execution model: if the consumer stops iterating, the producer stops producing. There is no signal to forget to check.
Unix pipes are the canonical instance: "Data flows left to right. Each stage reads input, does its work, writes output. If a downstream stage is slow, upstream stages naturally slow down. Backpressure is implicit in the model, not a separate mechanism to learn (or ignore)."
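A minimal sketch of the same property in JavaScript, using an async generator: the producer body only runs when the consumer pulls the next value, so breaking out of the loop stops production with no separate mechanism:

```typescript
let produced = 0;

// An infinite producer: suspended at the yield until the next pull.
async function* producer(): AsyncGenerator<number> {
  while (true) {
    yield produced++;
  }
}

const taken: number[] = [];
for await (const v of producer()) {
  taken.push(v);
  if (taken.length === 3) break; // consumer stops; the producer never runs again
}
// produced === 3: nothing was generated beyond what was consumed
```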
Push-based APIs (Web Streams' `controller.enqueue()` on a timer) produce regardless of consumer state, requiring an explicit backpressure mechanism in addition to the data-flow mechanism — and that separation is where advisory designs leak under load.
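By contrast, a push-based source runs on its own schedule. A sketch using the standard `ReadableStream` underlying-source API (the timer-driven source is illustrative): `enqueue()` succeeds whether or not anyone is reading, and only explicit cancellation stops the timer.

```typescript
function tickingStream(intervalMs: number): ReadableStream<number> {
  let i = 0;
  let timer: ReturnType<typeof setInterval>;
  return new ReadableStream<number>({
    start(controller) {
      timer = setInterval(() => {
        // Always "succeeds", even with no reader; a producer that never
        // checks controller.desiredSize buffers without bound.
        controller.enqueue(i++);
      }, intervalMs);
    },
    cancel() {
      clearInterval(timer); // production stops only on explicit cancellation
    },
  });
}
```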
Seen in¶
- sources/2026-02-27-cloudflare-a-better-streams-api-is-possible-for-javascript — canonical wiki instance: Web streams' advisory-only backpressure enumerated as a structural defect; four-policy taxonomy + enforced-default argued as the alternative.
- sources/2026-04-16-atlassian-streaming-ssr-confluence — intermediate-layer backpressure / buffering defaults in streaming SSR pipelines (concepts/head-of-line-buffering).
Related¶
- systems/web-streams-api — the canonical wiki instance of advisory-backpressure design.
- systems/new-streams — the canonical wiki instance of enforced-by-default backpressure.
- concepts/pull-vs-push-streams — the axis along which backpressure becomes implicit (pull) or explicit (push).
- concepts/async-iteration — the JS-language primitive that makes pull-semantics ergonomic.
- concepts/head-of-line-buffering — sibling streaming-hostile-defaults pattern at the HTTP-proxy tier.
- patterns/explicit-backpressure-policy — the design pattern that makes the four-response choice a required parameter.