
PATTERN

Explicit backpressure policy

Intent

When a bounded buffer in a streaming pipeline fills and the producer wants to write more, there are only four possible responses (reject, block, drop-oldest, drop-newest). A streaming API should require the producer to choose one at stream-creation time — not bury the choice behind an advisory signal (desiredSize, writer.ready) that producers routinely ignore.

Motivation

Web Streams exposes backpressure through two advisory signals:

  • controller.desiredSize — an integer that goes negative when the queue is over capacity, zero at capacity, and positive when there is room for more. Advisory-only; controller.enqueue() always succeeds.
  • writer.ready — a promise that resolves when there's room in the queue. Advisory-only; producers often ignore it.

The problem is structural: advisory signals fail silently. A fire-and-forget writer.write(x) (no await) bypasses every backpressure mechanism in the API. Nothing throws; the write is simply accepted, the buffer grows without bound, and the process OOMs under load. "Stream implementations can and do ignore backpressure; and some spec-defined features explicitly break backpressure."
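The failure mode is easy to reproduce with today's Web Streams (assuming Node 18+, where WritableStream and CountQueuingStrategy are globals): a sink that never finishes, a producer that never awaits. Every write is accepted; only desiredSize records the overflow.

```javascript
// Fire-and-forget writes against a full Web Streams sink: nothing throws,
// the queue just grows, and desiredSize goes negative.
const sink = new WritableStream(
  { write() { return new Promise(() => {}); } }, // consumer that never finishes
  new CountQueuingStrategy({ highWaterMark: 2 })
);

const writer = sink.getWriter();
for (let i = 0; i < 100; i++) {
  writer.write(i); // no await — every call is accepted without complaint
}
// writer.desiredSize is now far below zero; the only "enforcement" is that
// this number is available for the producer to check.
```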

The pattern

At stream-creation time, the producer chooses one of the four responses:

// new-streams API
const { writer, readable } = Stream.push({
  highWaterMark: 10,
  backpressure: 'strict',      // (default) throws when buffer+pending full
  // backpressure: 'block',    // awaits until space available
  // backpressure: 'drop-oldest', // discards oldest buffered
  // backpressure: 'drop-newest', // discards incoming
});

Semantics per policy:

Policy        On full buffer                         Use case
strict        Throws after too many pending writes   Catches fire-and-forget bugs; default
block         write() awaits until space             Trusted producer; throttle-to-consumer
drop-oldest   Evicts oldest buffered chunk           Live feeds; stale data loses value
drop-newest   Discards incoming chunk                Rate-limit; process-what-you-have

The choice is required. The default is the safest one — strict — which makes the common mistake (forgetting to await) surface as a thrown error rather than silent memory growth.
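The on-full semantics above can be modeled in a few lines. This is a sketch only — BoundedQueue is invented here to illustrate the four policies, not the post's Stream.push implementation (which, per the table, counts pending writes as well as buffered ones):

```javascript
// Minimal model of the four backpressure policies over a bounded queue.
class BoundedQueue {
  constructor({ highWaterMark, backpressure = 'strict' }) {
    this.limit = highWaterMark;
    this.policy = backpressure;
    this.items = [];
    this.waiters = []; // resolvers for writers blocked under 'block'
  }
  async push(item) {
    if (this.items.length < this.limit) { this.items.push(item); return; }
    switch (this.policy) {
      case 'strict':
        throw new Error('buffer full: missing await?'); // surface the bug
      case 'block':
        await new Promise(resolve => this.waiters.push(resolve)); // wait for a shift()
        this.items.push(item);
        return;
      case 'drop-oldest':
        this.items.shift(); // evict stale data, keep the new chunk
        this.items.push(item);
        return;
      case 'drop-newest':
        return; // discard the incoming chunk
    }
  }
  shift() {
    const item = this.items.shift();
    const wake = this.waiters.shift();
    if (wake) wake(); // hand the freed slot to a blocked writer
    return item;
  }
}
```

Under 'strict' the fire-and-forget producer fails loudly on the first over-limit push; under 'drop-oldest' the queue always holds the newest chunks.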

Why strict default matters

Most silent-growth bugs come from producers who learn the API from simple examples:

// Tutorial-grade pattern — seen in every streams introduction
for (const chunk of chunks) {
  writer.write(chunk);   // no await
}
await writer.end();

With advisory backpressure, this accepts every chunk: the buffer grows unboundedly and nothing fires until OOM. With strict, once a small pending-queue limit (tunable via highWaterMark) is exceeded, the Nth write throws a clear error; the developer adds await, and the problem is fixed before production.
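For comparison, the same discipline expressed voluntarily in today's Web Streams (Node 18+ globals assumed; received and the sink are illustrative): awaiting each write() promise throttles the producer to the consumer.

```javascript
// Awaiting each write() makes the producer run no faster than the consumer.
const received = [];
const sink = new WritableStream(
  { async write(chunk) { received.push(chunk); } },
  new CountQueuingStrategy({ highWaterMark: 1 })
);

async function produce(chunks) {
  const writer = sink.getWriter();
  for (const chunk of chunks) {
    await writer.write(chunk); // resolves once the sink has accepted the chunk
  }
  await writer.close();
}
```

This is exactly the one-line fix the strict default forces; the advisory API merely permits it.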

The four-policy taxonomy is exhaustive

The 2026-02-27 post argues the taxonomy is closed:

"Any other response is either a variation of these (like 'resize the buffer,' which is really just deferring the choice) or domain-specific logic that doesn't belong in a general streaming primitive."

  • "Resize the buffer" = defer the choice. Still hits one of the four eventually.
  • "Apply backpressure upstream" = block + a signal pathway (block-with-signal).
  • "Spill to disk" = domain-specific policy layered above.

Every real streaming system converges on these four. The pattern is to make the choice load-bearing in the API.

Comparison with Web streams

Aspect           Web streams                                 Explicit policy
Choice location  new ReadableStream({}, { highWaterMark })   stream creation (backpressure: 'strict')
                 (size only)
Default          block-via-advisory (routinely ignored)      strict (required)
Enforcement      desiredSize, which producers should check   API-level throw / wait
tee() behaviour  unbounded implementation-default buffering  Stream.share({ highWaterMark, backpressure }) required
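The "Enforcement" row on the readable side, demonstrated (Node 18+ globals assumed): enqueue() never pushes back either — with highWaterMark 1 and no reader pulling, five enqueues all succeed.

```javascript
// controller.enqueue() past the high-water mark still succeeds;
// only desiredSize records that the stream is over capacity.
let ctrl;
const rs = new ReadableStream(
  { start(c) { ctrl = c; } }, // capture the controller for the demo
  new CountQueuingStrategy({ highWaterMark: 1 })
);
for (let i = 0; i < 5; i++) ctrl.enqueue(i); // all accepted, nothing throws
// ctrl.desiredSize is now negative — advisory only; nothing stopped the producer
```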

tee() / multi-consumer as the same pattern

The same design choice applies at the multi-consumer branch point. Web streams' tee() has no buffer limit: if one branch reads 10× faster than the other, the slower branch's backlog grows silently and without bound. The explicit-policy response is that multi-consumer splits must take the same highWaterMark + backpressure parameters as stream creation:

const shared = Stream.share(source, {
  highWaterMark: 100,
  backpressure: 'strict',
});
const consumer1 = shared.pull();
const consumer2 = shared.pull(decompress);

"Both require you to think about what happens when consumers run at different speeds, because that's a real concern that shouldn't be hidden."
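A minimal model of that branch point (Stream.share is the post's API; SharedBuffer, the pull-as-closure shape, and the cursor bookkeeping are invented here for illustration): one bounded buffer, one cursor per consumer, and the policy applied against the slowest consumer's lag.

```javascript
// One buffer, per-consumer cursors; the write-side policy fires when the
// slowest consumer's backlog reaches the high-water mark.
class SharedBuffer {
  constructor({ highWaterMark, backpressure = 'strict' }) {
    this.hwm = highWaterMark;
    this.policy = backpressure;
    this.buffer = [];   // chunks not yet consumed by every branch
    this.base = 0;      // absolute index of buffer[0]
    this.cursors = [];  // absolute read position per consumer
  }
  pull() { // register a consumer starting at the head; returns a read function
    const id = this.cursors.push(this.base + this.buffer.length) - 1;
    return () => this.read(id);
  }
  write(chunk) {
    // drop the prefix every branch has already consumed
    while (this.buffer.length && Math.min(...this.cursors) > this.base) {
      this.buffer.shift();
      this.base += 1;
    }
    const lag = this.base + this.buffer.length - Math.min(...this.cursors);
    if (lag >= this.hwm) {
      if (this.policy === 'strict') throw new Error('slow consumer: shared buffer full');
      if (this.policy === 'drop-newest') return;
      // drop-oldest: evict the chunk the slowest consumer would read next
      this.buffer.shift();
      this.base += 1;
      this.cursors = this.cursors.map(c => Math.max(c, this.base));
    }
    this.buffer.push(chunk);
  }
  read(id) {
    const pos = Math.max(this.cursors[id], this.base);
    if (pos >= this.base + this.buffer.length) return undefined; // caught up
    this.cursors[id] = pos + 1;
    return this.buffer[pos - this.base];
  }
}
```

With 'strict', a branch that stops reading makes the next over-limit write throw at the producer, instead of silently growing the backlog the way tee() does.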

When to deviate

There is no universal right policy — the whole point is that the policy depends on the use case:

  • UI event streams → drop-oldest (latest mouse position matters; old ones are stale).
  • Payment events → block (every event matters; slow the producer).
  • Log ingestion → drop-newest + metrics on drops (process what you can; alert on dropped-event rate).
  • General server code → strict (the default; catches bugs early).

The pattern is about making the choice explicit, not prescribing which one.
