
CONCEPT

Promise pipelining

What it is

Promise pipelining is an RPC technique in which the caller uses the result of an in-flight call as an argument to — or receiver of — a subsequent call, before the first call has returned. A chain of N dependent calls ships in one network round trip instead of N.

The caller does not wait for the first call's return value. Instead, it tells the server: "when you finish evaluating call 1, use the result as the input to call 2; when you finish evaluating call 2, use that result as the input to call 3 …" — and it does this by predicting the ID each call will occupy in the server's export table, so subsequent messages can reference that ID immediately.

Popularized by Cap'n Proto (Kenton Varda, ~2013) and inherited by Cap'n Web (Cloudflare, 2025). The technique itself is older: Cap'n Proto credits promise pipelining to earlier capability systems such as the E language.

How it works

Each peer in a capability RPC session maintains an export table indexed by signed integer IDs. When the caller sends a push message ("evaluate this expression"), the result will land in the server's export table at a predictable positive ID — positive IDs are assigned to pushes in strictly increasing order starting from 1. Because the caller can predict the ID, it can use that ID in the very next message it sends, even before the first push has been evaluated.
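The ID-prediction mechanism can be sketched as a minimal client-side model. This is an illustrative sketch under the message shapes shown in the wire trace below; `PipelineSession` and its fields are assumed names, not Cap'n Web's actual classes:

```javascript
// Sketch: a client session that assigns positive export IDs to pushes in
// strictly increasing order, so a later message can reference an earlier
// result before that result exists.
class PipelineSession {
  constructor() {
    this.nextExportId = 1; // positive IDs, assigned in push order
    this.outbox = [];      // wire messages queued for one flush
  }
  // Queue a call on `targetId`; return the ID its result WILL occupy.
  push(targetId, method, args) {
    const resultId = this.nextExportId++;
    this.outbox.push(["push", ["pipeline", targetId, method, args]]);
    return resultId; // usable immediately as ["pipeline", resultId]
  }
  // Ask the server to send the resolved value of `id`.
  pull(id) {
    this.outbox.push(["pull", id]);
  }
}

const s = new PipelineSession();
const nameId = s.push(0, "getMyName", []);                  // predicts ID 1
const helloId = s.push(0, "hello", [["pipeline", nameId]]); // predicts ID 2
s.pull(helloId);
// s.outbox now holds all three outbound messages, ready to ship in a
// single network flush.
```

Note that the client never blocks between the two pushes: the only thing it needs from the first call is its ID, which it computed locally.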

Wire example (from Cap'n Web)

Source-level JavaScript:

let namePromise = api.getMyName();
let result = await api.hello(namePromise);

Wire trace (api is the server's main export, ID 0):

-> ["push", ["pipeline", 0, "getMyName", []]]
   // result lands at export ID 1
-> ["push", ["pipeline", 0, "hello", [["pipeline", 1]]]]
   // result lands at export ID 2; references ID 1 before it resolves
-> ["pull", 2]
   // "actually, send me the value of ID 2"
<- ["resolve", 2, "Hello, Alice!"]

Two push messages + one pull message + one resolve response = one network round trip for two dependent method calls.

Proxy interception (JavaScript specifically)

Cap'n Web implements pipelining by returning a JavaScript Proxy from every RPC call rather than a real Promise. Every method invoked on the proxy is interpreted as a speculative pipelined call and shipped immediately. Cap'n Proto, in C++/Java/other languages, uses a more explicit promise type but the idea is the same: the returned value carries a reference to its future result that can be used before resolution.
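The Proxy idea can be demonstrated in a few lines. This is a sketch, not Cap'n Web's real API: `makeStub`, the inline `session`, and the `__id` escape hatch are all assumed names for illustration.

```javascript
// Sketch: every property access on a stub returns a function that records
// a pipelined call and hands back another stub over the new result ID.
function makeStub(session, id) {
  return new Proxy(function () {}, {
    get(_target, prop) {
      if (prop === "__id") return id; // demo-only escape hatch
      return (...args) => {
        // Stub arguments are encoded as references to their future IDs.
        const encoded = args.map((a) =>
          typeof a === "function" && a.__id !== undefined
            ? ["pipeline", a.__id]
            : a
        );
        const resultId = session.push(id, prop, encoded);
        return makeStub(session, resultId); // chainable before resolution
      };
    },
  });
}

// Minimal session: queues pushes and predicts result IDs.
const session = {
  nextId: 1,
  outbox: [],
  push(target, method, args) {
    this.outbox.push(["push", ["pipeline", target, method, args]]);
    return this.nextId++;
  },
};

const api = makeStub(session, 0);
const namePromise = api.getMyName(); // recorded, not executed
api.hello(namePromise);              // references export ID 1 speculatively
// session.outbox now matches the two push messages in the wire trace.
```

Because `api.getMyName()` returns a stub rather than a resolved value, chains of arbitrary depth accumulate in the outbox without a single await.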

Why it matters

  • Collapses round trips. Latency-bound workflows (auth → fetch → filter → return) stop scaling with chain length. For browser apps on mobile networks with 100–300 ms RTT, a four-call chain drops from ~1 s to ~250 ms (one round trip instead of four at 250 ms RTT).
  • Eliminates the "waterfall" that motivated GraphQL. Cap'n Web's pitch is that pipelining gives you the same round-trip efficiency as a nested GraphQL query — without a new query language, schema DSL, resolver system, or client library. "GraphQL gave us a way to flatten REST's waterfalls. Cap'n Web lets us go even further: it gives you the power to model complex interactions exactly the way you would in a normal program, with no impedance mismatch." (Source: sources/2025-09-22-cloudflare-capn-web-rpc-for-browsers-and-web-servers)
  • Composes with capability returns. Because pipelining allows methods on a promised value, the authenticate(key).whoami() idiom — two logically-dependent calls — ships in one round trip, not two. See patterns/capability-returning-authenticate.
  • Foundational for .map() over promised arrays. Cap'n Web's record-replay DSL for .map() is implemented entirely via pipelining — the "DSL" the server interprets per array element is literally the pipelining protocol itself.
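The latency arithmetic in the first bullet can be checked directly; the numbers below (250 ms RTT, four dependent calls) are illustrative, not measured:

```javascript
// Back-of-envelope latency comparison for a dependent call chain.
const rttMs = 250;     // assumed mobile-network round-trip time
const chainLength = 4; // auth -> fetch -> filter -> return

const naiveMs = chainLength * rttMs; // one round trip per dependent call
const pipelinedMs = 1 * rttMs;       // whole chain ships in one round trip

console.log(naiveMs, pipelinedMs); // 1000 250
```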

Trade-offs and pitfalls

  • Error propagation is tricky. If call N in a chain fails, downstream calls that referenced its result must be rejected — the RPC runtime must track dependency edges and propagate reject messages appropriately.
  • Cancellation semantics. The caller may await only the final promise; intermediate pushes are implicitly "don't-care unless someone pipelines off me." Cap'n Web's rule: a pull is only sent for results the application actually awaits, which means some pushed computations are fire-and-forget.
  • Not a caching layer. Pipelining is purely round-trip optimization; it does not dedupe repeated calls or cache results across requests.
  • Debuggability shifts. A single logical "call chain" is now multiple wire messages; tracing needs to correlate push IDs across messages, not just match request/response pairs.
  • Server CPU amplification risk. A malicious or misbehaving client can pipeline a deep chain of speculative calls that the server evaluates eagerly. The capability model mitigates this (the client must already hold the relevant stubs), but rate limits and depth limits are still prudent.
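The first pitfall (error propagation along dependency edges) can be sketched with a toy server-side evaluator. `Evaluator` and its shape are assumptions for illustration, not Cap'n Web's internals:

```javascript
// Sketch: each push records success or failure; a push whose target
// already failed inherits that failure without ever being evaluated.
class Evaluator {
  constructor() {
    this.results = new Map(); // export ID -> {ok, value} | {ok, error}
  }
  // Evaluate `fn` against the result at `targetId`, storing under `id`.
  evalPush(id, targetId, fn) {
    const dep = this.results.get(targetId);
    if (dep && !dep.ok) {
      this.results.set(id, dep); // propagate the upstream rejection as-is
      return;
    }
    try {
      this.results.set(id, { ok: true, value: fn(dep && dep.value) });
    } catch (e) {
      this.results.set(id, { ok: false, error: String(e) });
    }
  }
}

const ev = new Evaluator();
ev.evalPush(1, 0, () => { throw new Error("auth failed"); }); // chain head fails
ev.evalPush(2, 1, (name) => `Hello, ${name}!`);               // never runs
// ev.results.get(2) carries the same rejection as export ID 1.
```

A real runtime would additionally send reject messages back for every pulled ID in the failed chain, which is why it must track the dependency edges rather than just per-call state.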

Relationship to other techniques

  • HTTP/2 multiplexing and HTTP/3 remove head-of-line blocking for independent requests — they do not help with dependent request chains. Promise pipelining specifically targets the dependent-chain case.
  • GraphQL nested queries achieve round-trip collapse for nested reads via a declarative query language; pipelining achieves the same for arbitrary method calls via an imperative language. Mutations-and-then-use-the-result is where the difference is most visible.
  • Apache Thrift / gRPC / JSON-RPC 2.0 do not offer promise pipelining; each dependent call costs a round trip.
