PATTERN Cited by 1 source
# Two-loop parallel async build

## When to use
You are composing a response from N independent units (features, widgets, panels, …). Each unit calls upstream services, and you want total latency to be the max of the upstream latencies, not their sum — in a language or framework that predates structured concurrency (Python pre-asyncio, older Java, etc.) or whose existing interface uses synchronous-looking methods.
## The pattern
Split each unit's build into two halves connected by a future:

- Half 1 — kick off async work. A method like `load_data()` creates a future/promise for each upstream call but does not block on it; it returns immediately.
- Half 2 — consume results. A method like `resolve()` calls `.result()` on the future, which blocks only here.
Then iterate over all units twice:

- Loop 1 (fan-out): for each unit, call half 1. Every upstream request is now in flight.
- Loop 2 (await): for each unit, call half 2. The first `resolve()` blocks briefly for its slowest future; later `resolve()`s are non-blocking because their futures already completed while loop 1 was iterating.
Total latency is bounded by the slowest upstream call, not the sum of all upstream calls.
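A minimal sketch of the two-loop shape, using `concurrent.futures` threads to stand in for async upstream calls — all names here are illustrative, not from the post:

```python
import time
from concurrent.futures import Future, ThreadPoolExecutor
from typing import Optional

# Stand-in for an async RPC client: submitting to the pool fires the
# "upstream call" without blocking the caller.
_executor = ThreadPoolExecutor(max_workers=8)

def call_upstream(name: str, latency: float) -> str:
    time.sleep(latency)              # simulated upstream latency
    return f"{name}-payload"

class Unit:
    """One independent unit; its build is split into the two halves."""

    def __init__(self, name: str, latency: float) -> None:
        self.name = name
        self.latency = latency
        self._future: Optional[Future] = None

    def load_data(self) -> None:
        # Half 1: fire the upstream call, return immediately.
        self._future = _executor.submit(call_upstream, self.name, self.latency)

    def resolve(self) -> str:
        # Half 2: block (only here) until the future completes.
        return self._future.result()

units = [Unit("A", 0.3), Unit("B", 0.2), Unit("C", 0.1)]

start = time.monotonic()
for u in units:                          # loop 1: fan-out
    u.load_data()
results = [u.resolve() for u in units]   # loop 2: await
elapsed = time.monotonic() - start

print(results)             # ['A-payload', 'B-payload', 'C-payload']
print(f"{elapsed:.2f}s")   # ≈ 0.3s (the max latency), not 0.6s (the sum)
```

Swapping the two loops for a single `u.load_data(); u.resolve()` loop makes `elapsed` approach the sum — the same code with the calls interleaved is the degenerate serial case.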
## Yelp CHAOS concrete implementation
Verbatim from the 2025-07-08 post:

> "a view can contain multiple features, and during the build process, all features are built in parallel to enhance performance. To achieve this, the feature providers are iterated over twice. In the first loop, the build process is initiated, triggering any asynchronous calls to external services. This includes the steps: registers, is_qualified_to_load, and load_data. The second loop waits for responses and completes the build process, encompassing the steps: resolve, is_qualified_to_present, and result_presenter."
The split point between the two loops is the boundary between non-blocking work (capability check, cheap qualification, spawning upstream requests) and blocking consumption (awaiting results, final qualification, building the output).
```
loop 1 ──▶ feature_A.load_data() ───▶ fires req A (future_A)
           feature_B.load_data() ───▶ fires req B (future_B)
           feature_C.load_data() ───▶ fires req C (future_C)
                         │
           ┌─────────────┘
           │ all three requests now in flight, in parallel
           ▼
loop 2 ──▶ feature_A.resolve() ───▶ blocks on future_A
           feature_A.result_presenter() → components
           feature_B.resolve() ───▶ future_B may already be done
           feature_B.result_presenter() → components
           feature_C.resolve() ───▶ future_C may already be done
           feature_C.result_presenter() → components
```
Total blocking time: ≈ max(latency_A, latency_B, latency_C), modulo the small amount of work between loops.
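The stage split between the two loops can be sketched as a provider interface. This is a hypothetical reconstruction, not Yelp's code: the `FeatureProvider` class, the `ctx` parameter, and every method body are assumptions; only the stage names come from the quoted post.

```python
class FeatureProvider:
    """Hypothetical sketch; stage names from the CHAOS post."""

    def is_qualified_to_load(self, ctx) -> bool:   # cheap, non-blocking check
        return True

    def load_data(self, ctx) -> None:              # spawn upstream calls
        raise NotImplementedError

    def resolve(self) -> None:                     # await results (blocks)
        raise NotImplementedError

    def is_qualified_to_present(self) -> bool:     # final qualification
        return True

    def result_presenter(self):                    # build output components
        raise NotImplementedError

def build_view(providers, ctx):
    loaded = []
    for p in providers:                  # loop 1: non-blocking stages only
        if p.is_qualified_to_load(ctx):
            p.load_data(ctx)
            loaded.append(p)
    components = []
    for p in loaded:                     # loop 2: blocking consumption
        p.resolve()
        if p.is_qualified_to_present():
            components.append(p.result_presenter())
    return components

# Trivial concrete provider to exercise the orchestration.
class Greeting(FeatureProvider):
    def load_data(self, ctx) -> None:
        self._data = f"hello {ctx}"      # would normally fire a future here

    def resolve(self) -> None:
        pass                             # would normally block on the future

    def result_presenter(self):
        return self._data

print(build_view([Greeting()], "world"))   # ['hello world']
```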
## Why two loops and not async/await
Yelp notes the two-loop pattern is a transitional design and "the latest CHAOS backend framework introduces the next generation of builders using Python asyncio, which simplifies the interface."
The two-loop iteration is what you do when:

- Your interface is sync-looking methods — you can't easily make `load_data` return an awaitable because callers expect a void-return contract.
- You need to express parallelism without rewriting call sites — the framework orchestrates parallelism; feature authors write mostly linear code.
- You're in pre-asyncio Python, pre-virtual-threads Java, or a similar environment.
With structured concurrency, the same outcome is achieved by a single `await asyncio.gather(*[feature.build() for feature in features])` — one method per feature, no two-loop dance.
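Under asyncio the two halves collapse into one coroutine per feature, and `gather` supplies the fan-out and the await in a single expression. A minimal sketch with illustrative names (this is not CHAOS's actual builder interface):

```python
import asyncio

async def fetch_upstream(name: str) -> str:
    await asyncio.sleep(0.1)            # simulated upstream call
    return f"{name}-data"

async def build_feature(name: str) -> str:
    # One method per feature: fire, await, and present read as linear code.
    data = await fetch_upstream(name)
    return f"<{data}>"

async def build_view(features):
    # gather runs all builds concurrently; latency ≈ the slowest feature.
    return await asyncio.gather(*[build_feature(f) for f in features])

components = asyncio.run(build_view(["A", "B", "C"]))
print(components)   # ['<A-data>', '<B-data>', '<C-data>']
```

The event loop does the bookkeeping that loop 1 and loop 2 did by hand: suspension at each `await` replaces the explicit future, and `gather` preserves result order.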
## Hard problems
- Accidentally blocking in loop 1. If any `load_data()` call blocks, the fan-out collapses into serialised calls. Discipline (or a lint rule) is required to keep loop 1 non-blocking.
- Hidden dependencies between features. The pattern assumes features are data-independent. If feature B's `load_data` depends on feature A's result, the two-loop model can't express that, and the features must be reordered or combined.
- Error-in-loop-1 surprises. A register-matching error or a synchronous config lookup failure in `load_data()` can prevent the future from being created at all. The error surfaces only in loop 2 (or not at all, if the wrapper swallows it). See patterns/error-isolation-per-feature-wrapper.
- Cancellation is ugly. Cancelling all outstanding futures on a deadline breach requires explicit tracking of every future spawned in loop 1.
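The first pitfall can be demonstrated directly: one synchronous call hiding inside half 1 serialises the entire fan-out. A small timing sketch, with threads standing in for upstream calls and all names illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=4)
LATENCY = 0.15

def upstream() -> str:
    time.sleep(LATENCY)
    return "ok"

def run_two_loops(load_data):
    """Loop 1 with the given half-1 implementation, then loop 2."""
    start = time.monotonic()
    futures = [load_data() for _ in range(3)]   # loop 1
    results = [f.result() for f in futures]     # loop 2
    assert results == ["ok"] * 3
    return time.monotonic() - start

def load_data_ok():
    # Correct half 1: submit and return immediately.
    return executor.submit(upstream)

def load_data_buggy():
    # Buggy half 1: a synchronous upstream call sneaks in before submit,
    # so each loop-1 iteration now blocks for LATENCY.
    upstream()
    return executor.submit(upstream)

fast = run_two_loops(load_data_ok)      # ≈ LATENCY: true fan-out
slow = run_two_loops(load_data_buggy)   # ≈ 4 × LATENCY: fan-out collapsed
print(f"non-blocking loop 1: {fast:.2f}s")
print(f"blocking loop 1:     {slow:.2f}s")
```

This is the kind of regression that a timing assertion in an integration test (or a "no blocking calls before submit" lint rule) can catch.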
## Contrast with adjacent patterns
- Futures.allOf / CompletableFuture.allOf (Java) — single aggregate future, still one-pass.
- asyncio.gather (Python) — structured concurrency; collapses the two loops.
- Rx / Reactive streams — one pipeline per feature composed into a merged stream; different shape, similar latency property.
- patterns/parallel-integration-test-suite-for-context-switch — parallelises test runs at suite level; different domain, similar fan-out intuition.
## Seen in
- sources/2025-07-08-yelp-exploring-chaos-building-a-backend-for-server-driven-ui — Yelp's CHAOS framework; first wiki instance.
## Related
- systems/yelp-chaos
- patterns/feature-provider-lifecycle — defines which stages sit in each loop
- concepts/server-driven-ui