
PATTERN

Feature provider lifecycle

When to use

You are building a server-driven-UI backend (or any request-scoped composition engine) that:

  • Composes many independent features per response.
  • Gives each feature its own data dependencies (calls to upstream services, DBs, caches).
  • Subjects each feature to client-compatibility checks and business-logic qualification, either of which may drop it.
  • Faces latency pressure that requires parallel data loading across features.

The pattern

Every feature inherits from a FeatureProvider base class that defines six named stages with distinct responsibilities:

  1. registers — declare the client capability requirements and pick a presenter handler. See concepts/register-based-client-capability-matching. If no register matches the client, the feature is dropped here.
  2. is_qualified_to_load — cheap pre-load gate; feature flags, A/B bucket checks, simple config gates. Returns False to drop the feature before any upstream call.
  3. load_data — fire asynchronous upstream requests. Must be non-blocking — the build pipeline batches all features' load_data() calls and waits on the union.
  4. resolve — block on the responses from load_data and normalise them into feature-local state.
  5. is_qualified_to_present — post-load gate; can drop the feature based on data content (e.g. no reminders for this user → hide the reminders feature).
  6. result_presenter — produce the list of Components and Actions that implement the feature.

Verbatim (2025-07-08): the framework "iterate[s] over [features] twice. In the first loop, the build process is initiated, triggering any asynchronous calls to external services. This includes the steps: registers, is_qualified_to_load, and load_data. The second loop waits for responses and completes the build process, encompassing the steps: resolve, is_qualified_to_present, and result_presenter."

from typing import List

class FeatureProviderBase:
    @property
    def registers(self) -> List[Register]:
        """Declares client capability requirements; no match drops the feature."""
        ...

    def is_qualified_to_load(self) -> bool:
        """Cheap pre-load gate; False drops the feature before any upstream call."""
        return True

    def load_data(self) -> None:
        """Initiates asynchronous data loading; must not block."""

    def resolve(self) -> None:
        """Blocks on load_data's responses; processes data for SDUI component configuration."""

    def is_qualified_to_present(self) -> bool:
        """Post-load gate; False drops the feature based on data content."""
        return True

    def result_presenter(self) -> List[Component]:
        """Produces the Components and Actions that implement the feature."""
        ...
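
The two-loop build described in the verbatim quote can be sketched end to end. This is a minimal stand-in, not the CHAOS framework's actual driver: `build_view`, `StubProvider`, and the lambda-as-future are all illustrative assumptions.

```python
# Hypothetical driver for the two-loop build; names are illustrative.
from typing import Any, List

class StubProvider:
    """Minimal stand-in implementing the six lifecycle stages."""
    registers = [object()]               # pretend one register matched

    def is_qualified_to_load(self) -> bool:
        return True

    def load_data(self) -> None:
        self._future = lambda: "data"    # stands in for an async request

    def resolve(self) -> None:
        self._data = self._future()      # "blocks" on the response

    def is_qualified_to_present(self) -> bool:
        return bool(self._data)

    def result_presenter(self) -> List[str]:
        return [f"Component({self._data})"]

def build_view(features: List[Any]) -> List[str]:
    # Loop 1: registers -> is_qualified_to_load -> load_data (non-blocking).
    loading = [f for f in features
               if f.registers and f.is_qualified_to_load()]
    for f in loading:
        f.load_data()                    # every upstream request fires here
    # Loop 2: resolve -> is_qualified_to_present -> result_presenter.
    components: List[str] = []
    for f in loading:
        f.resolve()                      # first point that may block
        if f.is_qualified_to_present():
            components.extend(f.result_presenter())
    return components
```

Note that no feature's `resolve` runs until every feature's `load_data` has fired; that ordering is what the two separate loops buy.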

Two gate stages, on purpose

  • is_qualified_to_load is pre-data. Use cheap signals — feature flags, cohort membership, client platform — to avoid pointlessly hitting upstream services.
  • is_qualified_to_present is post-data. Use data-dependent signals — did the upstream return empty? is this user in a state where the feature makes sense? — to make the last-mile inclusion decision.

Separating these avoids both of the naive failure modes: loading data for features you're going to drop (wasted upstream calls), and committing to rendering features whose data says they shouldn't appear (empty rows, zero-state flicker).
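
A sketch of the two gates in one provider, using a reminders feature as in the text. Everything here is assumed for illustration: the flag store, `AsyncRemindersRequest`, and `RemindersFeatureProvider` are not real Yelp APIs.

```python
# Hypothetical reminders feature showing pre-data vs post-data gating.
FLAGS = {"reminders": True}              # stand-in feature-flag store

def feature_flag_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

class AsyncRemindersRequest:
    """Stand-in for a non-blocking upstream call."""
    def __init__(self, user_id: str) -> None:
        self._user_id = user_id

    def result(self) -> list:            # blocks in the real framework
        return [] if self._user_id == "new-user" else ["Call the dentist"]

class RemindersFeatureProvider:
    def __init__(self, user_id: str) -> None:
        self.user_id = user_id

    def is_qualified_to_load(self) -> bool:
        # Pre-data gate: cheap signals only, no upstream traffic yet.
        return feature_flag_enabled("reminders")

    def load_data(self) -> None:
        self._future = AsyncRemindersRequest(self.user_id)

    def resolve(self) -> None:
        self._reminders = self._future.result()

    def is_qualified_to_present(self) -> bool:
        # Post-data gate: hide the feature when upstream returned nothing.
        return len(self._reminders) > 0
```

A user with no reminders passes the first gate (the flag is on) but fails the second, so the feature is dropped without rendering an empty row.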

Why parallel iteration matters

See patterns/two-loop-parallel-async-build for the full rationale. Key point: load_data returns futures, not results. The build pipeline calls load_data across all features in quick succession so every upstream request fires before any single feature blocks on a response. Only in loop 2 does each feature's resolve() actually block. This cuts total build latency from sum of upstream latencies to max of upstream latencies.
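
The sum-to-max reduction is easy to demonstrate with a toy measurement. This sketch simulates the pattern with `concurrent.futures` (an assumption; the source does not say which async mechanism Yelp's futures use): three 0.1 s upstreams complete in roughly max (~0.1 s) rather than sum (~0.3 s) because all three fire before any `.result()` blocks.

```python
# Toy demonstration: fire all requests (loop 1) before blocking (loop 2).
import time
from concurrent.futures import Future, ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=8)

def load_data(delay: float) -> Future:
    # Loop-1 behaviour: spawn the upstream call, return a future, not a result.
    return pool.submit(lambda: (time.sleep(delay), "payload")[1])

start = time.monotonic()
futures = [load_data(0.1) for _ in range(3)]   # loop 1: all requests fire
results = [f.result() for f in futures]        # loop 2: now we block
elapsed = time.monotonic() - start             # ~0.1 s, not ~0.3 s
```

Moving the `f.result()` call inside the first list comprehension would serialise the waits and restore the ~0.3 s sum, which is exactly the mistake the two-loop split guards against.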

Error isolation

Each stage is invoked through an error-decorator wrapper — see patterns/error-isolation-per-feature-wrapper. A thrown exception drops the feature (unless marked essential) and logs the failure with owner info, rather than failing the whole view.
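
A minimal sketch of such a wrapper, assuming a hypothetical `isolate_feature` decorator and `essential`/`owner` attributes; the real wrapper is the one described in patterns/error-isolation-per-feature-wrapper.

```python
# Hypothetical per-feature error isolation decorator.
import functools
import logging

def isolate_feature(stage):
    """Drop the feature (return None) instead of failing the whole view."""
    @functools.wraps(stage)
    def wrapper(self, *args, **kwargs):
        try:
            return stage(self, *args, **kwargs)
        except Exception:
            if getattr(self, "essential", False):
                raise                    # essential features still fail hard
            logging.exception(
                "Dropping feature %s (owner: %s) after %s raised",
                type(self).__name__,
                getattr(self, "owner", "unknown"),
                stage.__name__,
            )
            return None                  # pipeline treats None as "dropped"
    return wrapper
```

The pipeline would apply this to each of the six stage calls, so one feature's bug costs that feature, not the response.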

Yelp's concrete example

class WelcomeFeatureProvider(FeatureProviderBase):
    @property
    def registers(self) -> List[Register]:
        return [Register(
            condition=Condition(
                platform=[Platform.IOS, Platform.ANDROID],
                library=[TextV1, IllustrationV1, ButtonV1],
            ),
            presenter_handler=self.result_presenter,
        )]

    def is_qualified_to_load(self) -> bool: return True

    def load_data(self) -> None:
        self._button_text_future = AsyncButtonTextRequest()

    def resolve(self) -> None:
        self._button_text = (
            self._button_text_future.result().text
        )

    def result_presenter(self) -> List[Component]:
        return [
            Component(component_data=TextV1(
                text="Welcome to Yelp!",
                style=TextStyleV1.HEADER_1,
                text_alignment=TextAlignment.CENTER,
            )),
            Component(component_data=IllustrationV1(
                dimensions=Dimensions(width=375, height=300),
                url="https://media.yelp.com/welcome-to-yelp.svg",
            )),
            Component(component_data=ButtonV1(
                text=self._button_text,
                button_type=ButtonType.PRIMARY,
                onClick=[Action(action_data=OpenUrlV1(
                    url="https://yelp.com/search"
                ))],
            )),
        ]

Note the minimal work in load_data (just spawning the async request) vs the blocking .result() call in resolve. That split is the whole point of the two-loop pattern.

Trade-offs

  • Discipline cost. Developers must resist calling .result() inside load_data or doing IO in result_presenter. A stray blocking call destroys the parallelism.
  • State shuttled across stages via self. Each feature instance carries state (self._button_text_future, self._button_text) between the six methods. This is fine for short-lived instances but gets confusing when lifecycle methods are inherited.
  • Post-asyncio redesign in flight. Yelp notes "the latest CHAOS backend framework introduces the next generation of builders using Python asyncio, which simplifies the interface." The six stages may collapse into a single async def build(self, context) -> List[Component] with internal awaits once asyncio migration completes.
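
To make the last trade-off concrete: a speculative sketch of how the six stages could collapse under asyncio. This interface is a guess based on the quoted note, not the shipped next-generation builder API; `WelcomeFeature` and `fetch_button_text` are assumptions.

```python
# Speculative single-method builder: gates become early returns,
# load_data + resolve become awaits.
import asyncio
from typing import List

async def fetch_button_text(user_id: str) -> str:
    await asyncio.sleep(0)               # stands in for a real upstream call
    return "Start exploring"

class WelcomeFeature:
    async def build(self, context: dict) -> List[str]:
        if not context.get("welcome_enabled"):        # was is_qualified_to_load
            return []
        button_text = await fetch_button_text(         # was load_data + resolve
            context["user_id"])
        if not button_text:                            # was is_qualified_to_present
            return []
        return [f"Button({button_text})"]              # was result_presenter

result = asyncio.run(
    WelcomeFeature().build({"welcome_enabled": True, "user_id": "u1"}))
```

The event loop then provides the loop-1/loop-2 parallelism for free via `asyncio.gather` over all features' `build` coroutines, at the cost of losing the named stage boundaries.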
