PATTERN
Page performance quality gates¶
Problem¶
Once a frontend platform is shared by many teams and many features, any single contributor can ship a regression that hurts the whole site: a heavier component, a new dependency, an unintentionally non-tree-shaken import, or a layout shift. Without automated gates, these regressions land one commit at a time and get caught — if at all — only after they have degraded production metrics for real users.
The question is: what automated, per-PR gates catch performance regressions before they ship?
Solution¶
Treat performance and accessibility as CI quality gates — not advisory dashboards — on every pull request. Zalando's Interface Framework codifies three orthogonal gates (Source: sources/2021-03-10-zalando-micro-frontends-from-fragments-to-renderers-part-1):
- Lighthouse CI — Google's lighthouse-ci runs per-page performance + accessibility tests. Each PR gets a Lighthouse score with assertions; a regression below the asserted threshold fails the PR.
- Bundle Size Limits — a bundler-wired check computes the size of each Renderer (or any per-page artifact) on every PR and reports the diff against the mainline bundle, but only for the Renderers the PR changed. Regressions above the limit fail the PR.
- Client Metrics (Web Vitals + custom) — every served page emits Web Vitals (LCP, CLS, FID/INP, TTFB) plus application-specific metrics for every request. This is the production feedback loop that catches what CI gates miss — regressions that only show up on real-user devices and networks.
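As an illustration of the first gate, here is a minimal Lighthouse CI configuration in the `lighthouserc.js` format that Google's lighthouse-ci tool accepts. The thresholds and URL are illustrative only; the post does not publish Zalando's actual assertion values.

```js
// lighthouserc.js — minimal sketch; thresholds are illustrative, not Zalando's.
module.exports = {
  ci: {
    collect: {
      url: ['http://localhost:3000/'], // page(s) the PR's Renderer serves
      numberOfRuns: 3,                 // median over runs to dampen flakiness
    },
    assert: {
      assertions: {
        // A score below the asserted minimum fails the PR.
        'categories:performance': ['error', { minScore: 0.9 }],
        'categories:accessibility': ['error', { minScore: 0.95 }],
      },
    },
  },
};
```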
The three gates are complementary:

| Synthetic (CI) | Real-user (prod) |
| --- | --- |
| Lighthouse CI: scoreboard per PR | Web Vitals + custom: continuous, keyed by page |
| Bundle Size: diff per PR | |
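The per-PR bundle diff can be sketched as a small comparison over artifact sizes. This is a hypothetical helper, not Zalando's actual implementation; the post names the gate but not its code or policy.

```js
// Sketch of a per-PR bundle-size gate (hypothetical; the post does not
// describe the real implementation). Given byte sizes of each changed
// Renderer bundle on mainline vs. the PR branch, fail the PR when the
// growth exceeds a budget.
function checkBundleDiff(mainlineSizes, prSizes, maxGrowthBytes) {
  const failures = [];
  for (const [name, prSize] of Object.entries(prSizes)) {
    const baseSize = mainlineSizes[name] ?? 0; // new Renderer: diff vs. zero
    const delta = prSize - baseSize;
    if (delta > maxGrowthBytes) {
      failures.push(`${name}: +${delta} B exceeds budget of ${maxGrowthBytes} B`);
    }
  }
  return failures; // empty array means the gate passes
}
```

In CI this would run only over the Renderers the PR touched, mirroring the "diff per PR" behavior described above.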
Why all three and not just one¶
- Lighthouse CI alone catches regressions only on the synthetic device/network profile Lighthouse runs with. It produces false positives on slow CI runners and misses anything device-specific.
- Bundle-size alone is a good leading indicator but doesn't see runtime costs (hydration, layout, long tasks).
- Web Vitals alone catches real regressions but only after the release; you need the CI gates to prevent them, not just diagnose them.
Zalando's platform (Source: sources/2021-03-10-zalando-micro-frontends-from-fragments-to-renderers-part-1) ships all three together as platform-level capabilities that feature teams do not re-wire per Renderer — the platform provides the plumbing, teams provide the Renderers.
Where the gates live¶
- Lighthouse CI / Bundle Size — per pull request, in the same CI that runs unit tests. Blocking.
- Web Vitals + custom metrics — built into the Rendering Engine / the client bootstrap so every response emits them. Observable in Zalando's monitoring stack.
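The client-side emission can be sketched as a payload builder wired to the Web Vitals callbacks. All names here are hypothetical; the post confirms that Web Vitals plus custom metrics are emitted per page, but not the wire format.

```js
// Sketch of a client metrics beacon (hypothetical names and format; the
// post does not describe Zalando's actual payload). Packages one Web
// Vitals measurement plus application-specific fields, keyed by page.
function buildVitalsBeacon(pageId, metric, custom = {}) {
  return JSON.stringify({
    page: pageId,       // metrics are keyed by page, per the pattern
    name: metric.name,  // e.g. 'LCP', 'CLS', 'INP', 'TTFB'
    value: metric.value,
    ...custom,          // application-specific metrics
  });
}

// In the browser this would hook into the web-vitals library, e.g.:
//   onLCP((m) => navigator.sendBeacon('/metrics', buildVitalsBeacon(pageId, m)));
```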
Known open questions from the Zalando Part 1 post¶
- Per-Renderer bundle-size budget. The post names "Bundle Size Limits" but not the policy — is the limit a fixed per-Renderer value, or scaled relative to the mainline page weight?
- Lighthouse thresholds and flakiness. How are the assertions set, and how are transient CI-runner regressions handled?
- Custom client metrics. Named as "custom metrics to capture all Zalando pages' user experience" but the list is not enumerated.
Related patterns¶
- patterns/two-pass-api-migration and patterns/feature-flagged-dual-implementation — pair well with performance gates: ship the new implementation under a flag, leave the old path live, measure the delta in client metrics before cutover.
- patterns/a11y-checks-via-playwright-fixture-extension — the accessibility-gate counterpart for end-to-end tests. Lighthouse gives you synthetic a11y scores; Playwright + axe gives you the E2E a11y assertion.
Seen in¶
- systems/zalando-interface-framework — Lighthouse CI + Bundle Size Limits as per-PR CI gates, Web Vitals + custom metrics as the always-on production feedback, all provided as platform capabilities to every Renderer contributor (Source: sources/2021-03-10-zalando-micro-frontends-from-fragments-to-renderers-part-1).