PATTERN Cited by 1 source

Asset Preload Prediction

In SSR apps the server only learns which JS bundles a page needs after rendering completes (it records component IDs during render and looks them up in a build-time manifest). That means asset downloads block on SSR finishing, and hydration blocks on asset downloads — pushing Time to Interactive (TTI) late. Asset preload prediction short-circuits this by starting bundle downloads before SSR finishes, using two signals:

  1. Historical prediction: a feedback loop over prior renders records which bundles the page route typically needs. The server emits preload <link> tags for those bundles as the very first chunk, so the browser starts the downloads in parallel with SSR.
  2. Mid-stream correction: as streaming SSR proceeds, the server pushes component-ID metadata inline in the HTML; the browser adds preload hints for any bundles not covered by the historical set, before SSR finishes the page.
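The historical-prediction signal can be sketched as a per-route feedback loop. This is a minimal illustration, not Confluence's implementation; the names (`RoutePredictor`, `recordRender`, `predictPreloads`) and the 90% frequency threshold are assumptions:

```typescript
// Hypothetical sketch of the historical-prediction feedback loop.
// After each render the server records which bundles the route used;
// before the next render it emits preload <link> tags for bundles seen
// in at least `threshold` of that route's recent renders.
class RoutePredictor {
  private renders = new Map<string, number>(); // route -> render count
  private hits = new Map<string, Map<string, number>>(); // route -> bundle -> hit count

  constructor(private threshold = 0.9) {}

  // Feedback loop: after SSR completes, record the bundles the page used.
  recordRender(route: string, bundles: string[]): void {
    this.renders.set(route, (this.renders.get(route) ?? 0) + 1);
    const perRoute = this.hits.get(route) ?? new Map<string, number>();
    for (const b of bundles) perRoute.set(b, (perRoute.get(b) ?? 0) + 1);
    this.hits.set(route, perRoute);
  }

  // First chunk: preload tags for bundles this route usually needs.
  predictPreloads(route: string): string[] {
    const total = this.renders.get(route) ?? 0;
    if (total === 0) return [];
    const perRoute = this.hits.get(route) ?? new Map<string, number>();
    return [...perRoute.entries()]
      .filter(([, n]) => n / total >= this.threshold)
      .map(([b]) => `<link rel="preload" href="${b}" as="script">`);
  }
}

const predictor = new RoutePredictor(0.9);
predictor.recordRender("/wiki/page", ["common.js", "editor.js"]);
predictor.recordRender("/wiki/page", ["common.js", "charts.js"]);
console.log(predictor.predictPreloads("/wiki/page"));
// common.js appears in 2/2 renders and clears the 0.9 threshold;
// editor.js and charts.js (1/2 each) do not
```

Emitting these tags as the very first chunk is what lets the browser fetch bundles in parallel with the ongoing SSR work.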

Mechanics

  • Requires a build-time manifest mapping component IDs → bundle paths. Already needed for normal SSR asset resolution; the prediction layer reads the same manifest.
  • Prediction is per-route (or finer) and kept warm by recent-render frequency. "Common bundles" across every page give the biggest wins with minimum waste.
  • The "different pages use different features" long tail — Confluence's example is Table-of-Contents and Charts — is where mid-stream component-ID metadata recovers precision: prediction covers the base, metadata fills the page-specific.

Impact

Confluence reports the prediction-plus-mid-stream approach cut interaction-ready time nearly in half (Source: sources/2026-04-16-atlassian-streaming-ssr-confluence). The effect is largest on slow-network / high-latency clients, where the formerly-serialized asset download was the dominant term.

Trade-offs

  • Over-preload wastes bandwidth and can push out other critical fetches. Tune the historical frequency threshold so low-probability bundles aren't included.
  • Prediction staleness: a code-split refactor can invalidate the prediction set; the predictor needs either automatic decay or a hook that resets it on rebuild.
  • Inline metadata cost: the per-chunk component-ID sidecar adds HTML bytes. Confluence makes it small and streams it between chunks rather than as one blob.
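The staleness trade-off above is commonly handled with an exponentially weighted frequency instead of raw counts, so renamed bundles age out on their own. A sketch under that assumption (the `decayUpdate` helper and the `alpha = 0.2` rate are illustrative, not from the source):

```typescript
// Hypothetical decay sketch: each render blends a 0/1 "was this bundle
// used?" observation into a running frequency. A bundle orphaned by a
// code-split refactor decays geometrically and soon drops below the
// preload threshold, instead of being predicted forever.
function decayUpdate(
  freq: Map<string, number>,
  seen: Set<string>,
  alpha = 0.2 // weight of the newest render; higher = faster adaptation
): void {
  const keys = new Set([...freq.keys(), ...seen]);
  for (const k of keys) {
    const prev = freq.get(k) ?? 0;
    const hit = seen.has(k) ? 1 : 0;
    freq.set(k, (1 - alpha) * prev + alpha * hit);
  }
}

const freq = new Map<string, number>([["old-widget.js", 1.0]]);
// Ten renders after the refactor, old-widget.js is never used again:
for (let i = 0; i < 10; i++) decayUpdate(freq, new Set(["common.js"]));
console.log(freq.get("old-widget.js")); // 0.8^10 ≈ 0.107, below a 0.2 threshold
```

A rebuild hook that simply clears the frequency map is the blunter alternative; decay trades a few stale preloads for not discarding history on every deploy.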
