
ATLASSIAN 2026-04-16 Tier 3


Atlassian — Streaming Server-Side Rendering in Confluence

Summary

Atlassian's Confluence team adopted React 18 streaming SSR as the second big lever in a multi-year page-load performance effort (p90 latency halved over 12 months; this change alone delivered a ~40% First Contentful Paint improvement). Instead of rendering the full page on the server and shipping one HTML blob, the server renders the React tree with renderToPipeableStream(), emitting markup progressively at <Suspense> boundaries while pending data resolves. A NodeJS transform pipeline sequences per-chunk state injection ahead of the markup, so client hydration finds the state it needs already in the page. Getting this into production required fixing streaming-hostile layers up and down the stack: intermediate-proxy buffering (X-Accel-Buffering: no plus a manual compression flush on setImmediate), object-mode streams to avoid buffer↔string thrash, and a React-18-specific hydration bug in which a context change across an already-ready Suspense boundary triggers re-renders proportional to boundary count (worked around with unstable_scheduleHydration, definitively fixed in React 19). The change shipped behind a conservative multi-week A/B test tracking FCP/TTVC/TTI/hydration-success.

Key takeaways

  • Streaming SSR recovers SPA-style early FCP without losing SSR's visually-complete win. Classic SPAs paint early but TTVC waits for data; classic SSR blocks on full-tree render; streaming SSR emits the first <Suspense>-wrapped markup (e.g. navigation) while later boundaries are still fetching. ~40% FCP improvement reported. (Source: sources/2026-04-16-atlassian-streaming-ssr-confluence)
  • React 18's renderToPipeableStream + <Suspense> is the API boundary. Pending subtrees render as loading placeholders bracketed by marker comments (<!--$?--> while pending, swapped to <!--$--> once resolved); React also emits a small inline-JS runtime that replaces placeholders with streamed chunks as they arrive.
  • Data must be sequenced before markup per chunk, or hydration breaks. Confluence built a getServerStream / getClientStream abstraction; a NodeJS transform buffers emitted data while React is producing a chunk's markup, then flushes the data ahead of the markup. See concepts/react-hydration.
  • Intermediate proxies are streaming's biggest hidden enemy. Nginx proxy_buffering on and Node compression middleware both buffer to fill size thresholds, effectively turning a stream back into a monolithic response. Fix: X-Accel-Buffering: no header + force-flush the compression middleware on setImmediate after each chunk. See concepts/head-of-line-buffering.
  • Asset preloading needs a prediction loop, not a post-render lookup. Normally the server only knows which JS bundles the page needs after SSR finishes, so asset download blocks hydration. Confluence added a feedback loop: record component IDs seen on recent renders, preload those bundles early; also streams component-ID metadata inline as rendering progresses for per-page corrections. Cut interaction-ready time ~50%. See patterns/asset-preload-prediction.
  • Don't run regex work over buffer-mode stream transforms. NodeJS streams carry Buffers by default; round-tripping buffer↔string to run regex-based page-annotation transforms was a dominant cost. Switching the pipeline to objectMode (strings end-to-end) and bounding the regex search window (emitting as soon as a tag can no longer match) removed the overhead on large pages.
  • React 18 hydration quirk: context change across a ready Suspense boundary re-renders the subtree — once per boundary. Confluence's TTI regressed during streaming rollout because each Suspense boundary added another redundant render pass, and a buggy state-management library leaked listeners on each discard. Short-term fix: unstable_scheduleHydration to raise priority and prevent context-driven re-renders. Confirmed fixed in React 19. Lesson: topline-metrics-only rollouts miss regressions that guardrail metrics catch. See patterns/ab-test-rollout.
  • Conservative multi-week A/B rollout with percentile tracking (p50, p90, p99) on FCP / TTVC / TTI / hydration-success rate was the release vehicle. The complexity of the change determined rollout speed, not the expected win size.
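The data-before-markup sequencing above can be sketched as a small object-mode Transform. This is a hypothetical illustration of the pattern, not Atlassian's code; the names StateBeforeMarkup and enqueueState are invented here, and setImmediate stands in for the chunk-boundary signal the post describes:

```javascript
const { Transform } = require('node:stream');

// Sketch: state pushed via enqueueState() while React is synchronously
// producing a chunk's markup is flushed *before* that chunk's markup,
// so hydration finds its state already in the page.
class StateBeforeMarkup extends Transform {
  constructor() {
    super({ objectMode: true }); // strings in/out: no buffer<->string thrash
    this.pendingState = [];      // state emitted while React is mid-chunk
    this.pendingMarkup = [];     // markup held until the chunk boundary
    this.flushScheduled = false;
  }

  // Called by the data layer (e.g. a getServerStream subscriber) during
  // the render of the current chunk.
  enqueueState(json) {
    this.pendingState.push(
      `<script>window.__STATE__ ??= [];window.__STATE__.push(${json})</script>`
    );
  }

  _transform(markup, _enc, done) {
    this.pendingMarkup.push(markup);
    if (!this.flushScheduled) {
      this.flushScheduled = true;
      // setImmediate fires after React's synchronous chunk render, so by
      // now all state for this chunk has been enqueued.
      setImmediate(() => this.#flushChunk());
    }
    done();
  }

  #flushChunk() {
    for (const s of this.pendingState) this.push(s);  // state first
    for (const m of this.pendingMarkup) this.push(m); // then markup
    this.pendingState = [];
    this.pendingMarkup = [];
    this.flushScheduled = false;
  }

  _flush(done) {
    this.#flushChunk();
    done();
  }
}
```

The key property is the ordering guarantee per chunk: everything the data layer emitted during a render pass leaves the process ahead of the markup that depends on it.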
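The two anti-buffering fixes can be shown with Node's standard library alone. A minimal sketch, assuming a plain node:http server and node:zlib in place of Express's compression middleware (the post uses the latter; zlib's explicit flush() demonstrates the same idea):

```javascript
const http = require('node:http');
const zlib = require('node:zlib');

const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('Content-Encoding', 'gzip');
  // Tell nginx-style intermediate proxies not to buffer this response.
  res.setHeader('X-Accel-Buffering', 'no');

  const gzip = zlib.createGzip();
  gzip.pipe(res);

  const writeChunk = (html) => {
    gzip.write(html);
    // Without this flush, small chunks sit in the compressor waiting to
    // fill a block, turning the stream back into one big response.
    setImmediate(() => gzip.flush());
  };

  writeChunk('<!doctype html><div id="nav">nav shell</div>');
  setTimeout(() => {
    writeChunk('<div id="content">late chunk</div>');
    gzip.end();
  }, 50);
});

server.listen(0);
```

Both fixes are needed together: flushing the compressor is useless if a downstream proxy re-buffers, and disabling proxy buffering is useless if the bytes never leave the compressor.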
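The preload-prediction feedback loop can be reduced to a tiny sketch. The post gives no detail on the manifest or model, so everything here (AssetPredictor, bundleFor) is an invented, simplified stand-in for the idea of "emit preload tags from what recent renders used":

```javascript
// Hypothetical sketch: component IDs observed on past renders of a route
// drive early <link rel="preload"> tags on the next request, so bundle
// downloads overlap with SSR instead of waiting for it to finish.
class AssetPredictor {
  constructor() {
    this.seen = new Map(); // route -> Set of component IDs from past renders
  }

  // Called after each SSR completes, with the components that page used.
  record(route, componentIds) {
    this.seen.set(route, new Set(componentIds));
  }

  // Called *before* rendering the next request for the route.
  preloadTags(route, bundleFor) {
    const ids = this.seen.get(route) ?? new Set();
    return [...ids].map(
      (id) => `<link rel="preload" as="script" href="${bundleFor(id)}">`
    );
  }
}
```

Per the post, mispredictions are corrected mid-stream: component-ID metadata is pushed inline as rendering progresses, so the client can start fetching anything the prediction missed.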

Key numbers

  • Confluence p90 page-load latency halved over prior 12 months (of which streaming SSR is one of several contributing changes).
  • Streaming SSR alone: ~40% FCP improvement.
  • Asset-preload-prediction: ~50% reduction in time-to-interaction.
  • Primary metrics tracked in rollout: FCP, TTVC (target: 90% visible + stable = VC90), TTI, hydration success rate — each at p50, p90, p99.

Architecture notes

The Confluence SSR pipeline:

  1. React 18 renderToPipeableStream(<App />) on the Node server.
  2. Output piped through, in order:
     • State-injection transform (object mode): buffers data emitted via getServerStream while React is mid-chunk, flushes it before the corresponding markup, driven by setImmediate as the chunk-boundary signal.
     • Page-annotation transforms (start/end markers, script-preload tags, metrics markers) — also object mode, bounded search windows.
     • compression middleware — force-flushed on setImmediate after each chunk so compressed bytes actually leave the process.
  3. Response header X-Accel-Buffering: no tells intermediate nginx proxies not to buffer the upstream stream.
  4. Browser receives progressive HTML with <!--$?--> / <template id="B:1"> placeholders that React's inline-JS runtime replaces in place as later chunks arrive; hydration completes per-boundary using state already injected ahead of each chunk.
  5. Preload <link> tags for JS bundles are emitted early, driven by the asset-prediction feedback loop from previous renders, and refined mid-stream by component-ID metadata pushed as rendering progresses.

Caveats

  • Tier-3 source; post is narrative with few raw numbers beyond the headline ~40% FCP win and "cut nearly half" of interaction-ready time.
  • No detail on the shape of the component-ID manifest, how the prediction model is built, or what fraction of assets are covered by prediction vs. mid-stream discovery.
  • The React-18 context-change-discards-subtree finding is asserted as React behavior and "confirmed fixed in React 19" but the post doesn't link to an issue / fix PR.
  • Client-side rendering performance is adjacent to the "system design at scale" focus of this wiki; filed here because streaming, backpressure, head-of-line buffering, and hydration are generally applicable primitives, not just web-UI concerns.