
CONCEPT

Rendering delay distribution (Google, 2024)

Definition

The rendering delay distribution is the empirical percentile distribution of the time between Googlebot's initial crawl of a page and the Web Rendering Service's (WRS) completion of rendering for that page. The Vercel + MERJ study (published 2024-08-01) is the first publicly disclosed large-sample measurement of this distribution on a production Next.js site.

The canonical distribution (April 2024, nextjs.org)

Percentile Rendering delay
p25 ≤ 4 s
p50 (median) 10 s
p75 26 s
p90 ~3 h
p95 ~6 h
p99 ~18 h

Sample: 37,000+ Googlebot requests where the server access log's crawl timestamp could be paired with the beacon server's render-complete timestamp via a unique request identifier. See patterns/server-beacon-pairing-for-render-measurement for the measurement method.

(Source: sources/2024-08-01-vercel-how-google-handles-javascript-throughout-the-indexing-process.)
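The pairing method above can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline: the log structures, request IDs, and timestamps here are all invented, and the percentile function is a simple nearest-rank cut.

```python
from datetime import datetime

# Hypothetical paired records. In the study, the crawl timestamp comes from
# the server access log and the render-complete timestamp from a beacon fired
# by the rendered page; a shared request identifier joins the two.
crawl_log = {
    "req-1": datetime(2024, 4, 2, 12, 0, 0),
    "req-2": datetime(2024, 4, 2, 12, 5, 0),
    "req-3": datetime(2024, 4, 2, 12, 10, 0),
}
beacon_log = {
    "req-1": datetime(2024, 4, 2, 12, 0, 8),
    "req-2": datetime(2024, 4, 2, 12, 5, 30),
    "req-3": datetime(2024, 4, 2, 15, 10, 0),  # tail case: hours, not seconds
}

# Pair on the shared identifier and compute rendering delays in seconds.
delays = sorted(
    (beacon_log[rid] - crawl_log[rid]).total_seconds()
    for rid in crawl_log.keys() & beacon_log.keys()
)

def percentile(xs, p):
    """Nearest-rank percentile over a sorted list."""
    k = max(0, min(len(xs) - 1, round(p / 100 * len(xs)) - 1))
    return xs[k]

print(percentile(delays, 50))  # median rendering delay in seconds
```

Only requests present in both logs contribute, which is why the study's sample is "37,000+ pairable requests" rather than all Googlebot hits.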

What this overturns

The pre-study folklore in the SEO community — "Google takes days or weeks to render a JS-heavy page" — was a generalisation from the tail to the typical. The typical render (p50 = 10 s) is fast; the p99 is indeed long (~18 h) but is the exception, not the rule.

Structured beliefs the distribution pushes back on:

  • "Rendering queue is so long JS is an SEO death sentence." → p50 = 10 s. False as stated.
  • "Dynamic / RSC / streaming content won't get indexed quickly." → p75 = 26 s, full RSCs rendered. Largely false.
  • "SPAs are disadvantaged because they're last in the queue." → No evidence of JS-complexity correlation with delay at this scale.
  • "CSR pages take days to show up in search." → On nextjs.org, they take seconds to minutes.

Slices disclosed

By URL type:

URL type p50 p75 p90
All URLs 10 s 26 s ~3 h
Without query string 10 s 22 s ~2.5 h
With query string 13 s 31 min ~8.5 h

The query-string slice is the striking one: p75 jumps from 22 seconds to 31 minutes. Google appears to de-prioritise parameterised URLs whose content likely duplicates the canonical path-only URL (typical ?ref=..., ?utm_source=..., etc.).
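Slicing the paired delays by query string is a straightforward partition on the URL's query component. A minimal sketch, with invented observation values chosen only to echo the reported slice shapes:

```python
from statistics import median
from urllib.parse import urlparse

# Hypothetical (url, delay_seconds) observations; not the study's raw data.
observations = [
    ("https://nextjs.org/docs", 10.0),
    ("https://nextjs.org/showcase", 22.0),
    ("https://nextjs.org/blog", 13.0),
    ("https://nextjs.org/docs?utm_source=news", 1860.0),  # ~31 min tail
    ("https://nextjs.org/docs?ref=twitter", 13.0),
]

slices = {"with query string": [], "without query string": []}
for url, delay in observations:
    # A non-empty query component marks the parameterised slice.
    key = "with query string" if urlparse(url).query else "without query string"
    slices[key].append(delay)

for key, delays in slices.items():
    print(f"{key}: p50 = {median(delays)} s")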

By site section (nextjs.org):

  • /docs (frequently updated): shorter median.
  • /showcase (rarely updated): longer median.

Freshness signals feed into rendering priority, not just crawl priority.

Interpretive caveats

  • Single-site measurement. nextjs.org is a well-operated Vercel-hosted site with clean sitemap, good robots.txt, unblocked critical resources, and Cache-Control headers. A random shared-host site with blocked JS resources or Cache-Control drift could reasonably land further into the tail.
  • Single-time-window. April 2024. WRS capacity allocation / queue policy may have evolved since; the study may not be re-measurable post-publication without re-instrumenting.
  • "p90 ≈ 3 h" is approximate, not exact — the source post states "~3 hours" rather than a numerical percentile-curve cut.
  • Non-Googlebot crawlers (Bingbot, DuckDuckBot, Perplexity's bots, OpenAI / Anthropic AI-training bots) are not in this distribution. The study explicitly flags them as ongoing work.
  • 100 % rendering-success rate is not the same as 100 % rendering-sooner-than-queue-deadline. A page that takes 18 h to render is "rendered" but is 18 h out of date when it completes.

Usage in builder decisions

  • For SEO-critical content: do not depend on the tail being cheap. Ensure the initial HTML body carries the content, and use server-rendering / ISR / SSG for critical pages. The queue's median is fine; the p99 can hurt you on a deadline.
  • For noindex / status-code decisions: the queue does not apply — pre-render detection is fast. Client-side noindex changes never run because they're JS, so the queue can't rescue them. See concepts/client-side-removal-of-noindex-ineffective.
  • For large sites: p75 = 26 s × 100,000 pages = ~289 CPU- days of WRS time per full re-render. Crawl budget becomes the binding constraint. See concepts/crawl-budget-impact-of-js-complexity.

Seen in

Last updated · 476 distilled / 1,218 read