CONCEPT Cited by 1 source
Rendering delay distribution (Google, 2024)¶
Definition¶
The rendering delay distribution is the canonical empirical percentile distribution of time between Googlebot's initial crawl of a page and the Web Rendering Service's completion of rendering for that page. The Vercel + MERJ 2024-08-01 study is the first publicly-disclosed large-sample measurement of this distribution on a production Next.js site.
The canonical distribution (April 2024, nextjs.org)¶
| Percentile | Rendering delay |
|---|---|
| p25 | ≤ 4 s |
| p50 (median) | 10 s |
| p75 | 26 s |
| p90 | ~3 h |
| p95 | ~6 h |
| p99 | ~18 h |
Sample: 37,000+ Googlebot requests where the server access log's crawl timestamp could be paired with the beacon server's render-complete timestamp via a unique request identifier. See patterns/server-beacon-pairing-for-render-measurement for the measurement method.
(Source: sources/2024-08-01-vercel-how-google-handles-javascript-throughout-the-indexing-process.)
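The pairing-and-percentile step behind these numbers can be sketched as follows. The log structures, request IDs, and delay values here are hypothetical stand-ins; the real study paired access-log and beacon-server records as described in patterns/server-beacon-pairing-for-render-measurement:

```python
import statistics

# Hypothetical paired records, keyed by a unique request identifier.
# crawl_log: epoch seconds when Googlebot fetched the page (from the access log).
# beacon_log: epoch seconds when the page's JS fired a render-complete beacon
# (i.e. when WRS finished executing it).
crawl_log = {"req-1": 1000.0, "req-2": 1010.0, "req-3": 1020.0, "req-4": 1030.0}
beacon_log = {"req-1": 1004.0, "req-2": 1020.0, "req-3": 1046.0, "req-4": 11830.0}

# Pair by request ID; requests with no matching beacon are dropped.
delays = sorted(
    beacon_log[rid] - ts for rid, ts in crawl_log.items() if rid in beacon_log
)

# statistics.quantiles with n=100 yields 99 cut points: index 49 is p50,
# index 74 is p75, index 89 is p90.
cuts = statistics.quantiles(delays, n=100, method="inclusive")
p50, p75, p90 = cuts[49], cuts[74], cuts[89]
print(f"p50={p50:.0f}s p75={p75:.0f}s p90={p90:.0f}s")
```

With 37,000+ paired samples the interpolation method barely matters; with a toy sample like this one it dominates, so the printed numbers illustrate the mechanics only, not the distribution above.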
What this overturns¶
The pre-study folklore in the SEO community — "Google takes days or weeks to render a JS-heavy page" — was a generalisation from the tail to the typical. The typical render (p50 = 10 s) is fast; the p99 is indeed long (~18 h) but is the exception, not the rule.
Structured beliefs the distribution pushes back on:
- "Rendering queue is so long JS is an SEO death sentence." → p50 = 10 s. False as stated.
- "Dynamic / RSC / streaming content won't get indexed quickly." → p75 = 26 s, full RSCs rendered. Largely false.
- "SPAs are disadvantaged because they're last in the queue." → No evidence of JS-complexity correlation with delay at this scale.
- "CSR pages take days to show up in search." → On nextjs.org, they take seconds to minutes.
Slices disclosed¶
By URL type:
| URL type | p50 | p75 | p90 |
|---|---|---|---|
| All URLs | 10 s | 26 s | ~3 h |
| Without query string | 10 s | 22 s | ~2.5 h |
| With query string | 13 s | 31 min | ~8.5 h |
The query-string slice is the striking one: p75 jumps from 22 seconds to 31 minutes. Google appears to de-prioritise parameterised URLs that likely duplicate their canonical path-only content (typical `?ref=...`, `?utm_source=...`, etc.).
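This kind of slicing can be sketched by partitioning paired samples on the presence of a query string. The `(url, delay)` pairs below are made up for illustration; the study sliced its 37,000+ real pairs:

```python
from statistics import median
from urllib.parse import urlsplit

# Hypothetical (url, rendering-delay-in-seconds) pairs.
pairs = [
    ("https://nextjs.org/docs", 8.0),
    ("https://nextjs.org/docs?utm_source=x", 1900.0),
    ("https://nextjs.org/showcase", 14.0),
    ("https://nextjs.org/blog?ref=hn", 1700.0),
]

# Partition on whether the URL carries a query string.
with_qs = [d for u, d in pairs if urlsplit(u).query]
without_qs = [d for u, d in pairs if not urlsplit(u).query]

print("median with query string:", median(with_qs))
print("median without query string:", median(without_qs))
```

The same partition function generalises to the by-section slice (split on the first path segment instead of the query string).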
By site section (nextjs.org):
- `/docs` (frequently updated): shorter median.
- `/showcase` (rarely updated): longer median.
Freshness signals feed into rendering priority, not just crawl priority.
Interpretive caveats¶
- Single-site measurement. `nextjs.org` is a well-operated Vercel-hosted site with a clean sitemap, good `robots.txt`, unblocked critical resources, and `Cache-Control` headers. A random shared-host site with blocked JS resources or `Cache-Control` drift could reasonably land further into the tail.
- Single time window. April 2024. WRS capacity allocation / queue policy may have evolved since; the study may not be re-measurable post-publication without re-instrumenting.
- "p90 ≈ 3 h" is approximate, not exact — the source post states "~3 hours" rather than a numerical percentile-curve cut.
- Non-Googlebot crawlers (Bingbot, DuckDuckBot, Perplexity's bots, OpenAI / Anthropic AI-training bots) are not in this distribution. The study explicitly flags them as ongoing work.
- 100 % rendering-success rate is not the same as 100 % rendering-sooner-than-queue-deadline. A page that takes 18 h to render is "rendered" but is 18 h out of date when it completes.
Usage in builder decisions¶
- For SEO-critical content: do not depend on the tail being cheap. Ensure the initial HTML body carries the content, and use server-rendering / ISR / SSG for critical pages. The queue's median is fine; the p99 can hurt you on a deadline.
- For `noindex` / status-code decisions: the queue does not apply — pre-render detection is fast. Client-side `noindex` changes never run because they're JS, so the queue can't rescue them. See concepts/client-side-removal-of-noindex-ineffective.
- For large sites: p75 = 26 s × 100,000 pages ≈ 30 CPU-days of WRS time per full re-render. Crawl budget becomes the binding constraint. See concepts/crawl-budget-impact-of-js-complexity.
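The large-site back-of-envelope can be checked directly. Treating the p75 delay as a per-page cost proxy is an assumption — the measured delay includes queue time, not just WRS compute:

```python
# Back-of-envelope: aggregate WRS time for one full re-render of a large site,
# using the p75 rendering delay (26 s) as a rough per-page cost proxy.
p75_seconds = 26
pages = 100_000
seconds_per_day = 86_400

total_days = p75_seconds * pages / seconds_per_day
print(f"~{total_days:.0f} days of WRS time")  # roughly a month for 100k pages
```

Scale `pages` to your own inventory; the point is that at six figures of URLs the aggregate cost, not the median delay, is what binds.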
Seen in¶
- sources/2024-08-01-vercel-how-google-handles-javascript-throughout-the-indexing-process — canonical wiki instance; the only public large-sample empirical measurement of this distribution on the wiki.
Related¶
- systems/googlebot
- systems/google-web-rendering-service
- concepts/rendering-queue — the mechanism behind the distribution.
- concepts/universal-rendering — why every indexable page contributes to the distribution.
- concepts/crawl-budget-impact-of-js-complexity — the aggregate-cost view at scale.