
CONCEPT Cited by 1 source

Rendering queue (Google)

Definition

The rendering queue is the FIFO-ish staging area between Googlebot's crawl stage (HTTP fetch of the URL) and its render stage (execution by the Web Rendering Service). Every indexable HTML page Googlebot fetches is enqueued here; the queue's delay from enqueue to render-complete dominates the time-from-crawl-to-index for JS-rendered content.
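The two-stage pipeline can be sketched as a producer/consumer pair around a queue. This is a toy model, not Google's implementation: `crawl` and `drain_one` stand in for Googlebot's HTTP fetch and the Web Rendering Service, and the string manipulation stands in for JavaScript execution.

```python
from collections import deque

rendering_queue = deque()  # the FIFO-ish staging area between crawl and render

def crawl(url: str) -> None:
    """Crawl stage: HTTP fetch of the URL, then enqueue for rendering."""
    raw_html = f"<html>raw HTML for {url}</html>"  # stand-in for the fetched body
    rendering_queue.append((url, raw_html))

def drain_one() -> tuple[str, str]:
    """Render stage: a WRS slot frees up and the next page gets rendered."""
    url, raw_html = rendering_queue.popleft()  # the page waited its turn
    rendered_dom = raw_html.replace("raw", "rendered")  # stand-in for JS execution
    return url, rendered_dom

crawl("/docs")
crawl("/showcase")
url, dom = drain_one()  # /docs renders first: it was enqueued first
```

The delay the rest of this note measures is the wall-clock gap between the `append` and the `popleft` for a given URL.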

Why a queue exists

The WRS spins up a headless Chromium session per page, fetches all sub-resources, executes JavaScript, waits for async work to settle, and emits the final DOM. This is materially more expensive per page than parsing raw HTML. Google runs WRS as a shared capacity pool across the entire web; any single crawl can't get a render synchronously — it waits its turn.

Empirical distribution (Vercel + MERJ, April 2024)

Measured on nextjs.org over April 2024 via the server-beacon-pairing pattern, across 37,000+ matched server-beacon pairs:

Percentile Time from crawl to render completion
p25 ≤ 4 s
p50 10 s
p75 26 s
p90 ~3 h
p95 ~6 h
p99 ~18 h

The headline: p50 is tens of seconds, not days. The long-standing SEO folklore that "Google takes days or weeks to render a JS page" was a generalisation from the tail (p90+) to the typical. The typical is fast; the tail is long. See concepts/rendering-delay-distribution for the full data disclosure.

(Source: sources/2024-08-01-vercel-how-google-handles-javascript-throughout-the-indexing-process.)

Prioritisation signals (not strict FIFO)

Empirical behaviour suggests the queue is priority-weighted:

  • High-update-frequency sections render faster. /docs (changes often) has shorter median queue delay than /showcase (changes rarely) on nextjs.org.
  • Path-only URLs render faster than query-string URLs. ?ref=...-style URLs have a p75 of 31 min vs 22 s for their path-only counterparts — suggesting Google de-prioritises URLs that likely re-serve canonical content.
  • Sitemap <lastmod> bumps appear to move pages up the queue.
  • JS complexity does not correlate with queue delay at nextjs.org scale — though it raises per-render cost which feeds into site-level crawl budget.
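The signals above point to a priority-weighted queue rather than strict FIFO. A toy model using a min-heap — the scoring function and its weights are invented for illustration, not Google's:

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker: equal priorities fall back to FIFO order

def priority(url: str, update_freq: float, lastmod_bump: bool) -> float:
    # Lower score = rendered sooner. Weights are illustrative, not Google's.
    score = -update_freq   # frequently-updated sections jump ahead
    if "?" in url:
        score += 5.0       # query-string URLs get de-prioritised
    if lastmod_bump:
        score -= 2.0       # a sitemap <lastmod> bump moves the page up
    return score

queue: list[tuple[float, int, str]] = []

def enqueue(url: str, update_freq: float = 0.0, lastmod_bump: bool = False) -> None:
    heapq.heappush(queue, (priority(url, update_freq, lastmod_bump), next(counter), url))

enqueue("/showcase", update_freq=0.1)      # changes rarely
enqueue("/docs?ref=promo", update_freq=3.0)  # fresh, but query-string
enqueue("/docs", update_freq=3.0)          # fresh, clean path

order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
```

The pop order mirrors the observed behaviour: the fresh, path-only URL first, the stale section next, the query-string variant last.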

What's blocked behind the queue

  • Link value assessment — Google's PageRank-style weighting of discovered links happens post-render, so it's post-queue. Link discovery (finding URLs in the body via regex) happens at crawl time, before the queue — so CSR pages don't lose newly-discovered URLs to the queue, but do lose updated link-value assessment. See concepts/link-discovery-vs-link-value-assessment.
  • Rendered-content indexing — anything that only appears after JS execution (CSR-rendered content, API-fetched data, RSC-streamed payloads) waits for the queue to drain before it can be indexed.
  • Client-side meta-tag changes don't help — the noindex check is pre-render, so a noindex set by JS would only land after the queue, which is too late.
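The noindex point can be made concrete: a robots check that runs against the raw (pre-render) HTML never sees a meta tag that only exists after JavaScript runs. A sketch with a hypothetical page — the substring check is a simplification; real crawlers parse the DOM:

```python
# Hypothetical page: the robots meta tag is injected only at runtime by JS.
raw_html = """<html><head><title>Page</title>
<script>/* injects a robots meta tag into <head> at runtime */</script>
</head><body></body></html>"""

# Stand-in for what the WRS would produce after executing the script.
rendered_html = raw_html.replace(
    "</head>", '<meta name="robots" content="noindex"></head>'
)

def has_noindex(html: str) -> bool:
    # Simplified: substring match instead of DOM parsing.
    return 'content="noindex"' in html

pre_render_decision = has_noindex(raw_html)       # what the pre-render check sees
post_render_reality = has_noindex(rendered_html)  # what exists after the queue
```

The pre-render check returns False, so the page proceeds as indexable; by the time the rendered DOM shows the tag, the decision has already been made.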

Not a bug; a capacity-allocation choice

The queue is not a failure mode — it's how Google allocates WRS capacity across the entire indexed web. Each page's position is shaped by a mix of Google's signals (freshness, priority, sitemap data) and the site's own behaviour (clean canonical URLs, fast-loading resources, unblocked robots.txt). Sites that optimise for these signals ride the head of the distribution; sites that don't pay for it in tail latency.
