
Interaction to Next Paint (INP)

Interaction to Next Paint (INP) is a Core Web Vital that measures the latency between a user interaction (click, tap, key press) and the browser painting the next frame that reflects the interaction's result. INP is the current industry-standard metric for the perceived interactive responsiveness of a web page under real traffic (RUM — Real User Monitoring), superseding First Input Delay (FID), which captured only the input delay of a page's first interaction, not the full processing and paint cost of every interaction.

Scale

  • INP is reported in milliseconds per interaction.
  • Thresholds (per Google's web.dev, assessed at the 75th percentile of page loads): ≤200 ms good / 200-500 ms needs improvement / >500 ms poor.
  • Real-world product-scale measurement requires per-interaction reporting (not just session-averaged), because each interaction is a separate tail sample — aggregating hides the worst cases.
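The per-interaction framing can be made concrete. Per the web.dev definition, a page's INP is the worst interaction latency observed, ignoring one outlier for every 50 interactions (roughly a p98 for busy pages). A minimal sketch under that definition; `estimateInp` is a hypothetical helper, not a web API:

```javascript
// Estimates a page's INP from per-interaction latencies (in ms).
// INP is the worst sample, skipping one outlier per 50 interactions,
// so session averages would hide exactly the tail this metric targets.
function estimateInp(durationsMs) {
  if (durationsMs.length === 0) return undefined;
  const sorted = [...durationsMs].sort((a, b) => b - a); // worst first
  const outliersToSkip = Math.min(
    sorted.length - 1,
    Math.floor(durationsMs.length / 50)
  );
  return sorted[outliersToSkip];
}

// 61 interactions: a single 600 ms outlier is skipped (1 per 50),
// so the reported INP is the next-worst sample, 120 ms.
const samples = [...Array(59).fill(40), 600, 120];
const inp = estimateInp(samples);
```

For pages with fewer than 50 interactions, no outliers are skipped and INP is simply the single worst interaction.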

Why it's hard to optimize

INP captures main-thread latency for the full interaction → paint cycle: input delay (the main thread finishing other work before the handler can run), event dispatch, handler execution, any React/Vue/etc. reconciliation + re-render, DOM mutation, style + layout + paint. Any of those stages can dominate. At high scale:

  • Large DOMs stretch layout + style work.
  • Deep component trees stretch reconciliation.
  • Scattered state (e.g. many useEffects) triggers extra re-renders (concepts/react-re-render).
  • O(n) lookups on interaction paths scale with list size.
  • High-cost CSS selectors (e.g. :has(...)) invalidate broad style subtrees per mutation.
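The O(n)-lookup bullet is the cheapest of these to fix: a handler that scans an array on every interaction scales with list size, while an index built once outside the hot path keeps the interaction O(1). A minimal sketch with hypothetical row data (the names `toggleRowSlow`/`toggleRowFast` are illustrative):

```javascript
// Hypothetical rows, e.g. diff lines or list items rendered on the page.
const rows = Array.from({ length: 10_000 }, (_, i) => ({
  id: `row-${i}`,
  selected: false,
}));

// O(n) per interaction: scanning the array inside the click handler.
function toggleRowSlow(id) {
  const row = rows.find(r => r.id === id); // cost grows with list size
  if (row) row.selected = !row.selected;
}

// O(1) per interaction: build the index once, outside the hot path.
const rowById = new Map(rows.map(r => [r.id, r]));
function toggleRowFast(id) {
  const row = rowById.get(id); // constant-time, regardless of row count
  if (row) row.selected = !row.selected;
}
```

On a 10,000-row list this turns a linear scan per click into a single hash lookup; the same shape applies to any per-interaction lookup keyed by a stable id.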

The optimization surface is therefore orthogonal to server-side latency: an interaction can involve zero backend calls and still have INP >500 ms.
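The stages listed above surface directly in the browser's Event Timing API: each PerformanceEventTiming entry carries `startTime`, `processingStart`, `processingEnd`, and `duration`, from which the three classic phases of an interaction can be derived. A sketch using a mocked entry (the numbers are illustrative; real entries come from a PerformanceObserver observing `{ type: "event" }`):

```javascript
// Splits one interaction's latency into the three Event Timing phases.
function breakDownInteraction(entry) {
  return {
    // Main thread was busy with other work before the handler could run.
    inputDelay: entry.processingStart - entry.startTime,
    // Handler execution, including framework work it triggers synchronously.
    processingTime: entry.processingEnd - entry.processingStart,
    // Rendering work (style, layout, paint) until the next frame is presented.
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd,
  };
}

// Mocked PerformanceEventTiming-like entry (illustrative values, in ms).
const mockEntry = {
  startTime: 1000,
  processingStart: 1030,
  processingEnd: 1180,
  duration: 300,
};
const phases = breakDownInteraction(mockEntry);
// phases: { inputDelay: 30, processingTime: 150, presentationDelay: 120 }
```

Attributing a slow interaction to the right phase decides the fix: input delay points at long tasks elsewhere, processing time at handler/reconciliation cost, presentation delay at style/layout/paint.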

Canonical wiki instance: GitHub PR Files-changed tab

Before its rewrite, GitHub's pull-request Files-changed tab saw INP above 275 ms for p95+ PRs, reaching 275-700+ ms on PRs with 10,000+ diff lines. Post-rewrite values:

  • Median PRs (v1 → v2 on a 10,000-line split-diff benchmark, M1 MacBook Pro with 4× CPU slowdown): ~450 ms → ~100 ms (~78 % faster).
  • p95+ PRs after window virtualization via TanStack Virtual: 275-700+ ms → 40-80 ms.
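The core computation behind window virtualization (shown here independent of TanStack Virtual, which GitHub used) is deciding which rows intersect the viewport so only those reach the DOM; style/layout/paint cost then scales with the viewport, not the diff. A fixed-row-height sketch; `visibleRange` is a hypothetical helper:

```javascript
// Returns the index range of rows to actually render, plus overscan rows
// above and below to avoid blank flashes while scrolling. Fixed row height
// for simplicity; libraries like TanStack Virtual also handle measured and
// variable row sizes.
function visibleRange({ scrollTop, viewportHeight, rowHeight, rowCount, overscan = 5 }) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount - 1, last + overscan),
  };
}

// A 10,000-row diff in a 600 px viewport with 20 px rows renders ~40 rows,
// wherever the user has scrolled.
const range = visibleRange({
  scrollTop: 100_000,
  viewportHeight: 600,
  rowHeight: 20,
  rowCount: 10_000,
});
```

This is why virtualization moves the 10,000-line tier from 275-700+ ms to 40-80 ms: interaction-path DOM and layout work becomes O(viewport) instead of O(diff).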

Load-bearing observability choice: GitHub's Datadog dashboard segments INP by PR diff-size buckets, so p95+ PR regressions don't hide under healthy-looking medians. Without size-segmented INP metrics the virtualization tier wouldn't be measurable as a separate intervention.
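The segmentation itself is simple; the load-bearing part is tagging every INP sample with a size bucket at report time so the dashboard can slice by it. A sketch with hypothetical bucket boundaries (GitHub's actual buckets aren't public):

```javascript
// Hypothetical diff-size buckets; the real boundaries GitHub uses aren't public.
const BUCKETS = [
  [100, "xs"],
  [1_000, "s"],
  [5_000, "m"],
  [10_000, "l"],
];

function diffSizeBucket(diffLines) {
  for (const [max, label] of BUCKETS) {
    if (diffLines <= max) return label;
  }
  return "xl"; // 10,000+ lines: the tier where virtualization pays off
}

// Each INP sample would be reported with the bucket as a tag, e.g.
// { metric: "inp", value: 85, tags: { diff_size: diffSizeBucket(lines) } },
// so regressions in the "xl" bucket can't hide under healthy medians.
```

Without the tag, the "xl" tier's samples are a rounding error in the overall distribution; with it, each bucket gets its own p75/p95.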

Interaction with other web-performance metrics

  • FCP (First Contentful Paint) / LCP (Largest Contentful Paint) — page-load metrics, distinct from per-interaction INP. Streaming SSR (concepts/streaming-ssr) primarily improves FCP; rendering-pipeline simplification primarily improves INP.
  • TTI (Time to Interactive) — how quickly a user can start interacting; orthogonal to how long each interaction then takes.
  • Tail latency at scale (concepts/tail-latency-at-scale) — the server-side analogue; the same "per-sample distribution, not mean" framing applies to INP.