
PATTERN Cited by 1 source

Index from tail-latency pattern

Problem. A query pattern shows a specific time-series shape — rising p95 within each batch, sawtooth between batches, monotonic degradation over a known window. The average latency alone doesn't tell you whether to add an index; the shape does.

Solution. Read the per-pattern time-series shape, map it to a known failure mode (full-table-scan asymptote, lock-wait amplification, missing covering index), and propose the specific index that fixes the identified mode.

(Source: sources/2026-04-21-planetscale-query-performance-analysis-with-insights.)

Canonical instance

PlanetScale's worked example from Hazen 2023-04-20:

  1. Average latency row in Insights: "this query takes approximately 8 seconds to run on average." Average alone doesn't distinguish "this pattern is legitimately slow" from "this pattern is degrading over time."
  2. Click into the pattern; see the per-pattern 24h latency graph. Shape: sawtooth keyed to hourly batch run.
  3. Minutes 10–20 past hour: "a few hundred milliseconds." Minutes 20–30 past hour: "almost 15 minutes."
  4. Map the shape → DELETE-LIMIT asymptote. Predicate-column index is missing; as matches deplete, each batch approaches full-table-scan cost.
  5. Fix: CREATE INDEX … ON … (minute) on the predicate column. Post-fix latency "consistently under a few hundred milliseconds" — confirmed via deploy-marker overlay (#505) on the same graph.
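The DELETE-LIMIT asymptote in step 4 can be sketched numerically. This is not PlanetScale code; it assumes matching rows are spread uniformly through the table and that an unindexed predicate forces each batch to scan from the start of the table until it finds enough matches:

```python
# Sketch (assumed model, not PlanetScale's implementation): why repeated
# DELETE ... LIMIT batches approach full-table-scan cost as matches deplete.

def rows_scanned_per_batch(table_size, matches, batch_limit):
    """Approximate rows scanned by each successive batch, assuming the
    surviving matches stay uniformly spread over the (unindexed) table."""
    scans = []
    remaining = matches
    while remaining > 0:
        deleted = min(batch_limit, remaining)
        # With matches uniformly spread, finding `deleted` of the
        # `remaining` matches requires scanning roughly this many rows:
        scans.append(round(table_size * deleted / remaining))
        remaining -= deleted
    return scans

scans = rows_scanned_per_batch(table_size=10_000_000, matches=5_000, batch_limit=1_000)
print(scans)  # [2000000, 2500000, 3333333, 5000000, 10000000]
```

The scan length grows monotonically and the final batch touches the whole table, which is exactly the within-window degradation the latency graph shows; an index on the predicate column makes every batch cost proportional to `batch_limit` instead.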

Shape-to-fix catalogue

| Observed shape | Candidate diagnosis | Fix |
| --- | --- | --- |
| Monotonic degradation within a batch | DELETE/UPDATE-LIMIT asymptote; missing predicate index | Index the predicate column |
| Flat-high p95, low p50 | Missing index on ordered scan (ORDER BY without index) | Index on sort column, or covering index |
| Spikes correlated with traffic | Hot-row / lock contention | Rework schema or use patterns/slotted-counter-pattern |
| Step-up after a deploy marker | Query-plan regression introduced by schema change | Check DDL diff, add index on new column |
| Diurnal cycling, proportional to QPS | Cache-miss amplification, not an index problem | Buffer-pool / cache sizing |
| Flat regardless of row count | Already indexed, query is CPU-bound in the engine | Index won't help |
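The catalogue is mechanical enough to encode as a lookup. A minimal sketch; the shape labels and fix strings are hypothetical, not an Insights API:

```python
# Hypothetical encoding of the shape-to-fix catalogue; labels are invented
# for illustration and do not correspond to any real Insights field.
SHAPE_CATALOGUE = {
    "monotonic_within_batch": ("DELETE/UPDATE-LIMIT asymptote", "index the predicate column"),
    "flat_high_p95_low_p50":  ("ORDER BY without index", "index sort column, or covering index"),
    "spikes_with_traffic":    ("hot-row / lock contention", "rework schema or slotted counters"),
    "step_after_deploy":      ("query-plan regression", "check DDL diff, index new column"),
    "diurnal_with_qps":       ("cache-miss amplification", "buffer-pool / cache sizing"),
    "flat_any_row_count":     ("engine CPU-bound", "index won't help"),
}

def diagnose(shape: str) -> str:
    diagnosis, fix = SHAPE_CATALOGUE.get(shape, ("unknown shape", "inspect manually"))
    return f"{diagnosis} -> {fix}"

print(diagnose("monotonic_within_batch"))
# DELETE/UPDATE-LIMIT asymptote -> index the predicate column
```

The value of the table is the mapping itself: once the shape is named, the fix is nearly deterministic.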

Why shape beats average

An 8-second average on a DELETE that runs near-zero for fifty minutes of every hour and degrades sharply for ten is not a query that would benefit from general-purpose tuning — it's a query with a specific tail-degradation mode that only the time-series reveals.
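Back-of-envelope arithmetic (with assumed numbers, not figures from the post) shows how a bimodal distribution produces that average without any single run taking 8 seconds:

```python
# Assumed illustrative numbers: 99% of runs take a few hundred ms,
# 1% take ~13 minutes. The blended average lands near 8 s anyway.
fast_share, fast_latency = 0.99, 0.2    # seconds
slow_share, slow_latency = 0.01, 780.0  # seconds (~13 minutes)

avg = fast_share * fast_latency + slow_share * slow_latency
print(f"{avg:.2f} s")  # 8.00 s — yet no individual run takes 8 s
```

The average is a fiction of the mixture; only the shape separates "uniformly slow" from "occasionally catastrophic".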

Sorting by average or cumulative time surfaces the candidate (patterns/digest-based-query-prioritization); inspecting the shape tells you which kind of fix applies.
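The two sort orders surface different candidates, which is why neither replaces the shape inspection. A sketch with made-up pattern statistics:

```python
# Illustrative only (invented stats): average-latency sort surfaces the
# slow batch DELETE; cumulative-time sort surfaces the cheap-but-hot SELECT.
patterns = [
    {"sql": "DELETE FROM events WHERE minute = ? LIMIT ?", "count": 24, "avg_s": 8.0},
    {"sql": "SELECT * FROM users WHERE id = ?", "count": 1_000_000, "avg_s": 0.002},
]

by_avg = max(patterns, key=lambda p: p["avg_s"])
by_cumulative = max(patterns, key=lambda p: p["count"] * p["avg_s"])

print(by_avg["sql"])         # the slow-on-average batch DELETE
print(by_cumulative["sql"])  # the point SELECT that dominates total time
```

Either sort only nominates a candidate; the per-pattern graph then decides which row of the catalogue applies.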

Integration: Insights + deploy-markers closes the loop

  1. Pick a pattern — Insights query list, sort by cumulative or tail-latency cost.
  2. Read the shape — per-pattern 24h latency graph.
  3. Diagnose — map shape to mode.
  4. Fix — ship a deploy request with the index addition. PlanetScale's gh-ost-style online schema change adds the index without blocking.
  5. Confirm — the deploy marker appears on the same graph; the shape steps down or disappears.
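The confirmation step amounts to comparing tail latency on either side of the marker's timestamp. A minimal sketch with fabricated samples; the marker time and latencies are invented, and real Insights data would come from its UI, not this code:

```python
# Sketch of deploy-marker confirmation: p95 before vs. after the marker.
# Timestamps and latencies (seconds) are made up for illustration.
from statistics import quantiles

def p95(samples):
    return quantiles(samples, n=100)[94]  # 95th percentile

marker_ts = 1_000  # hypothetical deploy-marker timestamp
before = [(900, 0.3), (920, 850.0), (940, 0.4), (960, 870.0), (980, 0.3)]
after  = [(1020, 0.3), (1040, 0.4), (1060, 0.3), (1080, 0.35), (1100, 0.3)]

p95_before = p95([lat for ts, lat in before if ts < marker_ts])
p95_after  = p95([lat for ts, lat in after if ts >= marker_ts])
print(p95_before > 10 and p95_after < 1)  # True: the shape stepped down
```

The same check that diagnosed the shape closes the loop: if the sawtooth survives the marker, the diagnosis was wrong and the catalogue gets consulted again.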

Seen in

  • sources/2026-04-21-planetscale-query-performance-analysis-with-insights — canonical wiki disclosure. Hazen walks the full loop on a real PlanetScale production query, calling out the shape diagnosis ("we see an interesting pattern") and the deploy-marker confirmation ("we see a deploy marker on the query latency graph, labeled #505") as the closed-loop Insights workflow.