
Bursty query pattern

A bursty query pattern is a workload shape in which reads concentrate into short, high-intensity bursts rather than flowing at a steady rate. It is most common in incident-driven telemetry stores (dashboards, logs, traces, and especially continuous profiles), where operators open the UI during an outage, fire dozens of queries in minutes, and then leave it alone for hours or days.

Why it's architecturally significant

A bursty workload breaks capacity-planning intuitions designed for steady-state traffic:

  • Sizing for the peak overbuilds. Provision for burst-level QPS and you pay for that capacity the 99% of the time it sits idle.
  • Provision for the average and you fall over during incidents, precisely when query performance matters most for root-cause analysis (see the sketch after this list).
  • Coupled read/write tiers amplify the problem. If the same nodes serve writes and reads (as in Cortex-era observability DBs), a read burst steals capacity from the always-on write path.
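
A back-of-the-envelope sketch makes the gap concrete. All numbers below are assumptions for illustration (none come from the Pyroscope post): a workload that bursts to 2,000 QPS for fifteen minutes a day over a 20 QPS background, served by query nodes that each sustain 200 QPS.

    package main

    import (
    	"fmt"
    	"math"
    )

    func main() {
    	// Illustrative, assumed numbers; not taken from any source.
    	const (
    		burstQPS      = 2000.0 // incident-time query rate
    		baselineQPS   = 20.0   // quiet-hours query rate
    		burstMinutes  = 15.0   // minutes of burst per day
    		nodeQPS       = 200.0  // sustainable QPS per query node
    		minutesPerDay = 24 * 60.0
    	)

    	// Time-weighted average rate across the day.
    	avgQPS := (burstQPS*burstMinutes + baselineQPS*(minutesPerDay-burstMinutes)) / minutesPerDay

    	peakNodes := math.Ceil(burstQPS / nodeQPS) // sized for the burst
    	avgNodes := math.Ceil(avgQPS / nodeQPS)    // sized for the average

    	fmt.Printf("average QPS:       %.1f\n", avgQPS)
    	fmt.Printf("nodes for peak:    %.0f (idle %.1f%% of the day)\n",
    		peakNodes, 100*(minutesPerDay-burstMinutes)/minutesPerDay)
    	fmt.Printf("nodes for average: %.0f (short by %.0fx during a burst)\n",
    		avgNodes, burstQPS/(avgNodes*nodeQPS))
    }

With these assumed figures, peak sizing leaves ten nodes idle roughly 99% of the day, while average sizing leaves a single node an order of magnitude short during the burst.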

The Pyroscope 2.0 framing

The Pyroscope 2.0 launch post names bursty query patterns as one of three traits that drive profiling-DB design:

"Pyroscope 2.0 applies similar architectural principles, adapted for the unique characteristics of profiling data: large payloads, heavy symbolic information, and bursty query patterns."

(Source: sources/2026-04-22-grafana-introducing-pyroscope-2-0)

Continuous-profiling queries are bursty by nature: profiles are queried heavily during incidents and barely at all between them. Metrics-style steady-state read-path sizing wastes capacity, and a direct copy of Mimir's metrics read-path design is a poor fit for profiling.

Architectural responses

Systems built for bursty reads tend to converge on:

  1. Decouple reads from writes at the storage layer. Independent tiers scale independently; a read burst can't starve the write path.
  2. Object storage as the primary read substrate. Object stores handle bursts well: very high aggregate throughput, and each request pays only for what it reads.
  3. Cache the recent hot data. Most incident queries target "the last hour"; a small hot cache in front of object storage absorbs the typical burst without touching cold storage (sketched after this list).
  4. Scalable query workers. Stateless query nodes that can be added to handle a burst and retired afterward.
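
As a rough illustration of points 2 and 3 (not Pyroscope's actual read path; the ObjectStore interface, Block type, and one-hour window are assumptions made for the example), a small time-bounded in-memory cache can sit in front of object-store reads:

    package readpath

    import (
    	"context"
    	"sync"
    	"time"
    )

    // Block is a placeholder for a chunk of profile data; the real layout is
    // whatever the storage format defines.
    type Block struct {
    	Key  string
    	Data []byte
    }

    // ObjectStore is an assumed minimal interface over an object store
    // (S3, GCS, ...): high aggregate throughput, pay-per-request reads.
    type ObjectStore interface {
    	Get(ctx context.Context, key string) (Block, error)
    }

    // HotCache keeps recently written blocks in memory so the typical
    // incident query ("the last hour") never touches cold object storage.
    type HotCache struct {
    	mu      sync.RWMutex
    	window  time.Duration
    	entries map[string]cachedBlock
    	store   ObjectStore
    }

    type cachedBlock struct {
    	block   Block
    	addedAt time.Time
    }

    func NewHotCache(store ObjectStore, window time.Duration) *HotCache {
    	return &HotCache{
    		window:  window,
    		entries: make(map[string]cachedBlock),
    		store:   store,
    	}
    }

    // Put is called on the write path; it keeps the block hot for `window`.
    func (c *HotCache) Put(key string, b Block) {
    	c.mu.Lock()
    	defer c.mu.Unlock()
    	c.entries[key] = cachedBlock{block: b, addedAt: time.Now()}
    }

    // Get serves a read: hit the hot cache first, fall back to object storage.
    func (c *HotCache) Get(ctx context.Context, key string) (Block, error) {
    	c.mu.RLock()
    	e, ok := c.entries[key]
    	c.mu.RUnlock()
    	if ok && time.Since(e.addedAt) < c.window {
    		return e.block, nil // recent-data burst absorbed in memory
    	}
    	return c.store.Get(ctx, key) // cold or expired: pay the object-store read
    }

Eviction and the flush of hot blocks to object storage are omitted; the point is only the shape of the read path: a burst of recent-data queries is absorbed in memory, and only misses pay for an object-store request.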