
Telescope (PlanetScale)

Definition

Telescope is PlanetScale's internal benchmarking tool for "creating, running, and assessing benchmarks" across Postgres (and MySQL) database products. It was disclosed in the 2026-04-21 Benchmarking Postgres post as the engine behind PlanetScale's public benchmark programme comparing PlanetScale for Postgres against Amazon Aurora, Google AlloyDB, CrunchyData, Supabase, TigerData, and Neon.

We built an internal tool, "Telescope", to be our go-to tool for creating, running, and assessing benchmarks.

We have used this as an internal tool to give our engineers quick feedback on the evolution of our product's performance as we built and tuned it. We decided to share our findings with the world, and give others the tools to reproduce them.

(Source: sources/2026-04-21-planetscale-benchmarking-postgres)

Workloads driven

Telescope drives three named benchmarks:

  1. Latency — repeated SELECT 1; round-trips from an in-region client (200 runs per configuration).
  2. TPCC-like — Percona's sysbench-tpcc scripts with TABLES=20, SCALE=250 producing a ~500 GB database.
  3. OLTP read-only — sysbench's built-in oltp_read_only workload at 300 GB, run against the top performers.
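The Latency benchmark's shape is simple enough to sketch. Telescope's own code is not published, so the function and names below are illustrative, not PlanetScale's implementation: time 200 consecutive round-trips of a trivial query and summarize the latency distribution.

```python
import statistics
import time

def measure_roundtrips(run_query, runs=200):
    """Time `runs` consecutive round-trips of a trivial query
    (e.g. SELECT 1;) and summarize the latency distribution."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        run_query()  # one round-trip to the database
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "min": samples[0],
        "median": statistics.median(samples),
        "p95": samples[int(0.95 * len(samples)) - 1],
        "max": samples[-1],
    }

# Stand-in for a real driver call such as cursor.execute("SELECT 1;")
stats = measure_roundtrips(lambda: None, runs=200)
print(sorted(stats))  # ['max', 'median', 'min', 'p95']
```

In a real run the callable would execute `SELECT 1;` over a connection from an in-region client, so the measured number is dominated by network and protocol round-trip cost rather than query execution.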

Uses

  • Internal regression feedback. "Give our engineers quick feedback on the evolution of our product's performance as we built and tuned it" — the same feedback loop any vendor benchmark harness serves internally.
  • Public multi-vendor comparison. Runs the same workload configs against competitor cloud-Postgres products to publish comparative numbers.
  • Reproducibility substrate. Instructions published at /benchmarks/instructions/tpcc500g and /benchmarks/instructions/oltp300g let third parties rerun the same benchmarks on the same configs — canonical wiki instance of the patterns/reproducible-benchmark-publication pattern.
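A third-party reproduction of the TPCC-like workload would look roughly like the following. The hosts, credentials, thread count, and duration are placeholders; the flags follow Percona's sysbench-tpcc README rather than PlanetScale's published scripts, so treat this as a sketch and take the real values from the instructions pages.

```shell
# Sketch of a sysbench-tpcc run at the published TABLES=20 / SCALE=250
# configuration (~500 GB). $DB_HOST and $DB_USER are placeholders.
git clone https://github.com/Percona-Lab/sysbench-tpcc.git
cd sysbench-tpcc

# Load the schema and data (slow at this scale).
./tpcc.lua --db-driver=pgsql --pgsql-host="$DB_HOST" --pgsql-user="$DB_USER" \
  --pgsql-db=tpcc --tables=20 --scale=250 --threads=64 prepare

# Drive the workload, reporting throughput once per second.
./tpcc.lua --db-driver=pgsql --pgsql-host="$DB_HOST" --pgsql-user="$DB_USER" \
  --pgsql-db=tpcc --tables=20 --scale=250 --threads=64 \
  --time=300 --report-interval=1 run
```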

What Telescope is (and isn't) on the wiki

Telescope is not a novel benchmark language or a new workload shape — it orchestrates standard sysbench-family workloads. Its distinguishing property is organisational: a vendor committing to publish its internal benchmarking harness's methodology + configs + reproduction instructions, along with a feedback address (benchmarks@planetscale.com).

The architectural sibling is Figma's afternoon-of-Go OpenSearch harness (same pattern at a single-product configuration-sweep altitude) and Meta's DCPerf (same pattern at hyperscaler microarchitectural altitude). All three are "vendor's default benchmark biases the signal → build one that matches our workload shape → publish it so consumers can audit."

Seen in
