
PlanetScale for Postgres

What it is

PlanetScale for Postgres is PlanetScale's Postgres hosting product, launched in private preview on 2025-07-01 (later Generally Available per the post's update banner). It runs real Postgres v17, not a fork, under a proprietary control-plane operator, with a proprietary proxy layer (incorporating PgBouncer for connection pooling) that buffers queries during automatic failover. It is built on the same Metal direct-attached-NVMe cluster shape canonicalised on the MySQL side in March 2025.

PlanetScale for Postgres uses real Postgres running on our proprietary operator meaning we can bring the maturity of PlanetScale and the performance of Metal to an even wider audience. Today's release already achieves true high availability with automatic failovers, query buffering, and connection pooling via our proprietary proxy layer, which includes PgBouncer for connection pooling. We run Postgres v17 and support online imports from any version > Postgres v13, as well as automatic Postgres version updates without downtime.

(Source: sources/2025-07-01-planetscale-planetscale-for-postgres.)
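The pooling half of that claim is standard PgBouncer territory. As a rough illustration of what transaction-mode pooling in front of a single primary looks like (hostnames and values invented here; PlanetScale has not disclosed its actual deployment mode):

```ini
; pgbouncer.ini — illustrative sketch only, not PlanetScale's configuration
[databases]
app = host=10.0.0.5 port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for a transaction's
; duration, so thousands of client connections fan in onto a small pool
; of real Postgres connections
pool_mode = transaction
max_client_conn = 5000
default_pool_size = 20
```

Transaction pooling is the common high-multiplexing choice, but it breaks session state (prepared statements, advisory locks); the post does not say which mode PlanetScale runs (see Caveats below).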

Architecture (as disclosed)

  • Engine: real community Postgres v17.
  • Control plane: PlanetScale's proprietary operator provisions and manages the Postgres cluster. Canonical wiki instance of concepts/proprietary-database-operator — the structural alternative to forking Postgres (DSQL-style extensions) or to compute-storage-separation rewrites (systems/lakebase / Neon).
  • Proxy layer: PlanetScale's proprietary proxy sits in front of the Postgres primary and replicas; it integrates PgBouncer for connection pooling and adds query buffering so clients don't see errors during automatic failover.
  • Cluster topology: primary + 2 replicas on direct-attached NVMe, the Metal cluster shape inherited from the MySQL Metal launch, now applied to Postgres. No artificial IOPS cap.
  • Import path: online imports from any Postgres version > 13; mechanism not disclosed in the post (candidates: logical replication, pg_dump + WAL catch-up).

  • Version upgrade path: "automatic Postgres version updates without downtime" — mechanism likely logical-replication-based blue/green, not disclosed.
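If the undisclosed import mechanism is the logical-replication candidate, the underlying community-Postgres primitives would look roughly like this (all names hypothetical; an illustration of the candidate, not PlanetScale's confirmed pipeline):

```sql
-- On the source database (requires wal_level = logical); names hypothetical.
CREATE PUBLICATION ps_import FOR ALL TABLES;

-- On the PlanetScale v17 side, after an initial schema-only copy:
CREATE SUBSCRIPTION ps_import_sub
    CONNECTION 'host=source.example.com dbname=app user=replicator'
    PUBLICATION ps_import;

-- Initial table sync copies existing rows, then changes stream continuously;
-- cutover happens once replication lag is near zero.
```

The same primitives are the plausible basis for the "version updates without downtime" claim, since logical replication works across major versions.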

Positioning

PlanetScale's own framing of why Postgres, why now:

  1. Customer-demand forcing function — Metal's March 2025 launch surfaced "an immense number of companies reaching out to us asking us to support Postgres. The demand was so overwhelming that by the end of launch day we knew we had to do this."
  2. 50+ customer pain survey: "regular outages, poor performance, and high cost" consistent across the existing-vendor set. Same three pain axes (concepts/performance-variance-degradation + cost + reliability) Metal-for-MySQL targeted.
  3. "We are an engineering company" — performance parity / superiority was an entry requirement. The post claims "consistently outperform every Postgres product on the market, even when giving the competition 2x the resources", anchored on a separate benchmarking-methodology post.

Competitively, PlanetScale for Postgres sits across from the incumbent managed-Postgres vendors. Its structural differentiation is real Postgres on direct-attached NVMe with a proprietary control plane — the same open-source engine as RDS, but a fundamentally different storage architecture.

Relationship to Vitess and Neki

The post is explicit: Vitess does not extend to Postgres.

Vitess is one of PlanetScale's greatest strengths and has become synonymous with database scaling. Contemporary Vitess is the product of PlanetScale's experience running at extreme scale. We have made explicit sharding accessible to hundreds of thousands of users and it is time to bring this power to Postgres. We will not however be using Vitess to do this.

Vitess' achievements are enabled by leveraging MySQL's strengths and engineering around its weaknesses. To achieve Vitess' power for Postgres we are architecting from first principles.

PlanetScale for Postgres today (launch-time) runs as a single-primary Metal cluster (primary + 2 replicas). The horizontal-sharding layer for Postgres is Neki, a separate under-construction system at neki.dev — waitlist-only at launch. Canonical wiki instance of patterns/architect-sharding-from-first-principles-per-engine — each engine gets its own purpose-built sharding layer.

Launch customer

Convex, "the complete backend solution for app developers, is migrating their reactive database infrastructure to PlanetScale for Postgres." Migration writeup: pscale.link/convex.

Caveats

  • Private-preview-at-launch. GA status came later (update banner in the post); no GA timeline disclosed at publication.
  • Architectural detail for the proprietary operator is absent. Provisioning, failover fencing, WAL archival, backup cadence, cross-AZ / cross-region replication, PITR granularity, extension-allowlist policy not disclosed.
  • Query buffering mechanics not characterised. Buffer depth, timeout, client-visible error contract, guarantees on idempotent vs non-idempotent statements, prepared-statement handling not stated.
  • PgBouncer deployment mode not stated. Session vs transaction pooling, per-user vs per-db pools, statement pooling availability, prepared-statement rewriting not disclosed.
  • Online-import mechanism not disclosed. Which tool / protocol bridges existing vendor → PlanetScale for Postgres.
  • No per-workload benchmarks in the post itself. Only the "2× resource parity beat" aggregate claim; competitor names, workload mix, percentile curves deferred to the separate methodology post.
  • Extension compatibility not enumerated. Specifically whether popular extensions (pgvector, pg_stat_statements, pgcrypto, postgis, pg_partman, etc.) are supported is not addressed.
  • Metal-for-Postgres operational footprint not quantified. Instance types, NVMe capacity ceilings, pricing delta vs Metal-for-MySQL, geographic regions on Metal not disclosed.

Seen in

  • sources/2025-07-01-planetscale-planetscale-for-postgres — launch announcement. Canonical introduction of the proprietary-operator + real-Postgres + Metal-cluster + PgBouncer-in-proxy shape; the Vitess-rejection + Neki-is-replacement story; the Convex launch-customer datum; the 50-customer pain-survey framing.
  • sources/2026-04-21-planetscale-benchmarking-postgres — methodology disclosure for the launch's "consistently outperform every Postgres product on the market" claim. Names the reference target: i8g M-320 — 4 vCPUs, 32 GB RAM, 937 GB NVMe SSD, with primary + 2 replicas across 3 AZs by default. Confirms PlanetScale for Postgres is on Metal (direct-attached NVMe) while all benchmarked competitors (Aurora, AlloyDB, CrunchyData, Supabase, TigerData, Neon) are network-attached-storage. Introduces the Telescope harness and the three-benchmark commitment (latency / TPCC / OLTP read-only), with full reproduction instructions and a benchmarks@planetscale.com feedback address. Canonical instance of the new patterns/reproducible-benchmark-publication pattern and the new concepts/price-performance-ratio concept. Availability-posture-equalisation rule established: competitor price models include replicas to match PlanetScale's default 3-AZ posture.