PLANETSCALE 2025-07-01 Tier 3

PlanetScale for Postgres

Summary

Sam Lambert announces the private preview of PlanetScale for Postgres, PlanetScale's new Postgres hosting platform: real Postgres v17 (not a fork) running on a proprietary operator. The launch brings the Metal direct-attached-NVMe architecture to Postgres, adds automatic failover with query buffering and PgBouncer connection pooling via PlanetScale's proprietary proxy layer, and ships online imports from any Postgres v13+ plus zero-downtime Postgres version updates. The post also discloses that PlanetScale will not use Vitess to add horizontal sharding to Postgres: Vitess's power comes from leveraging MySQL's strengths and engineering around its weaknesses, so PlanetScale is architecting a new sharding system from first principles, Neki (neki.dev). Convex is named as the launch-window customer migrating its reactive database infrastructure to the platform.

Key takeaways

  1. PlanetScale for Postgres is a new product, not a repackage. "PlanetScale for Postgres uses real Postgres running on our proprietary operator meaning we can bring the maturity of PlanetScale and the performance of Metal to an even wider audience." Real Postgres v17 under a proprietary control-plane / operator, not a Postgres fork.
  2. Proprietary proxy layer handles HA + pooling. "Today's release already achieves true high availability with automatic failovers, query buffering, and connection pooling via our proprietary proxy layer, which includes PgBouncer for connection pooling." Two canonicalised capabilities — query buffering during failover (clients don't see an error) + PgBouncer-based pooling — sit in the proxy, not in Postgres itself.
  3. Online imports from Postgres 13+ and zero-downtime version upgrades. "we run Postgres v17 and support online imports from any version > Postgres v13, as well as automatic Postgres version updates without downtime." Canonical instance of concepts/online-database-import — migrate from existing managed-Postgres vendors to PlanetScale for Postgres without downtime.
  4. Metal brings direct-attached NVMe to Postgres. "PlanetScale Metal's locally-attached NVMe SSD drives fundamentally change the performance/cost ratio for hosting relational databases in the cloud. We're excited to bring this performance to Postgres." The same local-NVMe + replication architecture canonicalised for MySQL in the 2025-03-13 Metal launch now extends to Postgres.
  5. Benchmark claim: beats every Postgres product at 2× resource disparity. "After extensive testing we are proud to share that we consistently outperform every Postgres product on the market, even when giving the competition 2x the resources." Claim anchored on the separately-published benchmark methodology post. No per-workload numbers in this launch post.
  6. Customer-demand forcing function named explicitly. "In March we announced PlanetScale Metal and something wild happened. We had an immense number of companies reaching out to us asking us to support Postgres. The demand was so overwhelming that by the end of launch day we knew we had to do this." PlanetScale's decision path from Metal (2025-03) to Postgres (2025-07) was customer-demand-driven, not roadmap.
  7. 50-customer failure mode survey frames market. "We spoke to over 50 customers of the current Postgres hosting platforms and we heard identical stories of regular outages, poor performance, and high cost." Same three pain axes (performance variance + cost + reliability) PlanetScale targets with Metal on the MySQL side.
  8. Vitess explicitly rejected for Postgres — Neki is the replacement. "Vitess is one of PlanetScale's greatest strengths and has become synonymous with database scaling. … We will not however be using Vitess to do this. Vitess' achievements are enabled by leveraging MySQL's strengths and engineering around its weaknesses. To achieve Vitess' power for Postgres we are architecting from first principles. We are well under way with building this new system and will be releasing more information and early access as we progress." Canonical wiki introduction of patterns/architect-sharding-from-first-principles-per-engine — sharding layers are engine-specific by construction, not reusable across MySQL / Postgres. Product name Neki, waitlist at neki.dev.
  9. Convex named as launch-window production customer. "Convex, the complete backend solution for app developers, is migrating their reactive database infrastructure to PlanetScale for Postgres." Migration writeup at pscale.link/convex.
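The query-buffering behaviour in takeaway 2 can be sketched as a toy, stdlib-only model. This is illustrative only: PlanetScale has not disclosed its proxy internals, and `BufferingProxy`, its method names, and its semantics are all assumptions made for the sketch. The point is the client-visible contract: during a failover window, queries are held rather than errored, then replayed against the promoted primary.

```python
# Toy sketch (assumption, not PlanetScale's implementation): a proxy that
# buffers queries while a failover is in progress and replays them against
# the new primary, so clients see added latency instead of errors.
from collections import deque

class BufferingProxy:
    def __init__(self, backend, max_buffer=1000):
        self.backend = backend          # callable: query -> result
        self.buffer = deque()           # queries held during failover
        self.max_buffer = max_buffer
        self.failing_over = False

    def start_failover(self):
        # Primary lost; stop forwarding and begin buffering.
        self.failing_over = True

    def finish_failover(self, new_backend):
        # Point at the promoted replica, then drain buffered queries in order.
        self.backend = new_backend
        results = [self.backend(q) for q in self.buffer]
        self.buffer.clear()
        self.failing_over = False
        return results

    def execute(self, query):
        if self.failing_over:
            if len(self.buffer) >= self.max_buffer:
                raise RuntimeError("buffer full: failover taking too long")
            self.buffer.append(query)
            return None  # in a real proxy the client call would block here
        return self.backend(query)
```

A real proxy would also have to decide what the buffer depth, timeout, and idempotency guarantees are — exactly the points the Caveats section flags as undisclosed.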

Systems named

  • systems/planetscale-for-postgres — the new product.
  • PlanetScale Metal — direct-attached-NVMe cluster architecture now supporting Postgres.
  • Postgres v17 — the engine; online import from any Postgres version > 13.
  • PgBouncer — connection pooler inside the proprietary proxy layer.
  • Vitess — explicitly not the sharding substrate for Postgres.
  • Neki — PlanetScale's new from-first-principles sharding system for Postgres. Waitlist only at launch time.
  • Convex — launch customer migrating reactive database infrastructure.

Concepts named

  • concepts/online-database-import — zero-downtime migration from any managed Postgres v13+ into PlanetScale for Postgres (takeaway 3).

Patterns named

  • patterns/architect-sharding-from-first-principles-per-engine — sharding layers are engine-specific by construction; Vitess for MySQL, Neki for Postgres (takeaway 8).

Operational numbers / claims

  • Postgres v17 engine runtime.
  • Online imports from any Postgres v13+.
  • Automatic Postgres version updates without downtime.
  • 50+ customers interviewed on existing-vendor failure modes before committing to build.
  • Claim of beating every Postgres product on the market even with the competition given 2× the resources (benchmark-methodology post linked; no numbers in this post).
  • Primary + 2 replicas by default on Metal (inherited from 2025-03-13 Metal launch).
  • Starting price (not in this post but referenced from earlier PlanetScale $5/month Postgres post).

Caveats

  • Announcement-voice / private preview. Generally Available status arrived later ("Update: PlanetScale for Postgres is now Generally Available" banner in the post) but the post itself is the private-preview launch. No disclosed GA date.
  • No published benchmarks in the post itself. The "outperforms every Postgres product" claim is anchored on a separate benchmarking-methodology post; per-workload numbers, latency percentiles, competitor names, and workload-size / shape are all deferred.
  • No architectural disclosure of the proprietary operator. The post doesn't describe how the operator handles provisioning, failover fencing, WAL archival, backup cadence, cross-AZ / cross-region replication, PITR granularity, or extension-policy (pgvector, pg_stat_statements, etc.).
  • No Neki architecture disclosure. Only "from first principles" + "well under way" — no published design document, waitlist-only.
  • PgBouncer-inside-proxy semantics not detailed. Per-user vs per-database pooling mode, prepared-statement handling, session vs transaction pooling policy not stated.
  • Query buffering during failover not characterised. No buffer-depth, timeout semantics, client-visible error contract, or guarantees on idempotent vs non-idempotent statements.
  • Online-import mechanism not described. Logical replication? pg_dump/pg_restore + catch-up stream? Custom WAL-shipping bridge? Whether PlanetScale's import handles extensions / large objects / sequences / custom types is not stated.
  • Zero-downtime version upgrade mechanism not described. Likely logical-replication-based cross-version blue/green, but not stated.
  • Convex migration context pointed at pscale.link/convex not summarised in this post.
  • Neki waitlist gate. neki.dev is waitlist-only at time of publication — no public Neki documentation.
  • Metal-for-Postgres specific footprint not quantified. Instance types, NVMe capacity tiers, pricing gap vs Metal-for-MySQL not disclosed.
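The PgBouncer caveat above can be made concrete with a hypothetical pgbouncer.ini fragment naming the undisclosed knobs in question. Every value here is an illustrative assumption; PlanetScale has not published its pooling configuration, and its proxy embeds PgBouncer rather than exposing a raw config file.

```ini
; Hypothetical fragment: none of these values are disclosed by PlanetScale.
[databases]
; hypothetical entry: the proxy would route "appdb" to the current primary
appdb = host=10.0.0.5 port=5432 dbname=appdb

[pgbouncer]
; session | transaction | statement: the post does not say which mode is used.
; Transaction pooling multiplexes best but constrains prepared statements.
pool_mode = transaction
max_client_conn = 5000     ; client-side fan-in the proxy absorbs
default_pool_size = 40     ; server connections per (user, database) pair
```

Whether pooling is per-user or per-database, and how prepared statements survive transaction pooling, are exactly the semantics the post leaves unstated.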

Cross-source continuity

  • Third Metal-era post on the wiki after sources/2025-03-13-planetscale-io-devices-and-latency and sources/2025-03-18-planetscale-the-real-failure-rate-of-ebs — both those posts argued why PlanetScale chose to move OLTP off network-attached storage; this post ships the Postgres axis of that argument.
  • Ecosystem positioning: the post is the PlanetScale side of the same conversation Aurora DSQL, Aurora Limitless, and Lakebase are part of — "how do we make Postgres scale-and-be-reliable beyond vanilla RDS?" PlanetScale's answer structurally differs from DSQL's (extend via extensions) and Lakebase's (compute-storage separation via Neon pageserver): real Postgres on proprietary control plane on direct-attached NVMe.
  • Vitess boundary canonicalised. Up to this post Vitess on the wiki was "MySQL sharding" with implicit MySQL-specificity; this post explicitly names the MySQL dependency as architectural ("leveraging MySQL's strengths and engineering around its weaknesses") and establishes the general wiki claim that per-engine sharding layers are engine-specific by construction.
  • Ben Dicken / Sam Lambert / Nick Van Wiggeren PlanetScale engineering-voice continuity — this post is Lambert (CEO) launch-voice (like the 2022 slotted-counter post), complementing Dicken's pedagogy axis (2024-09-09 B-trees, 2025-03-13 IO devices) and Van Wiggeren's reliability axis (2025-03-18 EBS failure rate).
