CLOUDFLARE 2026-04-16

Deploy Postgres and MySQL databases with PlanetScale + Workers

Summary

Cloudflare announced the next step of its September-2025 PlanetScale partnership: customers will be able to provision PlanetScale Postgres and MySQL (Vitess) databases directly from the Cloudflare dashboard and API, and have them billed through their Cloudflare account (self-serve or enterprise), including redemption of Cloudflare credits (startup programme, committed spend) against PlanetScale usage. Connect-existing and dashboard-provision flows are live today; the Cloudflare-billed flow ships "next month". The integration is plumbed through Hyperdrive, Cloudflare's database connectivity service that manages connection pools and query caching. PlanetScale databases bind to Hyperdrive automatically, so Workers can issue SQL through any Postgres client (e.g. pg) via env.DATABASE.connectionString. To close the latency gap between edge Workers and a central database, a Worker can be pinned to the Cloudflare data centre nearest the PlanetScale region via an explicit placement hint ("placement": { "region": "aws:us-east-1" }), with a forward-looking commitment that Cloudflare will eventually set the hint automatically from the database's location.

Key takeaways

  1. PlanetScale as a first-class Workers database primitive. The post frames the relationship as: "you'll be able to create PlanetScale Postgres and MySQL databases directly from the Cloudflare dashboard and API, and have them billed to your Cloudflare account." Customers pick data storage "that fits your Worker application needs and keep a single system for billing." Full PlanetScale feature parity is preserved — same clusters, standard PlanetScale pricing, same query insights, agent-driven tooling, branching. (Source: article intro + §"PlanetScale developer experience".)

  2. Hyperdrive is the connectivity mechanism. Workers don't open direct sockets to the PlanetScale DB; they bind to Hyperdrive, which "manages database connection pools and query caching to make database queries fast and reliable." Configuration is a one-liner in wrangler.jsonc:

{ "hyperdrive": [{ "binding": "DATABASE", "id": "<AUTO_CREATED_ID>" }] }

Application code uses a standard Postgres client against env.DATABASE.connectionString:

import { Client } from "pg";

const client = new Client({ connectionString: env.DATABASE.connectionString });
await client.connect();
const result = await client.query("SELECT * FROM pg_tables");

(Source: article §"Postgres & MySQL for Workers", including code snippets.) This is the canonical instance of patterns/partner-managed-service-as-native-binding: the partner DB appears to Worker code identically to a Cloudflare-native binding, with no extra SDK, no sidecar, and no credential juggling.
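Pieced together, a minimal Worker using the binding might look like this. This is a sketch: the pg usage follows the article's snippets, but the module shape, the Hyperdrive type (from Cloudflare's workers-types package), and the connection cleanup are assumptions not shown in the post.

```typescript
// Sketch of a Worker using the Hyperdrive binding "DATABASE" from wrangler.jsonc.
// The pg calls mirror the article's snippets; the rest is assumed boilerplate.
import { Client } from "pg";

export interface Env {
  // Injected by the hyperdrive binding in wrangler.jsonc
  DATABASE: Hyperdrive;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const client = new Client({ connectionString: env.DATABASE.connectionString });
    await client.connect();
    try {
      const result = await client.query("SELECT * FROM pg_tables");
      return Response.json(result.rows);
    } finally {
      // Hand the underlying connection back to Hyperdrive's pool
      await client.end();
    }
  },
};
```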

  3. Explicit placement hint closes the edge-to-origin-DB latency gap. "By default, Workers execute closest to a user request, which adds network latency when querying a central database especially for multiple queries. Instead, you can configure your Worker to execute in the closest Cloudflare data center to your PlanetScale database." Configuration shape:
{ "placement": { "region": "aws:us-east-1" } }

The hint pins Worker execution to a Cloudflare data centre co-located with the DB, preserving multi-query latency budgets for SQL-heavy workloads. (Source: article §"PlanetScale developer experience".) Named in patterns/explicit-placement-hint and motivated by concepts/edge-to-origin-database-latency.
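Combining the two knobs, a wrangler.jsonc for a SQL-heavy Worker might look like the following sketch. The binding id placeholder and the region value follow the article's examples; the name, entry point, and compatibility date are illustrative assumptions.

```jsonc
{
  "name": "my-worker",              // hypothetical Worker name
  "main": "src/index.ts",
  "compatibility_date": "2026-04-16",
  // Hyperdrive binding auto-created when the PlanetScale DB is provisioned
  "hyperdrive": [{ "binding": "DATABASE", "id": "<AUTO_CREATED_ID>" }],
  // Pin execution to the Cloudflare data centre nearest the DB's region
  "placement": { "region": "aws:us-east-1" }
}
```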

  4. Auto-placement is on the roadmap. "In the future, Cloudflare can automatically set a placement hint based on the location of your PlanetScale database and reduce network latency to single digit milliseconds." Framed as the natural end-state of the dashboard integration: Cloudflare already knows where the DB lives, so the customer shouldn't have to re-declare it in Worker config. Forward-looking; no ship date disclosed. (Source: article §"PlanetScale developer experience".)

  5. Unified billing through Cloudflare is the commercial hook. "You choose the data storage that fits your Worker application needs and keep a single system for billing as a Cloudflare self-serve or enterprise customer. Cloudflare credits like those given in our startup program or Cloudflare committed spend can be used towards PlanetScale databases." Second instance on the wiki of patterns/unified-billing-across-providers for the Cloudflare developer platform (first being the 2026-04-16 AI Platform post, which routes inference spend across 12+ model providers through the same Cloudflare invoice). Launch sequencing is deliberate: "Everything today is still billed via PlanetScale. Launching next month, new PlanetScale databases can be billed to your Cloudflare account." (Source: article intro + §"PlanetScale developer experience" + closing.)

  6. Starting-price anchor + full cluster catalog. "A single node on PlanetScale Postgres starts at $5/month." The Cloudflare-provisioned flow maps to the same PlanetScale database clusters at standard PlanetScale pricing — no Cloudflare-specific SKU or pricing surface. This is pure pass-through; Cloudflare is a reseller + billing aggregator, not a repackager. (Source: article §"PlanetScale developer experience".)

  7. Why Postgres specifically. "Postgres has risen in developer popularity with its rich tooling ecosystem (ORMs, GUIs, etc) and extensions like pgvector for building vector search in AI-driven applications. Postgres is the default choice for most developers who need a powerful, flexible, and scalable database to power their applications." Frames the AI-workload angle — pgvector as the reason Postgres matters specifically for Workers applications — while MySQL (Vitess) remains the sibling option for customers whose stack is MySQL-native. Motivates why the partnership is both Postgres and MySQL rather than Postgres-only. (Source: article §"Postgres & MySQL for Workers".)

  8. Future scope: API integration + customer-requested extensions. "We are building more with our PlanetScale partners, such as Cloudflare API integration, so tell us what you'd like to see next." Public roadmap is deliberately open-ended — tight dashboard integration is shipped now, PlanetScale API surfacing through Cloudflare's API is next-up, and further primitives are customer-requested. (Source: article close.)

Systems named

  • systems/hyperdrive — Cloudflare's database connectivity service providing connection pooling + query caching in front of any PostgreSQL / MySQL origin. Primary beneficiary: Workers running against central SQL databases, where naive per-request connection open/close would destroy pool reuse and push p50 SQL latency up by tens to hundreds of ms.
  • systems/planetscale — managed database vendor offering both Vitess-based MySQL (their original product) and a more recent Postgres offering. Canonical wiki instance of the patterns/partner-managed-service-as-native-binding pattern on Cloudflare's platform; featured with query insights, AI-agent tooling, and database branching as differentiating features vs commodity hosted SQL.
  • systems/vitess — MySQL-protocol-compatible sharded database layer originally from YouTube, the substrate of PlanetScale's MySQL product. Listed here as the MySQL-side engine even though the article treats it as implementation detail.
  • systems/cloudflare-workers — the consumer of the Hyperdrive + PlanetScale integration; picks up new config knobs (hyperdrive binding, placement.region) without runtime changes.
  • systems/postgresql — one of the two relational engines PlanetScale hosts; Cloudflare-facing env.DATABASE.connectionString speaks Postgres wire protocol so any Postgres client (pg in the example) works verbatim.
  • systems/mysql — the other PlanetScale engine, via Vitess. No code example shown; MySQL support is framed as co-equal to Postgres support.

Concepts named

  • concepts/edge-to-origin-database-latency — the fundamental cost model for edge-compute-against-central-DB: each SQL round-trip pays the RTT between the edge POP and the database region, so a request issuing N queries pays N × RTT. The placement hint re-collapses the RTT to within-datacentre.
  • concepts/network-round-trip-cost — background concept: network round-trip dominates latency for chatty protocols like SQL; batching + proximity are the two levers. Hyperdrive caches + pools to reduce the number of RTTs; placement hints reduce the cost of each RTT.
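The cost model behind these two concepts can be made concrete with a toy calculation. The RTT values below are illustrative assumptions, not measurements; the post publishes no latency numbers.

```typescript
// Toy model of edge-to-origin database latency: a request issuing N sequential
// SQL queries pays roughly N × RTT between the Worker and the database region.
// RTT figures are hypothetical, chosen only to illustrate the shape of the cost.
function totalQueryLatencyMs(queries: number, rttMs: number): number {
  return queries * rttMs;
}

const FAR_RTT_MS = 80; // hypothetical: edge POP far from the DB region
const NEAR_RTT_MS = 2; // hypothetical: POP co-located via placement hint

// A request issuing 5 sequential queries:
console.log(totalQueryLatencyMs(5, FAR_RTT_MS));  // 400 ms without placement
console.log(totalQueryLatencyMs(5, NEAR_RTT_MS)); // 10 ms with placement
```

The multiplicative structure is why the placement hint matters more for chatty, multi-query requests than for single-query ones.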

Patterns named

  • patterns/explicit-placement-hint — configuration knob that pins a serverless / edge compute unit to a specific region to co-locate it with a stateful dependency, overriding the platform's default user-proximity routing. Canonical wiki instance is the placement.region field in wrangler.jsonc for Cloudflare Workers, used to co-locate a Worker with a central Postgres / MySQL instance.
  • patterns/partner-managed-service-as-native-binding — integration shape where a third-party managed service (PlanetScale here) is surfaced to customer code through the same binding mechanism as first-party primitives, with dashboard-driven provisioning and unified billing, while the partner retains operational ownership of the underlying service. Customer code is provider-agnostic; provisioning + billing are provider-unified.
  • patterns/unified-billing-across-providers — charge the aggregator, not the original service — use aggregator credits against the original service's usage. Extended here with the storage-tier instance after the AI-inference-tier instance from 2026-04-16 AI Platform.

Operational numbers

  • $5/month entry price: single-node PlanetScale Postgres (pass-through to standard PlanetScale pricing).
  • Single-digit-millisecond latency is the forward-looking target when Cloudflare auto-sets the placement hint based on PlanetScale DB location — aspirational, not measured in this post.
  • 12+ inference providers + 70+ models (cross-post anchor from the AI Platform unification post): same billing-aggregator shape applied one tier down at the storage layer.
  • No disclosed numbers for: Hyperdrive pool size, query cache hit rate, connection reuse rate, tail latency with / without placement hint, PlanetScale QPS / storage / concurrent connection limits. This post is an integration announcement, not a performance report.

Caveats

  • Integration-announcement post, not a performance deep-dive. No measured latency numbers, no cold-start characterisation for the Worker-with-placement-hint path, no comparison against existing Workers-to-Postgres paths (direct TCP from the Worker, or other managed-Postgres providers like Neon / Supabase / AWS RDS).
  • Cloudflare-billed flow ships "next month". Today's state: dashboard provisioning works, Hyperdrive binding works, but invoices still come from PlanetScale. Flip to Cloudflare-billed is forward-looking, with no firm date disclosed.
  • Auto-placement is forward-looking too. Customers must still set placement.region manually today. Auto-placement is called out explicitly as a future capability without a ship date.
  • No discussion of failure modes. What happens if the PlanetScale region is unreachable from the placement-pinned Cloudflare POP? What's the fallback routing? What's the behaviour under PlanetScale maintenance windows or regional outages? Not covered.
  • Hyperdrive's internal mechanics are only lightly touched. "manages database connection pools and query caching" is the only architectural detail. Pool size, cache invalidation policy, staleness semantics, cost model, regional layout of Hyperdrive endpoints — all external to this post (references an earlier Hyperdrive post for depth). The wiki's systems/hyperdrive page should treat this source as "used by" rather than "designed here".
  • MySQL-side path less illustrated. Both engines are named as co-equal, but every code snippet in the post is Postgres (pg client, pg_tables query). Same Hyperdrive binding shape presumably works for MySQL via a MySQL client, but the article doesn't show it.
  • Partnership announcement from September 2025 is the prior context. This post is the billing-and-provisioning follow-up, not the original partnership reveal. Readers without the prior context see the integration land half-way through its rollout.
  • Narrow architectural substance for a Tier-1 post. The post is close to the edge of scope — no novel distributed-systems result disclosed. Ingested because: (a) introduces systems/hyperdrive / systems/planetscale as named systems on this wiki; (b) the placement-hint knob is a genuine edge-compute architectural primitive worth naming; (c) the Cloudflare-as-billing-aggregator posture repeats across tiers (AI, storage) and deserves the second Seen-in on the pattern page.
