
PgBouncer deployment topology

Definition

PgBouncer deployment topology is the choice of where PgBouncer runs relative to the Postgres primary and replicas. Different placements carry different trade-offs in high availability, connection persistence, and failure-domain isolation.

Three topologies on PlanetScale

Ben Dicken (PlanetScale, 2026-03-13) names three distinct placements:

1. Local PgBouncer (default)

  • Placement: same server as the Postgres primary.
  • Invocation: same credentials as direct Postgres, port 6432 instead of 5432.
  • HA characteristics: shares failure domain with the primary — a primary failover loses the PgBouncer state.
  • Use case: the default, included with every PlanetScale Postgres database; good enough for most workloads.
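As a sketch of the invocation above (host, database, and credentials are hypothetical placeholders, not real PlanetScale values), the only change from a direct Postgres connection is the port:

```python
# Sketch only: host, database name, and credentials are hypothetical placeholders.
direct = "postgres://app_user:secret@db.example.internal:5432/mydb"

# Local PgBouncer: same host, same credentials, port 6432 instead of 5432.
pooled = direct.replace(":5432/", ":6432/")

print(pooled)
# postgres://app_user:secret@db.example.internal:6432/mydb
```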

2. Dedicated primary PgBouncer

  • Placement: separate nodes from Postgres, but still routing writes to the primary.
  • Invocation: append |your-pgbouncer-name to the username, port 6432.
  • Stitching: "connects to the local PgBouncer first, which then connects to Postgres" — a layered deployment where the dedicated PgBouncer funnels through the local one.
  • HA characteristics: "Client connections persist through resizes, upgrades, and most failovers" — key distinguishing property; because PgBouncer runs on separate nodes from the primary, a primary failover doesn't drop client connections.
  • Use case: workloads that cannot tolerate connection drops during Postgres-side events.

3. Dedicated replica PgBouncer

  • Placement: separate nodes routing to replicas.
  • Stitching: connects directly to replicas, bypassing the local bouncer (unlike dedicated primary).
  • Use case: "if your applications make heavy use of replicas for read queries."
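The three invocations can be sketched side by side. This is a minimal illustration under assumed placeholder values — the bouncer names, host, and credentials are hypothetical; only the |name suffix convention and port 6432 come from the source:

```python
# Sketch only: names, host, and credentials are hypothetical placeholders.
user, password, host, db = "app_user", "secret", "db.example.internal", "mydb"

def dsn(username: str) -> str:
    # All three topologies use port 6432; only the username differs.
    return f"postgres://{username}:{password}@{host}:6432/{db}"

local_pool   = dsn(user)                        # 1. local PgBouncer (default)
primary_pool = dsn(f"{user}|primary-bouncer")   # 2. dedicated primary PgBouncer
replica_pool = dsn(f"{user}|replica-bouncer")   # 3. dedicated replica PgBouncer

print(primary_pool)
# postgres://app_user|primary-bouncer:secret@db.example.internal:6432/mydb
```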

Why topology matters

Topology determines three operationally distinct properties:

  • Connection persistence across failover: local PgBouncer shares failure domain with the primary (failover drops clients); dedicated primary "persists through resizes, upgrades, and most failovers."
  • Independent scaling of the pooler tier: local PgBouncer scales only with the primary (same hardware budget); dedicated topologies let you scale PgBouncer independently.
  • Read-vs-write separation: dedicated replica topology lets read-heavy workloads saturate replica pools without affecting the primary's pool budget.

Relation to the general two-tier pattern

These three topologies are specific PlanetScale instances of a more general shape: the patterns/two-tier-connection-pooling pattern, which separates the app-tier pool from the proxy-tier pool. PlanetScale's dedicated primary PgBouncer adds a layered-proxy-tier variant: client → dedicated PgBouncer → local PgBouncer → primary Postgres. See patterns/layered-pgbouncer-deployment.

Canonical invocation convention

The PlanetScale username-suffix convention (|your-pgbouncer-name) is a compact topology-selector: the same host:port (6432) serves all three topologies, distinguished by the suffix. This makes topology changes a connection-string edit rather than an infrastructure migration.
