CONCEPT
PgBouncer deployment topology
Definition
PgBouncer deployment topology is the placement choice for where PgBouncer runs relative to the Postgres primary / replicas. Different placements carry different HA, connection-persistence, and failure-domain trade-offs.
Three topologies on PlanetScale
Ben Dicken (PlanetScale, 2026-03-13) names three distinct placements:
1. Local PgBouncer (default)
- Placement: same server as the Postgres primary.
- Invocation: same credentials as direct Postgres, port 6432 instead of 5432.
- HA characteristics: shares a failure domain with the primary; a primary failover loses the PgBouncer state.
- Use case: the default, included with every PlanetScale Postgres database; good enough for most workloads.
2. Dedicated primary PgBouncer
- Placement: separate nodes from Postgres, but still routing writes to the primary.
- Invocation: append |your-pgbouncer-name to the username, port 6432.
- Stitching: "connects to the local PgBouncer first, which then connects to Postgres": a layered deployment where the dedicated PgBouncer funnels through the local one.
- HA characteristics: "Client connections persist through resizes, upgrades, and most failovers." This is the key distinguishing property: because PgBouncer runs on separate nodes from the primary, a primary failover doesn't drop client connections.
- Use case: workloads that cannot tolerate connection drops during Postgres-side events.
3. Dedicated replica PgBouncer
- Placement: separate nodes routing to replicas.
- Stitching: connects directly to replicas, bypassing the local bouncer (unlike dedicated primary).
- Use case: "if your applications make heavy use of replicas for read queries."
Why topology matters
Topology determines three operationally distinct failure modes:
- Connection persistence across failover: local PgBouncer shares failure domain with the primary (failover drops clients); dedicated primary "persists through resizes, upgrades, and most failovers."
- Independent scaling of the pooler tier: local PgBouncer scales only with the primary (same hardware budget); dedicated topologies let you scale PgBouncer independently.
- Read-vs-write separation: dedicated replica topology lets read-heavy workloads saturate replica pools without affecting the primary's pool budget.
Relation to the general two-tier pattern
These three topologies are specific PlanetScale instances of a more general shape. The general patterns/two-tier-connection-pooling pattern separates the app-tier pool from the proxy-tier pool. PlanetScale's dedicated primary PgBouncer introduces a layered-proxy-tier variant: client → dedicated PgBouncer → local PgBouncer → primary Postgres. See patterns/layered-pgbouncer-deployment.
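The two-tier shape can be made concrete with a back-of-envelope fan-in calculation (the function name and the sample numbers are illustrative, not from the source):

```python
def tier_fanin(app_instances: int, app_pool_size: int,
               pgbouncer_pool_size: int) -> tuple[int, int]:
    """Client connections arriving at the proxy tier vs. server
    connections PgBouncer actually opens to Postgres.

    In transaction pooling, PgBouncer multiplexes many mostly-idle
    client connections over a small pool of server connections
    (bounded by a setting such as default_pool_size).
    """
    client_conns = app_instances * app_pool_size          # tier 1: app-side pools
    server_conns = min(client_conns, pgbouncer_pool_size)  # tier 2: proxy-side pool
    return client_conns, server_conns

# 20 app instances x 25 pooled connections each = 500 client connections,
# multiplexed down to only 40 real Postgres backends.
assert tier_fanin(20, 25, 40) == (500, 40)
```

Each topology changes where the second tier runs, not this arithmetic: the local variant puts the proxy-tier pool on the primary's own server, the dedicated variants move it to separate nodes.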
Canonical invocation convention
The PlanetScale username-suffix convention (|your-pgbouncer-name) is a compact topology selector: the same host:port (6432) serves all three topologies, distinguished by the suffix. This makes a topology change a connection-string edit rather than an infrastructure migration.
Seen in
- sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer — canonical wiki disclosure of the three PlanetScale topologies. Ben Dicken (PlanetScale, 2026-03-13).
Related
- systems/pgbouncer — the pooler whose placement this concept governs.
- systems/planetscale-for-postgres — the product that surfaces all three topologies as user-selectable.
- systems/planetscale — parent platform.
- concepts/process-os — the substrate-level constraint that motivates pooling; topology is downstream of this.
- patterns/layered-pgbouncer-deployment — the app-side + DB-side layered pattern; topology choice is its user-facing surface.
- patterns/two-tier-connection-pooling — general two-tier framing; PlanetScale topologies are a concrete instance.