PATTERN Cited by 1 source
Layered PgBouncer deployment¶
What it is¶
Layered PgBouncer deployment runs two PgBouncer instances in series — one on the application/client side, one near Postgres — forming two distinct funnels: one from many worker / process connections into a smaller egress set, one from there into a tightly controlled number of direct Postgres connections.
The shape is the PgBouncer-specific instance of the general patterns/two-tier-connection-pooling pattern: both tiers are PgBouncer, rather than a mix of an app-tier driver pool and a proxy-tier pooler.
Canonical framing¶
Ben Dicken (PlanetScale, 2026-03-13), verbatim: "In some deployments, it also makes sense to layer PgBouncer. You can run one PgBouncer on the app or client side to funnel many worker or process connections into a smaller egress set, then run another PgBouncer near Postgres as the final funnel into a tightly controlled number of direct database connections."
The structural rationale: "especially useful when you need connection pooling both close to compute and close to the database."
(Source: sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer.)
Structure¶
Many app workers
│
│ (many connections)
▼
┌──────────────┐
│ App-side │ ← funnel 1: workers → small egress
│ PgBouncer │
└──────┬───────┘
│ (smaller egress)
▼
┌──────────────┐
│ DB-side │ ← funnel 2: egress → direct DB connections
│ PgBouncer │
└──────┬───────┘
│ (few direct connections)
▼
┌──────────────┐
│ Postgres │
└──────────────┘
Each layer enforces a different cap:
- App-side: caps per-compute-cluster outbound connection count, eliminates per-worker handshake cost.
- DB-side: caps per-database upstream connection count, enforces the connection chain's memory-safe ceiling.
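The two caps can be made concrete as a pair of PgBouncer INI fragments. This is a hedged sketch: the hostnames, database names, and numeric values are illustrative assumptions, not values from the source; only the parameter names (listen_port, pool_mode, max_client_conn, default_pool_size) are real PgBouncer settings.

```ini
; app-side pgbouncer.ini -- funnel 1 (all values illustrative)
[databases]
; upstream is the DB-side PgBouncer, not Postgres itself
appdb = host=db-pgbouncer.internal port=6432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 4000   ; cap on worker connections into this tier
default_pool_size = 200  ; the "smaller egress set" toward the DB side
```

```ini
; db-side pgbouncer.ini -- funnel 2 (all values illustrative)
[databases]
appdb = host=postgres.internal port=5432 dbname=appdb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
max_client_conn = 400    ; accepts the app-side tier's egress
default_pool_size = 50   ; must stay under Postgres max_connections
```

The chaining lives in the app-side [databases] entry: its upstream host is the DB-side PgBouncer, so each tier's default_pool_size becomes the next tier's client load.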
PlanetScale's productisation¶
PlanetScale's dedicated primary PgBouncer topology is a canonical instance: dedicated PgBouncer nodes funnel into the local PgBouncer that sits with the primary. See concepts/pgbouncer-deployment-topology.
Dicken's framing treats the app-side PgBouncer as a customer-managed component separate from PlanetScale's infrastructure; the productised layer is the dedicated-primary-over-local shape.
When to use¶
- Compute fleet far from DB fleet: network RTT between app and DB is high (cross-region, edge compute); the app-side PgBouncer amortises handshake + reduces upstream traffic.
- Very large worker count: thousands of app workers × modest per-worker pool = far more connections than the DB can accept; the app-side pooler consolidates first.
- Serverless compute with fresh processes: each function invocation = fresh pool; layering adds a persistent pooler upstream of the ephemeral app-side pools.
When NOT to use¶
- Small-scale single-app deployment: one PgBouncer is enough; the second tier is overhead.
- Latency-critical hot path: two hops add RTT; some workloads can't tolerate it.
- Operational overhead concerns: two PgBouncers means two sets of configs, metrics, alerts, operational runbooks.
Trade-offs¶
- Extra hop latency: two PgBouncer traversals instead of one. Typically sub-millisecond per hop same-region, but adds up at high RPS.
- Configuration coordination: the app-side max_client_conn must match the workers' aggregate; the DB-side must enforce the Postgres max_connections ceiling; mismatches produce either under-utilisation or rejected connections.
- Diagnostic complexity: a slow query's wait time is split across two pool queues, requiring instrumentation at both tiers.
- Two failure domains: either pooler can fail independently. Mitigated by running each tier with its own redundancy.
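The coordination constraint above reduces to arithmetic: each tier's cap must nest inside the next. A minimal sanity-check sketch in Python; every number is a hypothetical sizing assumption, while max_client_conn, default_pool_size, and max_connections are the real PgBouncer/Postgres parameters being modelled.

```python
# Hypothetical sizing figures; only the parameter names are real settings.
workers = 2000          # app worker processes
per_worker_conns = 2    # connections each worker opens

app_side = {"max_client_conn": 4000, "default_pool_size": 200}
db_side = {"max_client_conn": 400, "default_pool_size": 50}
pg_max_connections = 100  # Postgres max_connections ceiling


def check_chain() -> list[str]:
    """Return a list of cap mismatches along the layered chain."""
    problems = []
    # Funnel 1 intake: every worker connection must fit the app-side cap.
    if workers * per_worker_conns > app_side["max_client_conn"]:
        problems.append("app-side max_client_conn too low: workers rejected")
    # Funnel 1 egress must fit funnel 2's intake cap.
    if app_side["default_pool_size"] > db_side["max_client_conn"]:
        problems.append("DB-side max_client_conn below app-side egress")
    # Funnel 2 egress must stay under the Postgres ceiling.
    if db_side["default_pool_size"] >= pg_max_connections:
        problems.append("DB-side pool exceeds Postgres max_connections")
    return problems


print(check_chain())  # → [] when every cap nests inside the next
```

Mismatches in either direction map onto the trade-off above: caps too tight under-utilise the next tier; caps too loose produce rejected connections at it.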
Contrast with general two-tier pooling¶
patterns/two-tier-connection-pooling is the generic pattern with app-tier driver pool + proxy-tier pooler as distinct implementations (HikariCP + PgBouncer, database/sql + VTTablet, etc.). This pattern is the special case where both tiers are the same pooler (both PgBouncer). The difference matters because:
- Configuration surface is homogeneous (both configured via PgBouncer INI files).
- Both tiers support the same session-pooling / transaction-pooling / statement-pooling modes.
- Monitoring and observability tooling works uniformly across tiers.
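One practical payoff of the homogeneous tooling: both tiers expose the same admin console, so a single helper can read SHOW POOLS output from either layer. A sketch under stated assumptions: the sample rows are fabricated for illustration (real SHOW POOLS output includes these columns among others), and in practice the text would come from psql against each tier's admin database.

```python
def parse_pools(raw: str) -> list[dict]:
    """Parse a simplified psql-style SHOW POOLS table into row dicts."""
    # Drop blank lines and the +----+ separator row.
    lines = [l for l in raw.strip().splitlines()
             if l and not set(l) <= set("-+ ")]
    header = [c.strip() for c in lines[0].split("|")]
    return [dict(zip(header, (c.strip() for c in row.split("|"))))
            for row in lines[1:]]


# Fabricated sample output; column names match real SHOW POOLS columns.
app_tier = """
 database | cl_active | cl_waiting | maxwait
----------+-----------+------------+---------
 appdb    | 180       | 12         | 3
"""
db_tier = """
 database | cl_active | cl_waiting | maxwait
----------+-----------+------------+---------
 appdb    | 48        | 2          | 0
"""

# The same parser serves both tiers; comparing cl_waiting/maxwait across
# them shows where in the chain a slow query's queue time accrued.
for name, raw in [("app-side", app_tier), ("db-side", db_tier)]:
    row = parse_pools(raw)[0]
    print(name, "cl_waiting =", row["cl_waiting"], "maxwait =", row["maxwait"])
```

This is also the mitigation for the diagnostic-complexity trade-off above: instrumenting both tiers is two invocations of the same tooling, not two toolchains.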
Seen in¶
- sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer — canonical wiki disclosure. Ben Dicken (PlanetScale, 2026-03-13) with explicit topology diagram.
Related¶
- patterns/two-tier-connection-pooling — the general pattern; this is the PgBouncer-on-both-tiers instance.
- patterns/isolated-pgbouncer-per-workload — orthogonal PgBouncer-multi-instance pattern (horizontal rather than layered).
- patterns/three-pool-size-budget-allocation — sizing pattern that applies to each tier independently.
- concepts/pgbouncer-deployment-topology — the placement concept; PlanetScale's dedicated-primary topology productises one layered shape.
- concepts/pgbouncer-connection-chain — the chain of caps; layered deployment re-applies the chain at each tier.
- systems/pgbouncer — the pooler.
- systems/postgresql — the substrate.
- systems/planetscale-for-postgres — the productised-layered-topology platform.