PlanetScale — Scaling Postgres connections with PgBouncer¶
Summary¶
Ben Dicken (PlanetScale, 2026-03-13) publishes a canonical field-manual on PgBouncer configuration tuning for Postgres, grounded in the OS-substrate cost of Postgres's process-per-connection architecture. Dicken's framing: "The near-universal choice for solving this problem is PgBouncer. Though there are upcoming systems like Neki which will solve this problem in a more robust way, PgBouncer has proven itself an excellent connection pooler for Postgres." The post names three PgBouncer deployment shapes on PlanetScale (local, dedicated primary, dedicated replica), re-canonicalises the three pooling modes (session / statement / transaction) with an unequivocal verdict ("Transaction pooling is the only sensible option"), and walks the full configuration hierarchy — max_client_conn → default_pool_size → max_db_connections / max_user_connections → Postgres-side max_connections + superuser_reserved_connections — with three worked scenarios carrying hard operator-facing numbers (PS-80 multi-tenant, M-2650 multi-tenant-at-scale, M-1280 single-tenant-per-database).
The post is load-bearing for three reasons: (1) it is the first canonical wiki disclosure of the PgBouncer configuration chain from client-facing max_client_conn to Postgres-side max_connections, formalised as a single sizing formula; (2) it names three deployment topologies with explicit HA / connection-persistence trade-offs (local PgBouncer lives with the primary and loses connections on failover; dedicated primary PgBouncer runs on separate nodes and "connections persist through resizes, upgrades, and most failovers"; dedicated replica PgBouncer routes read queries to replicas, bypassing the local pooler entirely); (3) it canonicalises layered PgBouncer deployment (app-side funnel + DB-side funnel) and isolated-PgBouncer-per-workload (web app / background workers / analytics each get their own PgBouncer) as two distinct architectural patterns, complementary to PlanetScale Database Traffic Control™ ("PgBouncer manages connections, Traffic Control manages resource consumption"). Architecture density ~90%.
Key takeaways¶
- PgBouncer is the near-universal answer to Postgres's per-connection memory + context-switch cost. Dicken verbatim: "Every connection forks a dedicated OS process consuming 5+ MB of RAM and adding context-switching overhead. PgBouncer solves this by maintaining a pool of reusable server connections, reducing resource consumption and letting PostgreSQL handle far more concurrent clients than its native `max_connections` would otherwise allow." Canonical wiki framing: the 5+ MB-per-connection datum complements Dicken's sister 2025-09-24 Processes-and-Threads post (sources/2026-04-21-planetscale-processes-and-threads), which disclosed the same cost at OS-substrate altitude without the specific MB number.
- Tens of direct connections for smaller instances, hundreds for larger servers. "It's best-practice to keep the count of direct connections to Postgres small. Tens of connections for smaller instances. Hundreds for larger servers." Canonical wiki datum: the operator-facing cap scales with instance size, but not linearly with CPU — the M-2650 scenario (32 vCPU, 256 GB RAM, 32× the PS-80 CPU budget) takes `max_connections` from 50 → 500, a 10× increase rather than 32×. Dicken's framing: "Just because we now have 32x the CPU power, we don't want to increase direct Postgres connections by 32x. It's still wise to keep this on the lower side."
- Transaction pooling is the only sensible option. "Session pooling … is not incredibly useful. … Statement pooling … disallows multi-statement transactions entirely. Most apps need this, so not useful in 99% of cases! Transaction pooling is the only sensible option. It assigns a server connection for the duration of a transaction, returning it to the pool the moment a COMMIT or ROLLBACK completes." Canonical wiki framing: session pooling's 1:1 client↔server mapping "does little to reduce Postgres connection count"; statement pooling breaks transactions entirely; transaction pooling is the default, with the known unsupported-features list (`LISTEN`, session-level `SET`/`RESET`, SQL `PREPARE`/`DEALLOCATE` — see PgBouncer features documentation). PlanetScale only supports transaction pooling; callers needing the unsupported features must fall back to direct connections.
- `query_wait_timeout` default is 120 seconds. "When all pool connections are in use, PgBouncer queues the client until one becomes available rather than rejecting it. If the wait exceeds `query_wait_timeout` (default: 120 seconds), the client is disconnected with an error." Canonical wiki datum: PgBouncer's admission-control default is queue-then-disconnect, not fail-fast. The 120-second default is long enough that most operational bursts flow through transparently, and short enough that a genuinely stuck pool disconnects rather than hanging indefinitely.
- PgBouncer defaults to port 6432; switching to pooled is usually just a port change. "Whereas the Postgres default port is `5432`, PgBouncer defaults to `6432`. Typically, switching from a direct connection to a PgBouncer connection is as simple as switching the port in your client connection string." Canonical wiki datum for the client-side switchover cost.
- Three PgBouncer deployment topologies on PlanetScale — local, dedicated primary, dedicated replica. Each Postgres database includes a local PgBouncer running on the same server as the primary (same credentials, port `6432`). A dedicated primary PgBouncer runs on separate nodes from Postgres (better HA; connects through the local bouncer, which then connects to Postgres; "Client connections persist through resizes, upgrades, and most failovers"; invoked by appending `|your-pgbouncer-name` to the username on port 6432). A dedicated replica PgBouncer is similar but routes to replicas (and bypasses the local bouncer entirely). Recommended when "applications make heavy use of replicas for read queries." Canonical wiki framing: three distinct placements with three distinct connection-persistence guarantees.
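The settings named in the takeaways above map onto a handful of `pgbouncer.ini` keys. A minimal sketch — host, database, and user names are hypothetical placeholders; the mode and defaults follow what the post describes:

```ini
; Minimal PgBouncer configuration sketch (illustrative, not from the post).
[databases]
mydb = host=10.0.0.5 port=5432 dbname=mydb   ; Postgres on its default port

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432              ; PgBouncer default; clients switch ports only
auth_type = scram-sha-256
pool_mode = transaction         ; "the only sensible option"
query_wait_timeout = 120        ; default: queue, then disconnect after 120 s
```

With this in place, moving a client from direct Postgres to pooled access is just the `5432` → `6432` port change the post describes.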
- The configuration chain: `max_client_conn` → `default_pool_size` → `max_db_connections` + `max_user_connections` → `max_connections`. Full hierarchy:
  - `max_client_conn` — maximum application connections PgBouncer will accept (frequently 1000s).
  - `default_pool_size` — server connections per `(user, database)` pool. With 4 users × 2 databases and `default_pool_size=20`, PgBouncer could open up to 160 connections to Postgres.
  - `max_db_connections` and `max_user_connections` — hard caps across all pools for a given database or user; default 0 (no limit); acting as "safety valves to prevent pool arithmetic from exceeding PostgreSQL limits."
  - Postgres `max_connections` — total server connections must stay below this. "We should always keep a few available direct connections reserved for admin tasks and other emergency scenarios. We NEVER want PgBouncer to use all of the connections!"
  - Postgres `superuser_reserved_connections` — reserve slots for the superuser (docs).
  Canonical wiki framing: the chain is a cascade of caps, each capping a different dimension (client-facing, per-(user,db)-pool, per-db, per-user, Postgres-global). The operator's job is to keep num_pools × default_pool_size ≤ max_db_connections ≤ max_connections − reserved_slack.
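The cascade reduces to a single inequality. A small sketch of that check — the arithmetic is from the post, the helper function is my own construction — using the 4-users × 2-databases example from the hierarchy:

```python
def pool_arithmetic_ok(num_pools, default_pool_size,
                       max_db_connections, max_connections, reserved_slack):
    """Check the sizing invariant:
    num_pools * default_pool_size <= max_db_connections
                                  <= max_connections - reserved_slack.
    max_db_connections = 0 means "no limit" in PgBouncer, in which case the
    worst-case pool product is bounded only by max_connections."""
    upstream_worst_case = num_pools * default_pool_size
    db_cap = max_db_connections if max_db_connections else upstream_worst_case
    return upstream_worst_case <= db_cap <= max_connections - reserved_slack

# 4 users x 2 databases = 8 (user, database) pools; default_pool_size = 20
# => PgBouncer could open up to 160 connections to Postgres.
pools = 4 * 2
print(pools * 20)  # 160
```

The helper makes the "safety valve" role of `max_db_connections` visible: it sits between the raw pool product and the Postgres-side ceiling.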
- Three worked scenarios with hard numbers.
  - PS-80 (1 vCPU, 8 GB RAM), multi-tenant 1 logical DB + 3 users (`app`, `analytics`, `export`): `max_connections=50`, `max_client_conn=500`, `default_pool_size=30`, `max_user_connections=30`, `max_db_connections=40` — 10 direct slots reserved for non-PgBouncer callers.
  - M-2650 (32 vCPU, 256 GB RAM), same multi-tenant shape at 32× scale: `max_connections=500` (10×, not 32×), `max_client_conn=10000`, `default_pool_size=200`, `max_user_connections=200`, `max_db_connections=450` — 50 direct slots reserved.
  - M-1280 (16 vCPU, 128 GB RAM), single-tenant 200 logical DBs × 200 roles (1:1 mapping): `max_connections=400`, `max_client_conn=5000`, `default_pool_size=2` (per-pool), `max_user_connections=2`, `max_db_connections=2` — per-pool cap of 2 × 200 pools = 400 upstream connections max, matching `max_connections` exactly. "A single tenant can have 10s or even 100s of connections to PgBouncer, but all these will get multiplexed through at most 2 direct Postgres connections."
  Canonical wiki framing: the single-tenant-per-database scenario inverts the multi-tenant sizing intuition — when pool count is large (200) and pool size is small (2), default_pool_size becomes the load-bearing cap rather than max_db_connections.
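The three scenarios can be sanity-checked with the same pool arithmetic. A sketch — the scenario numbers are from the post; the checking code and its structure are mine:

```python
# Each scenario: pool count, per-pool size, the database-level cap, and the
# Postgres-side max_connections. The admin reserve is whatever direct slots
# PgBouncer cannot consume. max_db=None models M-1280, where the binding cap
# is the per-pool product (200 pools x 2) rather than one max_db value.
scenarios = {
    "PS-80":  dict(pools=3,   pool_size=30,  max_db=40,   max_conn=50),
    "M-2650": dict(pools=3,   pool_size=200, max_db=450,  max_conn=500),
    "M-1280": dict(pools=200, pool_size=2,   max_db=None, max_conn=400),
}

for name, s in scenarios.items():
    worst = s["pools"] * s["pool_size"]              # raw pool product
    ceiling = s["max_db"] if s["max_db"] else worst  # effective upstream cap
    reserve = s["max_conn"] - min(worst, ceiling)    # direct slots left over
    print(f"{name}: worst-case upstream {min(worst, ceiling)}, "
          f"direct slots left {reserve}")
```

Running this reproduces the post's reserves (10 for PS-80, 50 for M-2650) and shows M-1280's pool product meeting `max_connections` exactly, which is the inversion the framing above describes.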
- Layered PgBouncer: app-side funnel + DB-side funnel. "You can run one PgBouncer on the app or client side to funnel many worker or process connections into a smaller egress set, then run another PgBouncer near Postgres as the final funnel into a tightly controlled number of direct database connections." Canonical wiki framing: layered PgBouncer is two-tier pooling at the PgBouncer level, distinct from the general patterns/two-tier-connection-pooling pattern, which conceptualises app-tier + proxy-tier regardless of implementation. Useful "when you need connection pooling both close to compute and close to the database."
- Multi-PgBouncer for workload isolation. "In large-scale deployments, setting up multiple PgBouncers is useful for traffic isolation. When your web app, background workers, and other consumers all share one pool, a spike from one class of traffic can saturate the PgBouncer and delay everything else. … Giving each major consumer its own PgBouncer creates independent funnels with their own limits, pool sizing, and failure domains. That makes it easier to protect latency-sensitive app traffic from bursty worker traffic and tune each workload separately." Canonical wiki framing: PgBouncer-per-workload isolation is the connection-layer sibling to Database Traffic Control's resource-budget-per-workload. "PgBouncer manages connections, Traffic Control manages resource consumption. The two approaches complement each other well."
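The post gives this pattern at topology altitude only, so the following is a hypothetical sketch of what per-workload isolation could look like as two separate PgBouncer instances — every name, port, and pool size here is invented for illustration:

```ini
; pgbouncer-web.ini — latency-sensitive app traffic gets its own funnel
[databases]
appdb = host=db.internal port=5432 dbname=appdb
[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20   ; illustrative; tune per workload

; pgbouncer-workers.ini — bursty background jobs run behind a second,
; smaller funnel so a spike cannot starve the web pool (shown commented
; out because each instance is its own config file):
; [databases]
; appdb = host=db.internal port=5432 dbname=appdb
; [pgbouncer]
; listen_port = 6433
; pool_mode = transaction
; default_pool_size = 5
```

Each instance has independent limits and failure domains, which is the isolation property the post describes.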
- Four operational discipline rules at the conclusion. (1) "Transaction pooling is the mode that matters." (2) "Use PgBouncer as much as possible. If you absolutely need features that are incompatible with transaction pooling, like `LISTEN`, session-level `SET`/`RESET`, or SQL `PREPARE`/`DEALLOCATE`, use a direct connection. In all other cases, the small latency penalty of PgBouncer is well worth the scalability and connection safety." (3) "The key configs to pay attention to are: `max_connections` (Postgres), plus `max_client_conn`, `default_pool_size`, `max_db_connections`, and `max_user_connections` (PgBouncer)." (4) "Ensure things are configured to allow for direct connections, even when all PgBouncer connections are in use."
Systems / concepts / patterns surfaced¶
- New systems: (none — systems/pgbouncer, systems/postgresql, systems/planetscale-for-postgres, systems/planetscale, systems/planetscale-traffic-control, systems/neki all already canonical).
- New concepts (5):
  - concepts/pgbouncer-connection-chain — the cascade of caps from `max_client_conn` through `default_pool_size` and `max_db_connections`/`max_user_connections` to Postgres-side `max_connections`. Each cap bounds a different dimension of the pooling geometry.
  - concepts/pgbouncer-pool-sizing-formula — the single arithmetic constraint num_pools × default_pool_size ≤ max_db_connections ≤ max_connections − reserved_slack. The operator's sizing calculation.
  - concepts/query-wait-timeout — PgBouncer's admission-control default: queue-then-disconnect with a 120-second default timeout. Distinct from the fail-fast semantics of some alternative poolers.
  - concepts/reserved-admin-connection-budget — the operational discipline of always keeping some direct-to-Postgres connections reserved for admin tasks and emergency scenarios; specifically `superuser_reserved_connections` on the Postgres side + `max_connections − max_db_connections` slack on the PgBouncer side.
  - concepts/pgbouncer-deployment-topology — the three PlanetScale PgBouncer placements (local on primary / dedicated primary on separate nodes / dedicated replica) with distinct connection-persistence + HA trade-offs.
- New patterns (3):
  - patterns/layered-pgbouncer-deployment — app-side PgBouncer funnels many workers into small egress; DB-side PgBouncer funnels into tightly controlled direct connections. Two-tier pooling at the PgBouncer level specifically, distinct from the general patterns/two-tier-connection-pooling pattern.
  - patterns/isolated-pgbouncer-per-workload — dedicated PgBouncer per major consumer class (web app / background workers / analytics), so spikes in one class don't starve another. Connection-layer sibling to workload-class resource budgets.
  - patterns/three-pool-size-budget-allocation — sizing `default_pool_size` + `max_user_connections` + `max_db_connections` to reserve slots for non-PgBouncer callers and prevent any one user from monopolising the pool. Pattern visible in all three Dicken worked scenarios.
- Extended:
  - systems/pgbouncer — first canonical wiki disclosure of configuration chain + three PlanetScale deployment topologies + three worked scenarios. Pre-existing Seen-in entries (sources/2025-07-01-planetscale-planetscale-for-postgres, sources/2026-04-21-planetscale-processes-and-threads) named PgBouncer at topology altitude; this post canonicalises the operator-facing configuration surface.
  - systems/postgresql — extends with `superuser_reserved_connections` as admin-reservation primitive and the 5+ MB-per-connection memory datum as a specific number (prior Dicken post used general "memory overhead" framing).
  - systems/planetscale-for-postgres — extends with three PgBouncer deployment topologies (local / dedicated primary / dedicated replica) and the transaction-pooling-only constraint.
  - systems/planetscale-traffic-control — extends with positioning framing: "PgBouncer manages connections, Traffic Control manages resource consumption."
  - systems/neki — extends with Dicken's "upcoming systems like Neki which will solve this problem in a more robust way" framing; positions Neki as next-generation Postgres-scaling substrate successor to PgBouncer.
  - concepts/max-connections-ceiling — extends with PgBouncer's operational discipline of always reserving a few direct slots; canonicalises the pool-side enforcement of the memory budget.
  - concepts/memory-overcommit-risk — extends with the specific 5+ MB-per-connection datum.
  - concepts/connection-pool-exhaustion — extends with `query_wait_timeout` as the queue-then-disconnect admission-control mechanism.
  - concepts/reserved-connection — extends with `superuser_reserved_connections` as the Postgres-side reservation mechanism.
  - patterns/two-tier-connection-pooling — extends with layered-PgBouncer as an instance of two-tier-at-the-pooler-level (distinct from the app-pool + proxy-pool general shape).
  - companies/planetscale — extends with Recent-articles entry + new concepts + new patterns.
Operational numbers¶
- 5+ MB RAM per Postgres connection: dedicated OS process cost. Canonical wiki datum.
- 5432 / 6432: Postgres default port / PgBouncer default port.
- 120 seconds: PgBouncer `query_wait_timeout` default. Queue-then-disconnect threshold.
- Tens to hundreds: operator-facing best-practice direct-connection count range (scales sub-linearly with CPU).
- PS-80 scenario: 1 vCPU, 8 GB RAM. `max_connections=50`, `max_client_conn=500`, `default_pool_size=30`, `max_user_connections=30`, `max_db_connections=40`. 10-slot admin reserve.
- M-2650 scenario: 32 vCPU, 256 GB RAM. `max_connections=500`, `max_client_conn=10000`, `default_pool_size=200`, `max_user_connections=200`, `max_db_connections=450`. 50-slot admin reserve. 10× `max_connections` for 32× CPU.
- M-1280 scenario: 16 vCPU, 128 GB RAM. 200 tenants × 200 roles (1:1). `max_connections=400`, `max_client_conn=5000`, `default_pool_size=2`, `max_user_connections=2`, `max_db_connections=2`. 200 × 2 = 400 upstream connections max, equal to `max_connections`.
- 4 users × 2 databases = 8 pools example, `default_pool_size=20` → 160 max upstream connections. Canonical worked example for the num_pools × default_pool_size arithmetic.
Caveats¶
- Post is operator-facing tutorial, not Vitess/PlanetScale-internals. The PgBouncer-inside-PlanetScale-proprietary-proxy architecture canonicalised in sources/2025-07-01-planetscale-planetscale-for-postgres is not re-described; Dicken's framing is the user-visible surface (credentials, ports, username-suffix invocation) rather than the internal stitching.
- Transaction-pooling unsupported-features list is incomplete. Dicken references the PgBouncer features page but doesn't enumerate all incompatible features; the post names `LISTEN`, session-level `SET`/`RESET`, and SQL `PREPARE`/`DEALLOCATE` in the conclusion but elides many others (advisory locks, temp tables, cursors held across statements, `SET LOCAL` nuances). Practitioners should consult the PgBouncer docs.
- `query_wait_timeout` = 120s presented as-is. No discussion of when operators should tune it (fail-fast use cases where 120s hangs are intolerable; high-latency application tiers where 120s is too short). The default is given as the canonical number, not the only sensible number.
- Three-scenario sizing is guidance, not prescription. The specific numbers are canonical worked examples, but Dicken doesn't claim universality — real workloads need measurement. The arithmetic constraint num_pools × default_pool_size ≤ max_db_connections is the transferable invariant; the specific numbers are illustrative.
- Layered / multi-PgBouncer patterns described at topology altitude, not configuration altitude. The post gives the architectural shape ("funnel many into smaller egress") but doesn't specify per-tier pool sizes for a layered deployment, or per-workload pool sizes for a multi-PgBouncer deployment. Operators must derive configurations from first principles.
- Neki framing forward-looking. "Upcoming systems like Neki which will solve this problem in a more robust way" — as of publication, Neki is waitlist-only (see systems/neki); Dicken is acknowledging a future replacement without displacing the PgBouncer canonical recommendation for 2026.
- No benchmarked numbers. Unlike the sister 1M-connections post (sources/2026-04-21-planetscale-one-million-connections), this post has no empirical benchmark — the three scenarios are sizing calculations, not measured ceilings. Dicken's authority here rests on the arithmetic rigor of the configuration-chain framing, not on a benchmarked anchor.
- PgBouncer HA not deeply explored. The post says dedicated-primary PgBouncer "connects to the local PgBouncer first" and preserves connections "through resizes, upgrades, and most failovers" but doesn't detail the HA mechanism (how connection state survives PgBouncer node failure; what "most failovers" excludes). Topology-altitude disclosure only.
- Ben Dicken's twelfth wiki ingest. Canonical Postgres/database-internals pedagogy voice (prior ingests: b-trees, slotted counter, vectors, I/O devices, EBS failure rates, Go interpreters, Postgres-for-PlanetScale, caching, Postgres 17 vs 18 benchmarks, processes-and-threads, profiling-memory-in-MySQL, sharding-workflows, AI-index-suggestions — among others). Default-include under the PlanetScale Tier-3 skip rules.
Source¶
- Original: https://planetscale.com/blog/scaling-postgres-connections-with-pgbouncer
- Raw markdown: `raw/planetscale/2026-04-21-scaling-postgres-connections-with-pgbouncer-65e5a7e2.md`
Related¶
- systems/pgbouncer — the subject. First canonical wiki disclosure of its configuration surface.
- systems/postgresql — the underlying substrate whose process-per-connection cost motivates PgBouncer.
- systems/planetscale-for-postgres — the PlanetScale product whose three PgBouncer deployment topologies are named here.
- systems/planetscale-traffic-control — complement to connection pooling on the resource-consumption axis.
- systems/neki — future-generation substrate referenced as Postgres-scaling successor.
- sources/2026-04-21-planetscale-processes-and-threads — Dicken's sister 2025-09-24 post on OS-substrate altitude; this post is the operator-surface altitude for the same underlying cost.
- sources/2025-07-01-planetscale-planetscale-for-postgres — first canonical wiki instance of PgBouncer inside PlanetScale's proprietary proxy layer; this post is the operator-facing configuration walkthrough for what runs inside.
- sources/2026-04-21-planetscale-one-million-connections — sibling PlanetScale connection-scaling post at MySQL/Vitess altitude; benchmarked 1M ceiling framing.
- sources/2026-04-21-planetscale-connection-pooling-in-vitess — Gangal's 2023 Vitess-VTTablet-pool internals canonicalisation; this post is the Postgres-side sibling.
- concepts/pgbouncer-connection-chain / concepts/pgbouncer-pool-sizing-formula / concepts/query-wait-timeout / concepts/reserved-admin-connection-budget / concepts/pgbouncer-deployment-topology — new canonical concepts introduced here.
- patterns/layered-pgbouncer-deployment / patterns/isolated-pgbouncer-per-workload / patterns/three-pool-size-budget-allocation — new canonical patterns introduced here.
- patterns/two-tier-connection-pooling — general pattern of which layered-PgBouncer is a same-implementation instance.
- patterns/workload-class-resource-budget — Traffic Control analog pattern; the resource-consumption axis complementary to this post's connection-layer axis.
- companies/planetscale.