# Three-pool-size budget allocation
## What it is
Three-pool-size budget allocation is the PgBouncer sizing pattern of configuring three independent size dials — default_pool_size, max_user_connections, and max_db_connections — to simultaneously:
- Allow adequate per-(user, database) pool capacity for expected concurrency (default_pool_size).
- Prevent any single user from monopolising the database's pool budget (max_user_connections).
- Cap the database's aggregate pool footprint strictly below Postgres's max_connections, reserving slack for admin / emergency direct connections (max_db_connections).
## Why three dials, not one
A single default_pool_size cap can't enforce all three goals:
- Too low: any user hits the cap even in normal operation.
- Too high: one aggressive user monopolises the pool, starving others.
Adding max_user_connections decouples per-pool from per-user. Adding max_db_connections caps the aggregate below Postgres's limit. The three dials are non-redundant — each constrains a different dimension of the pooling geometry.
(Source: sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer.)
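The non-redundancy can be made concrete as a small validity check for the multi-tenant case. This is a sketch; the function and argument names are hypothetical, not a PgBouncer API.

```python
# Hypothetical sketch: check that a three-dial allocation respects the
# multi-tenant pooling geometry. Names and logic are illustrative only.

def check_dials(default_pool_size, max_user_connections,
                max_db_connections, max_connections):
    """Return the list of violated constraints (empty = plausible sizing)."""
    problems = []
    # Dial 1: a single pool should fit inside its user's budget.
    if default_pool_size > max_user_connections:
        problems.append("a single pool could exceed its user's budget")
    # Dial 2: one user must not be able to consume the whole database budget.
    if max_user_connections >= max_db_connections:
        problems.append("one user can monopolise the database budget")
    # Dial 3: the database's aggregate must sit strictly below Postgres's
    # ceiling, leaving admin slack.
    if max_db_connections >= max_connections:
        problems.append("no admin slack below max_connections")
    return problems

print(check_dials(30, 30, 40, 50))  # the PS-80 numbers -> []
```

Each check corresponds to one dial; no single setting could enforce all three at once.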
## Canonical sizing recipe
From Dicken's three worked scenarios:
### Multi-tenant (1 DB, multiple users)
Dominant user (e.g. app for web traffic): sized for peak concurrency. Other users (analytics, export): bounded so they can't starve the dominant user.
PS-80 (1 vCPU, 8 GB, 1 logical DB, 3 users):
```
default_pool_size    = 30   # per-pool cap; 3 users × 30 = 90 theoretical max
max_user_connections = 30   # no user can exceed 30 slots
max_db_connections   = 40   # database cap; allows dominant user + slack for others
max_connections      = 50   # Postgres ceiling; 10-slot admin reserve
```
### Multi-tenant at scale (same shape, 32× CPU)
M-2650 (32 vCPU, 256 GB, same shape):
```
default_pool_size    = 200  # 10× not 32× the PS-80 value
max_user_connections = 200
max_db_connections   = 450  # 50-slot admin reserve
max_connections      = 500
```
### Single-tenant-per-database (inverted)
M-1280 (16 vCPU, 128 GB, 200 DBs × 200 users, 1:1 mapping):
```
default_pool_size    = 2    # per-pool cap; 200 pools × 2 = 400
max_user_connections = 2    # per-user; 1:1 mapping means same value
max_db_connections   = 2    # per-db; 1:1 mapping means same value
max_connections      = 400  # 200 × 2 = exactly 400
```
The inversion: with 200 pools, default_pool_size becomes the load-bearing dial (rather than max_user_connections, which is load-bearing in the multi-tenant case).
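A few lines of arithmetic confirm how the three scenarios hang together (a sketch; the dictionary keys are shorthand for the settings above, not PgBouncer names):

```python
# Sketch: verify the aggregate arithmetic of the three worked scenarios.
# Fields: number of pools, default_pool_size, number of databases,
# max_db_connections (per db), and Postgres's max_connections.
scenarios = {
    "PS-80":  dict(n_pools=3,   pool=30,  n_dbs=1,   db_cap=40,  pg=50),
    "M-2650": dict(n_pools=3,   pool=200, n_dbs=1,   db_cap=450, pg=500),
    "M-1280": dict(n_pools=200, pool=2,   n_dbs=200, db_cap=2,   pg=400),
}

for name, s in scenarios.items():
    theoretical = s["n_pools"] * s["pool"]    # if every pool filled up
    aggregate_cap = s["n_dbs"] * s["db_cap"]  # sum of per-db budgets
    reserve = s["pg"] - aggregate_cap         # slack for admin connections
    print(f"{name}: pools max {theoretical}, "
          f"db budget {aggregate_cap}, reserve {reserve}")
```

PS-80 shows the 10-slot reserve, M-2650 the 50-slot reserve, and M-1280 lands on max_connections exactly (reserve 0), matching the inversion described above.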
## The allocation logic
Walking the three dials top-down:
- Start with the Postgres-side max_connections (chosen based on instance memory — see concepts/max-connections-ceiling).
- Subtract reserved_slack for admin / emergency direct connections (see concepts/reserved-admin-connection-budget): that's max_db_connections.
- Decide how to partition max_db_connections across users:
  - If there is a dominant user plus secondary users: set max_user_connections high enough for the dominant user but below max_db_connections (leaving slack for the secondaries).
  - If users are equal: set max_user_connections = max_db_connections / num_users.
- Set default_pool_size to the expected peak concurrency per (user, database) pair; constrain it so num_pools × default_pool_size doesn't wildly exceed max_db_connections (see concepts/pgbouncer-pool-sizing-formula).
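The top-down walk can be sketched as a small allocation function. This is a sketch under assumptions: the equal-user split and the dominant-user heuristic (reserve one slot per secondary user) are illustrative choices, not the source's exact method.

```python
def allocate(max_connections, reserved_slack, num_users,
             peak_per_pool, dominant=False):
    """Walk the three dials top-down: Postgres ceiling -> admin reserve ->
    per-user partition -> per-pool cap. Illustrative, not the source's code."""
    # Steps 1-2: cap the database's aggregate strictly below max_connections.
    max_db_connections = max_connections - reserved_slack
    # Step 3: partition the database budget across users.
    if dominant:
        # Dominant-user case: most of the budget, but below max_db_connections
        # (heuristic: leave at least one slot per secondary user).
        max_user_connections = max_db_connections - max(1, num_users - 1)
    else:
        # Equal users: split the database budget evenly.
        max_user_connections = max_db_connections // num_users
    # Step 4: per-pool cap = expected peak concurrency, never above the
    # per-user budget.
    default_pool_size = min(peak_per_pool, max_user_connections)
    return dict(default_pool_size=default_pool_size,
                max_user_connections=max_user_connections,
                max_db_connections=max_db_connections)

# M-1280 shape: 400-slot ceiling, no extra reserve, 200 equal users, peak 2.
print(allocate(400, 0, 200, 2))
# -> {'default_pool_size': 2, 'max_user_connections': 2, 'max_db_connections': 400}
```

With the M-1280 numbers this reproduces the single-tenant allocation; the dominant-user branch is only a heuristic, since the source sizes that case by workload judgment.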
## When this pattern applies
- Multi-tenant Postgres with differentiated workloads: one dominant user + secondaries.
- Single-tenant-per-DB with many tenants: default_pool_size dominates; the other dials collapse.
- Mixed architectures: combine with patterns/isolated-pgbouncer-per-workload (per-workload PgBouncer, each with its own three-dial allocation) or patterns/layered-pgbouncer-deployment (re-apply the three dials at each layer).
## When it doesn't apply
- Single user, single DB: two of the three dials collapse (all equal to default_pool_size); no partitioning benefit.
- Fail-fast admission control: the budgeting presumes queue-then-disconnect behaviour (see concepts/query-wait-timeout); strict fail-fast pools follow different sizing logic.
## Observability implications
Each dial should have its own alert:
- default_pool_size saturation: pool-level contention (per user / per database).
- max_user_connections near cap: a specific user is consuming their full budget.
- max_db_connections near cap: database-wide pressure; may be legitimate (scale up) or pathological (stuck queries).
- max_connections near cap: the admin reserve is at risk; critical alert.
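The four alerts can be sketched as threshold checks. The inputs are current usage counts (e.g. scraped from PgBouncer's SHOW POOLS output and Postgres's pg_stat_activity); the 0.9 warning ratio and all names here are illustrative assumptions.

```python
# Sketch: one threshold check per dial. The wiring to a real monitoring
# system, and the 0.9 warning ratio, are assumptions for illustration.

def dial_alerts(pool_active, default_pool_size,
                user_active, max_user_connections,
                db_active, max_db_connections,
                pg_active, max_connections, warn=0.9):
    alerts = []
    if pool_active >= warn * default_pool_size:
        alerts.append("pool saturation: per-(user, db) contention")
    if user_active >= warn * max_user_connections:
        alerts.append("user near cap: one user consuming its budget")
    if db_active >= warn * max_db_connections:
        alerts.append("db near cap: database-wide pressure")
    if pg_active >= warn * max_connections:
        alerts.append("CRITICAL: admin reserve at risk")
    return alerts

# PS-80 numbers: one busy user saturates its pool and its user budget,
# while the database and Postgres ceilings still have headroom.
for a in dial_alerts(28, 30, 28, 30, 30, 40, 35, 50):
    print(a)
```

The point of separate checks is the same as the separate dials: each alert localises pressure to a different layer of the budget.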
## Seen in
- sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer — canonical wiki pattern disclosure. Ben Dicken (PlanetScale, 2026-03-13) demonstrates across three scenarios spanning small/large multi-tenant and many-tenant single-tenant shapes.
## Related
- systems/pgbouncer — the pooler.
- systems/postgresql — the substrate whose max_connections the pattern works within.
- concepts/pgbouncer-connection-chain — the cascade of caps this pattern configures.
- concepts/pgbouncer-pool-sizing-formula — the arithmetic invariant this pattern satisfies.
- concepts/reserved-admin-connection-budget — the reserved_slack the pattern preserves.
- concepts/max-connections-ceiling — the upstream anchor.
- patterns/layered-pgbouncer-deployment — can apply this pattern at each tier.
- patterns/isolated-pgbouncer-per-workload — can apply this pattern within each workload's PgBouncer.