
PATTERN

Three-pool-size budget allocation

What it is

Three-pool-size budget allocation is the PgBouncer sizing pattern of configuring three independent size dials — default_pool_size, max_user_connections, and max_db_connections — to simultaneously:

  1. Allow adequate per-(user, database) pool capacity for expected concurrency (default_pool_size).
  2. Prevent any single user from monopolising the database's pool budget (max_user_connections).
  3. Cap the database's aggregate pool footprint strictly below Postgres's max_connections, reserving slack for admin / emergency direct connections (max_db_connections).

Why three dials, not one

A single default_pool_size cap can't enforce all three goals:

  • Too low: every user hits the cap even under normal load.
  • Too high: one aggressive user monopolises the pool, starving the others.

Adding max_user_connections decouples per-pool from per-user. Adding max_db_connections caps the aggregate below Postgres's limit. The three dials are non-redundant — each constrains a different dimension of the pooling geometry.

(Source: sources/2026-04-21-planetscale-scaling-postgres-connections-with-pgbouncer.)

Canonical sizing recipe

From Dicken's three worked scenarios:

Multi-tenant (1 DB, multiple users)

Dominant user (e.g. app for web traffic): sized for peak concurrency. Other users (analytics, export): bounded so they can't starve the dominant user.

PS-80 (1 vCPU, 8 GB, 1 logical DB, 3 users):

default_pool_size = 30           # per-pool cap; 3 users × 30 = 90 theoretical max
max_user_connections = 30        # no user can exceed 30 slots
max_db_connections = 40          # database cap; allows dominant user + slack for others
max_connections = 50             # Postgres ceiling; 10-slot admin reserve
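The arithmetic behind this recipe can be checked directly. Values are from the PS-80 scenario above; the variable names are mine, for illustration only:

```python
# PS-80 scenario values (from the recipe above).
users = 3
default_pool_size = 30
max_user_connections = 30
max_db_connections = 40
max_connections = 50

# If all three pools filled simultaneously, demand would be 90 slots...
theoretical_max = users * default_pool_size            # 90
# ...but the database dial clips the aggregate footprint to 40.
aggregate_cap = min(theoretical_max, max_db_connections)
# The gap between the Postgres ceiling and the database cap is the
# admin / emergency reserve.
admin_reserve = max_connections - max_db_connections   # 10
```

This is why the three dials are non-redundant here: the per-pool caps over-commit on paper (90 > 40), and max_db_connections is what keeps the aggregate honest.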

Multi-tenant at scale (same shape, 32× CPU)

M-2650 (32 vCPU, 256 GB, same shape):

default_pool_size = 200          # 10× not 32× the PS-80 value
max_user_connections = 200
max_db_connections = 450         # 50-slot admin reserve
max_connections = 500

Single-tenant-per-database (inverted)

M-1280 (16 vCPU, 128 GB, 200 DBs × 200 users, 1:1 mapping):

default_pool_size = 2            # per-pool cap; 200 pools × 2 = 400
max_user_connections = 2         # per-user; 1:1 mapping means same value
max_db_connections = 2           # per-db; 1:1 mapping means same value
max_connections = 400            # 200 × 2 exactly = 400

The inversion: with 200 pools, default_pool_size becomes the load-bearing dial (rather than max_user_connections, which is load-bearing in the multi-tenant case).
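The inversion shows up in the arithmetic: the aggregate is now driven by pool count, not by any per-user dial. Values are from the M-1280 scenario above:

```python
# M-1280 scenario values (from the recipe above).
num_pools = 200            # 200 databases × 1 user each (1:1 mapping)
default_pool_size = 2
max_connections = 400

# The aggregate footprint is pool count × per-pool cap...
aggregate = num_pools * default_pool_size   # 400
# ...which meets the Postgres ceiling exactly: no slack left over.
headroom = max_connections - aggregate      # 0
```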

The allocation logic

Walking the three dials top-down:

  1. Start with Postgres-side max_connections (chosen based on instance memory — see concepts/max-connections-ceiling).
  2. Subtract reserved_slack for admin / emergency direct connections (see concepts/reserved-admin-connection-budget): that's max_db_connections.
  3. Decide how to partition max_db_connections across users:
     • If dominant user + secondary users: set max_user_connections high enough for the dominant user but below max_db_connections (leaves slack for secondaries).
     • If users are equal: set max_user_connections = max_db_connections / num_users.
  4. Set default_pool_size to the expected peak concurrency per (user, database) pair; constrain so num_pools × default_pool_size doesn't wildly exceed max_db_connections (see concepts/pgbouncer-pool-sizing-formula).
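The walk above can be sketched as a function. The 75% dominant-user split and all parameter names here are illustrative assumptions, not anything PgBouncer prescribes; a real split comes from measured workload shares:

```python
def allocate(max_connections: int, reserved_slack: int,
             num_users: int, dominant_user: bool,
             peak_per_pool: int) -> dict:
    # Steps 1-2: database cap = Postgres ceiling minus admin reserve.
    max_db_connections = max_connections - reserved_slack
    # Step 3: partition the database cap across users.
    if dominant_user:
        # Give the dominant user most of the budget (75% here, an
        # assumed split), strictly below max_db_connections so the
        # secondaries keep some slack.
        max_user_connections = max_db_connections * 3 // 4
    else:
        # Equal users: split the database cap evenly.
        max_user_connections = max_db_connections // num_users
    # Step 4: per-pool cap from expected peak concurrency, never
    # above the per-user budget.
    default_pool_size = min(peak_per_pool, max_user_connections)
    return {"max_db_connections": max_db_connections,
            "max_user_connections": max_user_connections,
            "default_pool_size": default_pool_size}
```

Plugging in the PS-80 inputs (ceiling 50, reserve 10, 3 users with one dominant, peak concurrency 30) happens to reproduce that recipe exactly: 40 / 30 / 30.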

When this pattern applies

  • Multi-tenant Postgres with differentiated workloads: one dominant user + secondaries.
  • Single-tenant-per-DB with many tenants: default_pool_size dominates; other dials collapse.
  • Mixed architectures: combine with patterns/isolated-pgbouncer-per-workload (per-workload PgBouncer, each with its own three-dial allocation) or patterns/layered-pgbouncer-deployment (re-apply the three dials at each layer).

When it doesn't apply

  • Single user, single DB: two of the three dials collapse (all equal to default_pool_size); no partitioning benefit.
  • Fail-fast admission-control: the budgeting presumes queue-then-disconnect (see concepts/query-wait-timeout); strict fail-fast pools have different sizing logic.

Observability implications

Each dial should have its own alert:

  • default_pool_size saturation: pool-level contention (per user / per database).
  • max_user_connections near cap: a specific user is consuming their full budget.
  • max_db_connections near cap: database-wide pressure; may be legitimate (scale up) or pathological (stuck queries).
  • max_connections near cap: admin reserve is at risk; critical alert.
