PATTERN Cited by 3 sources
# Two-tier connection pooling

## What it is
Two-tier connection pooling is the architectural pattern where connections between application code and a database pass through two distinct pools with different responsibilities:
- Application-tier pool — owned by the application process (e.g. HikariCP on the JVM, the `database/sql` pool in Go, `ActiveRecord::ConnectionPool` in Rails). Eliminates the per-request connection-handshake cost for this process.
- Proxy-tier pool — owned by a central proxy between applications and the database (e.g. Vitess VTTablet, pgbouncer, ProxySQL). Enforces the database's `max_connections` / memory budget once, globally, across all application processes.
The pattern separates two failure modes that a single pool conflates: per-request latency (addressed by the app-tier pool) and global admission control (addressed by the proxy-tier pool).
## Problem
A single-tier setup (just the app-tier pool) is the default. At small scale (one app process, one database) it works: the pool is sized conservatively to stay under `max_connections`, and the handshake cost is eliminated.
Problems appear as soon as `N` > 1 application processes share the database:
- Each process has its own pool of `P` connections → the aggregate is `N × P` upstream connections.
- The database's `max_connections` cap is breached at `N = max_connections / P`. For RDS's 16,000 cap and `P = 20` (typical), this is 800 application processes — reached trivially in any medium-scale deployment.
- Serverless compute makes `N` explode: every function invocation is a fresh process with a fresh pool. `N` is the peak concurrent function count, not the number of static server nodes.
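The breach point in the second bullet is a one-line calculation; a minimal check, using the 16,000 cap and `P = 20` from the text above:

```go
package main

import "fmt"

// breachPoint returns the number of application processes N at which
// the aggregate N × P first reaches the database's max_connections cap.
func breachPoint(maxConnections, perProcessPool int) int {
	return maxConnections / perProcessPool
}

func main() {
	// RDS MySQL's 16,000-connection cap, with a typical 20-connection
	// app-tier pool per process.
	fmt.Println(breachPoint(16000, 20), "processes") // 800 processes
}
```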
Raising `max_connections` isn't a solution because of memory-overcommit risk: crossing the memory envelope exposes the database to OOM crashes.
## Solution
Interpose a proxy pool between applications and database:
```
┌─────────────┐   ┌─────────────┐   ┌─────────────┐
│  App pool   │   │  App pool   │   │  App pool   │  ← app-tier
│    app1     │   │    app2     │   │    app3     │
└──────┬──────┘   └──────┬──────┘   └──────┬──────┘
       │                 │                 │
       └─────────────────┼─────────────────┘
                         ▼
                ┌─────────────────┐
                │   PROXY POOL    │  ← proxy-tier
                │   (VTTablet /   │
                │   pgbouncer /   │
                │  Global Routing │
                │ Infrastructure) │
                └────────┬────────┘
                         │  small capped
                         │  upstream pool
                         ▼
                ┌─────────────────┐
                │    DATABASE     │  ← origin
                │ max_connections │
                └─────────────────┘
```
Each tier serves a different constraint:
- App-tier pool: per-process pool. Size = peak concurrent queries the process will execute. Purpose: eliminate the cold-start connection-handshake cost (~50 ms MySQL SSL handshake per new connection — see sources/2026-04-21-planetscale-connection-pooling-in-vitess).
- Proxy-tier pool: shared pool. Size = `max_connections` on the database. Purpose: enforce the memory-safe ceiling once, globally, and convert many client-facing connections into a capped number of upstream connections.
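The proxy tier's core move — many client-facing connections, a capped upstream pool — can be sketched with a buffered channel as a semaphore. This is a toy model with hypothetical names; a real pooler like pgbouncer or VTTablet also handles wire protocol, auth, and session state:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// upstreamPool caps concurrent upstream connections no matter how many
// client-facing connections queue behind the proxy.
type upstreamPool struct{ slots chan struct{} }

func newUpstreamPool(size int) *upstreamPool {
	return &upstreamPool{slots: make(chan struct{}, size)}
}

func (p *upstreamPool) acquire() { p.slots <- struct{}{} } // blocks when all slots are held
func (p *upstreamPool) release() { <-p.slots }

// runClients simulates `clients` concurrent client connections all needing
// an upstream slot, and returns the peak number of slots in use at once.
func runClients(pool *upstreamPool, clients int) int64 {
	var inFlight, peak int64
	var wg sync.WaitGroup
	for i := 0; i < clients; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			pool.acquire()
			defer pool.release()
			n := atomic.AddInt64(&inFlight, 1)
			for { // record the high-water mark of concurrent upstream use
				p := atomic.LoadInt64(&peak)
				if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
					break
				}
			}
			atomic.AddInt64(&inFlight, -1)
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	// 100 client connections, an upstream cap of 5: the database never
	// sees more than 5 connections' worth of concurrent work.
	fmt.Println("peak upstream in use:", runClients(newUpstreamPool(5), 100))
}
```

The design choice this illustrates: clients queue at the proxy (the blocking `acquire`) instead of piling new connections onto the database, which is how the `max_connections` / memory budget is enforced exactly once.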
## Canonical instances

### Vitess VTTablet (in-cluster pool)
VTTablet is the pre-sharding Vitess component that owns the MySQL connection pool for each shard primary. Applications connect to VTGate → VTTablet; VTTablet holds the MySQL connections. The pool is lockless, using atomic operations and non-blocking data structures (see sources/2026-04-21-planetscale-connection-pooling-in-vitess).
### PlanetScale Global Routing Infrastructure (edge pool)
PlanetScale stacks a third tier on top of VTTablet: a CDN-like edge pool that terminates the client's MySQL session at the edge node nearest the client and backhauls queries to VTTablet over warm, long-held, multiplexed connections (see systems/planetscale-global-network). This is the specific architectural substrate that enables the benchmarked 1M-concurrent-connections ceiling (source: sources/2026-04-21-planetscale-one-million-connections), 62.5× above RDS MySQL's 16k per-instance cap (source: sources/2026-04-21-planetscale-comparing-awss-rds-and-planetscale).
### pgbouncer / ProxySQL (generic pooler tier)
The pattern also applies to vanilla Postgres / MySQL: pgbouncer in front of Postgres, ProxySQL in front of MySQL, accepting many client connections and multiplexing to a small upstream pool. No edge tier; just application pool + proxy pool.
## When to use
- Application is horizontally scaled across many processes / containers / functions.
- Serverless / edge deployment: cold starts make a per-process app-tier pool necessary, but the function count multiplies `N` quickly.
- Multiple different applications share the same database (canonicalised in sources/2026-04-21-planetscale-connection-pooling-in-vitess: "Once an application horizontally scales from one server to 'hundreds or thousands,' and once multiple applications share the same database, application-level pools can't enforce a global connection ceiling").
- Database has a memory-derived `max_connections` that cannot be raised without adding memory.
## When NOT to use
- Single application, small scale: the proxy tier is overhead for no benefit. App-tier pool alone is enough.
- Session-state-heavy workloads: proxy-tier pools have historically handled these poorly (see concepts/tainted-connection + sources/2026-04-21-planetscale-connection-pooling-in-vitess for Vitess's three-era evolution). Modern proxy pools (Vitess v15+ settings pool) handle it, but it adds complexity.
- Transaction-heavy workloads where transactions span many statements: proxy-tier pools in `transaction` mode keep the connection pinned for the whole transaction, which reduces pool efficiency; proxy-tier pools in `statement` mode don't work for multi-statement transactions at all.
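A toy occupancy model makes the pinning cost concrete. All names and numbers here are hypothetical; the point is that in `transaction` mode the upstream connection is held across client think time, while in `statement` mode it is returned between statements (which is also why multi-statement transactions break in that mode):

```go
package main

import "fmt"

// occupancyMs models how long one transaction of `stmts` statements holds an
// upstream connection, given per-statement execution time and client "think
// time" between statements (all in milliseconds).
func occupancyMs(stmts, stmtMs, thinkMs int, pinned bool) int {
	if pinned {
		// transaction mode: the connection stays pinned across think time
		return stmts*stmtMs + (stmts-1)*thinkMs
	}
	// statement mode: the connection is returned between statements,
	// so only actual execution time occupies an upstream slot
	return stmts * stmtMs
}

func main() {
	// 10 statements of 2 ms each, with 20 ms of app think time between them.
	fmt.Println("transaction mode:", occupancyMs(10, 2, 20, true), "ms pinned")  // 200 ms
	fmt.Println("statement mode:  ", occupancyMs(10, 2, 20, false), "ms pinned") // 20 ms
}
```

Under these assumed numbers the pinned transaction occupies an upstream slot 10× longer than the statements themselves need, which is the pool-efficiency loss the bullet above describes.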
## Trade-offs
- Extra proxy hop adds RTT (typically sub-millisecond for same-region VTTablet / pgbouncer; edge-tier adds more depending on client locality).
- Complexity: two systems to operate, monitor, alert on.
- Debugging: a misbehaving query goes through two pools — observability must trace across both tiers.
- Session-state semantics at the proxy tier are tricky (see concepts/tainted-connection).
The benefits at scale dominate: the pattern is what enables benchmarked ceilings 62.5× the single-database ceiling (1M vs 16k) with no OOM risk.
## Seen in
- sources/2026-04-21-planetscale-one-million-connections — canonical benchmarked demonstration: VTTablet pool + Global Routing Infrastructure pool sustain 1M concurrent connections against a PlanetScale database; Liz van Dijk explicitly names the two-tier framing ("Vitess and PlanetScale offer connection pooling on the VTTablet level … In addition to that, PlanetScale's Global Routing Infrastructure provides another horizontally scalable layer of connectivity").
- sources/2026-04-21-planetscale-connection-pooling-in-vitess — canonical mechanism detail on the in-cluster tier; three-era VTTablet pool-design evolution with the ~50 ms MySQL SSL handshake cost as the why of the app-tier pool.
- sources/2026-04-21-planetscale-comparing-awss-rds-and-planetscale — canonical contrast: RDS MySQL's 16k instance cap is the ceiling the two-tier architecture is designed to exceed; Reyes names the PlanetScale architectural answer without specifying.
## Related
- patterns/cdn-like-database-connectivity-layer — the PlanetScale edge tier is the "CDN for databases" instance; this pattern is the general two-tier shape it implements.
- patterns/consolidate-identical-inflight-queries — sister Vitess proxy-tier primitive; consolidation + pooling share the "proxy tier owns upstream state" structural substrate.
- concepts/max-connections-ceiling — the target-side constraint this pattern sidesteps.
- concepts/memory-overcommit-risk — the reason you can't just raise the ceiling.
- concepts/connection-pool-exhaustion — the proxy-tier pool's own capacity signal.
- concepts/serverless-tcp-socket-restriction — adjacent constraint that motivates the edge tier's HTTP-API path.