
PATTERN Cited by 1 source

Thread-per-connection database

What it is

Thread-per-connection is a database server architecture where the server runs as a single OS process (e.g. mysqld) and serves each client connection via a dedicated thread inside that process. The canonical production example is MySQL: one long-running mysqld process handles thousands of queries per second and hundreds of concurrent connections by creating (or reusing from a thread pool) one thread per client.
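The core loop is small enough to sketch. The toy below is a hypothetical Python echo server, not mysqld: a single process runs one accept loop and hands each accepted connection to a dedicated daemon thread, which is the thread-per-connection shape in miniature.

```python
import socket
import threading

def handle(conn, addr):
    # One dedicated thread per client: read a request, write a reply.
    # In mysqld this slot would hold the full query-execution path.
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def serve(host="127.0.0.1", port=0):
    # port=0 lets the OS pick a free port.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()
    return srv

def accept_loop(srv, n_connections):
    # The single-process server loop: accept, spawn a thread, repeat.
    # A real server would reuse threads from a cache rather than spawn.
    for _ in range(n_connections):
        conn, addr = srv.accept()
        threading.Thread(target=handle, args=(conn, addr), daemon=True).start()
```

All threads live in one process, so they share the server's heap for free; the per-connection cost is one stack plus kernel thread state.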

Canonical framing

"MySQL is a great contrast, designed to run as a single process (mysqld). However, it is also capable of handling thousands of queries per-second, hundreds of connections, and utilizing multi-core CPUs. It achieves this via threads. A thread is an additional mechanism for achieving multitasking on a CPU, all within one process. Threads share all the process memory and code (other than their program stacks), but each can be executing at different program locations." (Source: [[sources/2026-04-21-planetscale-processes-and-threads]])

Why this architecture

Thread-per-connection optimises for three properties that process-per-connection doesn't:

  • Lower per-connection memory — threads share the process's heap (buffer pool, table caches, metadata), paying only per-thread stack memory (on the order of a megabyte) instead of a full per-process address space.
  • Cheaper context switches — thread switches cost ~1 μs vs ~5 μs for process switches, because the page-table root and TLB don't need swapping.
  • Shared in-process state — the buffer pool, InnoDB's internal caches, and query-plan caches are directly accessible to every thread without explicit shared-memory machinery.
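The shared-state property is the one a sketch shows best. In the hypothetical snippet below, `buffer_pool` is a plain dict on the process heap (names are illustrative, not InnoDB internals): every connection thread reads it directly, with no shared-memory segments or IPC, and only the first thread to miss pays the simulated disk read.

```python
import threading

buffer_pool = {}              # stands in for the buffer pool: ordinary heap memory
pool_lock = threading.Lock()  # shared structures still need locking discipline

def run_query(page_id):
    # Every connection thread sees the same dict directly.
    with pool_lock:
        if page_id in buffer_pool:
            return buffer_pool[page_id], True          # cache hit
        buffer_pool[page_id] = f"page-{page_id}-data"  # simulated disk read
        return buffer_pool[page_id], False             # cache miss

def simulate(page_ids):
    # One thread per "connection", all touching the shared cache.
    results = []
    threads = [threading.Thread(target=lambda p=p: results.append(run_query(p)))
               for p in page_ids]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In a process-per-connection design the same cache would need an explicit shared-memory region; here it falls out of the memory model.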

Costs

The trade-off is loss of isolation: a bug in one thread's query handler can corrupt another thread's data via the shared address space, and a crash takes the whole mysqld down, not just the one connection. MySQL mitigates this with defensive coding conventions and careful locking discipline; the crash containment that Postgres buys with process-per-connection is simply absent here.

max_connections is still a memory-economics ceiling

Even with threads, each connection requires dedicated memory resources to manage connection state (per-connection buffers, session variables, prepared statements, sort_buffer_size, join_buffer_size). MySQL exposes max_connections as a last-resort safety knob: raising it blindly risks memory overcommit (concepts/memory-overcommit-risk) in the same way as Postgres's knob. Canonical wiki anchor: Liz van Dijk's 2022-11-01 One million connections benchmark ([[sources/2026-04-21-planetscale-one-million-connections]]) demonstrates the RDS MySQL 16,384-connection ceiling vs PlanetScale's two-tier-pooling ~1M sustained.
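A rough worst-case arithmetic makes the ceiling concrete. The figures below are illustrative placeholders, not MySQL's actual defaults (real defaults vary by version, and several buffers are allocated lazily or per-operation, so this is an upper bound, not a forecast):

```python
MIB = 2**20
KIB = 2**10

def worst_case_memory(max_connections,
                      sort_buffer=2 * MIB,     # sort_buffer_size (illustrative)
                      join_buffer=256 * KIB,   # join_buffer_size (illustrative)
                      read_buffer=128 * KIB,   # read_buffer_size (illustrative)
                      thread_stack=1 * MIB):   # per-thread stack (illustrative)
    """Upper bound if every connection allocated every buffer at once."""
    per_connection = sort_buffer + join_buffer + read_buffer + thread_stack
    return max_connections * per_connection
```

At the RDS-style 16,384-connection ceiling, even this toy model implies tens of GB of potential per-connection memory on top of the buffer pool, which is why the knob is a ceiling and not a dial.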

Connection pooling still applies

Dicken canonicalises the universal mitigation: "both MySQL and Postgres suffer from performance issues when the connection counts get too high. Even with threads, each connection requires dedicated memory resources to manage connection state. MySQL, Postgres, and many other databases use a technique known as connection pooling to help." (Source: [[sources/2026-04-21-planetscale-processes-and-threads]])

The canonical pooler sizing: "typically between 5 and 50" direct-to-DB connections fronting thousands of client connections. At MySQL-specific scale, this is the structural role of VTTablet in-cluster + PlanetScale Global Routing Infrastructure at edge (patterns/two-tier-connection-pooling).
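The checkout/return core of a pooler is simple enough to sketch. Below is a hypothetical minimal pool, assuming a `make_conn` factory supplied by the caller: a fixed set of real connections is created once and lent out, so client count and DB-connection count are decoupled.

```python
import contextlib
import queue

class ConnectionPool:
    """Minimal pooler sketch: N real connections shared by many clients."""

    def __init__(self, make_conn, size=10):  # size sits in the canonical 5-50 range
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(make_conn())

    @contextlib.contextmanager
    def connection(self):
        conn = self._pool.get()    # blocks here when all connections are checked out
        try:
            yield conn
        finally:
            self._pool.put(conn)   # returned to the pool, never closed
```

A production pooler (PgBouncer, VTTablet) layers health checks, timeouts, and transaction-level multiplexing on top of this core, but the decoupling mechanism is the same.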

Thread pool refinement

MySQL's literal "new thread per connection" wasn't exactly the production design even in early versions; thread_cache_size caches idle threads for reuse, avoiding the thread-creation cost on connection churn. MySQL Enterprise + MariaDB ship a more sophisticated thread-pool plugin that caps concurrent threads and queues incoming requests, separating client-connection count from internal-thread count — the same architectural move a pooler makes externally.
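The internal move the thread-pool plugin makes is the same one Python's `concurrent.futures` makes explicit: a capped worker count with a queue in front. A sketch of the idea (not the actual plugin API):

```python
from concurrent.futures import ThreadPoolExecutor

def run_queries(queries, pool_size=4):
    # At most pool_size threads ever execute; the remaining queries wait in
    # the executor's internal queue, so client count != internal thread count.
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        return list(pool.map(lambda q: f"result:{q}", queries))
```

Here 100 "connections" can submit work while only four threads exist, which is the separation the plugin provides inside the server and a pooler provides outside it.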

When to choose

Use thread-per-connection when:

  • Per-connection memory cost must be low — high-fan-in workloads (many short-lived client connections, e.g. web request handlers) benefit from the shared address space.
  • Shared in-process caches dominate — large buffer pool, query plan cache, and adaptive hash index all sit in one address space accessible to every thread.
  • Isolation is not a hard requirement — willingness to accept whole-process crash blast radius in exchange for memory + switch-cost savings.
  • The ecosystem's tooling expects it — MySQL's replication, monitoring, and tuning tooling is built around the single-process model.

Seen in
