
CONCEPT

Catalog bloat (multi-tenant table-per-tenant)

Definition

Catalog bloat is a Postgres failure mode in which each backend process caches its own copies of system-catalog entries (pg_class, pg_attribute, pg_index, etc.), and these per-backend caches grow linearly with the number of user tables the backend touches. Combined with Postgres's process-per-connection architecture, the catalog memory cost is:

catalog_RSS_total ≈ backend_count × tables_in_schema × per-table-catalog-bytes

The failure lights up under the table-per-tenant multi-tenant pattern: a SaaS with 10,000 tenants, each getting 20 tables, produces a 200,000-table schema; each backend caches catalog entries for every table it touches, so RSS balloons as backend_count × catalog size regardless of how much user data each backend queries.
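The formula can be sketched directly; the ~5 KB-per-table figure below is the illustrative RelationData size used later in this note, not a measured value, and the worst case assumes every backend has touched every table:

```python
def catalog_rss_total(backend_count, tables_in_schema, per_table_catalog_bytes=5_000):
    """catalog_RSS_total ≈ backend_count × tables_in_schema × per-table-catalog-bytes.

    per_table_catalog_bytes defaults to ~5 KB, the rough per-table
    relcache footprint assumed in this note; the real size varies with
    column count, indexes, and Postgres version.
    """
    return backend_count * tables_in_schema * per_table_catalog_bytes

# 10,000 tenants × 20 tables each = 200,000-table schema, 100 active backends:
tables = 10_000 * 20
rss = catalog_rss_total(backend_count=100, tables_in_schema=tables)
print(f"{rss / 1e9:.0f} GB of catalog RSS")  # 100 GB if every backend touches every table
```

The same function reproduces the smaller worked example below (100 backends × 10,000 tables ≈ 5 GB), which is why pooling and table-count reduction are the two mitigation levers.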

(Source: sources/2026-04-21-planetscale-high-memory-usage-in-postgres-is-good-actually.)

Simeon Griggs's framing

Griggs lists this as RSS growth driver #2 of 5:

"Catalog bloat can spike RSS usage, common in multi-tenant schemas using a table-per-tenant pattern."

The post stops at naming the failure; PlanetScale's positioning is that customers should prefer row-level multi-tenancy (a tenant_id column in shared tables) over table-per-tenant at scale, which is the broader concepts/tenant-isolation discussion.

Mechanism in detail

Each backend maintains a relcache (relation cache) that holds RelationData structures for every table, index, or view the backend has touched. The relcache is per-backend, not shared: Postgres has no shared catalog cache across backends, for correctness reasons (each backend may see a different MVCC snapshot of catalog state). Backend-local state is a property of Postgres's architecture, not a configuration accident.

A single RelationData entry for a table with a few columns is typically a few KB. With catalog bloat:

  • 10,000 tables × ~5 KB per RelationData = ~50 MB catalog per backend.
  • 100 active backends × 50 MB = 5 GB catalog RSS — which is non-reclaimable and contributes directly to memory pressure.
  • Scale to 100,000 tables (table-per-tenant at real scale) and catalog RSS can dominate total RSS.

Multi-tenancy patterns ranked by catalog-bloat risk

Pattern                                    Catalog-bloat risk
Shared table, tenant_id column             Low (one set of tables)
Schema-per-tenant (tenant1.users, etc.)    Medium (tables × tenants)
Table-per-tenant (users_t1, users_t2)      High (same table count as schema-per-tenant, minus the namespacing)
Database-per-tenant                        High in aggregate, but isolated per database (no single-database bloat)
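The ranking follows from how many user tables a single database's catalog must describe under each pattern. A small illustrative model (the helper and pattern names are hypothetical, not from the source), using the 10,000-tenant × 20-table example from above:

```python
def tables_in_catalog(pattern, tenants, tables_per_tenant):
    """User tables one database's catalog must describe under each
    multi-tenancy pattern (illustrative model, not measured data)."""
    if pattern == "shared_table":
        return tables_per_tenant              # one set of tables, tenant_id column
    if pattern in ("schema_per_tenant", "table_per_tenant"):
        return tenants * tables_per_tenant    # catalog grows with tenant count
    if pattern == "database_per_tenant":
        return tables_per_tenant              # small per database; cost moves cluster-wide
    raise ValueError(f"unknown pattern: {pattern}")

for p in ("shared_table", "schema_per_tenant", "table_per_tenant", "database_per_tenant"):
    print(p, tables_in_catalog(p, tenants=10_000, tables_per_tenant=20))
```

The per-backend catalog footprint scales with this table count, which is why the two tenant-multiplied patterns carry the bloat risk.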

Mitigations

  • Connection pooling cuts backend_count, linearly reducing catalog RSS. systems/pgbouncer is the default PlanetScale answer.
  • Refactoring to a shared table with a tenant_id column is the cleanest long-term fix, but it requires a schema migration and row-level-security enforcement.
  • Backend recycling: idle_session_timeout or application-side connection-lifetime limits force backends to exit periodically, releasing their relcache memory. This trades connection-churn cost for RSS recovery.
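A minimal pgbouncer sketch for the first mitigation (values are illustrative, not a recommendation; all keys are standard pgbouncer.ini settings):

```ini
; pgbouncer.ini (illustrative values)
[databases]
app = host=127.0.0.1 port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction   ; many client connections share few server backends
max_client_conn = 2000    ; application-side connections
default_pool_size = 20    ; actual Postgres backends per database/user pair
server_lifetime = 3600    ; recycle server backends hourly, dropping their relcache
```

Note that server_lifetime also implements the backend-recycling mitigation from the last bullet, by expiring pooled server connections on a schedule.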
