
Vertical partitioning

Vertical partitioning splits a monolithic database by moving groups of related tables onto separate database instances. Each moved table stays whole: all of its rows still live on one instance. Contrast this with horizontal sharding, which splits a single table's rows across many instances.

At Figma, vertical partitions held groups like "Figma files" or "Organizations" — each partition a complete Postgres instance with its own tables (Source: sources/2026-04-21-figma-how-figmas-databases-team-lived-to-tell-the-scale).

Why reach for it first

  • Low developer impact. Each table still supports full relational semantics (joins within the partition, foreign keys, globally unique indexes, full transactions) — app code changes are bounded to the partition-routing layer.
  • Cheap and fast. At the table-group granularity, splitting is mostly a data move + a connection-routing config change.
  • Incremental. One partition at a time, one table-group at a time, with straightforward rollback.
  • Stepping stone to horizontal sharding. Vertical partitioning's 1→1 failover machinery (data move, replication, cutover) is reusable for horizontal sharding's 1→N failover; having already operated it many times de-risks the horizontal step (Source: sources/2026-04-21-figma-how-figmas-databases-team-lived-to-tell-the-scale).
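The "bounded to the partition-routing layer" claim can be made concrete. A minimal Python sketch, with a hypothetical table-to-partition mapping (the table and partition names are illustrative, not Figma's): a helper resolves each table's partition, and a guard rejects queries that would join across partition boundaries, since full relational semantics only hold within one partition.

```python
# Hypothetical table-to-partition mapping; all names are invented
# for illustration, not taken from Figma's actual schema.
TABLE_TO_PARTITION = {
    "files": "files_db",
    "file_versions": "files_db",
    "orgs": "orgs_db",
    "org_members": "orgs_db",
}

def partition_for(table: str) -> str:
    """Resolve which Postgres instance holds a table (whole)."""
    return TABLE_TO_PARTITION[table]

def assert_single_partition(tables: list[str]) -> str:
    """Joins, foreign keys, and transactions only work within one
    partition: reject any query plan that spans two instances."""
    partitions = {partition_for(t) for t in tables}
    if len(partitions) > 1:
        raise ValueError(f"query spans partitions: {sorted(partitions)}")
    return partitions.pop()
```

A join over `files` and `file_versions` passes; a join over `files` and `orgs` raises, which is exactly the class of query a vertical split forces you to rewrite.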

The structural ceiling

The smallest unit of vertical partitioning is one table. Once a single table outgrows the host DB instance, vertical partitioning stops helping. Figma hit three specific Postgres ceilings that closed the door:

  1. Table size at several TB / billions of rows: Postgres vacuums (essential to avoid transaction-ID exhaustion) start producing user-visible reliability impact.
  2. Per-table write rate approaching the instance's RDS IOPS ceiling.
  3. CPU utilization on the hottest partitions (the original driver for vertical partitioning: splitting solved it for a while, but once every hot table sits on its own instance there is nothing left to split and the problem re-emerges).
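To see why ceiling 1 bites: Postgres transaction IDs are 32-bit, so the anti-wraparound budget is roughly 2^31 IDs, and vacuum must freeze old rows before any database's oldest unfrozen XID approaches that limit. A hedged monitoring sketch (the function names and alert threshold are assumptions, not from the source):

```python
# Sketch of an XID-wraparound headroom check. The ~2**31 ceiling is a
# hard Postgres constant; the 0.5 alert threshold is an assumption.
XID_WRAPAROUND = 2 ** 31

def xid_budget_used(datfrozenxid_age: int) -> float:
    """Fraction of the wraparound budget already consumed.

    The input comes from running, against each database:
        SELECT age(datfrozenxid) FROM pg_database;
    """
    return datfrozenxid_age / XID_WRAPAROUND

def should_alert(datfrozenxid_age: int, threshold: float = 0.5) -> bool:
    """Page the databases team well before forced anti-wraparound
    vacuums start causing user-visible reliability impact."""
    return xid_budget_used(datfrozenxid_age) >= threshold
```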

At that point, concepts/horizontal-sharding is the only lever left.

Typical deployment shape

  • A dozen or so vertically partitioned databases per product domain (Figma's end-of-2022 state: ~12 vertical partitions — Source: sources/2026-04-21-figma-how-figmas-databases-team-lived-to-tell-the-scale).
  • Caching layer + read replicas fronting each partition.
  • A simple hard-coded table-to-partition config in the application; dynamic topology isn't needed because the mapping rarely changes (every partition switch is a deliberate human decision, not an automated rebalance).
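The hard-coded mapping in the last bullet can be this simple. A sketch with invented hostnames and partition names, assuming one replica per partition (a real deployment would carry more topology detail):

```python
# Illustrative hard-coded topology config; hosts and partition names
# are invented. Edited by hand only when a partition is deliberately
# split -- no dynamic service discovery needed.
PARTITIONS = {
    "files_db": {
        "primary": "files-primary.db.internal:5432",
        "replicas": ["files-replica-1.db.internal:5432"],
    },
    "orgs_db": {
        "primary": "orgs-primary.db.internal:5432",
        "replicas": ["orgs-replica-1.db.internal:5432"],
    },
}

TABLES = {
    "files": "files_db",
    "file_versions": "files_db",
    "orgs": "orgs_db",
}

def endpoint(table: str, read_only: bool = False) -> str:
    """Route writes to the partition primary; reads can go to a
    replica (fronted by the caching layer in practice)."""
    partition = PARTITIONS[TABLES[table]]
    return partition["replicas"][0] if read_only else partition["primary"]
```

Moving a table group to a new partition is then a data move plus a one-line change to `TABLES`, which is what makes the split cheap and incremental.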
