
Sharding as IOPS scaling

Pattern

When a database's IOPS or throughput demand approaches the per-volume cap of the cheap storage tier (e.g. AWS gp3 at 3,000 IOPS / 125 MiB/s default, 16,000 / 1,000 MiB/s max), shard horizontally so each shard's per-volume demand stays below the cheap-tier ceiling. This avoids the regime-shift cost cliff of upgrading to provisioned-IOPS volumes (io1, io2) — which carry a 2-4× cost multiplier at scale.

The pattern treats sharding as a cost lever, not just a scale lever: the reason to shard isn't necessarily that a single machine can't hold the data or serve the queries — it's that the cheap-volume-tier ceiling is lower than the workload's aggregate demand, and N shards at 1/N the per-shard demand each fit under the ceiling.

Dicken's framing

"Database sharding is an excellent technique to run huge databases efficiently, without needing to pay an EBS premium. In a sharded database, we spread out our data across many primaries. This also means that our IO and throughput requirements are distributed across these instances, allowing each to use a more affordable gp2 or gp3 EBS volume."

"In this situation, we do not need to pay extra for additional IOPS or dedicated io1 infrastructure. The IOPS and throughput demand is spread evenly across the 8 shards, allowing us to stick with a more affordable class of EBS volumes."

(Source: sources/2026-04-21-planetscale-increase-iops-and-throughput-with-sharding)

The cost arithmetic

| Configuration | Workload | Aggregate IOPS | Per-primary IOPS | Volume tier | Monthly cost |
| --- | --- | --- | --- | --- | --- |
| Unsharded | baseline | ~3,000 | 3,000 | gp3 default | ~$1,749 |
| Unsharded | 8× | ~24,000 | 24,000 | io1 provisioned IOPS | $20,520-$24,197 |
| Sharded (8 shards + 1 unsharded reference) | 8× | ~24,000 | ~3,000 per shard | gp3 default per shard | $13,992 |

Source: Dicken's worked comparison. The 8-shard PlanetScale configuration is roughly 32-42% cheaper than the unsharded RDS-with-io1 configuration (depending on the io1 price point), because each shard's IOPS demand stays within the gp3 default tier and no shard needs premium provisioned IOPS.
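
Checking the table's arithmetic with a minimal sketch (the dollar figures are the table's own; the loop and labels are illustrative):

```python
# Reproduce the table's cost ratios. Monthly dollar figures are taken
# from Dicken's worked comparison above.
baseline = 1_749
io1_unsharded = (20_520, 24_197)   # low and high io1 price points
sharded = 13_992

for cost in io1_unsharded:
    print(f"{cost / baseline:.1f}x baseline; sharding saves {1 - sharded / cost:.0%}")
# -> 11.7x baseline; sharding saves 32%
# -> 13.8x baseline; sharding saves 42%
```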

Mechanics

  1. Measure per-primary IOPS demand at peak production load. Include write-workload IOPS (fsync + page flushes + index updates) and read-workload IOPS (random-access page reads on misses).
  2. Identify the cheap-tier IOPS ceiling for your cloud provider + volume class. On AWS: gp3 default 3,000, max 16,000.
  3. Compute the minimum shard count such that peak per-shard demand stays below a comfortable fraction (e.g. 60-70%) of the ceiling; see the sketch after this list. If aggregate demand is 24k and the per-shard comfortable budget is 2,000 IOPS, you need ~12 shards.
  4. Pick a shard key that distributes the IOPS demand evenly across shards. Skewed shard keys (hot shards) defeat the cost arithmetic: a hot shard at 16k IOPS still needs io1 regardless of the aggregate.
  5. Route traffic via a database-proxy tier (Vitess / VTGate) so the application sees a single logical database.
  6. Monitor per-shard IOPS in production. Cost benefit evaporates if one shard drifts hot; rebalance (Reshard) or change the key before the hot shard crosses the premium-tier threshold.
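
A minimal sketch of step 3's arithmetic, assuming an even shard key (the function name and constants are illustrative, not from the source):

```python
import math

def min_shard_count(aggregate_iops: int, per_shard_budget_iops: int) -> int:
    """Smallest N such that aggregate_iops / N <= per_shard_budget_iops,
    assuming the shard key spreads demand evenly."""
    return math.ceil(aggregate_iops / per_shard_budget_iops)

# Step 3's worked numbers: 24k aggregate against a 2,000-IOPS budget
# (roughly 65% of the 3,000-IOPS gp3 default ceiling).
print(min_shard_count(24_000, 2_000))  # -> 12
```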

When it works

  • Even shard-key distribution — the aggregate demand really does divide by N.
  • Workload fits horizontal sharding — not every query pattern shards well (scatter-gather queries don't benefit; cross-shard transactions have real cost).
  • Growing on all three sharding triggers (data size, write throughput, IOPS); a decision sketch follows this list. If you're only growing on data size, vertical partitioning may be cheaper; if you're only growing on IOPS, a storage-tier upgrade or local-NVMe substrate may be simpler.
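
A hedged restatement of that three-trigger test as a decision rule (the function and its return strings are mine, not the source's):

```python
def suggested_lever(growing_data: bool, growing_writes: bool, growing_iops: bool) -> str:
    """Illustrative mapping of growth dimensions to the cheapest lever,
    paraphrasing the bullets above."""
    if growing_data and growing_writes and growing_iops:
        return "horizontal sharding (this pattern)"
    if growing_data and not (growing_writes or growing_iops):
        return "vertical partitioning (likely cheaper)"
    if growing_iops and not (growing_data or growing_writes):
        return "storage-tier upgrade or local-NVMe substrate"
    return "mixed growth: weigh sharding against the alternatives below"

print(suggested_lever(True, True, True))    # -> horizontal sharding (this pattern)
print(suggested_lever(False, False, True))  # -> storage-tier upgrade or local-NVMe substrate
```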

When it doesn't work

  • Hot shards. A time-series workload sharded by timestamp has all recent traffic on the most-recent shard, which stays hot at 16k IOPS while the older shards are idle. Cost benefit collapses. See concepts/shard-key-volatility and concepts/hot-key.
  • Small overflow. If your aggregate demand is 4,000 IOPS (just above the gp3 default), paying for additional IOPS on a single volume is cheaper than operating 2+ shards; a break-even sketch follows this list. Sharding has real operational overhead (VTGate proxy latency, cross-shard query logic, failure-mode expansion).
  • Read-heavy with high cache hit ratio. Reads from memory don't burn EBS IOPS. If your buffer pool catches 99% of reads, the disk IOPS ceiling isn't the binding constraint — adding read replicas is cheaper than sharding.
  • Metal / direct-attached NVMe substrate. If you've moved to PlanetScale Metal or similar, the IOPS cap doesn't exist — the motivation for sharding-as-IOPS-scaling disappears. Sharding may still be right for data-size or write-throughput reasons, but not for IOPS cost.
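
A break-even sketch for the small-overflow case. The gp3 extra-IOPS price and the per-shard monthly cost are assumptions (us-east-1-style list pricing and the table's baseline row); verify both against current pricing before relying on this:

```python
import math

GP3_INCLUDED_IOPS = 3_000
GP3_MAX_IOPS = 16_000
GP3_EXTRA_IOPS_PRICE = 0.005   # $/IOPS-month above the included 3,000 (assumed)
PER_SHARD_COST = 1_749         # $/month per primary, from the baseline row (assumed)

def cheaper_option(aggregate_iops: int) -> str:
    shards = math.ceil(aggregate_iops / GP3_INCLUDED_IOPS)
    shard_cost = shards * PER_SHARD_COST
    if aggregate_iops > GP3_MAX_IOPS:
        return f"shard x{shards}: a single gp3 volume can't serve this demand"
    extra_iops = max(0, aggregate_iops - GP3_INCLUDED_IOPS)
    single_cost = PER_SHARD_COST + extra_iops * GP3_EXTRA_IOPS_PRICE
    if single_cost <= shard_cost:
        return f"pay for extra IOPS: ${single_cost:,.0f}/mo vs ${shard_cost:,.0f}/mo sharded"
    return f"shard x{shards}: ${shard_cost:,.0f}/mo vs ${single_cost:,.0f}/mo unsharded"

print(cheaper_option(4_000))   # 1,000 extra IOPS ~ $5/mo: don't shard
```

This ignores the operational overhead named above (proxy tier, cross-shard queries), which only strengthens the don't-shard answer for small overflows.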

Trade-offs vs alternatives

| Alternative | Trade-off |
| --- | --- |
| Pay for gp3 additional IOPS | Linear $ scaling up to the 16k-IOPS ceiling, then a cliff. Cheap-ish up to ~10k IOPS. |
| Upgrade to io1 / io2 provisioned IOPS | 2-4× cost multiplier on IOPS; keeps the operational simplicity of a single primary. |
| 4-volume gp3 striping (RAID-0) | 4× the per-volume cap (12k baseline, 64k max) without sharding the app. Adds volume-group complexity and a wider blast radius (any one volume failure kills the stripe). |
| Shard horizontally (this pattern) | Stays in the cheap tier indefinitely; the price is operational sharding complexity. |
| Move to direct-attached NVMe (patterns/direct-attached-nvme-with-replication) | Substrate-level shift: no IOPS cap, so no IOPS-driven sharding pressure. Different durability story (replication instead of EBS's remote durability). |


Seen in

  • sources/2026-04-21-planetscale-increase-iops-and-throughput-with-sharding — Ben Dicken's canonical worked example (8× workload on RDS with io1 = 11-13× cost; 8-shard PlanetScale = 8× cost) establishes the pattern. The architectural claim is explicit: "sharding is an excellent technique to run huge databases efficiently, without needing to pay an EBS premium" because "our IO and throughput requirements are distributed across these instances, allowing each to use a more affordable gp2 or gp3 EBS volume." Canonical companion to patterns/direct-attached-nvme-with-replication — same problem, different architectural answer.