Serialized per-shard updates¶
Serialized per-shard updates is Temporal's consistency discipline: all state mutations for workflows that hash to the same shard are applied to the persistence layer sequentially, not concurrently. "Temporal serializes all updates belonging to the same shard, so all updates are sequential" (Source: sources/2026-04-21-planetscale-temporal-workflows-at-scale-sharding-in-production).
Why Temporal does this¶
Temporal's durability contract is mechanical replay of an append-only event history. For replay to be deterministic, the persistence-layer writes for a given workflow must linearize. The cheapest way to guarantee that across many workflows sharing storage is to serialize every write within a shard: the History subsystem holds a per-shard ownership lock, and all updates to any workflow in that shard queue through it. This sidesteps any need for cross-row transactions, MVCC conflict detection, or compare-and-swap in the persistence layer; the shard's owning History process is the serialization point.
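The mechanism can be sketched in a few lines. This is a minimal illustration, not Temporal's implementation: the shard count and the hash function (md5 as a stand-in for Temporal's own hashing) are assumptions, as are the names `shard_for` and `ShardSerializer`.

```python
import hashlib
import threading

NUM_HISTORY_SHARDS = 512  # fixed at cluster creation in Temporal; value here is illustrative

def shard_for(workflow_id: str, num_shards: int = NUM_HISTORY_SHARDS) -> int:
    # A stable hash of the workflow ID picks the shard. Temporal hashes its
    # own key; md5 here is just a deterministic stand-in.
    digest = hashlib.md5(workflow_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_shards

class ShardSerializer:
    """One lock per shard: updates to workflows on the same shard queue
    behind each other, while different shards proceed independently."""

    def __init__(self, num_shards: int):
        self._locks = [threading.Lock() for _ in range(num_shards)]

    def apply(self, workflow_id: str, write):
        shard = shard_for(workflow_id, len(self._locks))
        with self._locks[shard]:  # the serialization point for the whole shard
            return write()
```

Note that the lock is keyed by shard, not by workflow: two unrelated workflows that hash to the same shard still take turns.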
Direct consequence: single-shard throughput is latency-bound¶
Because updates are sequential within a shard, the latency of one persistence-layer operation gates the maximum operation rate of that shard. "The latency of a database operation limits the maximum theoretical throughput of a single shard" (Source: sources/2026-04-21-planetscale-temporal-workflows-at-scale-sharding-in-production). If a single write to the backing store takes 5 ms, the shard caps at ~200 operations/sec regardless of how many cores or IOPS the backing MySQL instance has. See concepts/single-shard-throughput-ceiling for the operational consequence.
This differs from the usual database-shard scaling shape, where shard capacity tracks disk or CPU bandwidth. Under Temporal, adding hardware to one shard does not raise its ceiling; only adding shards does.
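The back-of-envelope arithmetic follows directly from the serialization rule: at most one persistence write is in flight per shard, so the ceiling is the reciprocal of write latency, and cluster capacity scales only with shard count. The function names and the 512-shard figure are illustrative.

```python
def shard_ceiling(write_latency_ms: float) -> float:
    # Sequential updates mean at most one write in flight per shard,
    # so throughput is bounded by 1 / latency.
    return 1000.0 / write_latency_ms

def cluster_ceiling(write_latency_ms: float, num_shards: int) -> float:
    # Capacity scales with shard count, not with per-shard hardware.
    return num_shards * shard_ceiling(write_latency_ms)

shard_ceiling(5.0)         # 200.0 ops/sec, the 5 ms example above
cluster_ceiling(5.0, 512)  # 102400.0 ops/sec across 512 shards
```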
Why it composes with horizontally-sharded storage¶
Vitess shards by the same shard_id / range_hash column Temporal uses, via an xxhash Primary Vindex. This keeps each Temporal shard's rows on one MySQL primary, which preserves the serialization invariant for free: a single MySQL primary naturally serializes writes per connection, and Temporal's shard-owning History process holds the connection. Crucially, the substrate never needs to coordinate across shards; Temporal has already pre-partitioned the work along a key that the storage layer also partitions on.
Contrast with row-level or table-level concurrency¶
Most SQL workloads allow concurrent writes within a table as long as they touch different rows: MVCC, row locks, and indexed conflict detection provide the per-row serialization, so table-level write throughput can scale past one row's latency bound. Temporal explicitly forgoes this: the serialization boundary is the shard, not the row, because Temporal's correctness contract needs a deterministic commit order across all workflows on the shard, not just per workflow.
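The difference in granularity is easiest to see as two lock-acquisition policies side by side. A schematic sketch with invented class names; real databases implement row locking far more elaborately than a dict of locks.

```python
import threading

class RowLocks:
    """Typical SQL granularity: writers to different rows get different
    locks, so they can proceed concurrently."""

    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()  # protects the lock table itself

    def lock_for(self, row_key: str) -> threading.Lock:
        with self._guard:
            return self._locks.setdefault(row_key, threading.Lock())

class ShardLock:
    """Temporal's granularity: one lock covers every workflow row on the
    shard, which fixes a single deterministic commit order for all of them."""

    def __init__(self):
        self._lock = threading.Lock()

    def lock_for(self, row_key: str) -> threading.Lock:
        return self._lock  # same lock regardless of row
```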
Seen in¶
- sources/2026-04-21-planetscale-temporal-workflows-at-scale-sharding-in-production — Savannah Longoria (PlanetScale, 2022-12-14) canonicalizes the serialize-per-shard property as the correctness constraint driving Temporal's per-shard throughput ceiling. Motivates the immutable-numHistoryShards sizing discipline and the compose-cleanly argument for horizontally-sharded backing stores.