PATTERN
In-memory partition actor (actor-per-partition with mailbox)¶
When to use¶
You have a write-heavy workload where many requests target the same partition concurrently, and you want:
- Intra-partition serialization (one request per partition at a time, without DB-level locks).
- Zero in-instance contention on partition state.
- An opportunity to batch multiple same-partition requests into a single DB round-trip.
The pattern¶
request(partition=P, op=write) ──► actor(P).mailbox.enqueue(op)
                                          │
                                          ▼
              ┌─────────────────────────────────────┐
              │ actor(P)   (single thread)          │
              │                                     │
              │ loop:                               │
              │   ops = drain(mailbox, batch_size)  │
              │   apply(ops, partition_state)       │
              │   commit(ops)   ◄── one DB write    │
              └─────────────────────────────────────┘
- One actor per partition, living in-process on the instance that owns the partition (co-located with patterns/sticky-session-scatter-gather).
- Mailbox is a concurrent queue: API request handlers enqueue messages; a single actor thread drains them.
- Batching: on each loop iteration the actor pulls up to batch_size messages from the mailbox and commits them in one DB operation, so N concurrent same-partition requests collapse into roughly N / batch_size DB writes.
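A minimal single-process sketch of the loop above, in Python. The names (`PartitionActor`, `enqueue`, `step`) and the `commit` callback are illustrative, not from Walmart's implementation; the DB write is stubbed out:

```python
import queue

class PartitionActor:
    """One actor per partition: a mailbox plus single-threaded drain/apply/commit."""

    def __init__(self, partition, commit, batch_size=64):
        self.partition = partition
        self.commit = commit            # stand-in for the one-per-batch DB write
        self.batch_size = batch_size
        self.mailbox = queue.Queue()    # concurrent queue: any thread may enqueue
        self.state = {}                 # in-memory partition state

    def enqueue(self, op):
        """Called by API request handlers; returns immediately."""
        self.mailbox.put(op)

    def step(self):
        """One loop iteration: block for one op, greedily drain up to
        batch_size more, apply all to in-memory state, commit once."""
        ops = [self.mailbox.get()]
        while len(ops) < self.batch_size:
            try:
                ops.append(self.mailbox.get_nowait())
            except queue.Empty:
                break
        for key, delta in ops:                      # apply(ops, partition_state)
            self.state[key] = self.state.get(key, 0) + delta
        self.commit(self.partition, ops)            # one DB write per batch

# Demo: five enqueued requests, one commit.
commits = []
actor = PartitionActor("sku-123", commit=lambda p, ops: commits.append(len(ops)))
for _ in range(5):
    actor.enqueue(("reserved", 1))
actor.step()
assert actor.state == {"reserved": 5}
assert commits == [5]       # five requests, a single DB round-trip
```

In production a dedicated thread per actor would run `while True: actor.step()`; here `step()` is driven manually so the demo stays deterministic.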
Canonical example¶
Walmart's inventory-reservations API (summarized in the High Scalability Dec-2022 roundup):
"In-Memory concurrency using actor pattern with mailbox to restrict the processing of a single partition to a single thread. This also helps with batch processing of the same partition requests."
See systems/walmart-inventory-reservations.
Why it works¶
- No locks needed — the actor is the serialization primitive. A single thread owning the partition's state means no concurrent mutators, no mutex acquisition, no lock-contention tail latency.
- Batch commits amortize network + disk round-trip cost across many requests — the limiting factor becomes throughput-per-batch, not throughput-per-request.
- Composes with patterns/sticky-session-scatter-gather: the gateway routes by partition-key → one instance per partition → one actor per partition inside that instance.
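A sketch of how the two layers compose, assuming stable hash routing at the gateway; the instance list, `owner`, and `ActorRegistry` are hypothetical names, not part of the cited system:

```python
import hashlib

INSTANCES = ["inst-a", "inst-b", "inst-c"]   # assumed fleet

def owner(partition_key: str) -> str:
    """Gateway-side routing: a stable hash of the partition key picks the
    owning instance, so every write for a partition lands on one process."""
    digest = int(hashlib.sha256(partition_key.encode()).hexdigest(), 16)
    return INSTANCES[digest % len(INSTANCES)]

class ActorRegistry:
    """Inside each instance: lazily create one actor per owned partition
    and route incoming ops to its mailbox."""
    def __init__(self, make_actor):
        self.make_actor = make_actor
        self.actors = {}

    def route(self, partition_key, op):
        if partition_key not in self.actors:
            self.actors[partition_key] = self.make_actor(partition_key)
        self.actors[partition_key].enqueue(op)

# Demo with a stub actor that just records its mailbox.
class StubActor:
    def __init__(self, key):
        self.key, self.ops = key, []
    def enqueue(self, op):
        self.ops.append(op)

reg = ActorRegistry(StubActor)
reg.route("sku-123", "reserve")
reg.route("sku-123", "release")
assert owner("sku-123") == owner("sku-123")      # routing is deterministic
assert len(reg.actors) == 1                      # one actor per partition
assert reg.actors["sku-123"].ops == ["reserve", "release"]
```

A real deployment would use consistent hashing or a partition map rather than modulo over a fixed list, so that instance membership changes don't reshuffle every partition.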
Trade-offs¶
- Single-threaded per partition means per-partition throughput is capped by the actor loop, and queueing delay grows as it falls behind. A single hot partition can't be parallelized across cores.
- Mailbox backpressure needs deliberate design — if the actor falls behind, mailbox growth becomes an OOM risk.
- Crash recovery: mailbox contents live only in memory and are lost on a crash, along with any request acknowledged before its batch committed. Usual mitigations: a durable write-ahead log upstream of the actor, acknowledging requests only after commit, or requiring clients to retry.
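One way to make the backpressure explicit is a bounded mailbox that sheds load instead of growing without limit. A sketch; `offer`, `BoundedMailbox`, and the capacity are illustrative, and the caller would map a rejection to a 429 / retry-later response:

```python
import queue

class BoundedMailbox:
    """Cap the mailbox so a lagging actor rejects new work instead of OOMing."""

    def __init__(self, capacity=1000):
        self.q = queue.Queue(maxsize=capacity)

    def offer(self, op) -> bool:
        try:
            self.q.put_nowait(op)   # non-blocking: never stalls the request thread
            return True             # accepted
        except queue.Full:
            return False            # shed load: tell the client to retry later

mb = BoundedMailbox(capacity=2)
assert mb.offer("op-1") is True
assert mb.offer("op-2") is True
assert mb.offer("op-3") is False    # actor has fallen behind; reject, don't grow
```

Rejecting at enqueue time trades availability for bounded memory; the alternative, a blocking `put`, pushes the backpressure up into the request threads instead.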
Related patterns¶
- Erlang/Elixir "one process per entity" — this pattern is the distributed-systems refinement of that idea.
- Durable Objects (systems/cloudflare-durable-objects) — managed cloud version: one DO instance per key, actor-like single-threaded execution, persistent storage built in.
- Akka cluster sharding — JVM implementation of the same idea.
Seen in¶
- sources/2022-12-02-highscalability-stuff-the-internet-says-on-scalability-for-december-2nd-2022
- systems/walmart-inventory-reservations