CONCEPT
Context switch¶
Definition¶
A context switch is the OS kernel's act of saving the current process (or thread)'s execution state and restoring another one's, so the CPU can resume a different unit of work. It is the mechanism by which modern OSes give the illusion of many programs running simultaneously on a small number of physical cores.
What gets saved and restored¶
At minimum: register state (general-purpose registers, program counter, stack pointer, flags). For a full process-to-process switch on Linux, the kernel must also:
- Switch to kernel mode — trap into the scheduler.
- Save the outgoing process's registers to its task struct.
- Swap the page-table root (write a new value to the `CR3` register on x86-64; `TTBR0` on ARM64).
- Flush (or selectively invalidate) the TLB — the translation lookaside buffer caches virtual-to-physical page mappings, and those mappings are per-address-space.
- Restore the incoming process's registers from its task struct.
- Return to user mode in the new process.
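The kernel's bookkeeping is observable from user space. A minimal sketch (assuming Linux or macOS, where Python's `resource` module exposes the rusage counters) that forces voluntary context switches with a blocking sleep and reads the per-process switch counters before and after:

```python
import resource
import time

def switch_counts():
    # ru_nvcsw: voluntary switches (process blocked, e.g. on I/O or sleep)
    # ru_nivcsw: involuntary switches (preempted when its time slice expired)
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_nvcsw, ru.ru_nivcsw

before_v, before_iv = switch_counts()
for _ in range(10):
    time.sleep(0.001)  # blocking sleep: kernel parks us and switches away
after_v, after_iv = switch_counts()

print(f"voluntary switches:   {after_v - before_v}")
print(f"involuntary switches: {after_iv - before_iv}")
```

Each `sleep` hands the CPU back to the scheduler, so the voluntary counter climbs by roughly one per iteration; the involuntary counter climbs when the scheduler preempts a CPU-bound loop instead.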
For a thread-to-thread switch within the same process, the address space and page table don't change, so the TLB flush and `CR3` write are skipped — which is why thread switches are ~5× faster than process switches.
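The thread-switch cost can be bounded from user space with a ping-pong micro-benchmark. A minimal sketch (assuming POSIX pipes; names are illustrative) in which two threads block on each other through a pair of pipes, so every round trip forces at least two context switches:

```python
import os
import threading
import time

N = 2000
r1, w1 = os.pipe()  # main -> worker
r2, w2 = os.pipe()  # worker -> main

def worker():
    for _ in range(N):
        os.read(r1, 1)      # block until pinged: switched out, then back in
        os.write(w2, b"x")  # answer, letting the main thread run again

t = threading.Thread(target=worker, daemon=True)
t.start()

start = time.perf_counter()
for _ in range(N):
    os.write(w1, b"x")
    os.read(r2, 1)          # block until the worker answers
elapsed = time.perf_counter() - start
t.join()

# Each round trip contains >= 2 switches plus 4 pipe syscalls, so this
# is an upper bound on per-switch cost, inflated by Python overhead.
print(f"{elapsed / N * 1e6:.1f} us per round trip")
```

Expect numbers well above the ~1 μs hardware figure — the interpreter and syscall overhead dominate — but the same harness run against process pairs vs thread pairs shows the relative gap the table below quantifies.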
Cost numbers (modern x86-64 CPUs)¶
| Switch type | Cost | Notes |
|---|---|---|
| Thread switch | ~1 μs | Same address space; no TLB flush |
| Process switch | ~5 μs | Full TLB flush + page-table swap |
| Instructions per switch | tens of thousands | Kernel-mode bookkeeping |
Source: sources/2026-04-21-planetscale-processes-and-threads: "The full time of a context switch takes ~5 microseconds on modern CPUs (1 microsecond = 1 millionth of a second). Though this sounds fast (and it is!) it requires executing tens of thousands of instructions, and this happens hundreds of times per second."
Aggregate overhead¶
At billions of instructions per second per core and hundreds of context switches per second, the bookkeeping consumes tens of millions of instructions/sec — typically a single-digit percentage of CPU time on a production server. This is the "small performance penalty" of multi-processing: the convenience of multitasking costs a measurable but acceptable fraction of CPU throughput.
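The aggregate arithmetic above can be made concrete. A sketch with illustrative mid-range values (800 switches/sec, 30,000 instructions per switch, a ~3-billion-instructions/sec core — all assumptions, not measurements):

```python
switches_per_sec = 800            # "hundreds of switches per second"
instructions_per_switch = 30_000  # "tens of thousands of instructions"
core_ips = 3_000_000_000          # ~3 billion instructions/sec per core

overhead = switches_per_sec * instructions_per_switch  # instructions/sec
fraction = overhead / core_ips

print(f"{overhead:,} instructions/sec spent switching "
      f"({fraction:.2%} of core throughput)")
```

With these inputs the bookkeeping costs 24 million instructions/sec — well under 1% of the core — matching the "measurable but acceptable" framing.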
When context-switch cost dominates¶
Most workloads are not context-switch-bound. When they are, it's usually one of:
- Too many active threads/processes — each gets tiny time slices, spending most time switching rather than running.
- Short-lived processes — programs that `fork()` and `exec()` constantly (e.g. shell pipelines) pay the spawn + exit bookkeeping as the dominant cost.
- I/O-bound with many blocking connections — each connection held by a dedicated thread, all blocked on different sockets, thrashing the scheduler. This is the motivation for user-mode concurrency primitives like goroutines + virtual threads.
- Database connection storms — a process-per-connection DB under connection-per-request workloads pays the full process-switch tax per query batch, feeding demand for connection pooling.
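Whether a box is in one of these regimes can be checked directly. A minimal sketch (Linux-only assumption: `/proc/stat` exposes a cumulative `ctxt` counter) that samples the system-wide context-switch rate:

```python
import time

def total_ctxt_switches():
    # /proc/stat's "ctxt" line is the cumulative count of all context
    # switches since boot, summed across every CPU.
    with open("/proc/stat") as f:
        for line in f:
            if line.startswith("ctxt "):
                return int(line.split()[1])
    raise RuntimeError("no ctxt line in /proc/stat")

a = total_ctxt_switches()
time.sleep(1.0)
b = total_ctxt_switches()
print(f"{b - a} context switches/sec system-wide")
```

An idle machine sits in the low thousands per second; a switch-bound one shows hundreds of thousands or more, alongside the high `%sy` discussed under cpu-utilization-vs-saturation.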
Virtual memory is what makes it affordable¶
The simplified "copy all of RAM" picture of a context switch is not what happens — that would be prohibitive. Because each process has its own page table, switching is "change the page-table root + flush the TLB", not "copy the address space". The TLB flush is the real cost: for the first N memory accesses after the switch, every one takes a TLB miss. Modern CPUs with tagged TLB entries (ASID / PCID on x86-64) can skip the flush for recently-used address spaces, reducing cost further.
Relationship to throttling + pools¶
Because context-switch cost dominates at high concurrency, database connection poolers (PgBouncer, PlanetScale's VTTablet + Global Routing Infrastructure, patterns/two-tier-connection-pooling) are designed to keep the direct-to-DB connection count in the 5–50 range even when the upstream client count is in the thousands or millions. This is the OS-substrate reason pooling matters beyond memory — the CPU tax of context switching between 10k processes dwarfs the tax of pooling through 50.
Seen in¶
- sources/2026-04-21-planetscale-processes-and-threads — Ben Dicken canonicalises the ~5 μs process-switch cost + ~1 μs thread-switch cost + "tens of thousands of instructions per switch, hundreds of switches per second" framing. Interactive article with animated CPU-swap visualisations.
Related¶
- concepts/process-os — the abstraction being swapped.
- concepts/thread-os — the lighter-weight alternative with cheaper switches.
- concepts/cpu-utilization-vs-saturation — context-switch storms show up as high system-CPU (`%sy`) in `top`/`vmstat`.
- concepts/run-queue-latency — wait time between ready and running, dominated by context-switch rate under oversubscription.
- concepts/virtual-thread — JVM's answer to OS-thread context-switch costs at high I/O concurrency.