CONCEPT
# Test parallelism worker count
The number of worker processes a parallelised test runner spreads a test suite across. In Rails + minitest the knob is literal: `parallelize(workers: N)`. Gems exist to expose the same knob in RSpec and other frameworks.
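The Rails form of the knob, as it appears in a generated `test/test_helper.rb` (a minimal sketch, not any particular project's code):

```ruby
# test/test_helper.rb -- the minitest parallelism knob as Rails exposes it
class ActiveSupport::TestCase
  # Forks N worker processes for the suite; the :number_of_processors
  # symbol resolves to the host's core count at runtime.
  parallelize(workers: :number_of_processors)
end
```

Passing an integer instead (`parallelize(workers: 8)`) pins the count explicitly.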
## Definition
A parallelised test runner divides the test files or individual tests across N independent worker processes and collects the results. The wall-clock cost of the full suite drops to roughly 1/N of the serial time — until two ceilings are hit:
- Host cores. Past the number of cores on the machine, additional workers stop helping (CPU-bound) or only help marginally (IO-bound / blocking tests that wait on DB or network can benefit from some oversubscription, but not many-fold).
- Individual test cost. Once cores are saturated, wall-clock time is bounded by the slowest worker's longest test. Past that point, further speedup only comes from making individual tests faster — e.g. reducing per-test setup via factory audits.
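The two ceilings can be stated as a toy model (an assumption for illustration, not from the source): the floor on wall-clock time is the larger of the ideal split, total cost ÷ N, and the single slowest test.

```ruby
# Toy model of the parallel wall-clock floor. test_costs is a list of
# per-test durations in seconds; workers is N.
def parallel_floor(test_costs, workers)
  ideal = test_costs.sum / workers.to_f # perfect split of the total cost
  [ideal, test_costs.max].max           # but no worker beats its longest test
end

costs = [0.5] * 700 + [45.0] # hypothetical: 700 fast tests plus one 45 s test
parallel_floor(costs, 2)  # 197.5 -- splitting the total still dominates
parallel_floor(costs, 64) # 45.0  -- the slowest single test dominates
```

At 64 workers the hypothetical suite is pinned at 45 s no matter how many more workers are added — only shrinking that slowest test helps.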
## Why it's a cheap lever
Increasing N typically requires no test code changes. PlanetScale's
canonical datum (How our Rails test suite runs in 1 minute on
Buildkite, 2022-01-18, Mike Coutermarsh) records the lever's
impact exactly:
> When we initially started this, we began by running our tests in parallel on 2 workers. You're limited by the number of cores the machine you're running on has. … Our infrastructure team set us up with some 64 core machines on Buildkite.
Result: ~12 min serial → 3-4 min at 64 workers. "This had the biggest impact and is also the easiest step to improve your test suite speed."
## Relationship to host cores
Worker count is structurally bounded by host core count. Past that ceiling, worker processes share CPU time and wall-clock stops dropping. For a 64-core machine, 64 workers is the natural ceiling for CPU-bound tests; IO-bound tests may benefit from slight oversubscription but the returns diminish sharply.
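A worker count chosen at runtime can be capped at the host's core count; a sketch using Ruby's stdlib (the helper name and oversubscription factor are illustrative assumptions):

```ruby
require "etc"

# Cap a requested worker count at the host core count, optionally allowing
# a small oversubscription factor for IO-bound suites (e.g. io_factor: 1.5).
def capped_workers(requested, io_factor: 1.0)
  cores = Etc.nprocessors            # logical cores on this host
  [requested, (cores * io_factor).floor].min
end

capped_workers(64) # on a 4-core laptop this returns 4, not 64
```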
This is why hosted CI models that don't let customers pick agent shape (fixed 2-vCPU or 4-vCPU runners) cap test-parallelism economics at the vendor's shape. The Buildkite customer-owned-agent model decouples the lever from the vendor.
## The environment gate
PlanetScale (and most Rails shops) gate parallelism on the CI environment.
Rationale: local developers rarely run the full suite (the ci-parallel-over-local-serial pattern); on a laptop with 4-8 cores, 64 workers would be worse than serial due to context-switch overhead.
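A common shape for the gate in `test/test_helper.rb` (a sketch — the `CI` environment variable and the fallback count are assumptions, not PlanetScale's literal code):

```ruby
# test/test_helper.rb -- parallelise only where the suite actually runs wide
class ActiveSupport::TestCase
  if ENV["CI"]
    # CI agents have the cores; use all of them.
    parallelize(workers: :number_of_processors)
  else
    # Local runs stay serial: laptops have far fewer cores, and developers
    # mostly run a single file anyway.
    parallelize(workers: 1)
  end
end
```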
## Relationship to other levers
- Before parallelism: optimise individual test cost (factory-explosion audits, moving DB-cleanup outside the hot path, avoiding real network calls). Parallelism amplifies existing test cost — if each test setup creates 100 rows, 64 workers do that 64× simultaneously.
- After parallelism saturates cores: reduce per-test setup work. This is when the assert-factory-object-count pattern earns its keep.
- Parallel-run correctness: 64-way parallelism surfaces latent test flakiness from shared state (ports, DB rows, singletons, caches). Tests that pass serially often fail in parallel; running them in random order helps surface this sooner.
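One mitigation for shared-state flakiness is per-worker resource isolation via Rails' `parallelize_setup` hook; a sketch (the specific resources isolated here are illustrative assumptions):

```ruby
# test/test_helper.rb -- give each worker its own copy of shared resources
class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)

  # Runs once in each forked worker; `worker` is the worker's index.
  parallelize_setup do |worker|
    ENV["STUB_SERVER_PORT"] = (9000 + worker).to_s      # one port per worker
    Rails.cache = ActiveSupport::Cache::MemoryStore.new # no cross-worker cache
  end
end
```

Rails already gives each worker its own test database; it is the other shared resources — ports, caches, singletons — that this hook exists to split.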
## Seen in
- sources/2026-04-21-planetscale-how-our-rails-test-suite-runs-in-1-minute-on-buildkite — 64 workers on a 64-core Buildkite agent; suite goes from 12 min serial → 3-4 min → ~1 min after follow-up factory audit.
## Related
- concepts/factorybot-object-explosion — the per-test cost term that parallelism amplifies.
- concepts/test-feedback-loop — the DevEx primitive being optimised.
- patterns/ci-parallel-over-local-serial — the investment rule that motivates scaling N in CI but not locally.
- systems/buildkite — the customer-owned-agent CI model that makes 64-core agents viable.