
PATTERN

Mock external dependencies for isolated load test

What this is

Mock external dependencies for isolated load test is the pattern of deploying service virtualisation / API mocks alongside the services under load — one mock per external dependency — so that a pre-production load test can drive production-shape traffic through the system under test without passing any of that traffic on to the external providers the system depends on.

The essential claim: internal service architecture is what the load test is trying to characterise, so external call-outs should be isolated and controllable, not authentic.

Why mock, not call real dependencies

  • Cost. External APIs charge per request. A 2-hour peak-scale load test against a real payment provider would produce a bill the load test doesn't need to pay.
  • Blast radius. Real third-party integrations have real side effects (emails, text messages, partner notifications, inventory decrements). Load-test traffic would ship them.
  • Rate limits. Third parties throttle; internal systems shouldn't be gated on a partner's rate limit when the test is trying to find internal capacity.
  • Latency control. A load test wants latency to be a property of the system, not of an unrelated third party's regional outage that day.
  • Determinism. Real dependencies inject variance; mocks with templated responses don't.

The shape (Zalando instantiation)

Zalando Payments' Cyber-Week load-test cluster (Source: sources/2021-03-01-zalando-building-an-end-to-end-load-test-automation-system-on-top-of-kubernetes):

  • One Hoverfly instance per mocked external dependency, deployed into NodePool A alongside Locust controller + workers + the Load Test Conductor.
  • Each Hoverfly instance boots with hoverfly -webserver -import simulation.json — simulation file declares request matchers + templated responses per dependency.
  • The services under test route to either the real dependency or the Hoverfly mock per request based on a load-test header (see patterns/header-routed-mock-vs-real-dependency for the switching mechanism).
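A minimal simulation file of the kind each mock boots with might look like the following sketch. The endpoint, field names, and response values are illustrative, not taken from Zalando's actual simulations:

```json
{
  "data": {
    "pairs": [
      {
        "request": {
          "method": [{ "matcher": "exact", "value": "POST" }],
          "path": [{ "matcher": "exact", "value": "/v1/payments" }]
        },
        "response": {
          "status": 201,
          "templated": true,
          "headers": { "Content-Type": ["application/json"] },
          "body": "{\"paymentId\": \"{{ randomUuid }}\", \"status\": \"AUTHORISED\"}"
        }
      }
    ]
  },
  "meta": { "schemaVersion": "v5" }
}
```

Starting the mock with `hoverfly -webserver -import simulation.json` then serves this canned response to every matching request, with the templated `{{ randomUuid }}` re-evaluated per request.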

Tool-selection criteria (from Zalando)

Zalando's 5-way evaluation enumerated these dimensions:

  • Latency simulation — fixed, random, or none.
  • Fault simulation.
  • Stateful behaviour — required for flows where a POST and a later GET must agree (order placement, idempotency keys).
  • Extensibility — custom response logic beyond JSON matching.
  • Proxying — ability to record real traffic for replay.
  • Response templating — dynamic fields like {{ currentDateTime }}.
  • Request matching — path, method, header, body matchers.
  • Record & Replay — capture real traffic, replay later.
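The statefulness criterion maps to Hoverfly's `requiresState` / `transitionsState` fields, which let a POST and a later GET agree. A hedged sketch with hypothetical order endpoints (not Zalando's):

```json
{
  "data": {
    "pairs": [
      {
        "request": {
          "method": [{ "matcher": "exact", "value": "POST" }],
          "path": [{ "matcher": "exact", "value": "/orders" }]
        },
        "response": {
          "status": 201,
          "body": "{\"orderId\": \"order-1\", \"status\": \"PLACED\"}",
          "transitionsState": { "order-1": "placed" }
        }
      },
      {
        "request": {
          "method": [{ "matcher": "exact", "value": "GET" }],
          "path": [{ "matcher": "exact", "value": "/orders/order-1" }],
          "requiresState": { "order-1": "placed" }
        },
        "response": {
          "status": 200,
          "body": "{\"orderId\": \"order-1\", \"status\": \"PLACED\"}"
        }
      }
    ]
  },
  "meta": { "schemaVersion": "v5" }
}
```

Until the POST fires, the GET pair does not match; after it, the later read agrees with the earlier write — the property this criterion asks for.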

Zalando's deciding criteria were statefulness, record-and-replay, and language-agnostic deployment — which landed on Hoverfly.

Required discipline

  • Mock schema drift. The real dependency's schema changes; the mock must be updated or test results are against stale contracts. Periodic contract verification is the complement to this pattern.
  • Mocks co-located with the services under test. Placing mocks in a different cluster introduces cross-cluster network latency that distorts the load-test results. Zalando puts both in the same test cluster's NodePool A.
  • Cleanup scope for stateful mocks. Mocks that accumulate state (Hoverfly's k/v map) need resetting between test runs, or carried-over behaviour pollutes subsequent tests.
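For the cleanup point, Hoverfly exposes its accumulated state over its admin API, so a run can reset every mock before the next test. A sketch assuming the default admin port 8888 and hypothetical mock hostnames:

```shell
#!/bin/sh
# Reset each Hoverfly mock's accumulated state between load-test runs.
# Hostnames are illustrative; the admin API listens on port 8888 by default.
for mock in hoverfly-payments hoverfly-notifications; do
  # DELETE /api/v2/state clears the mock's key/value state map
  curl -fsS -X DELETE "http://${mock}:8888/api/v2/state"
done
```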

Trade-offs

  • Doesn't exercise the real dependency path. A regression in the integration layer is invisible if the integration runs only to a mock. Separate integration tests (not load tests) are still required.
  • Mock authorship cost. Every mocked dependency needs a maintained simulation file. For a large third-party surface this is non-trivial; record-and-replay helps but doesn't eliminate it.
  • Mock realism ≠ real realism. Hoverfly's response templating is rich but bounded; complex stateful workflows may require custom extension.

Relation to other patterns

  • patterns/header-routed-mock-vs-real-dependency — the per-request switching mechanism that sends traffic to either the mock or the real dependency.
