CONCEPT
Example-based test constant-input antipattern¶
Definition¶
The example-based test constant-input antipattern is the failure mode where a unit test hand-types specific constant values as inputs, so the test passes trivially against any implementation that hard-codes — or accidentally returns — the same constant. The test name suggests it exercises a universal property of the function under test; in fact it exercises only the one input pair the author thought of.
Canonical worked example from Zalando's 2021 post:
// production code
func set(value: String, for key: String) {
    internalStorage.set(value, for: key)
}

func get(for key: String) -> String? {
    internalStorage.value(for: key) as? String
}

// test (before — uses constants)
storage.set(value: "Zalando", for: "companyName")
let obtained = storage.get(for: "companyName")!
XCTAssertEqual("Zalando", obtained)
Now tamper the production get(for:) to return the constant "Zalando". The test still passes. The function signature says it accepts and returns any String; the test covered only one. Against the tampered implementation the test is near-tautological (Source: sources/2021-02-01-zalando-stop-using-constants-feed-randomized-input-to-test-cases).
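The tamper can be made concrete with a minimal, self-contained sketch. The TamperedStorage type and the UUID-based random strings below are stand-ins of ours, not the post's code (the post uses Randomizer's String.random):

```swift
import Foundation

// Stand-in for the production storage after tampering: get(for:)
// ignores the store and hard-codes the constant the test uses.
final class TamperedStorage {
    func set(value: String, for key: String) { /* silently dropped */ }
    func get(for key: String) -> String? { "Zalando" }
}

let storage = TamperedStorage()

// Constant-input test: still passes against the tampered code.
storage.set(value: "Zalando", for: "companyName")
assert(storage.get(for: "companyName") == "Zalando")

// Randomised input (UUID strings as a crude String.random stand-in):
// the same round-trip check now exposes the tamper.
let value = UUID().uuidString
let key = UUID().uuidString
storage.set(value: value, for: key)
let caught = storage.get(for: key) != value
print(caught ? "randomised test caught the tamper" : "tamper undetected")
```

Any input the production code does not special-case exposes the hard-coded return; the constant test never supplies one.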
The fix — randomise the input¶
let value = String.random
let key = String.random
storage.set(value: value, for: key)
let obtained = storage.get(for: key)!
XCTAssertEqual(value, obtained)
Now the test expresses "for any (String, String) pair, set then get returns the same value" — a universal property. The tampered implementation that returns "Zalando" fails immediately, because value differs from "Zalando" on every run.
This is the entry-level form of property-based testing: one random permutation per test run, no explicit invariants beyond the round-trip, no shrinker or seed replay machinery. It is where you start with PBT discipline — not where you finish.
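One step up from a single random permutation per run is looping the same property over many random inputs. A minimal sketch, assuming a toy in-memory Storage (the post does not show internalStorage's implementation) and an ad-hoc random-string helper in place of Randomizer's String.random:

```swift
import Foundation

// Toy in-memory storage; a stand-in for the post's internalStorage-backed class.
final class Storage {
    private var store: [String: String] = [:]
    func set(value: String, for key: String) { store[key] = value }
    func get(for key: String) -> String? { store[key] }
}

// Ad-hoc stand-in for Randomizer's String.random.
func randomString() -> String {
    let alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
    return String((0..<Int.random(in: 1...16)).map { _ in alphabet.randomElement()! })
}

// Run the round-trip property over many random (value, key) pairs.
// Still no shrinking or seed replay — just more permutations per run.
for _ in 0..<100 {
    let storage = Storage()
    let (value, key) = (randomString(), randomString())
    storage.set(value: value, for: key)
    assert(storage.get(for: key) == value, "round-trip failed for key \(key)")
}
print("set-then-get round-trip held for 100 random pairs")
```

This is still short of full PBT: a failing run reports one raw random input, with no shrinking to a minimal counterexample and no seed to replay it.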
Why constants persist in test suites¶
Several forces pull tests toward hand-typed constants:
- Literal readability. "Zalando" at the call site makes the intent obvious at a glance; String.random requires the reader to know the Random-protocol machinery exists.
- Deterministic failure output. A failing constant test reports the exact failing input; a failing random test without a shrinker or recorded seed gives whatever the RNG produced that run — harder to reproduce.
- Historical habit. Most unit-testing tutorials and books are written in the example-based idiom; developers carry that habit forward.
- Assumed redundancy. "We have many tests, so one hard-coded value is fine" — misses the failure mode that every test can share the same hard-coded blind spot for the same function.
How to detect the antipattern in code review¶
- Every test input is a string / integer / struct literal at the call site.
- The assertion's expected value is a literal that appears elsewhere in the test (the same "Zalando" appears both in the setup and the assertion).
- Wrapping return <literal> into the production code does not cause any test to fail.
- No test uses a typed random generator (.random, Arbitrary, Gen, generator(), depending on language).
Where this hits hardest¶
- Serialisation / deserialisation round-trips. encode → decode should be identity for any input. Constants test one specific shape.
- Storage abstractions. set → get should round-trip for any key/value pair. Kandel's canonical worked example.
- Protocol conformance. If a type conforms to Equatable/Hashable/Codable, randomised instances exercise the conformance across the type's full space.
- Data transformation functions. Parsers, formatters, validators — anywhere the function claims to handle "any valid input of type T."
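For the serialisation case, the round-trip property looks like the sketch below. The User model and the hand-rolled randomUser() generator are hypothetical; a Randomizer-style library would derive the generator per type instead:

```swift
import Foundation

// Hypothetical model; any Codable & Equatable type works the same way.
struct User: Codable, Equatable {
    let id: Int
    let name: String
}

// Hand-rolled random instance; typeclass-style libraries derive this per type.
func randomUser() -> User {
    let name = String((0..<Int.random(in: 0...12)).map { _ in "abcdefghij".randomElement()! })
    return User(id: Int.random(in: -1_000_000...1_000_000), name: name)
}

// Property: encode → decode is identity for any User, not one literal shape.
let encoder = JSONEncoder()
let decoder = JSONDecoder()
for _ in 0..<50 {
    let original = randomUser()
    let data = try! encoder.encode(original)
    let decoded = try! decoder.decode(User.self, from: data)
    assert(decoded == original, "encode → decode not identity for \(original)")
}
print("encode → decode held for 50 random users")
```

A constant-input version of this test would survive a Codable implementation that only handles the one hard-coded id/name pair.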
Where constants are correct¶
Not all constant-input tests are the antipattern. Tests that target specific edge cases — "", nil, Int.max, malformed bytes, known-historical-bug reproducers — are called example-based tests precisely because the example matters. The antipattern is specifically: using a generic-looking constant where the function contract accepts any value of the type.
A healthy test suite pairs property-based tests for universal claims with example-based tests for known edge cases. Neither replaces the other.
Generalisation beyond Swift¶
The antipattern and the Type.random fix compose cleanly across
languages with type inference and typeclass-style generator
dispatch:
- Haskell — QuickCheck's arbitrary :: Gen a resolves per type.
- Rust — proptest's Arbitrary trait or quickcheck's Arbitrary trait.
- Python — hypothesis.strategies.from_type(T).
- JavaScript / TypeScript — fast-check's fc.sample(fc.anything()) or fc.sample(fc.constantFrom(T)).
- Java / Kotlin — jqwik / Kotest's typed generators.
- Swift — Randomizer's Random protocol.
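The per-type dispatch all of these share can be sketched in Swift with a protocol. The RandomGenerable name and shape below are hypothetical, chosen for illustration; Randomizer's actual Random protocol may differ:

```swift
// Hypothetical protocol illustrating typeclass-style generator dispatch.
protocol RandomGenerable {
    static var random: Self { get }
}

extension Int: RandomGenerable {
    static var random: Int { Int.random(in: -1_000...1_000) }
}

extension String: RandomGenerable {
    static var random: String {
        String((0..<Int.random(in: 1...16)).map { _ in "abcdefghijklmnopqrstuvwxyz".randomElement()! })
    }
}

// Generic test helper: asks for "a random T"; the compiler picks the
// conformance for whatever concrete type the call site fixes.
func holdsForRandomInput<T: RandomGenerable & Equatable>(_ property: (T) -> Bool) -> Bool {
    property(T.random)
}

// Double reversal is identity for any String — trivial, but shows the dispatch.
print(holdsForRandomInput { (s: String) in String(s.reversed().reversed()) == s })
```

The call site never names a generator; the type annotation alone selects it, which is what keeps the randomised test as terse as the constant one.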
Zalando's contribution is not the insight (which dates to QuickCheck, Claessen & Hughes 2000) but the Swift library that makes it ergonomic at the iOS developer altitude.
Seen in¶
- sources/2021-02-01-zalando-stop-using-constants-feed-randomized-input-to-test-cases — canonical disclosure. Vijaya Kandel (Zalando iOS) names the antipattern, ships the fix as an open-source Swift library (Randomizer), and applies it inside Zalando's mobile codebase. Attribution: technique credited to Jorge Ortiz's 2017 Swift Aveiro workshop on Clean Architecture.
Related¶
- patterns/property-based-testing — the pattern that fixes the antipattern at its full altitude.
- concepts/type-class-driven-random-generator — the generator-dispatch mechanism the fix depends on.
- systems/randomizer-swift — Zalando's Swift implementation.
- patterns/seed-recorded-failure-reproducibility — the reproducibility discipline Randomizer as described does not yet provide.
- concepts/test-case-minimization — the shrinker discipline that makes random-failure-input diagnosable.
- concepts/test-data-generation-for-edge-cases — Yelp's adjacent discipline: not random inputs, but cumulative-production-edge-case-backport fixtures. Different mechanism, same motivation (fixtures systematically narrower than the function contract).