CONCEPT
Bot-safer-than-human¶
Definition¶
Bot-safer-than-human is the deliberate decision to automate transformations that are not strictly necessary, precisely because they are delicate enough that human reviewers will get them wrong. The counterintuitive framing: scope up the bot to de-scope the human.
Why this is counterintuitive¶
The default review discipline treats bots as dangerous and humans as the safety backstop. The usual rule of thumb is "anything sensitive, a human should touch." Meta's observation at Java-to-Kotlin migration scale inverts this:
"Contrary to popular belief, we've found it's often safer to leave the most delicate transformations to bots. There are certain fixes we've automated as part of postprocessing, even though they aren't strictly necessary, because we want to minimize the temptation for human (i.e., error-prone) intervention." (Meta, 2024-12-18.)
Three properties make the bot-safer stance hold:
- The transformation is deterministic. A bot applies the same rewrite every time; a human applies a slightly different rewrite each time.
- The transformation is delicate. It is the kind of thing a rushed reviewer would "fix while touching the file" and accidentally break. The specific named example: condensing long chains of null checks. A bot compresses them correctly every time; a human compressing them by hand can "accidentally drop a negation."
- The volume is high. At 40,000+ conversions, even a small per-file human error rate produces an unacceptable number of introduced bugs in aggregate.
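The volume point is simple arithmetic. A minimal sketch, assuming an illustrative 0.5% per-file human error rate (the rate is an assumption for the example, not a figure from the post):

```kotlin
// Illustrative arithmetic only: the 0.5% per-file error rate is an
// assumed number, not one Meta reports. The point is that a small
// human error rate times migration volume is a large absolute bug count.
fun expectedHumanBugs(files: Int, perFileErrorRate: Double): Double =
    files * perFileErrorRate

fun main() {
    // 40,000+ conversions at 0.5% per-file errors is roughly 200
    // introduced bugs; a bot applying one verified rewrite contributes
    // none of these.
    println(expectedHumanBugs(40_000, 0.005))
}
```

The aggregate, not the per-file rate, is what makes the stance rational at migration scale.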
Canonical wiki instance — Kotlinator's long null-check chains¶
Meta's Kotlinator condenses long chains of null checks during postprocessing not because the short form is more correct (it isn't), but because reviewers touching that code inadvertently invert negations. The bot eliminates the opportunity.
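A hedged sketch of the shape of that rewrite (types and function names are hypothetical, not Kotlinator's actual output):

```kotlin
// Hypothetical model types standing in for converted Java classes.
data class Address(val city: String?)
data class User(val address: Address?)

// Verbose null-check chain a Java-to-Kotlin converter tends to emit.
fun cityVerbose(user: User?): String? {
    if (user != null) {
        val address = user.address
        if (address != null) {
            return address.city
        }
    }
    return null
}

// Condensed safe-call form the bot applies deterministically every time.
fun cityCondensed(user: User?): String? = user?.address?.city

// The human hazard lives in negated variants: hand-compressing
//   if (user == null || user.address == null) return null
// is exactly where a dropped negation silently inverts the guard.
```

The two forms agree on every input, including each null position; the bot's advantage is that it never produces the inverted variant.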
The framing extends the broader [[patterns/automated-migration-at-monorepo-scale|monorepo-scale migration]] pattern: once the pipeline is good, every step left to a reviewer is a potential regression vector. The optimisation is to push as much as possible inside the pipeline.
When does bot-safer fail?¶
Four conditions invalidate the bot-safer stance; when any of them holds, the pattern should not be applied blindly:
- The transformation is ambiguous. If the bot has to make a judgement call (e.g. inferring business logic), a human is safer: the bot's determinism makes it systematically wrong across the whole codebase.
- The transformation is one-off. Volume matters; a single rewrite is cheap enough for a careful human.
- The bot's rewrite isn't fully reviewed. If the bot's output no longer passes through human review, the safety story collapses; the point is to reduce the reviewer's opportunities for error, not to eliminate review.
- The transformation isn't well-specified. Delicate transformations benefit from a test suite; if the rewrite spec exists only in someone's head, the bot version is a regression waiting to happen.
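The well-specified condition can be made concrete by pinning the rewrite to an executable equivalence check. A minimal sketch with hypothetical names, using a negated chain since that is where dropped negations hide:

```kotlin
// Hypothetical type; not from the Meta post.
data class Profile(val homeTown: String?)

// Verbose negated chain, the shape a converter might emit.
fun isMissingHomeTownVerbose(p: Profile?): Boolean {
    if (p == null) return true
    if (p.homeTown == null) return true
    return false
}

// The bot's condensed form. The spec below pins the negation down:
// a hand rewrite that flips `== null` to `!= null` fails the check.
fun isMissingHomeTownCondensed(p: Profile?): Boolean = p?.homeTown == null

// The rewrite spec as code: both forms must agree on every null position.
fun rewriteSpecHolds(): Boolean =
    listOf(null, Profile(null), Profile("Oslo"))
        .all { isMissingHomeTownVerbose(it) == isMissingHomeTownCondensed(it) }
```

Once the spec is executable, the bot's rewrite is tested in one place instead of re-derived in every reviewer's head.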
Related framing: "source code is the code running"¶
Bot-safer-than-human is a cousin of Meta's earlier stance from the Haskell / Sigma post (sources/2015-06-26-meta-fighting-spam-with-haskell): "source code in the repo is the code running in Sigma". Both posts express the same underlying discipline — when you can move a class of risk into the tooling infrastructure, do so, and reduce the surface the human interacts with.
Seen in¶
- sources/2024-12-18-meta-translating-10m-lines-of-java-to-kotlin — canonical wiki instance.
Related¶
- systems/kotlinator — the pipeline that operationalises the stance.
- concepts/interlanguage-null-safety — the problem space where the null-check-chain example lives.
- patterns/automated-migration-at-monorepo-scale — the wrapping architectural pattern.
- patterns/closed-feedback-loop-ai-features — the human-in-the-loop sibling: bot-safer-than-human doesn't mean bot-alone; the bot's output still receives review.