JIT compilation¶
Just-in-time (JIT) compilation is the highest-performance point on the dynamic-language execution spectrum: at runtime, the engine compiles bytecode (or sometimes AST) directly to native machine instructions of the host CPU, which then execute without interpreter-loop dispatch.
The flow: bytecode (or AST) → JIT compiler → native machine code → direct execution on the CPU, bypassing the interpreter's dispatch loop entirely.
Why JIT wins (when it wins)¶
In a bytecode VM, each instruction pays dispatch cost: fetch the opcode, branch to its handler, return to the dispatch loop. For languages where individual bytecode operations map closely to native CPU instructions ("an ADD operator only has to perform a native x64 ADD"), dispatch cost can dominate runtime, so replacing the dispatch loop with a block of CPU-native instructions wins big (Source: sources/2025-04-05-planetscale-faster-interpreters-in-go-catching-up-with-cpp).
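To make the dispatch cost concrete, here is a minimal sketch of a toy stack VM in Go (a hypothetical example, not Vitess's evalengine; the opcodes and `run` function are invented for illustration). Every loop iteration pays the fetch-branch-loop overhead even when the opcode body is a single native add:

```go
package main

import "fmt"

// Opcodes for a hypothetical toy stack VM (illustrative only).
const (
	opPush = iota // push the immediate operand that follows
	opAdd         // pop two values, push their sum
	opHalt        // stop and return the top of the stack
)

// run executes bytecode in a classic switch-dispatch loop. Each iteration
// pays dispatch cost: fetch the opcode, branch to its handler, loop back.
func run(code []int) int {
	var stack []int
	pc := 0
	for {
		op := code[pc] // fetch (dispatch cost)
		pc++
		switch op { // branch to handler (dispatch cost)
		case opPush:
			stack = append(stack, code[pc])
			pc++
		case opAdd:
			// The real work is one native ADD; for ops this cheap,
			// the dispatch overhead around it can dominate.
			n := len(stack)
			stack[n-2] += stack[n-1]
			stack = stack[:n-1]
		case opHalt:
			return stack[len(stack)-1]
		}
	}
}

func main() {
	// Computes 2 + 3.
	fmt.Println(run([]int{opPush, 2, opPush, 3, opAdd, opHalt})) // → 5
}
```

A JIT would replace the loop above with straight-line native code for the hot sequence, eliminating the fetch and branch on every instruction.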
Canonical JIT engines:
- V8 (JavaScript) — multiple tiers: Ignition bytecode interpreter → Maglev mid-tier JIT → TurboFan optimising JIT.
- HotSpot (Java) — C1 and C2 compilers + tiered escalation.
- LuaJIT — one of the most aggressive JITs ever built; Mike Pall's blog archive is canonical reading on interpreter engineering.
- PyPy (Python) — tracing JIT on top of RPython.
- CPython 3.13+ — experimental copy-and-patch JIT.
The deoptimization contract¶
JITs typically speculate on runtime types: they emit native code assuming x is always an integer, with a type-guard instruction that bails back to the bytecode VM if the assumption fails. This is structurally identical to the Vitess evalengine VM→AST deoptimization flow, just one layer lower.
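The speculate-guard-bail structure can be sketched as follows (a hypothetical Go model: `jitAdd` stands in for native code emitted under the integer assumption, and `interpAdd` for the generic interpreter path it deoptimizes into; none of these names come from a real engine):

```go
package main

import "fmt"

// jitAdd models speculative JITted code: it assumes both operands are ints,
// guarded by a runtime type check. On guard failure it reports a deopt.
func jitAdd(a, b any) (int, bool) {
	ai, ok1 := a.(int) // type guard emitted by the JIT
	bi, ok2 := b.(int)
	if !ok1 || !ok2 {
		return 0, false // guard failed: deoptimize
	}
	return ai + bi, true // fast path: effectively one native add
}

// interpAdd models the generic interpreter the engine bails back into.
func interpAdd(a, b any) any {
	switch x := a.(type) {
	case int:
		if y, ok := b.(int); ok {
			return x + y
		}
	case string:
		if y, ok := b.(string); ok {
			return x + y
		}
	}
	panic("unsupported operand types")
}

// add tries the speculative fast path first, falling back on guard failure.
func add(a, b any) any {
	if sum, ok := jitAdd(a, b); ok {
		return sum
	}
	return interpAdd(a, b) // bail back to the slower generic path
}

func main() {
	fmt.Println(add(2, 3))     // fast path
	fmt.Println(add("a", "b")) // guard fails, deopt to the generic path
}
```

Real engines do this at the machine-code level (the guard is a few compare-and-branch instructions, and deopt reconstructs interpreter state), but the contract is the same: speculate, guard, bail.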
Why JIT is not always the right next step¶
Martí's 2025 PlanetScale post canonicalises the dispatch-overhead-share threshold as the decision rule for JIT:
"JIT compilers are important for programming languages where their bytecode operations can be optimized into a very low level of abstraction (e.g. where an 'add' operator only has to perform a native x64 ADD). In these cases, the overhead of dispatching instructions becomes so dominant that replacing the VM's loop with a block of JITted code makes a significant performance difference. However, for SQL expressions, and even after our specialization pass, most of the operations remain extremely high level (things like 'match this JSON object with a path' or 'add two fixed-width decimals together'). The overhead of instruction dispatch, as measured in our benchmarks, is less than 20%." (Source: sources/2025-04-05-planetscale-faster-interpreters-in-go-catching-up-with-cpp)
The resulting rule of thumb:
- If instruction-dispatch cost is above roughly 30% of runtime, JIT is justified: the ceiling you can reach by optimising the VM loop alone is too low.
- If instruction-dispatch cost is below roughly 20%, JIT is needless complexity: other factors dominate the runtime, and JIT adds substantial engineering cost (code generation, relocation, invalidation, security surface, multi-arch support).
For Vitess's SQL expressions, most "add" operations are actually "add two fixed-width decimals" or "match a JSON path" — high-level operations where the opcode body dwarfs the dispatch cost. JIT wouldn't help.
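The decision rule reduces to simple arithmetic. A toy Go model (the function names and the nanosecond figures below are invented for illustration, not measurements from the PlanetScale benchmarks):

```go
package main

import "fmt"

// dispatchShare models the fraction of per-instruction runtime spent on
// dispatch rather than on the opcode body itself.
func dispatchShare(dispatchNs, opBodyNs float64) float64 {
	return dispatchNs / (dispatchNs + opBodyNs)
}

// jitJustified applies the rule of thumb from this section: JIT pays off
// when dispatch overhead exceeds roughly 30% of runtime.
func jitJustified(share float64) bool {
	return share > 0.30
}

func main() {
	// Cheap ops (a native ADD): dispatch dwarfs the body.
	fmt.Printf("%.2f\n", dispatchShare(5, 1)) // → 0.83, JIT wins
	// Heavy ops (decimal add, JSON path match): body dwarfs dispatch.
	fmt.Printf("%.2f\n", dispatchShare(5, 95)) // → 0.05, JIT pointless
}
```

This is why measuring dispatch share before writing a JIT matters: for evalengine's high-level opcodes the measured share was under 20%, so even a perfect JIT could not recover more than that fraction of runtime.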
Seen in¶
- sources/2025-04-05-planetscale-faster-interpreters-in-go-catching-up-with-cpp — canonical rejection of JIT for Vitess evalengine on the measured <20% dispatch-overhead-share threshold. First canonical wiki statement of the dispatch-overhead-share decision rule for when JIT is justified vs. overkill.