Methodology
Each benchmark is run for 15 warmup iterations followed by 10 measured iterations.
Compile time and execution time are measured separately; results report the mean of the measured iterations.
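The warmup/measure loop can be sketched as follows. This is a minimal illustration of the methodology, not the actual harness; the `run_once` closure and iteration counts are stand-ins matching the numbers above.

```rust
use std::time::Instant;

/// Hypothetical harness: 15 warmup runs, then 10 measured runs.
/// Returns the mean wall-clock time of the measured iterations.
fn bench<F: FnMut()>(mut run_once: F) -> f64 {
    for _ in 0..15 {
        run_once(); // warmup: executed but not timed
    }
    let mut total = 0.0;
    for _ in 0..10 {
        let start = Instant::now();
        run_once();
        total += start.elapsed().as_secs_f64();
    }
    total / 10.0 // mean of the measured iterations
}

fn main() {
    // Stand-in workload in place of a real benchmark program.
    let mean = bench(|| {
        let _ = (0..1_000u64).sum::<u64>();
    });
    println!("mean: {:.6}s", mean);
}
```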
- rayzor-cranelift — Source → MIR (O2) → Cranelift JIT. Compile includes parsing, type-checking, MIR lowering, optimization, and JIT compilation.
- rayzor-llvm — Source → MIR (O2) → LLVM MCJIT. Same frontend pipeline, LLVM backend for peak throughput.
- rayzor-tiered — Source → interpreter → Cranelift JIT. Uses the Benchmark tier preset: interpreter thresholds (2/3/5), immediate bailout, synchronous optimization. Compile includes parsing + module loading; execution includes interpreter startup and JIT tier-up.
- rayzor-precompiled — Pre-bundled .rzb (MIR already O2-optimized) → Cranelift JIT. Compile is bundle load + JIT only (no parsing/lowering).
- rayzor-precompiled-tiered — Pre-bundled .rzb → tiered execution with LLVM upgrade after warmup.
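The Benchmark tier preset used by the tiered targets can be pictured as a small configuration record. The struct and field names below are illustrative assumptions, not rayzor's actual API; only the values (thresholds 2/3/5, immediate bailout, synchronous optimization) come from the description above.

```rust
/// Hypothetical sketch of the Benchmark tier preset described above.
/// Field names are assumptions; the real rayzor config may differ.
#[derive(Debug, PartialEq)]
struct TierPreset {
    /// Call counts at which code is promoted to the next tier.
    interp_thresholds: (u32, u32, u32),
    /// Deoptimize back to the interpreter immediately on a failed speculation.
    immediate_bailout: bool,
    /// Block execution while the JIT compiles (no background compile thread).
    synchronous_opt: bool,
}

fn benchmark_preset() -> TierPreset {
    TierPreset {
        interp_thresholds: (2, 3, 5),
        immediate_bailout: true,
        synchronous_opt: true,
    }
}

fn main() {
    println!("{:?}", benchmark_preset());
}
```

Synchronous optimization keeps tier-up cost inside the measured execution time, which is why the tiered targets attribute interpreter startup and JIT tier-up to execution rather than compile.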
All targets share the same runtime (librayzor_runtime) and execute the same Haxe source code.
MIR optimization level O2 includes: dead code elimination, constant folding, copy propagation, function inlining, LICM, and CSE.
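To illustrate one of the listed passes, here is a toy constant-folding pass over a minimal expression tree. This is a teaching sketch only; rayzor's MIR and its actual pass implementations are more elaborate.

```rust
// Toy expression IR for demonstration; not rayzor's MIR.
#[derive(Debug, PartialEq)]
enum Expr {
    Const(i64),
    Add(Box<Expr>, Box<Expr>),
    Mul(Box<Expr>, Box<Expr>),
}

/// Constant folding: collapse any operation whose operands are
/// both compile-time constants into a single constant.
fn fold(e: Expr) -> Expr {
    use Expr::*;
    match e {
        Add(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x + y),
            (x, y) => Add(Box::new(x), Box::new(y)),
        },
        Mul(a, b) => match (fold(*a), fold(*b)) {
            (Const(x), Const(y)) => Const(x * y),
            (x, y) => Mul(Box::new(x), Box::new(y)),
        },
        c => c,
    }
}

fn main() {
    // (2 + 3) * 4 folds to a single constant, 20.
    let e = Expr::Mul(
        Box::new(Expr::Add(Box::new(Expr::Const(2)), Box::new(Expr::Const(3)))),
        Box::new(Expr::Const(4)),
    );
    assert_eq!(fold(e), Expr::Const(20));
    println!("folded to Const(20)");
}
```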