Fri 20 Nov 2020 23:00 - 23:20 at SPLASH-I - F-3A Chair(s): Hidehiko Masuhara, Ramy Shahin
In order to generate efficient code, dynamic language compilers often need
information, such as dynamic types, not readily available in the program
source. Leveraging a mixture of static and dynamic information, these
compilers speculate on the missing information. Within one compilation
unit, they specialize the generated code to the previously observed
behaviors, betting that past is prologue. When speculation fails, the
execution must jump back to unoptimized code. In this paper, we propose
an approach that furthers this specialization by disentangling classes of
behaviors into separate optimization units. With contextual dispatch, functions are
versioned and each version is compiled under different assumptions. When a
function is invoked, the implementation dispatches to a version optimized
under assumptions matching the dynamic context of the call. As a
proof-of-concept, we describe a compiler for the R language which uses
this approach. Our implementation is, on average,
$1.7\times$ faster than the GNU R reference
implementation. We evaluate contextual dispatch on a set of benchmarks and
measure the additional speedup it provides on top of traditional speculation and
deoptimization techniques. In this setting, contextual dispatch improves the performance of 18 out of
46 programs in our benchmark suite.
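To make the idea concrete, here is a minimal sketch of contextual dispatch in Python. This is not the paper's R implementation: the `ContextualFunction` class, its predicates, and the `add` example are all hypothetical, and the "specialized" version merely stands in for code a compiler would emit under the stated assumptions. It shows the core mechanism only: a function keeps several versions, each valid under assumptions about the dynamic context, and each call dispatches to a matching version or falls back to generic code.

```python
# Hypothetical sketch of contextual dispatch (not the paper's implementation).
# A function holds multiple versions; each version is guarded by a predicate
# over the dynamic context of the call (here: the argument types).

class ContextualFunction:
    def __init__(self, generic):
        self.generic = generic      # unspecialized fallback version
        self.versions = []          # list of (context_predicate, specialized_fn)

    def add_version(self, predicate, fn):
        self.versions.append((predicate, fn))

    def __call__(self, *args):
        # Dispatch: pick the first version whose assumptions hold for this call.
        for matches, fn in self.versions:
            if matches(args):
                return fn(*args)
        return self.generic(*args)  # no assumptions matched: run generic code

# Generic version: must handle any argument types.
def add_generic(x, y):
    return x + y

# Specialized version: assumes both arguments are machine integers, so a
# real compiler could emit a plain integer add with no type checks.
def add_int_int(x, y):
    return x + y                    # stand-in for the optimized integer code

add = ContextualFunction(add_generic)
add.add_version(lambda a: all(type(v) is int for v in a), add_int_int)

add(1, 2)      # context matches the int/int assumptions: specialized version
add(1.5, 2.5)  # assumptions fail: dispatch falls back to the generic version
```

Unlike speculation with deoptimization, a failed assumption here never requires jumping out of optimized code mid-execution: the check happens once, at dispatch time, and each version runs only in contexts where its assumptions already hold.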