Linked by Peter Gerdes on Mon 10th Jan 2005 17:35 UTC
Editorial As a recent ACM Queue article observes, the evolution of computer languages is toward later and later binding and evaluation. So while one might quibble about the virtues of Java or the CLI (also known as Microsoft .NET), it seems inevitable that more and more software will be written for, or at least compiled to, virtual machines. While this trend has many virtues, not the least of which is compatibility, current implementations have several drawbacks. However, by cleverly incorporating these features into the OS, or at least including support for them, we can overcome these limitations and in some cases even turn them into strengths.
Short point on JIT fastness
by logicnazi on Tue 11th Jan 2005 14:52 UTC

Alright, so there has been much debate over whether JIT compilation can be faster than ahead-of-time (AOT) compilation.

Now, in some *very theoretical* sense, AOT compilation can match anything JIT compilation can accomplish. After all, one could regard the entire JIT compiler plus the instructions it executes as one AOT-compiled program. In general, no matter what compilation technique you use, some sequence of machine instructions ends up being executed, and that sequence could be coded by hand or by a sufficiently good AOT compiler. So in theory AOT always has the advantage over JIT: whatever optimizations the JIT compiler produces from run-time data could be hardcoded into the program. In the worst case you might write a program that uses self-modifying code to duplicate whatever run-time optimizations the JIT makes use of while avoiding some of the JIT overhead.

However, whether or not some ideal AOT compiler could do a better job really isn't the question. Producing a perfect compiler is actually mathematically impossible (it would require solving the halting problem), so the correct question is whether a JIT compiler has practical advantages which make optimization easier than with an AOT compiler. I think it does, for a couple of reasons.

First of all, profiling is simply easier with a JIT system: it happens automatically, without forcing anyone to collect real-world data and recompile. Moreover, unless one expects users to recompile all their own binaries with their own profiling info, a JIT system has access to profiling info about a particular user's usage patterns, which AOT compilers do not. This can make a real difference: a user who calls a function on large data sets may benefit greatly from loop unrolling, while another who calls it often on small data sets may not. Similar considerations apply to optimizing for the processor the user is currently running.
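To make that concrete, here is a minimal sketch (in Java, purely for illustration; ProfiledSum, PROFILE_THRESHOLD and LARGE_INPUT are names I just made up, and no real VM works exactly like this) of the kind of decision a runtime can make once it has watched the inputs a particular user actually passes in:

// Hypothetical sketch: record the array sizes a routine actually sees and,
// once enough samples are collected, switch to an unrolled variant only if
// large inputs dominate this user's workload.
public class ProfiledSum {
    private static final int PROFILE_THRESHOLD = 1000; // samples before deciding
    private static final int LARGE_INPUT = 10_000;     // "large" cutoff for unrolling

    private long calls = 0;
    private long largeCalls = 0;
    private boolean useUnrolled = false;

    public long sum(int[] data) {
        if (calls < PROFILE_THRESHOLD) {
            // Profiling phase: cheap counters, no recompilation needed.
            calls++;
            if (data.length >= LARGE_INPUT) largeCalls++;
            if (calls == PROFILE_THRESHOLD) {
                // Decide based on this user's actual workload.
                useUnrolled = largeCalls * 2 > calls;
            }
        }
        return useUnrolled ? sumUnrolled(data) : sumSimple(data);
    }

    private long sumSimple(int[] data) {
        long total = 0;
        for (int i = 0; i < data.length; i++) total += data[i];
        return total;
    }

    private long sumUnrolled(int[] data) {
        long total = 0;
        int i = 0;
        for (; i + 4 <= data.length; i += 4) {
            total += data[i] + data[i + 1] + data[i + 2] + data[i + 3];
        }
        for (; i < data.length; i++) total += data[i];
        return total;
    }
}

The point isn't the particular heuristic; it's that the counters cost almost nothing and the final choice reflects this user's workload rather than whatever data the developer happened to profile against.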

I won't continue listing the various run-time optimizations that are *easy* to make with a JIT compiler, but suffice it to say they are there. Those of you insisting that an AOT compiler could do all of this are correct in principle: one might just build a profiling feature into the compiled program along with a function that modifies the code in response. However, we simply don't have good AOT algorithms for this sort of thing, while it is easy to do in JIT code. Moreover, at this level the distinction between JIT code and AOT code starts to disappear, as one might reasonably allege that you are just incorporating the JIT compiler into your binary.
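For what it's worth, here is a rough sketch of what "building the profiler into the compiled program" might look like: the binary ships with a couple of precompiled variants plus a tiny dispatcher that gathers counters and swaps in the specialized variant once a pattern shows up. SelfTuningDispatcher, the 1000-call threshold and the "mostly called with zero" heuristic are all invented for illustration; notice the dispatcher is really just a degenerate JIT living inside the program, which is exactly my point.

import java.util.function.IntUnaryOperator;

// Hypothetical sketch of profiling built into an AOT-compiled binary.
public class SelfTuningDispatcher {
    private final IntUnaryOperator specializedVariant;  // tuned for the common case
    private IntUnaryOperator current;                    // variant future calls dispatch to
    private int callsSeen = 0;
    private int commonCaseSeen = 0;

    public SelfTuningDispatcher(IntUnaryOperator generic, IntUnaryOperator specialized) {
        this.current = generic;              // start with the safe, general-purpose variant
        this.specializedVariant = specialized;
    }

    public int apply(int x) {
        if (callsSeen < 1000) {              // cheap profiling phase
            callsSeen++;
            if (x == 0) commonCaseSeen++;    // pretend x == 0 is the case we can specialize for
            if (callsSeen == 1000 && commonCaseSeen > 900) {
                // "Recompilation" here is just swapping which precompiled variant
                // later calls go to; the specialized variant must still be correct
                // on every input, merely faster on the common one.
                current = specializedVariant;
            }
        }
        return current.applyAsInt(x);
    }
}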

So I think the performance advantages of JIT compilation are clear; the question is just whether they outweigh the overhead of JIT compilation itself. I think the answer is clearly yes if we make good use of instruction caching.
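By instruction caching I just mean keeping the code the JIT generates and reusing it on every later call, which is presumably the sort of thing OS-level support could help with. A hedged sketch, with a Java lambda standing in for the generated native code and CodeCache/compile() being names I invented:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.IntUnaryOperator;

// Hypothetical sketch of amortizing JIT cost with a code cache: the expensive
// "compile" step runs once per method and the result is reused afterward.
public class CodeCache {
    private final Map<String, IntUnaryOperator> cache = new ConcurrentHashMap<>();

    // Stand-in for an expensive JIT compilation of method `name`.
    private IntUnaryOperator compile(String name) {
        System.out.println("compiling " + name);   // happens only once per name
        return x -> x * 2;                          // the "generated code"
    }

    public int invoke(String name, int arg) {
        // First call pays the compilation cost; later calls hit the cache.
        return cache.computeIfAbsent(name, this::compile).applyAsInt(arg);
    }
}

The expensive step runs once per method; every later invocation is just a cache lookup plus a direct call, which is how the compilation overhead gets amortized away.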