Linked by Peter Gerdes on Mon 10th Jan 2005 17:35 UTC
Editorial: As a recent ACM Queue article observes, the evolution of computer languages is toward later and later binding and evaluation. So while one might quibble about the virtues of Java or the CLI, it seems inevitable that more and more software will be written for, or at least compiled to, virtual machines. While this trend has many virtues, not the least of which is compatibility, current implementations have several drawbacks. However, by cleverly incorporating these features into the OS, or at least including support for them, we can overcome these limitations and in some cases even turn them into strengths.
Re: JIT is faster than static compilation
by Anonymous on Tue 11th Jan 2005 09:01 UTC

It's true that you can hand-write every instruction a C compiler produces, but some of these 'optimizations' look like extra, 'useless' instructions to many assembly programmers, who choose to shortcut them. After all, why touch that register, etc., when you don't have to? However, it's that particular ordering of instructions that puts the processor in the right state to accurately predict what to do next.
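One way to see this trade-off in C rather than raw assembly: a hand-coded 'shortcut' can eliminate a branch entirely, at the cost of a few extra-looking instructions. A minimal sketch (function names are mine, purely illustrative; the signed right shift relies on arithmetic-shift behavior, which virtually all compilers provide but the C standard leaves implementation-defined):

```c
#include <assert.h>
#include <stdint.h>

/* Straightforward version: the branch predictor must guess the sign each call. */
int32_t abs_branchy(int32_t x) {
    if (x < 0)
        return -x;
    return x;
}

/* Branchless 'shortcut': more instructions on paper, but no branch to mispredict.
   (Both versions overflow on INT32_MIN, like plain negation.) */
int32_t abs_branchless(int32_t x) {
    int32_t mask = x >> 31;        /* all ones if negative, all zeros otherwise */
    return (x + mask) ^ mask;      /* conditionally negates x without branching */
}
```

Whether the short branchless sequence or the compiler's branchy one wins depends on exactly the prediction behavior described above, which is the point: the 'extra' instructions aren't obviously useless.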

You're right that assembly optimization requires in-depth knowledge; however, the overwhelming majority of people who use assembly to 'optimize' don't have that knowledge and do it "because it must be faster."

And again I will say: the JIT and runtime will always know more about the executing state of the machine. They see not just its configuration but all the data and instructions issued, and they can dynamically recompile select sections of code to adapt. Static compilers and runtimes have to rely on alternate code paths; dynamic ones can 'see' what's going on at the moment and adapt the generated machine instructions, while static compilers must guess what will happen next and select a path. We're not talking about self-tuning of a program's internal variables and whatnot; IIS and SQL Server have been doing that for years.
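The 'alternate code paths' a static build relies on typically look like a one-time dispatch through a function pointer, chosen from whatever the binary can detect at startup; a JIT just emits the right code directly. A minimal C sketch (the feature check is faked with a variable here; a real x86 build would query the CPU, e.g. via GCC's __builtin_cpu_supports):

```c
#include <stddef.h>

/* Portable fallback path: always correct on any machine. */
static long sum_scalar(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* 'Fast' path: unrolled by four as a stand-in for a vectorized variant.
   Same result by contract, different machine code. */
static long sum_fast(const int *a, size_t n) {
    long s = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4)
        s += (long)a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)
        s += a[i];
    return s;
}

static long (*sum_impl)(const int *, size_t) = NULL;

/* One-time dispatch: a static binary picks a path once at first call
   and lives with that guess for the rest of its run. */
long sum(const int *a, size_t n) {
    if (sum_impl == NULL) {
        int cpu_has_simd = 1;  /* assumption: stands in for a real CPU-feature query */
        sum_impl = cpu_has_simd ? sum_fast : sum_scalar;
    }
    return sum_impl(a, n);
}
```

A dynamic runtime doesn't need the table of precompiled variants at all: it can observe the actual hardware and workload, then generate just the one path that fits.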

People like to cite the overhead of dynamic compilers and runtimes as hurting performance. However, this 'overhead' can actually make things faster. A lot of people find that very hard to accept, since on the surface it seemingly goes against all conventional logic. It makes more sense when you look deeper at things like speculative execution and probabilities.
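The probabilities point is easy to demonstrate: the same loop over the same values runs dramatically faster when the data is sorted, purely because the branch becomes predictable and speculation stops being wasted on mispredictions. A minimal C sketch (timing harness omitted; the function name is mine):

```c
#include <stddef.h>

/* The branch 'a[i] >= 128' is taken with the same overall probability
   whether the array is sorted or shuffled, but on sorted data the
   taken/not-taken pattern is predictable, so speculative execution
   almost always wins; on shuffled data it frequently loses, and timing
   this loop over each shows a large gap on real hardware. */
static long sum_big(const int *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] >= 128)    /* the branch whose predictability matters */
            s += a[i];
    return s;
}
```

A runtime that observes the actual distribution of the data can exploit exactly these probabilities, which is the kind of information a static compiler has to guess at.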

There is one very important trade-off with automatic compiler optimization, though: source-level debugging becomes nearly useless after the compiler has had its way, and you need to drop into the lower levels. Compiler and runtime writers dedicate a lot of effort to making sure the optimizer doesn't introduce new bugs of its own.
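For a concrete example of why source-level debugging degrades: at -O2 a compiler will typically fold a loop like the one below down to a constant, so the loop counter never exists at runtime and a breakpoint 'inside' the loop has nothing to stop on. A small sketch (the constant-folding behavior described is typical of GCC and Clang at -O2, not something the language guarantees):

```c
/* At -O0 this runs 100 iterations you can single-step through; at -O2
   most compilers replace the whole function body with 'return 4950;',
   and both 'i' and 's' cease to exist as inspectable variables. */
int sum_to_99(void) {
    int s = 0;
    for (int i = 0; i < 100; i++)
        s += i;
    return s;
}
```

The program still computes the right answer either way; it's only the mapping back to the source lines and variables that the optimizer destroys.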