posted by Peter Gerdes on Mon 10th Jan 2005 17:35 UTC
As a recent ACM Queue article observes, the evolution of computer languages is toward later and later binding and evaluation. So while one might quibble about the relative virtues of Java or the CLI (also known as .NET), it seems inevitable that more and more software will be written for, or at least compiled to, virtual machines. While this trend has many virtues, not the least of which is compatibility, current implementations have several drawbacks. However, by cleverly incorporating these features into the OS, or at least including support for them, we can overcome these limitations and in some cases even turn them into strengths.

So as to head off any confusion or a premature dismissal of my ideas, let me make it clear that by operating system I DO NOT mean the kernel. So when I talk about moving support for virtual machines or JIT compiling into the OS, I don't necessarily mean putting it into the kernel. Some kernel support may be necessary, but how much, and what goes into the kernel, will depend on the implementation and the costs associated with switching between user and kernel space. I would expect most of the ideas I suggest below to occupy a space similar to that of a dynamic linker: a fundamental part of the OS, but not necessarily part of the kernel.

Current Virtual Machine Technology

The first, and most obvious, drawback to the use of VMs (virtual machines) is performance. I don't want to get involved in the religious debate about the speed of Java compared to C++, but the current technology for executing VM code has an inherent performance disadvantage. While simple to program, interpreters are far too slow. Just-in-time (JIT) compiling is much faster than interpretation, but still adds the compilation time to the execution time. If one could execute the same code without the overhead of compilation, one would have a significant performance gain. Even if impressive development effort and the use of run-time profiling have temporarily made JIT compilers as fast as ahead-of-time (AOT) compilers, it is only a matter of time until these advances make their way into AOT compilers and they regain the performance advantage.

There is also another performance issue facing VM code: start-up time. While this may be of limited concern to large applications or to servers which only load code once, it presents a serious difficulty to writing small utility applications in code which compiles to a virtual machine. This large load time is also a big contributor to the end user's impression of VM code as slow. While faster processors and other tricks, like leaving parts of the VM in memory, might make this manageable for most applications, it still doesn't let us use VM code in commonly-used libraries and other system code, denying us much of the tantalizing portability promised by VMs.

This load-time overhead brings us to the second drawback of current VM technology. The issue is well explained in this article in Java Developers Journal. I won't repeat the article, but the short explanation is that, to deal with the large load-time overhead, what should be separate tasks end up inside the same OS process. As a result, programs written for a VM do not gain the same benefits from memory protection and other process/thread isolation features in modern operating systems. This also poses great difficulties for any attempt to write code in a VM which acquires permissions or privileges from an operating system.

Towards a Solution

One of the first things that should occur to someone when they learn how modern JIT compilers work is how remarkably inefficient the process is. The most obvious inefficiency is that every time the program is executed we spend time recompiling the exact same code. Despite what present software offerings suggest, we don't need to make a black-and-white choice between recompiling the program on each execution and binary incompatibility. It is completely possible to cache snippets of compiled code for use in later executions without sacrificing the benefits of using a VM.
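The caching idea above can be sketched in a few lines. This is a minimal illustration, not any real VM's API: the class and method names are invented, and "compilation" is mocked so the cache behavior is observable. The key point is that compiled snippets are keyed by a hash of the VM code, so a snippet is only ever compiled on a cache miss.

```python
import hashlib

class CodeCache:
    """Toy persistent-JIT cache: compiled snippets keyed by VM-code hash."""

    def __init__(self):
        self._store = {}          # hash -> compiled artifact
        self.compile_calls = 0    # how often we actually ran the compiler

    def _compile_snippet(self, vm_code):
        # Stand-in for a real JIT back end: "compiling" here just
        # uppercases the text so the effect is visible.
        self.compile_calls += 1
        return vm_code.upper()

    def get(self, vm_code):
        key = hashlib.sha256(vm_code.encode()).hexdigest()
        if key not in self._store:            # compile only on a miss
            self._store[key] = self._compile_snippet(vm_code)
        return self._store[key]

cache = CodeCache()
first = cache.get("load a; add b; ret")
second = cache.get("load a; add b; ret")      # same snippet: served from cache
```

In a real system the `_store` dictionary would live on disk alongside the program, which is exactly the multi-level binary idea developed below.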

The FX!32 binary translator/emulator from DEC is an amazing example of the power of this method. In this case the 'virtual machine' the code was written for was actually x86 machine language, which was being run on an Alpha chip. Instead of simply emulating the code inefficiently, or forcing the user to wait while code was recompiled, FX!32 would begin by emulating the code and then compile frequently-used code snippets into native binary, which would be stored on disk for later execution. This technique of mixed emulation and stored translated code was so powerful that, after enough program executions, the emulated code would sometimes run faster than the original. A detailed description of the FX!32 system can be found here.
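The FX!32 strategy can be sketched with a simple execution counter. Everything here is a simplifying assumption for illustration: each "block" is a Python callable, the hot threshold is arbitrary, and "translation" just wraps the block instead of emitting Alpha machine code (a real system would also persist the result to disk).

```python
HOT_THRESHOLD = 3   # arbitrary: calls before a block counts as "hot"

class MixedExecutor:
    """Toy FX!32-style executor: emulate until hot, then run 'native'."""

    def __init__(self):
        self.counts = {}        # block name -> emulated-execution count
        self.translated = {}    # block name -> translated version (mocked)
        self.mode_log = []      # records how each call was executed

    def _translate(self, block):
        # Stand-in for binary translation to native code.
        return block

    def run(self, name, block, *args):
        if name in self.translated:            # already translated: fast path
            self.mode_log.append("native")
            return self.translated[name](*args)
        self.mode_log.append("emulated")       # slow path: count and emulate
        self.counts[name] = self.counts.get(name, 0) + 1
        if self.counts[name] >= HOT_THRESHOLD:
            self.translated[name] = self._translate(block)
        return block(*args)

ex = MixedExecutor()
for _ in range(5):
    ex.run("add", lambda a, b: a + b, 2, 3)
# the first three calls are emulated; translation then kicks in
```

The design choice worth noting is that translation cost is only paid for code that has proven itself hot, and (in FX!32's case) paid once across all future runs, because the translation is saved.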

While this might be the most obvious inefficiency in JIT compilation, it is not the only one. Various sorts of code analysis and intermediate representations are often recreated every time the JIT compiler is run. In some cases these are structures already computed, in a much easier fashion, in the prior compilation from a high-level language to the VM code. For instance, when compiling from C# to CIL, a CFG (control flow graph) is computed and then discarded; the JIT compiler must then compute the CFG all over again to convert CIL to machine code. Furthermore, having to analyze a lower-level representation, and having less time to do it in, may very well produce a less detailed representation, e.g. believing the entire result of an operation is needed even though part of it may be unnecessary.
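To make the recomputation concrete, here is the first step of CFG construction: splitting a linear instruction stream into basic blocks by finding "leaders" (the entry point, branch targets, and the instruction after each branch). The bytecode format below is invented for the example, not real CIL; the point is that this analysis, already done by the source-to-VM compiler, must be redone from scratch by the JIT.

```python
def basic_block_leaders(instrs):
    """instrs: list of (opcode, arg) tuples; branch args are instruction indices.
    Returns the sorted indices at which basic blocks begin."""
    leaders = {0}                              # the entry point starts a block
    for i, (op, arg) in enumerate(instrs):
        if op in ("jmp", "br_if"):
            leaders.add(arg)                   # a branch target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)             # so does the fall-through successor
    return sorted(leaders)

prog = [
    ("load", "x"),      # index 0
    ("br_if", 3),       # index 1: conditional branch to index 3
    ("load", "y"),      # index 2
    ("ret", None),      # index 3
]
```

Under the multi-level binary scheme proposed below, the block boundaries (and the full CFG built from them) could simply be stored as annotations rather than recomputed at every launch.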

Multi-level Binaries

There is an elegant solution to both of these issues. The basic idea is to avoid discarding information in the course of the compilation process. Instead of having a binary which contains only machine code, or only VM code, or a text file with just source code, we combine them together. When compiling from a high-level language we annotate the source with a CFG (or perhaps an SSA tree) and the manner in which the high-level language corresponds to the VM code. The same idea applies to our JIT compilation from VM code to machine code: we annotate the VM code with the snippet of machine code the JIT compiler generates, so the next time the binary is run we need not recompile that snippet. Of course, corporate software developers may not wish to include their full source code, but they can still benefit from the additional information contained in the multi-level binary and from the performance gains of reducing redundant work, while developers and users gain the convenience of treating source files, at any level, as if they were executables.
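A multi-level binary might be organized as a simple container of sections, one per level, with the native section tagged by its target architecture and tied to the VM level by a hash. The section names and layout below are invented for illustration; the article proposes the idea, not a concrete format.

```python
import hashlib

def make_multilevel_binary(source, vm_code, machine_code, target="x86-64"):
    """Bundle all levels of a program into one container (dict for illustration)."""
    return {
        "source": source,        # optional: vendors may omit this section
        "vm": vm_code,
        "native": {
            "target": target,    # lets the loader detect a foreign machine
            "code": machine_code,
            "vm_hash": hashlib.sha256(vm_code.encode()).hexdigest(),
        },
    }

def load_native(binary, host_target):
    """Return the cached native code if valid for this host, else None
    (meaning: fall back to JIT compilation from the VM level)."""
    native = binary.get("native")
    if not native or native["target"] != host_target:
        return None              # different machine: native section is void
    if native["vm_hash"] != hashlib.sha256(binary["vm"].encode()).hexdigest():
        return None              # VM code changed: cached native code is stale
    return native["code"]
```

Note how the fallback path captures the portability claim: when the native section is missing, stale, or built for another architecture, nothing is lost, because the VM level (and possibly the source level) is still present to recompile from.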

The performance benefits of this approach are manifold. By invoking the JIT compiler only when code is not already cached, we remove the fundamental advantage natively compiled code has over VM code while retaining binary compatibility. When a file is transferred to a different machine, the JIT compiler simply voids the machine-specific part of the multi-level binary. Because the higher-level code is retained, future improvements and optimizations can speed up previously compiled programs. Since the code is not fully compiled, sensitive operations can still be passed to external emulation functions, providing the sandbox-style features VM code often possesses. Furthermore, we can run programs with dynamic function creation (like nearly every Lisp program) without the overhead of compiling an interpreter into the program. Finally, since the multi-level binary retains more information about program structure, it is easier to make use of profiling information, which we could also save in the multi-level binary.

