Earlier today, OSNews ran a story on a presentation held by Microsoft’s Eric Traut, the man responsible for the 200 or so kernel and virtualisation engineers working at the company. Eric Traut is also the man who wrote the binary translation engine for the earlier PowerPC versions of VirtualPC (interestingly, this engine is now used to run Xbox 1 [x86] games on the Xbox 360 [PowerPC]) – in other words, he knows what he is talking about when it comes to kernel engineering and virtualisation. His presentation was a very interesting thing to watch, and it offered a little bit more insight into Windows 7, the codename for the successor to Windows Vista, planned for 2010.

A few months ago, I wrote a story called “Windows 7: Preventing Another Vista-esque Development Process”, in which I explained how I would like to see Windows 7 come to fruition: use the proven NT kernel as the base, discard the Vista userland and build a completely new one (reusing code where it makes sense, but disregarding backwards compatibility), and solve the backwards compatibility problem by incorporating virtualisation technology. Additionally, maintain a ‘legacy’ version of Windows based on Vista (or, preferably, 2003) for businesses and enterprises that rely on older applications.
Traut’s talk gives some interesting insights into the possibility of this actually happening. There is a lot of talk on virtualisation, at four different levels:
- Server Virtualisation: Virtual Server 2005/Windows Server 2008
- Presentation Virtualisation: Terminal Services (RDP)
- Desktop Virtualisation: Virtual PC
- Application Virtualisation: SoftGrid Application Virtualization – this allows applications to run independently, so they do not conflict with each other. Traut: “You might think, well, isn’t that the job of the operating system? Yeah, it is. That’s an example of where we probably didn’t do as good a job as we should have with the operating system to begin with, so now we have to put in this ‘after the thought’ solution in order to virtualise the applications.”
There was a lot of talk on the new hypervisor in Windows Server 2008. It is a small kernel (about 75,000 lines of code) that uses software “partitions” into which the guest operating systems go. The hypervisor is a very thin layer of software: it has no built-in driver model (it relies on ‘ordinary’ drivers, which run inside a partition – access to, for instance, a NIC goes straight through the hypervisor, which is not even aware of the NIC), and, most importantly, it is completely OS-agnostic. It has a well-defined, published interface, and Microsoft will allow others to add support for their own operating systems as guests. In other words, it is not tied to Windows.
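To make that division of labour concrete, here is a toy Python sketch of the architecture Traut describes. To be clear: the class and method names below are my own invention for illustration, not Microsoft’s actual hypervisor interface. The point it demonstrates is that the hypervisor only creates partitions and passes I/O requests through, while device drivers live entirely inside a partition.

```python
# Toy model of the hypervisor/partition split described in the talk.
# All names here are illustrative, NOT the real Windows Server 2008 API.

class Partition:
    """A software container in which a guest OS (and its drivers) runs."""
    def __init__(self, name, role="child"):
        self.name = name
        self.role = role      # a "parent" partition hosts the virtualization stack
        self.devices = {}     # drivers live here, not in the hypervisor

class Hypervisor:
    """Thin layer: it schedules partitions and routes requests,
    but knows nothing about concrete devices like a NIC."""
    def __init__(self):
        self.partitions = []

    def create_partition(self, name, role="child"):
        partition = Partition(name, role)
        self.partitions.append(partition)
        return partition

    def route_io(self, partition, device, request):
        # The hypervisor merely passes the request through; the driver
        # registered inside the partition actually handles it.
        handler = partition.devices[device]
        return handler(request)

hv = Hypervisor()
parent = hv.create_partition("root", role="parent")
parent.devices["nic"] = lambda req: f"sent {req}"   # NIC driver lives in the partition
guest = hv.create_partition("guest-os")

print(hv.route_io(parent, "nic", "packet"))  # prints: sent packet
```

The design choice this mimics is why the hypervisor can stay so small: because it carries no driver model of its own, adding support for new hardware never means touching the hypervisor itself.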
Traut also introduced the ‘Virtualization Stack’ – the functionality that has been stripped from the micro-hypervisor to allow it to be so small in the first place. This stack runs within a parent partition, and this parent partition manages the other ‘child’ partitions (you can have more than one virtualization stack). Interestingly – especially for the possibility of future Windows versions pushing backwards compatibility into a VM – this virtualization stack can be expanded with things like legacy device emulation, so the guest operating system is presented with virtualised instances of the legacy devices it expects to be there.
Interestingly, Traut made several direct references to application compatibility being one of the primary uses of virtual machines today. He gave the example of enterprise customers needing virtual machine technology to run older applications when they upgrade to the latest Windows release (“We always like them to upgrade to our latest operating system.”). Additionally, he acknowledged the frustrations of breaking older applications when moving to a new version of an operating system, and said that virtual machines can be used to overcome this problem. He did state, though, that this really is kind of a “sledgehammer approach” to solving this problem.
However, it is not difficult to imagine that, a few years from now, this technology will have developed into something that could provide a robust and transparent backwards compatibility path for desktop users – in fact, this path could be a lot more robust and trustworthy (as in, higher backwards compatibility) than the ‘built-in’ backwards compatibility of today. Additionally, it has a security advantage in that the virtual machines (as well as their applications) can be completely isolated from one another as well as from the host operating system.
The second important change I would like to see in Windows 7 is a complete, ground-up overhaul of the userland on top of the proven Windows NT kernel – reusing XP/2003/Vista code where it makes sense, and disregarding backwards compatibility, which would instead be catered for by virtualisation technology. Let’s just say that step 1 of this plan is already complete: strip the NT kernel of basically everything, bringing it back to a bare-metal kernel on which a new userland can be built (again, reusing code where it makes sense).
The result was shown by Traut, running in VirtualPC 2007: Windows 7 MinWin. It is 25 MB on disk and uses 40 MB of RAM (still not as small as Traut wants it to be). Since MinWin lacks a graphical subsystem, it sported a slick ASCII-art bootscreen. It has a very small web server running inside of it, which can serve a task list, a file list (MinWin is made up of about 100 files, compared to standard Windows having about 5000), and memory statistics in your web browser. Traut: “So that’s kind of proof that there is actually a pretty nice little core inside of Windows.”
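To give a feel for just how little code such a status server needs, here is a rough, generic sketch in Python – this is emphatically not MinWin’s actual code (MinWin is native Windows code, and the `/files` endpoint name is my own), just an illustration that serving a file list over HTTP fits in a few dozen lines of standard library:

```python
# Generic sketch of a MinWin-style status server: a tiny HTTP server
# that reports a file list. Purely illustrative, not Microsoft's code.
import http.server
import json
import os
import threading
import urllib.request

FILES = sorted(os.listdir("."))  # snapshot of the current directory

class StatusHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/files":
            body = json.dumps(FILES).encode()
        else:
            body = json.dumps({"error": "unknown endpoint"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve from a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

reply = urllib.request.urlopen(f"http://127.0.0.1:{port}/files").read()
print(json.loads(reply) == FILES)  # prints: True
server.shutdown()
```

The analogy is loose, of course, but it underlines Traut’s point: once the core is small enough, even its diagnostics can be served by something trivially simple.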
MinWin is actually not the same as Windows Server Core. MinWin is infinitely smaller than Server Core (which is about 1.5 GB on disk). MinWin is also infinitely more useless than Server Core – the latter is a full-featured server operating system, whereas MinWin is a stripped Windows NT kernel with minimal userland features.
Traut stressed more than once that MinWin is not going to be ‘productised’ in itself – you should see it as a base on which the various editions of Windows are going to be built: server, media center, desktop, notebook, PDA, phone, you name it. Of course, only time will tell whether Microsoft will ‘simply’ dump an updated Vista userland on top of MinWin and call it Windows 7, or whether it will actually build a new userland from the ground up on top of MinWin.
All in all, this presentation by one of the most important kernel engineers at Microsoft has taught us that, at the very least, Microsoft is considering the virtualisation approach for Windows 7 – it only makes sense, of course, but having some proof is always a good thing. Additionally, the presentation also showed us that Microsoft is in fact working with a stripped-down, bare-metal version of the NT kernel, to be used as a base for future Windows releases.
I can now ask the same question I ended my previous Windows 7 article with: “Is it likely a similar course of action will pan out over the following years?” The answer is still “no”, but the possibility has inched closer.
And we can only rejoice about that.