The Xen team announced the release of Xen 2.0, the open-source Virtual Machine Monitor. Xen enables you to run multiple operating system images concurrently on the same hardware, securely partitioning the resources of the machine between them. Xen uses a technique called ‘para-virtualization’ to achieve very low performance overhead – typically just a few percent relative to native. This new release provides kernel support for Linux 2.4.27/2.6.9 and NetBSD, with FreeBSD and Plan 9 to follow in the next few weeks.
Xen 2.0 runs on almost the entire set of modern x86 hardware supported by Linux, and is easy to ‘drop in’ to an existing Linux installation. The new release offers much more flexibility in how guest OS virtual I/O devices are configured. For example, you can configure arbitrary firewalling, bridging and routing of guest virtual network interfaces, and use copy-on-write LVM volumes or loopback files for storing guest OS disk images. Another new feature is ‘live migration’, which allows running OS images to be moved between nodes in a cluster without having to stop them. Visit the Xen homepage for downloads and documentation.
I once read about something like this that gave large server farms the ability to have several instances of an OS running, maybe for dependability: if one crashes, another one is right there, already booted and running, so there is no loss of time due to an OS/server crash…
I spent most of the day playing with it at work today. Way cool stuff!
Seems like it doesn’t work if your root partition is Reiser4. Sucks.
What distinguishes Xen from VirtualPC/VMWare is that it needs a custom OS. Which means that if an OS is not written with Xen in mind, then it won’t run on Xen. VirtualPC/VMWare emulate the full x86 processor and the chipset.
Xen modifies all the non-virtualizable (unsafe) instructions of the OS to make it work on Xen. This means that every new kernel will need to be ported to work on Xen, and if an OS is not ported, it won’t work.
Their approach is good for performance, but utility-wise I would prefer VirtualPC or VMware over Xen, even though Xen is the free one.
> VirtualPC/VMWare emulate the full x86 processor and the chipset.
Sorry, you are wrong there. VMware just passes all the instructions of the guest OS through to the resources in its own process, letting an internal scheduler do the arbitration work.
This is why it’s impossible to run non-x86 binaries through VMware, and that’s why VMware is so ‘fast’.
Xen shares the resources of the system at a lower level, and thus depends less on the host OS.
What do you mean by utility-wise?
There are needs for which Xen just fits better. Think of virtual servers, for example…
QEMU becomes more usable from version to version, runs in user space, and can emulate many more architectures, like PPC or ARM.
Give it a chance.
I agree, qemu is quickly becoming one of the best options.
It has the best performance/feature trade off in my opinion.
Hopefully there will be a new release soon though, as I am getting tired of constantly using cvs. :o/
The Reiser4 rootfs should work just fine, since it is arch-independent. If it’s breaking, it could be a latent bug in Reiser4, a weird configuration issue, a conflict with the Xen patch (seems unlikely) or a latent bug in Xen (seems unlikely, since ext2/3, ReiserFS and XFS are believed to work fine).
Xen does require a custom kernel (although no changes to userspace are required, so standard distros should work fine). The intention is that the arch/xen tree is integrated into major releases of kernels / distributions so that this will Just Work.
Linux 2.4, 2.6 and NetBSD are already available. Arguably, many people will want Linux in their virtual servers anyhow, and it’s already fully supported. It’s unfortunate that Windows isn’t available, however.
There are always going to be places where VMWare is preferable (running Windows, developing native hardware OSs…), it’s just another point in the price / compatibility / performance space. For some people it’s a better fit but not everyone.
Does anyone have any thoughts on QEMU vs Xen vs UML? My only real experience lies with QEMU, but I’m interested in whether anyone would dare to use any of these projects on a production system.
I know that these projects have a different way of providing virtualization, but I’m very interested in the differences in performance and features when running Linux with them.
Have you looked at the benchmarks provided by Xen?
Disclaimer: I haven’t used Qemu much or UML at all. I’ll also declare a conflict of interest – I have worked on Xen 😉 I’ll try and be balanced but you should take care to evaluate the merits of all approaches before choosing one.
* Qemu is great for virtualising a whole machine, for use as a VMWare replacement. It’s supposed to have much better performance than Bochs but IIRC, the benchmarks on their site rate it as 4x slower than native execution. This performance hit is taken by *all* code running in the virtual machine. It can run lots of x86 OSs, including Windows, so it’s really good for compatibility. I think there’s some support for suspend / resume of running virtual machines, although I don’t know how mature this is. I’ve found QEmu to be rather impressive in my experience, although some quirky OSs have trouble running in it. It also supports multiple architectures (as hosts and guests) and if you compile a custom kernel to run in it, you can improve the performance still further. Fabrice has said his next project will be to improve the virtualisation to use VMWare-style techniques to achieve even higher performance.
* UML is a port of the Linux kernel to run as a Linux application. UML is officially supported in the mainstream Linux kernel tree. For CPU intensive tasks, it’s basically as fast as native execution. Kernel-intensive code (lots of syscalls / IO) takes quite a large performance hit (about the same as VMWare). I’d expect it to perform better than QEmu for most (all?) workloads. It is used in production virtual hosting environments and has been around for a while. UML has support for SMP virtual machines, I think.
* Xen is a low-level virtual machine monitor, which runs ports of operating systems to its “virtualisation friendly” architecture. Ports are relatively easy to do (compared to other OS ports ;-). Xen has a low overhead (negligible for CPU intensive workloads and rather lower for kernel-intensive workloads than UML or VMWare). Xen has some other nifty features, like live migration. Xen is also in production use in virtual hosting environments and the development team intend to get it into the mainline kernel tree. NetBSD also runs on Xen, and FreeBSD and Plan 9 ports are on the way. Xen supports SMP hosts but not (yet) SMP guests (it’s in the pipeline…).
For more information about the overheads, see the link provided by Ricky.
Oh, forgot to mention 😉
Vservers may also be of interest for virtual hosting and it’s also being used in production. It’s basically like chroot / jail but more flexible. All the virtual machines really share the same kernel but they can’t see each other’s processes and they can safely be given root accounts that can’t break out of their private vserver. The overhead is essentially zero so performance is excellent. The virtual machines are somewhat less flexible than the other solutions: they can’t run different kernels / OSs, there’s no suspend / resume, there’s no live migration and virtual machines are limited to a single IP address. There’s also the prospect that a kernel root exploit will compromise the entire host, not just the virtual machine (which is not the case for Xen and QEmu, not sure about UML).
You can do something similar on BSD using jail, although I’m not sure whether people use that for virtual hosting.
I’ve been very impressed by Vservers in terms of ease of use; its performance is great, but it lacks some features of the other alternatives. Again, it depends on what suits your solution best 😉
No, they do have to convert unsafe x86 instructions; otherwise a running guest OS would be able to affect the host OS. EFLAGS is one register involved in unsafe instructions, because some of its bits can be modified in user mode and others only in kernel mode. For complete isolation, it is at minimum required to do shadow page table management and to emulate unsafe x86 instructions.
Even Xen does these two things, but it removes the overhead by modifying the OS code. I am not sure which parts it modifies, but there are details about it in their architecture document.
Yes, Xen’s speed comes from not having to do the scanning / rewriting that VMWare has to do. Xen modifies guest OS memory management code so that it can cope with running in discontiguous physical memory and so that Xen is able to validate page table updates before they are installed, ensuring proper isolation. The source code changes mean that Xen does not need to use shadow page tables, except during live migration.
The co-operation from the guest OS minimises the overhead of maintaining an isolated virtual machine. If the guest OS tries to circumvent the protection, the operation it attempted will fail (and the guest may be killed).
Since there seems to be some emulation/virtualization expertise in this discussion, I have a question. There are certain x86 instructions that are not safe for virtualization, such that architectures like VMware or Xen need to emulate or avoid them. Xen seems to require some modification to guest OSes to help avoid this emulation. For architectures that do avoid this emulation (VServers, UML, Xen?) — is there a security risk? Although the guest OS is modified to avoid these non-privileged-yet-unsafe instructions, could user code be written, perhaps directly in ASM, to exploit these instructions into a security hole? I am not concerned with standard “buffer overflows”, etc., but rather: could one virtualized instance make some global change that would confuse another virtualized instance into crashing, a la denial of service?
I use Virtual PC a lot and I tried doing these things. I tried to bypass its page table management to somehow get access to the physical machine’s RAM and all, but it seems all the holes are already plugged, even when I tried to do all this from kernel mode.
I am not sure how Xen does it; maybe you can write a kernel-mode driver to do some damage, but I have a *feeling* that it won’t be possible. They must have done something to prevent a guest OS from taking down the host. Isolation and speed are the two pillars of virtualization. Without them, virtualization is not useful.
I’ve been working on highly customized UML software for my work, where we use them as virtual routers (similar to Alcatel and Foundry, but much, much cheaper). We looked into Xen about a year ago, but it lacked the needed momentum. I’ll be spending this next week looking at Xen at work, because the live migration is sexy. Being that we are a carrier, we must have failover and be able to route duplicate IP space between contexts, which is why we could never look into vserver or freevps. Also, we (mosaix) would like to start an open-source virtual router, so if anyone would like to start a team to do this, there is a large piece of market to take away from the Ciscos of the world.
Looks like we can look forward to AMD64 support in early 2005…
> There are certain x86 instructions that are not safe for
The problem is that certain privileged instructions will fail silently when the processor is not running in supervisor mode. If you’re running virtual machines, you don’t want to let them *really* run code in supervisor mode because then they’d have total control of the machine. The trouble is that when you run them at lower privilege, this quirk of x86 could result in bits of code silently failing 🙁
The instructions don’t present a security problem, they just make it a pain to run an x86 OS in a virtual machine! VMWare has to scan for these instructions and rewrite them to do something saner… Xen modifies the OS not to use these instructions but to call into Xen explicitly – no tricky scanning / rewriting is involved, which benefits performance and code simplicity.
An architecture designed to support virtualisation would “trap” attempts to execute these instructions and jump into the virtual machine monitor so that it can emulate them.
So, to answer your question: it shouldn’t be possible to do anything bad using these instructions, they just make life harder for VMM developers 😉
Your feeling is right! Virtual machine kernels running on Xen are not trusted (except for the administration VM that runs the admin software, device drivers, etc.). Users can run whatever kernel they want and mess with it as much as they want (loading weird device drivers, playing with kernel memory) and they still can’t break out of their virtual machine…
x86 has four privilege levels called Ring 0 (most priv), Ring 1, 2 and 3 (user mode code). Xen runs itself in Ring 0 (total control over the machine), guest OS kernels in Ring 1. Ring 1 is set up such that guest OSs can only do certain privileged operations by asking Xen to perform them. If guests try to break out of their sandbox, the operation will just fail.
@Mark: I think this thread has benefited a lot from your answers, thanks!
I’ve now read most of the documentation (it is well done, not too much, and clearly structured), and even though I have no experience with LPAR (AS/400-iSeries-i5), I would think that it was an inspiration for this project, or am I wrong here?
Yes, IBM’s virtualisation was an inspiration to the Xen team, I believe. When it comes to virtualisation, IBM is the big granddaddy, since they were basically the first people to do it in the real world (I think it was on System/370).
VMWare’s ESX server is also similar in structure to Xen, since it runs at the lowest layer in the system with all the OSs running on top. This was also cited in the Xen white papers.
The Denali VMM ( http://denali.cs.washington.edu/ ) from University of Washington would also have been an inspiration. It predates Xen as a paravirtualising VMM but at the time of Xen’s first release did not have the necessary features to support full-blown operating systems (it was solving a different problem and so didn’t need them, although they have since been added).
Btw, if anyone’s interested in the technicalities behind Xen, it’s worth checking out the research white papers on the Xen website. They give a more detailed explanation of the hows and the whys of Xen’s architecture.