The GNU libc maintainer writes: “People are starting to realize how broken the Xen model is with its privileged Dom0 domain. But the actions they want to take are simply ridiculous: they want to add the drivers back into the hypervisor. There are many technical reasons why this is a terrible idea. You’d have to add all the PCI handling and lots of other lowlevel code which is now maintained as part of the Linux kernel. But this is of course also the direction of VMWare who loudly proclaim that in the future we won’t have OS as they exist today.”
I pretty much figured out that Xen was not the be-all and end-all of virtualization from how the more technically minded companies reacted to it.
1) The FreeBSD community pretty much gave up on getting Xen running fully within FreeBSD and instead focused on making jails better and putting a KVM infrastructure (based on Linux’s implementation) into FreeBSD.
2) Linus refused to go the Xen way and instead pushed for the KVM implementation within the kernel.
3) Red Hat integrated Xen into RHEL but made all of its management tools independent of Xen, so it can easily switch to better virtualization technologies when they come along, giving administrators an easy migration route instead of forcing them to throw away everything they have learned.
Of course, certain distributions (I won’t name them), which care more about hype than about technical excellence, did push Xen prematurely. But when a technology is fundamentally flawed, there is usually a lot of resistance. I consider Xen’s failure to gain major usage a very good thing.
Oh, yeah, one more thing. I find it ironic that VMware is proclaiming the operating system dead. Wait a few more years, until every operating system has standards-based virtualization capabilities, and then they will see the irony too.
“””
Of course, certain distributions (I won’t name them), which care more about hype than about technical excellence, did push Xen prematurely. But when a technology is fundamentally flawed, there is usually a lot of resistance. I consider Xen’s failure to gain major usage a very good thing.
“””
I’ve no such qualms about naming the responsible party: Novell.
And, of course, the so-called “community” distro which it controls: OpenSuse.
Let’s try not to follow Novell over any more cliffs.
NetBSD was also hyping Xen on its main page for a while, but I see it’s not mentioned there now.
Once every OS has virtualization built in, what will VMware sell?
Management tools.
Ponies. Everybody loves ponies.
Datacenter management tools. Even VMware admits that virtualization itself is not worth paying for; that’s why Player and Server are free. What’s worth paying for is SAN VMFS, VMotion, consolidated backup, Unity, etc.
Once every OS has virtualization built in, what will VMware sell?
VMware can rename themselves VapourWare to be more relevant.
VMware in all its glory, but I hope there is no need for them soon, and none for Xen either.
Go KVM! Go KVM!
This is a question: What about Solaris Zones?
Are Solaris Zones a KVM-like implementation?
No. Solaris Zones are a cool thing, but they’re not a full virtualization solution like Xen/KVM. Sun is pushing Xen as Solaris’ full-virtualization solution.
True, I’m hoping it will be merged soon – I’m looking forward to being able to run Windows on Xen to allow synchronisation with my MiniDisc player.
I think at the end of the day it depends on what you’re actually going to use virtualisation for. Is it for legacy compatibility? Server consolidation? If it is for legacy compatibility, one would argue it is only used temporarily for a transition, but I question why, for example, one would have multiple server operating systems on the back end. It sounds more to me like an IBM services wet dream of complexity, when standardising on a single server platform would yield lower costs in the long run.
I’d love to simply have a Windows USB driver compatibility layer on top of Linux to accompany Wine. That would be all it takes to make me happy.
Zones is OS virtualization, like OpenVZ, Linux-VServer, BSD Jails, Microsoft SoftGrid, etc. Instead of virtualizing the hardware so that multiple operating systems think they have a dedicated system, Zones virtualizes the OS (Solaris) so that multiple application containers think they have a dedicated OS.
Compared to hardware virtualization, OS virtualization is a solution to a different problem. It focuses on isolating and managing unrelated workloads that can share a common OS. It’s lightweight and relatively simple to deploy.
However, it cannot be used where different OS flavors or releases are needed for each workload. Although it can be made very secure, most security researchers would agree that OS virtualization offers a greater opportunity to find attack vectors that escape a guest.
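To make the distinction concrete, here is a toy C sketch of the “one shared kernel, many isolated userlands” idea. It is not Zones’ actual interface (zonecfg/zoneadm drive that on Solaris); it just uses plain fork() and chroot() to show a process confined to its own filesystem tree while sharing the host kernel, and the path /srv/guestroot is made up for the example.

    /* Toy illustration of OS virtualization: same kernel, isolated userland.
     * Real Zones/OpenVZ/jails also isolate process IDs, networking, users,
     * and resource limits; chroot alone is NOT a security boundary.
     * Needs root, and /srv/guestroot is a made-up path holding an
     * extracted userland. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: confine it to its own filesystem tree, then start
             * the "container" shell. Same kernel, different userland view. */
            if (chroot("/srv/guestroot") != 0 || chdir("/") != 0) {
                perror("chroot");
                exit(1);
            }
            execl("/bin/sh", "sh", (char *)NULL);
            exit(1);
        }
        waitpid(pid, NULL, 0);   /* Parent: the "host" keeps running. */
        return 0;
    }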
The issue that Ulrich raises is that the trend in hardware virtualization is for the hypervisor to be essentially a complete operating system, drivers and all. Xen initially tried to get around this by making one of its guests a privileged Dom0 that manages the hardware. However, the Linux-based Dom0 became less and less like vanilla Linux over time, posing serious maintenance and quality challenges.
Predictably, the commercial overlords of the Xen project dislike their out-of-tree Linux development model. They want to control their own destiny, and that means moving away from the Dom0 and greatly expanding the role of the hypervisor. This is the VMware approach, and XenSource is compelled to follow suit. Goodbye Xen. You’ll never match up to VMware by flying solo.
Of course, KVM follows this model as well, with the hypervisor having direct control of the hardware. The difference is that it uses a vanilla Linux kernel to do so, taking advantage of 16 years of development, thousands of developers, and millions of users. The implications for compatibility, quality, and maintainability are enormous.
Without the KVM kernel module, Linux manages the hardware on behalf of userspace processes. With the module loaded, Linux manages the hardware on behalf of userspace virtual machines. It’s a relatively minor change, so KVM will play just as nicely on your VT/SVM-enabled system as regular Linux.
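For the curious, here is a minimal sketch of the /dev/kvm interface that a userspace hypervisor (QEMU, in KVM’s case) talks to. It only walks the ioctl handshake: create a VM, back it with memory, create a vCPU, and map the shared run area. Error handling, loading guest code, register setup, and the KVM_RUN exit loop are omitted, so treat it as an outline of the API flow rather than a working VM.

    /* Minimal sketch of the KVM userspace API. */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR);
        if (kvm < 0 || ioctl(kvm, KVM_GET_API_VERSION, 0) != 12)
            return 1;                      /* no KVM, or incompatible API */

        int vm = ioctl(kvm, KVM_CREATE_VM, 0);

        /* Back guest physical address 0 with 64 KB of anonymous memory. */
        void *mem = mmap(NULL, 0x10000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        struct kvm_userspace_memory_region region = {
            .slot            = 0,
            .guest_phys_addr = 0,
            .memory_size     = 0x10000,
            .userspace_addr  = (unsigned long)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

        /* The kernel shares the vCPU's run state via an mmap'ed struct. */
        long run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        /* A real hypervisor would now load guest code, set registers, and
         * loop on KVM_RUN, handling run->exit_reason after each exit. */
        printf("vcpu fd %d, run area %p\n", vcpu, (void *)run);
        return 0;
    }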
I agree with Ulrich that Xen is a dead end and that VMware is making loony statements in the press. KVM isn’t ready for prime time quite yet, but it will become the leading FOSS hardware virtualization competitor up against VMware and Microsoft. In a space characterized by demand for flexibility and concern over licensing, I expect KVM to hold its own quite nicely.
Let’s not forget Qumranet, the stealthy startup driving KVM development. Besides Avi Kivity, they also employ Moshe Bar, the developer behind the late OpenMosix project. They’re located right down the road from Intel’s Israel campus, so one has to wonder whether Qumranet is a prime acquisition candidate for a corporation that loves Linux and virtualization.
FreeBSD jails allow you to run different versions of FreeBSD inside each jail. So long as the host OS is newer than the jailed OS, everything will work fine.
You can run FreeBSD 7.x, 6.x, 5.x, 4.x inside a jail on a FreeBSD 7.x host.
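For reference, the programmatic side of that is tiny. Below is a hedged sketch of the classic single-IPv4 jail(2) call as it looked in the FreeBSD 4.x–7.0 era (later releases grew a richer jail interface); the path, hostname, and address are made up, and the byte-order detail is from memory, so check that era’s jail(2) man page before relying on it.

    /* Sketch of the old single-IPv4 jail(2) interface (FreeBSD 4.x-7.0 era).
     * /jails/fbsd6 is a made-up directory holding an extracted 6.x userland. */
    #include <sys/param.h>
    #include <sys/jail.h>
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        struct jail j;
        j.version   = 0;                               /* old API version   */
        j.path      = "/jails/fbsd6";                  /* jail's root dir   */
        j.hostname  = "fbsd6.example.org";
        j.ip_number = ntohl(inet_addr("192.0.2.10"));  /* host byte order,
                                                          if memory serves  */
        if (jail(&j) != 0) {                           /* needs root        */
            perror("jail");
            return 1;
        }
        chdir("/");
        execl("/bin/sh", "sh", (char *)NULL);          /* now inside the jail */
        return 1;
    }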
As I understand it, OpenVZ is more like Zones than Xen, but I can’t figure out whether it is able to use the new KVM stuff or not. Does anyone know? I think the OpenVZ people stick quite close to the kernel maintainers’ way of doing things, so I’d hope for some hardware-accelerated goodness in OpenVZ soon. If that happens, I don’t think I’ll ever look at Xen; I much prefer the “OS container” approach.
OpenVZ doesn’t need any hardware acceleration because it isn’t emulating anything; it’s about as fast as native so it’s not going to get faster.
OpenVZ is a *GIANT* kernel patch. They also do some very strange things that prevent you from doing things like using NFS from the guests or changing network settings (what you would use ethtool for) on the guest.
Even though it is just about as close to native speed as it gets, OpenVZ isn’t that impressive. Quite the contrary, actually, because it is such an invasive kernel patch. Take a look at it yourself:
http://download.openvz.org/kernel/branches/2.6.20/2.6.20-ovz007.1/p…
Maybe the OpenVZ patch is gross, but you have to admit the functionality is useful. Now people will spend a year or two arguing and refactoring the code so it can go in mainline Linux and eventually everybody will be happy.
As Wes says, OpenVZ will be merged into the kernel slowly but surely. It won’t be merged in its current form, and OpenVZ/SWSoft appears to understand this fully. They are taking a non-confrontational approach with the goal of bringing OS virtualization to the kernel rather than OpenVZ specifically.
For example, process containers are close to being merged, which will be the memory management component of the OS virtualization solution. CFS scheduler groups will be the basis for scheduling VPS tasks. About 25% of the out-of-tree patchset is stability work and cleanups, much of which is being considered for merging.
More than anything, the current OpenVZ implementation is a template for how to integrate OS virtualization into the Linux kernel. In typical Linux fashion, many of the implementation details will be dramatically altered so that the various components will be more generally useful outside of OS virtualization. This will result in a much higher quality product in the end, although it will take time.
What is the point of virtualizing a complete machine? Testing, development, and maybe running a legacy OS; but in the last case it’s simpler to upgrade, or to keep the old hardware doing the job.
Running those VMs in production does not seem useful to me. Using something like Zones has a lot less overhead, and the VMs don’t scale over multiple machines (am I wrong here?). When running everything in a VM you have both a host and a guest that can crash, need updates, etc., which seems like, well, more work from an admin’s point of view.
Running more than one operating environment.
Zones/Jails don’t let you run Solaris alongside Linux alongside NetBSD
(though binary compatibility in the BSDs does get you pretty far—you can set up a pretty much dedicated “Linux jail” on FreeBSD without too much difficulty, for example)
With Branded Zones you can run Linux on Solaris x86:
http://opensolaris.org/os/community/brandz/
This is not limited to OpenSolaris; Solaris 10 8/07 will have the same capabilities.
VMware has a technology called VMotion, and Xen is emulating it (soon to be released) with something called XenMotion. It is basically policy-based live migration. Here is a use case:
phys_server1 has 4 VMs with 1 GB of RAM each. If phys_server1 is running low on memory, VMware ESX can live-migrate one or more of those VMs to another physical server with lower memory utilization. This prevents swapping and improves server performance with zero impact on users.
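This is obviously not VMware’s actual DRS/VMotion logic, but the policy itself is simple enough to sketch. The thresholds, host list, and the decision to print rather than actually migrate are all invented for illustration:

    /* Toy sketch of the policy described above: if a host is short on
     * memory, pick a less-loaded host to live-migrate a VM to.
     * Thresholds and types are made up. */
    #include <stddef.h>
    #include <stdio.h>

    struct host { const char *name; unsigned mem_total_mb, mem_used_mb; };

    static double util(const struct host *h)
    {
        return (double)h->mem_used_mb / h->mem_total_mb;
    }

    /* Least-loaded host other than src that is comfortably below a
     * high-water mark, or NULL if there is none. */
    static struct host *pick_target(struct host *hosts, size_t n,
                                    struct host *src)
    {
        struct host *best = NULL;
        for (size_t i = 0; i < n; i++) {
            if (&hosts[i] == src || util(&hosts[i]) > 0.60)
                continue;
            if (best == NULL || util(&hosts[i]) < util(best))
                best = &hosts[i];
        }
        return best;
    }

    int main(void)
    {
        struct host hosts[] = {
            { "phys_server1", 4096, 3900 },  /* nearly full: 4 x 1 GB VMs */
            { "phys_server2", 8192, 2048 },
            { "phys_server3", 8192, 6000 },
        };
        struct host *src = &hosts[0];

        if (util(src) > 0.90) {              /* policy trigger            */
            struct host *dst = pick_target(hosts, 3, src);
            if (dst)
                printf("live-migrate one VM from %s to %s\n",
                       src->name, dst->name);
        }
        return 0;
    }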
If you have hardware that supports some of the hardware HA features, you can have VMs automatically live-migrated off hardware that is going bad.
It changes the rules for SLAs (Service Level Agreements) and has the potential to increase uptime on all systems. You can just live-migrate VMs from one server to another and do maintenance on a server once no VMs are left on it.
Very cool.
A lot of IT folks developed a “one app = one OS = one server” (aka “server sprawl”) rule of thumb to minimize conflicts between apps, but a lot of apps don’t need a whole server, so virtualization reduces hardware costs.
In my opinion VMware’s chief scientist Mendel Rosenblum is wrong when he states that today’s modern operating system is destined for the dustbin. What he ignores in his statement is that today’s OS’s contain a lot more than device drivers — these OS’s also contain functionality like a networking stack, filesystems, high-level USB services etc. Eliminating OS’s like Linux or Windows would imply to include such functionality either in each application or to include it in the hypervisor. The former approach is unacceptable — this is the approach that was followed before OS’s were invented. The latter approach would require that VMware, Xen and others standardize an ABI and API for accessing this new hypervisor functionality. Additionally, applications would have to be modified such that they use the new API and ABI.
I do not see any of this happening anytime soon, however.
Having actually developed OS software on top of VMWare, I have a view to share here.
One of the nice things that VMware has done is provide “generic” APIs for various hardware devices. For example, they provide an AMD-compatible register-level API for the Ethernet device, and their own register-level API that models a generic video card.
The effect of this is that the OS kernel running within the VM only has to implement a single device driver for each type of device. The hypervisor hides the details of the specific hardware behind each device by mapping the generic API presented to the guest OS driver onto the driver that talks to the actual hardware.
When I read Rosenblum’s comments, my take was that they were moving towards essentially implementing their own host operating system. That OS would essentially do what Linux does, which is to encapsulate the implementations of hundreds of different device drivers. What I think Rosenblum is alluding to is that if guest OSs take advantage of the layer of abstraction that the VMware device-level APIs provide, the guest OSs become MUCH smaller, because they don’t have to deal with all the details of the hardware devices they might be running on top of. There is some logic to that, but…
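Here is a hypothetical sketch of that idea, with invented names and register offsets (this is not VMware’s real vlance or vmxnet layout): the guest driver only ever programs one fixed register block, and the hypervisor’s job is to translate those writes to whatever physical NIC happens to be underneath. For the sake of a self-contained example, the “MMIO window” here is just a local struct.

    /* Hypothetical "generic NIC" register layout the hypervisor promises
     * to every guest.  Names and offsets are invented for illustration. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct virt_nic_regs {
        uint64_t tx_ring_addr;   /* guest-physical address of TX ring */
        uint32_t tx_ring_len;
        uint32_t doorbell;       /* write 1: "packets queued, go"     */
    };

    /* The entire guest-side driver: it never sees the real hardware. */
    static void virt_nic_kick(volatile struct virt_nic_regs *regs,
                              uint64_t ring, uint32_t len)
    {
        regs->tx_ring_addr = ring;
        regs->tx_ring_len  = len;
        regs->doorbell     = 1;  /* the hypervisor traps this and drives
                                    the physical e1000/tg3/whatever     */
    }

    int main(void)
    {
        struct virt_nic_regs fake_mmio;      /* stand-in for the MMIO window */
        memset(&fake_mmio, 0, sizeof fake_mmio);
        virt_nic_kick(&fake_mmio, 0x100000, 256);
        printf("doorbell=%u ring=0x%llx\n", fake_mmio.doorbell,
               (unsigned long long)fake_mmio.tx_ring_addr);
        return 0;
    }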
What would not surprise me going forward is for VMware to start selling essentially their own Linux distro with a more tightly integrated VM environment, e.g. a Linux kernel with all the drivers tightly integrated and all the device APIs exported to the guest OS VMs. It’s the easiest way for them to garner a huge set of device drivers and do away with having to support lots of different host OSs.
This will be attractive for certain markets; for example, it will be nice for server farms. But I think it’s a little overzealous to predict the end of OSs based on this technique.
Search around on the DragonFly lists for vkernels (or “virtual kernels”). It’s a UML-like approach, but not quite.
There was an effort once to let DragonFly work as a Xen guest, but I don’t know where that went.
Actually, it’s the other way around.
Xen was the first to do that, not VMware; not automatically moving VMs around because of memory pressure, but moving them around due to hardware failures and the like. In fact, VMware had to take a very hard look at what Xen was doing and then start copying ideas.
Xen was the first to come up with hypervisor technology in a usable sense.
I can’t believe the way this has all swung around. When Xen was nothing more than an emerging project from Cambridge University, everyone was supporting it; now everyone is on the bandwagon dissing it.