“What Xen is, is a very thin layer of software that essentially presents to the operating system an idealized hardware abstraction,” said Simon Crosby, vice president of marketing for XenSource. The OS is no longer glued to the hardware but floats above it, talking to Xen as if it were the machine.
And the difference between this and VMware is what? Less bloat?
Can I run the Amiga OS on x86 hardware?
Can I run Mac OS X, 9, 8, 7, or 6 on x86 hardware?
Can I run Windows on SPARC or PPC?
Wouldn’t it be great if this meant that you could write an operating system for one spec machine, and Xen would translate between that ideal machine that the OS is dealing with and the actual hardware? It would mean no more “this graphics card doesn’t work in this distro” and the like.
Think of all the problems it would solve for systems programming when you only had to worry about making it run on an ideal machine rather than all the various conglomerations of hardware out there.
Go educate yourself about Xen. It’s much faster than VMware, largely because it doesn’t try too hard. Which is a good thing in this case.
So benches anywhere?
There probably won’t be any benchmarks pitting Xen against VMware, as I believe EMC (owners of VMware) prohibit that in the license.
I haven’t tried Xen yet, but we run VMware GSX Server and the performance of the VMs (Linux and Windows) is near that of running directly on the hardware.
http://www.cl.cam.ac.uk/Research/SRG/netos/xen/performance.html
“Running over Xen, Linux’s performance is consistently close to native Linux, the worst case being OSDB-IR, which experiences an 8% slowdown. The other virtualization techniques don’t fare so well, experiencing slow downs of up to 88%. The SOSP paper contains further results, showing how performance isolation can be achieved when running multiple VMs simultaneously, and performance scalability out to 128 concurrent VMs.”
“What Xen is, is a very thin layer of software that essentially presents to the operating system an idealized hardware abstraction,”
Isn’t this pretty much the same thing as managed code in .NET or Java? I mean, granted, it’s at the OS level, but who cares? You already have VMware and Virtual PC for that. I fail to see the significance of this.
It would be cool if the Xen layer had something like a HAL that provided a uniform driver interface for its hosted operating systems. This would be kinda like what SNAP does with 2D video drivers, only this would do the same with everything – and possibly provide APIs for complicated stuff, like OpenGL and DirectX, if some lower-level interface cannot be created.
Well, the significance is that you can run multiple OSes at the same time with little impact on performance, and you can do it with free software. If you’re asking whether this is a huge innovative breakthrough, then the answer would be no.
…not the end of operating systems, since traditional OSes will still have to run on the box in a virtual environment in order to provide support for traditional applications.
So… Who will write drivers for Xen? If it won’t support esoteric hardware, it won’t be accepted in the mainstream.
To follow up on my last post, VMware and Virtual PC are only used by a few people. They’re the ones who really need a technology like this. The people behind Xen believe it could be what makes this technology commonplace. Every computer sold in 2010 could have Xen on it, and in turn this could reduce dependence on the operating system.
Found here:
http://wiki.xensource.com/xenwiki/XenFaq
it sounds like OSes have to be explicitly modified to run under Xen. What incentive would Microsoft have to do this?
It also sounds like it would make Xen worthless for running any older binary-only OSes that people might want to use.
Since Xen doesn’t itself support any end-user APIs, I’m not sure I see how dependence on traditional OSes (for which all applications are currently written) is reduced.
Could you elaborate?
So which chips support hypervisor? Anything from AMD?
I’m seriously unimpressed. I understand what Xen is and what it’s supposed to do, but this just reeks of a solution without a problem. The beauty of the PC is that it can do many things at once. Humans multitask and so do traditional operating systems. Who wants to run 2 operating systems though? That’s twice the memory? We’re just shifting code complexity to different layers and adding a layer of abstraction. This might lead to a more solid OS or it might not.
The problem with OSes generally isn’t the hardware, it’s the overall complexity and the speed the OS runs at. An OS can be in an almost infinite number of states at any given point. I mean, the scheduler alone may be running 1000 times per second, with processes acquiring and releasing resources each time in an order that may never again be reproduced. Just using a debugger isn’t going to help you here.
A well-written operating system should hide hardware complexity anyway. Linux, for example (not saying Linux is the pinnacle of a well-written OS, just an example), abstracts away every bit of hardware setup. Hardware-specific code is all in one place. Can someone explain the point of this?
And the difference between this and VMware is what? Less bloat?
Can I run the Amiga OS on x86 hardware?
Can I run Mac OS X, 9, 8, 7, or 6 on x86 hardware?
Can I run Windows on SPARC or PPC?
No, impossible. Xen is not a hardware emulator (like VMware, Qemu, …), but a hardware multiplexer. It allows multiple OSes to run on the same hardware simultaneously, and directly.
Xen would reduce the dependency on specific *hardware*, but it can’t replace any given OS. The OS would run on top of Xen, so the same OS could theoretically be run on a Mac, Intel, or Sun box as long as Xen is available on each box.
For one, security. Run your internet browsing/email in one VM with almost no read/write access, and your normal stuff (accounts package, personal documents) in the other. If someone cracks into your browsing system, they won’t have access to your personal data on the other system(s).
You can also have different profiles, one optimised for multimedia, another one for development, etc., and each one is completely separate from the others, without rebooting.
Since Xen doesn’t itself support any end-user APIs, I’m not sure I see how dependence on traditional OSes (for which all applications are currently written) is reduced.
Could you elaborate?
I have to hurry, but what I meant was what the article said. You can move towards having less reliance on Windows only. I probably worded it poorly.
No Problem
The implications of such virtualization are enormous. For one, you would be able to run multiple operating systems on your desktop. Perhaps you have wanted to try the free version of Pro Tools that only runs on Windows 98 or would love to add a light Linux-based CAD program like CADvas to your system. No problem. The operating systems will not interfere with each other or the applications.
This general idea was originally intended for large computer systems, which employ partitioning to maximize the use of the machine’s hardware.
But your computer, armed with Intel’s hypervisor-enabled chips, would be able to do essentially the same thing, including doing tasks with which Windows and other operating systems are clumsy. In this paradigm, the OS could not be less important except as a tool to run the applications you need.
Get Me Browsing
“The first partition you might have is a TV partition that would come on pretty much as soon as you turned the PC on,” said Gartner analyst Martin Reynolds. “There wouldn’t be very much code — it would load very quickly.” Boom. You are watching TV on your PC without having to run it through Windows.
“Now, if you wanted a really fast get-me-browsing Web browser, you’d have a partition for that, too,” Reynolds added, referring to the hypervisor’s capability of easily divvying up partitions. “You’d just load what you need and go.”
Reynolds says the revolution promised by Xen’s hypervisor software could be realized within five years. The era of a single operating system for each desktop might join the ranks of other computer nostalgia like DOS, monochromatic CRT displays or floppy discs.
“It’s a three year transition,” Reynolds acknowledged. “By 2010 everyone will expect hypervisor in their system.”
Practical uses are there if you have an open mind…
This is going to be huge…
There are the uses that have already been thought of, like moving OSes from one machine to another in real time…
The other thing it does is remove MS’s forced right to be the first OS on a machine…
Just have to wonder why MS bought Virtual PC, which at the moment does things the same old-style way as VMware… Interestingly enough ((plug plug)), Linux Format this month has a lovely write-up of Xen, with the software on the DVD, etc.
it sounds like OSes have to be explicitly modified to run under Xen. What incentive would Microsoft have to do this?
Only at the moment, until the new chips come out from Intel and AMD… which remove any need to change the kernel.
Who wants to run 2 operating systems though? That’s twice the memory? We’re just shifting code complexity to different layers and adding a layer of abstraction. This might lead to a more solid OS or it might not.
Crackers, security researchers, virus writers, engineers…
It’s very attractive to have snapshots of multiple OSes.
Booting another OS is much faster with VMs.
You can simulate a whole subnet or multiple subnets, depending on your amount of memory (VMware).
You can simulate the server you are after and the client app on one and the same machine. Better yet, you make an image and you have your lab with you wherever you go (AMD64 notebook with 2 GB+ RAM).
Actually, the very first time I heard of Xen was when an American university announced that they had a modified version of Windows running on it. They did it for research through their Shared Source license.
Of course they can’t release it in any way, nor can they benefit from THEIR research. Good deal, eh?
Xen is very similar to what IBM did with VM on the 360/370 over 30 years ago. The only difference is that it’s now far more practical than it was on the mainframes of that era.
The idea is to allow multiple users to have what appears to be a bare machine. This is ideal for systems programming, and even for running multiple virtual servers on a single machine in server farms, so that one doesn’t need a separate machine for each user.
This does not replace the operating system. Operating systems run on top of Xen, which allows each operating system to believe that it has the machine to itself.
Xen requires porting of guest OS kernels (not userland apps, btw) because that’s necessary to get such good virtualisation performance on x86 (and to keep the overall system complexity under control).
MS isn’t going to explicitly support Xen in the foreseeable future. However, Intel have contributed code to support their Vanderpool virtual machine extensions. When VT hardware appears on the market (later this year), running unmodified guests will become possible. AMD are also planning to support Xen with their Pacifica technology.
For additional performance, for OSs like WinXP there will be special paravirtualised device drivers that will give better IO throughput than the virtualised PC devices.
Xen pushes as much of the device support infrastructure as possible into “domain 0”, the privileged admin virtual machine. The aim is that Xen should support any modern machine, with roughly the same level of device support that Linux has. This also avoids the Xen developers having to continually port / fix new device drivers, so it keeps us sane 😉
The advantage of Xen’s paravirtualised approach is that you can have high performance virtual machines on basically any modern processor, no special hardware support is needed.
For full virtualisation (e.g. running Windows), Xen will require a chip with Intel’s Vanderpool extensions (coming out this summer / fall) or AMD’s Pacifica (release date unknown to me).
I’d expect paravirtualisation to yield better overall performance than full virtualisation, even with hardware support, so Xen will continue to support ported guests where available.
Xen does provide something like this: guest operating systems use idealised virtual devices. The only OS that needs to have any drivers for real hardware is the “domain 0” management VM.
Currently virtualised devices are available for block and network (and USB host controllers in the unstable tree). An idealised framebuffer will be added in the mid-term future, with an idealised 3D device perhaps following later on.
You can, for instance, run Linux as “domain 0” but then host services in a FreeBSD / NetBSD / Plan 9 VM on top, thus giving your target OS the ability to run on anything XenLinux runs on.
Another big win you get with Xen is the ability to “live migrate” virtual machines, i.e. move them to another host without stopping them. Having a hypervisor in the system makes this loads easier.
This is potentially *really* useful in corporate data centres, less so at home 😉 You might conceivably want to migrate your “machine” from home to the office, or from your desktop to your laptop though. (for those purposes, this will be more useful for “normal” users with the virtual framebuffer device in a few months time)
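(If you’re curious what that looks like in practice: with the current tools, a live migration is a single command issued from domain 0, something like “xm migrate --live mydomain otherhost”, where the domain and host names are placeholders. Check the user manual for your version’s exact options.)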
By the way, all, I work on Xen full-time, so I have a potential bias 😉 I’ll try and keep my comments balanced and factual – feel free to ask me to back up any points I make!
Could someone explain the concept of Xen as simply as possible? It sounds cool, but I don’t know anything about it, and all the docs and FAQs presume that I have more knowledge than I really have. I need something like a “Xen for dummies” doc.
Xen is installed on my fresh SUSE 9.3 system, and it would be nice to get started and try out some distros without repartitioning my HD.
Yes, Xen requires a modified OS kernel — BUT — both AMD and Intel are including instructions in their future chips to allow this technology to run without a modified OS.
Once hypervisor-enabled processors come out, and both AMD and Intel, from what I have read, will be releasing them soon, Xen will run Windows and other operating systems natively on these new CPUs without modification.
AFAIK* there is some aspect of ring separation in x86 CPUs, in their current configuration, which prevents Xen from running OSes without minor modifications; hypervisor-enabled CPUs are supposed to address this problem.
*I don’t have a completely clear picture of all the details involved, so I can’t give any accurate technical details…
OK, here’s the idea of Xen in a nutshell:
It’s really useful for all kinds of things to support multiple “virtual machines” on a physical host. Some architectures (e.g. IBM mainframes, POWER) have designs that make this easy. x86 makes it rather hard, so virtual machine software for x86 has to be rather complex and sometimes incurs a painful performance overhead.
Porting an OS to Xen makes it (relatively) easy to provide virtual machines running that OS, even on x86. This yields better performance and a simpler overall system.
Xen acts as a little “shim” under the OS and is as small as possible. A single privileged virtual machine called “domain 0” runs on top of Xen when you first boot the system. This grabs hold of the devices, the console, etc. and looks rather like normal Linux.
By installing the Xen tools, you manage Xen’s operation from inside dom0 and create other virtual machines. These are called “guests”. Linux 2.4 and 2.6, FreeBSD, NetBSD and Plan 9 can all run as guests.
You’ll find more details about installing Xen in the user manual (http://www.cl.cam.ac.uk/Research/SRG/netos/xen/readmes/user/user.ht…), including an example of installing and booting a minimal Linux guest. You may also like to download the live demo CD from http://www.cl.cam.ac.uk/Research/SRG/netos/xen/downloads.html, which includes NetBSD, FreeBSD and Debian virtual machines.
There is a third-party website (http://www.kvadratrot.net/~xen/) that contains images of many popular distros for use under Xen, for when your system is set up.
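To make the “create other virtual machines” step concrete: each guest is described by a config file, and the config files are plain Python. A minimal sketch in the style of the examples shipped with Xen (the kernel path, names and devices here are illustrative, not canonical):

# Illustrative Xen guest config; the tools execute this file as Python.
kernel = "/boot/vmlinuz-2.6-xenU"   # a kernel built with Xen guest support
memory = 128                        # MB of RAM to give the guest
name   = "mydomain"                 # the name shown by 'xm list'
nics   = 1                          # one virtual network interface
disk   = ['phy:hda3,hda1,w']        # export partition hda3 to the guest as hda1, writable
root   = "/dev/hda1 ro"             # root device from the guest's point of view

You would then boot it from dom0 with something like “xm create -c mydomain” (the -c flag attaches you to the guest’s console).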
That should get me started. Sounds exciting.
No probs.
For beginner’s help with Xen, there’s the Xen Wiki (http://wiki.xensource.com – not yet fully populated), the Xen mailing lists (http://lists.xensource.com) and the Xen IRC channel (#xen on irc.oftc.net).
Mark Williamson: Another big win you get with Xen is the ability to “live migrate” virtual machines, i.e. move them to another host without stopping them. Having a hypervisor in the system makes this loads easier.
Sounds somewhat like some other technologies I’ve heard of (as far as that feature goes) I can’t quite recall the names of them at the moment. But if I remember correctly they don’t move the whole OS. Anyway… My question is this: Wouldn’t this fail if the hardware on the target machine is different from the hardware on the source machine?
And even if it does work, wouldn’t it be disruptive to some programs, since some operate differently with different hardware?
On a side note… I’ll most likely be using Xen soon, since I work with numerous OSs. I haven’t set it up yet, because I’ve been rather busy. I’m hoping to get around to it soon though and I’m looking forward to it.
Deletomn: Sounds somewhat like some other technologies I’ve heard of (as far as that feature goes) I can’t quite recall the names of them at the moment. But if I remember correctly they don’t move the whole OS.
In this case, the whole OS really does move. Assuming you use NAS / SAN / NFS or similar for storage, there are no dependencies on the origin machine at all.
The latest VMware ESX can live migrate running virtual machines. I don’t know of anything else that can do this.
There are various systems like openMosix which migrate individual processes around a cluster (retaining a dependence to the origin node).
Deletomn: Anyway… My question is this: Wouldn’t this fail if the hardware on the target machine is different from the hardware on the source machine?
It depends how different. Since Xen virtual machines use Xen virtual devices it’s not a problem if the devices in the host system are different.
Different CPU instruction sets can make a difference, however. E.g. if the guest was running on a Pentium 4 using Intel SSE2 for RAID5 checksumming, it could break when moved to an AMD machine without SSE2.
As long as you migrate between reasonably similar CPUs (or don’t compile the kernel to use CPU-specific optimisations) it should work OK.
Deletomn: And even if it does work, wouldn’t it be disruptive to some programs, since some operate differently with different hardware?
Apps can also be bitten by vendor-specific instructions (e.g. multimedia instructions) not being available on all CPUs. Again, the solution is to use CPUs with a similar feature set, or just don’t compile apps / kernels with these instructions.
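If you want to check for this before migrating, the feature flags are visible in /proc/cpuinfo on any Linux box. A throwaway Python sketch (assuming SSH access to both hosts; the hostnames are placeholders):

# Compare the CPU feature flags of two Linux hosts, so you notice before a
# migration if the destination lacks something the source has (e.g. sse2).
import subprocess

def cpu_flags(host):
    # First "flags" line of /proc/cpuinfo, e.g. "flags : fpu vme de pse ..."
    out = subprocess.run(["ssh", host, "grep", "-m1", "^flags", "/proc/cpuinfo"],
                         capture_output=True, text=True, check=True).stdout
    return set(out.split(":", 1)[1].split())

missing = cpu_flags("sourcehost") - cpu_flags("desthost")
if missing:
    print("destination lacks:", " ".join(sorted(missing)))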
Thanks for taking the time to educate us.
With the new technology Intel and AMD have coming down the pipe, Xen could indeed be a useful product for people like me (who tend to use a mix of legacy OSes as well as various open source platforms).
I look forward to being able to play with it someday. 🙂
Mark Williamson: Apps can also be bitten by vendor specific instructions (i.e. multimedia instructions) not being available on all CPUs.
Isn’t this going to be problematic with other, more complicated pieces of hardware too, though?
I mean… For example force-feedback controllers (like force-feedback mice), different feature sets on video cards (they don’t all support the same features), and different audio features (like EAX). (The server work I do and programs I use don’t use fancy hardware other than CPU instructions, so I couldn’t think of any non-desktop things. My apologies.)
I would think these things would be a potential problem in the future for moving stuff around from one computer to another. To make matters worse I know of some people who are working on some other types of “coprocessors” to handle various tasks.
Even if the OS and its programs can be moved, I would imagine that sometimes this might result in a performance decrease, because they are making use of the “wrong” set of features, which while functional may not be as snappy as if they were making use of another set?
As a result, wouldn’t it be prudent to notify the OS (and the applications) that they have been moved, so they can decide if they need to change what they’re doing somewhat? (You might do this already for all I know)
Anyway… I was just wondering. I figured now was the time to ask 🙂 And thank you for answering my questions.
I can’t wait to get this technology running on a blade center. The blade environment removes any problems with migrating running Xen domains between individual blades. If you had cyclical demands on different services on your network, you could migrate stuff that was busy onto its own blade while consolidating a bunch of idle stuff onto a single blade, and then reverse the process when the load changed. I’m thinking of maybe in the daytime you have your app servers each running on their own blade, but at night you can have multiple backup or maintenance tasks running on separate blades. That way you can keep all your services up all the time, but allocate processor resources on the fly to the processes that need them. A naive version of that policy is sketched below.
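Here is a toy sketch of that day/night policy, just to make it concrete. The blade names, domain names, and SSH-ing into each blade’s dom0 to run “xm migrate” are all illustrative assumptions, not Xen APIs:

# Toy day/night rebalancer: spread app servers across blades by day,
# consolidate them onto one blade at night. Names and the use of ssh
# are invented for illustration.
import subprocess
import time

DAY_HOMES = {"app1": "blade1", "app2": "blade2", "app3": "blade3"}
NIGHT_HOME = "blade1"              # consolidate everything here overnight
location = dict(DAY_HOMES)         # where each domain currently runs

def rebalance():
    daytime = 8 <= time.localtime().tm_hour < 20
    for dom, day_home in DAY_HOMES.items():
        target = day_home if daytime else NIGHT_HOME
        if location[dom] != target:
            # live-migrate from the blade currently hosting the domain
            subprocess.run(["ssh", location[dom],
                            "xm", "migrate", "--live", dom, target],
                           check=True)
            location[dom] = target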
Xen has practical uses.
I haven’t used Xen yet, but I was planning to one of these days when I have time. I’d definitely do it when Vanderpool becomes a reality and I can get rid of VMware.
Currently, I install Oracle on my workstation at home for development. I’ve moved through three distros since I’ve come to Linux — RedHat, Fedora, and Ubuntu. (Before I moved to Windows, I went through MCC Interim, SLS, Yggdrasil, and Slackware.) With each upgrade of RedHat and Fedora, I’ve had to reinstall everything from scratch — I didn’t trust the upgrade, and I’ve had to rebuild my development environment each time. It’s a pain, but the beauty of Linux is that such things are possible without huge difficulty, since all file formats are supported by later distributions. (Unlike Windows, where moving Outlook Exchange information to a new OS is rather hit or miss.)
I’ve felt more comfortable upgrading Ubuntu, but eventually, I’d like to get rid of any cruft and do a full reinstall of the latest version. (I may even switch distributions a few years down the road if something better comes along. My policy is, use the best tool for the job).
That’s where Xen comes in. If I installed Oracle in a virtual machine that contains a secure but relatively unchanging OS like Debian Stable, I’d be able to migrate OSes easily without affecting my Oracle VM (just copy the VM image file). The virtual machine will be fast and will interact seamlessly with the main OS (unlike VMware).
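A hedged aside: that “just copy the VM image file” step assumes a file-backed virtual disk rather than a raw partition. In the Python config syntax shown earlier, that is a one-line choice (the path here is invented):

disk = ['file:/var/xen/oracle-debian.img,hda1,w']   # file-backed virtual disk

Carry oracle-debian.img plus the config file over to the freshly reinstalled host, and the Oracle VM comes along with it.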
What I think would be interesting is expanding the virtualization concept to allow one OS to run off multiple pieces of hardware.
The idea being: if one set of instructions does well on an x86-style processor, send it that way, but if a vector processor would do better, send the instructions that way.
If you could run an OS in such a distributed manner, then the “OS” could in a sense live (or at least thrive) off a network.
I guess it’s almost the same idea as offloading graphics work to a GPU, but on a much grander scale.
Deletomn: I mean… For example force-feedback controllers (like force-feedback mice), different feature sets on video cards (they don’t all support the same features), and different audio features (like EAX). (The server work I do and programs I use don’t use fancy hardware other than CPU instructions, so I couldn’t think of any non-desktop things. My apologies.)
Xen guests don’t touch any real hardware devices, so they won’t know about such differences. The real hardware is controlled by the management VM (domain 0). The devices that Xen does provide to guests are designed to be independent of the features of the underlying hardware.
All you really need for a virtual server is block and network devices, so in the server room this is easy. For display access to a virtual machine, we use X protocol, NX or VNC, to access the machine over the virtual network, so display devices aren’t a problem either.
I would think these things would be a potential problem in the future for moving stuff around from one computer to another. To make matters worse I know of some people who are working on some other types of “coprocessors” to handle various tasks.
Yes, it’s a challenge. The basic model is to provide an idealised device to the guest (so that it’s host independent) and to take advantage of hardware-specific “smarts” in dom0, where possible.
One example which would be worth looking at (and will probably get done at some stage), is providing an idealised guest interface to checksum offload in high-end ethernet cards but with dom0 driving the actual hardware.
Some people at IBM are also working on providing “TPM-like” (i.e. virtual trusted computing hardware) functionality for guests, which will abstract away the real TPM.
Even if the OS and its programs can be moved, I would imagine that sometimes this might result in a performance decrease, because they are making use of the “wrong” set of features, which while functional may not be as snappy as if they were making use of another set?
It’s not really an issue for devices right now. Even for CPU optimisations, I doubt it’ll make much difference unless you actually migrate to a CPU that lacks instructions you need.
As a result, wouldn’t it be prudent to notify the OS (and the applications) that they have been moved, so they can decide if they need to change what they’re doing somewhat? (You might do this already for all I know)
Right now, the OS is notified before and after moving. There are two problems here:
1) Linux doesn’t have hooks to tell *it* not to use certain CPU instructions if they “disappear” due to migration
2) there’s not a uniform way to tell userspace about a migration
For Xen virtual devices, it’s much easier to make them adapt correctly to different host feature sets.
In the near term, it’s likely some hooks will be added to complain to the administrator if guests try to use unsupported instructions – at least this will let the sysadmin know he’s migrated between incompatible CPUs, so he can diagnose the problem. Anyway, many clusters will be made up of similar machines, and major distros sometimes choose to avoid the vendor-specific instructions anyway (e.g. RedHat).
Longer term, cluster management software could take account of machine feature sets and warn about your choice of migration destination.
Anyway… I was just wondering. I figured now was the time to ask 🙂 And thank you for answering my questions.
No probs, good luck when you try it out – hope Xen is useful to you 🙂
Here are parts 1-3 of an informative series of articles on Vanderpool as it relates to Xen, with quite a few technical details:
“Intel Vanderpool holds promise, some pitfalls”
http://www.theinquirer.net/?article=21448
“Intel Vanderpool: the thorns, the thorns”
http://www.theinquirer.net/?article=21449
“Intel Vanderpoo[l]: More roses, roses”
http://www.theinquirer.net/?article=21450
Hopefully, this may finally lead to game makers providing live-gaming discs that would not be bound to an OS. Think about this for a moment. With Xen, one would run an instance of a new game and not worry about having the right drivers installed, or the OS version, etc. If done right, it would not matter if a game is being run on a PPC, Sparc, or x86.
I hope for the day of OS-independent PC video games.
Xen is NOT an emulator!
What virtualization does is schedule multiple OSes to run on the same hardware, the same way an OS schedules processes. It won’t magically give you the ability to run PPC code on x86.
Less bloat than VMware – yes. But for a reason. VMware runs unmodified Windows or Linux. Xen does not. Xen runs an extensively modified version of Linux. I know nothing about Windows, though. The last time I looked at Xen (about a year ago) there was a blurb on their site that they were working with Microsoft.
So VMware has to jump through hoops; if you look at their internal architecture, you’ll be amazed how clumsy the whole thing looks, but once again – that’s for a reason. Linux, for instance, doesn’t know if it runs on real hardware or not. Xen modifies it. So Linux runs on Xen. Well, it’s a nice design, but nothing extraordinary. So you tweak the OS and make it virtual-environment aware. VMware, on the other hand, *is* extraordinary because it works with an untweaked system, and works in a fairly efficient way.
I think a more fundamental solution would be to modify the kernel to make it understand whether it runs on real hardware or inside a virtual machine and act accordingly, and to make it generic enough. Xen? Before Xen, there were several similar solutions, mostly academic research projects. All were very efficient, more efficient than VMware. All required tweaking of the guest system, and that was the reason none of the projects went anywhere. Xen is just another such project, but apparently it is receiving some external push, so to speak.
Mark Williamson: It’s not really an issue for devices right now. Even for CPU optimisations, I doubt it’ll make much difference unless you actually migrate to a CPU that lacks instructions you need.
Quite possible. But even if the slowdown itself isn’t an issue (or at least not a big issue), there’s also the question of not taking advantage of new instructions, like say you move an OS with current software from an old computer to a new computer. Say, from a Pentium with MMX to a new Xeon. It wouldn’t know that it should consider using SSE instead of MMX.
(To me that seems like a really good use for Xen. “When you replace/upgrade your computer, just transfer everything over while it’s still running!”)
Mark Williamson: No probs, good luck when you try it out – hope Xen is useful to you 🙂
I’m sure I will find it useful. From what I’ve heard it sounds quite good. Good luck with making it even better.
athena: Hopefully, this may finally lead to game makers providing live-gaming discs that would not be bound to an OS. Think about this for a moment. With Xen, one would run an instance of a new game and not worry about having the right drivers installed, or the OS version, etc. If done right, it would not matter if a game is being run on a PPC, Sparc, or x86.
I hope for the day of OS-independent PC video games.
I don’t play games much anymore, but that would certainly be wonderful. However, I don’t think there’s much that Xen can do about PPC, Sparc, x86, etc. differences, unless they integrate a Java virtual machine (or something similar) that allows starting “machine independent OSs”, and I believe that is presently beyond the scope of what they are trying to accomplish.
I say that because even if the “boot disc” for the game somehow recognized the different CPU architectures, it wouldn’t recognize future architectures, so you’re still kind of stuck. And even expecting game makers to recognize all current architectures is… a bit much.
No… I think your best hope for OS independent games would be Java (and similar technologies).
athena: If done right, it would not matter if a game is being run on a PPC, Sparc, or x86.
I guess you could always have a “simple Java virtual processor” (or some such thing) start first. Then Xen. Then the OS (or game).
Quite possible. But even if the slowdown itself isn’t an issue (or at least not a big issue), there’s also the question of not taking advantage of new instructions, like say you move an OS with current software from an old computer to a new computer. Say, from a Pentium with MMX to a new Xeon. It wouldn’t know that it should consider using SSE instead of MMX.
Yup. It’s not something that’s particularly straightforward to resolve right now – there aren’t the available hooks in the kernel to tell it to drop one instruction set and use another. It’s doable, at least to a certain extent but I’m not sure how much it’ll be worth implementing in practice.
Of course even then, you still might need to upgrade to newer software packages, if the instructions were introduced since those packages were built.
For many workloads, the instruction set won’t matter. It’s only if you’re doing things like RAID5 checksumming, encoding, or some optimised crypto algorithms that this’ll show.
The major paravirtualised research VMM before Xen was Denali. That was also very low overhead. Xen’s big advance over this was that it could run full-featured OSs like Linux. Denali didn’t support separate address spaces for processes, or intra-OS context switching, so it was actually not technically possible to run a “normal” OS on it.
A kernel which can run on “bare metal” or a hypervisor would be very cool 🙂 It’s doable but for Linux it’s some way off. Maybe one day! In the meantime, Xen does (right now) boot unmodified x86 Linux guests on Vanderpool machines (although they won’t be widely available for a few months).
Xen support will be merged into the mainline Linux kernel, probably later this year. NetBSD already includes Xen support in the mainline. It will be interesting to see if MS also modify Windows to use special interfaces to *their* hypervisor in a similar way. After all, they did buy VirtualPC…
BTW, VMware is an amazingly clever piece of software. It does a very hard task and it does it well. For some people, it’ll continue to be the preferred solution for some time (perhaps because they want to run Windows on non-VT/Pacifica hardware, perhaps because of its more-advanced cluster management tools and user interface).
Xen tackles some similar problems to VMware but from a different angle. You get better performance, but in payment you either need a ported OS or special hardware.
It’s a tradeoff and it’ll remain so for some time. As VT/Pacifica hardware becomes more popular the balance may shift: VMware’s ability to virtualise x86 without hardware assist isn’t very useful if you have hardware assist anyhow ;-). OTOH, hardware assist may make the performance hit of full virtualisation smaller, so that Xen-ported OSs have less advantage (although I personally would guess they’ll continue to be quite a bit more efficient for some workloads).
It seems to me, reading all this, that the big question is what this is good for, with some saying it is good for nothing. If, as one person said, with the new chips you don’t need a modified kernel, then this is perfect for all small new OSes. Think about it: right now it is almost impossible to make a new OS, because it has to do everything well from the start. No one has the time to wait for a new OS to get its stuff together. But now, with Xen, a new OS only has to do one thing really well to get noticed. This all makes new OSes in our future really likely.
Thanks, my 2 cents.
Outside of the benefits of running virtualized machines more efficiently and reducing management overhead, I’m skeptical of these other portability benefits.
What happens when different people start writing YA-XEN (Yet Another XEN copy) and each one of these duplicates starts heading down the same road of “minor incompatibility” that different Linux distributions have done?
At that point it seems to me that we will just have added an abstraction layer between the OS and the hardware while maintaining exactly the same kind of fragmentation problem we have today. Instead of operating systems like Windows running only on x86 and VMS running best on Alpha, we’d see Windows running only on MS-XEN and Linux running on XEN, while OSX might only support A-XEN.
Overall, while it’s a nice dream to have complete portability, I doubt XEN is the answer.
I’ve been using VMware Workstation at my job for a little over 8 months now, and the experience has been positive. We’ve been able to create images for developers with all the necessary goodies installed (Oracle, Weblogic, Eclipse, etc.) and not be dependent on how a machine’s “host” is set up.
We’re also deploying our app on a server farm using VMware ESX, and server virtualization is where the biggest impact of these products will be. We’re able to virtualize about 32 servers using only 8 machines (talking about 8 CPUs, 32 GB RAM, SAN), and the setup runs amazingly well.
“In computing, an operating system (OS) is the system software responsible for the direct control and management of hardware and basic system operations. Additionally, it provides a foundation upon which to run application software such as word processing programs and web browsers.” — Wikipedia
link: http://en.wikipedia.org/wiki/Operating_system
> But now with Xen, a new OS only has to do one thing really well to get noticed. This all makes new OSes in our future really likely.
So in fact you mean all OSes will be practically the same thing. Quite scary if you factor in some not-so-loyal moves by companies to add some control as to who gets access to the kernel. [/useless paranoia]
In fact, I don’t quite see the big advantage. Linux already has a hardware abstraction layer, for example…
Can you port Cloned-Source-Systems with Xen?
The beauty of it all is that Microsoft bankrolled the Xen project to start off with, then realised they had opened up a can of worms they couldn’t stomach.
The implications of such virtualization are enormous. For one, you would be able to run multiple operating systems on your desktop. Perhaps you have wanted to try the free version of Pro Tools that only runs on Windows 98 or would love to add a light Linux-based CAD program like CADvas to your system. No problem. The operating systems will not interfere with each other or the applications.
Oh yes! That’s the revolution!
Let every application have its own look & feel in its own environment!
Let applications know nothing of each other, making ancient concepts like drag & drop impossible.
Let the people buy a new OS for every application they want to use.
Yes, this is definitely the desktop revolution everyone is waiting for!
How stupid is this? Why are companies spending money on such crap?
Hopefully, this may finally lead to game makers providing live-gaming discs that would not be bound to an OS. Think about this for a moment. With Xen, one would run an instance of a new game and not worry about having the right drivers installed, or the OS version, etc. If done right, it would not matter if a game is being run on a PPC, Sparc, or x86.
How old are you?
Have you ever coded a game in the DOS era?
You had to code drivers yourself for every gfx card, plus your own 3D engine, mouse handlers, sound drivers, memory management, and so on.
That was horrible. Windows and DirectX changed all that. Now you want to go back to those times?
It is absolutely impossible to burn a fast little OS right onto the game CD and have it support every piece of existing hardware.
It makes no difference here whether the target computer is capable of running one or many OSes simultaneously.
This thing is just a better VMware, nothing more, nothing less.
If you mean that there is a layer on top of every OS which makes them compatible with each other, then it is the same concept as AmigaAnywhere. This is stupid too, because the underlying platform (OS/hardware functions) changes quickly.
If you’re working with virtual hardware that never changes, you will get no improvement when new hardware with new functions is installed, because a game coded for the virtual graphics card doesn’t know that it has changed and now provides more functions.
Think about graphics cards and how quickly their functions have changed. You can’t code a game using Shader 3.0 technology when your virtual HW platform knows only Shader 1.0, or no shaders at all.
So the virtual HW designer must be an oracle, knowing every new technology coming along in the next few years.
These concepts may look good at first glance, but all of them are stupid, because they cannot keep up with innovations in the hardware market; the virtual hardware can’t change unless they break their own rules and concepts.
So my forecast is that the classic way of computing (hardware with an OS coded specially for that hardware) will not change during the next 10 or 20 years.
Look & feel is highly over-rated. Only the most trivial home apps have any sort of cohesive look & feel. All pro apps do their own thing anyway, and most professionals only use two or three apps at most to get their job done. I can certainly envision situations where being able to run a Linux app on your Windows desktop (for example) would be a real advantage. But it’s obviously not going to become the standard way for home users to do things.
Emmmm, NO
See Scott McNealy’s ideas for Sun, circa 1996…
Look & feel is highly over-rated. Only the most trivial home apps have any sort of cohesive look & feel. All pro apps do their own thing anyway, and most professionals only use two or three apps at most to get their job done.
This depends on the user/company/situation.
We have a lot of users here who use a bunch of software for their work. Most of them start to cry if even one icon changes or looks different.
Of course it is an improvement to have the ability to run other OSes right from the desktop when needed.
But I was referring to the text of the article, which says that this will change desktop computing or be the end of classic operating systems.
This is simply nonsense.
For security purposes, jails would be much more practical and efficient. There really is no need to run complete instances of operating systems for that.
As for running multiple operating systems concurrently, the cost of PC hardware is so low nowadays that running multiple machines with a KVM switch would far better suit such needs most of the time.
For other architectures, which mostly run variants of unix, a quick recompile or direct binary support through mapped system calls already offers the compatibility needed.
Therefore, the primary use for such a solution would be for casual hobbyists without multiple machines available to them.
Look & feel is highly over-rated. Only the most trivial home apps have any sort of cohesive look & feel. All pro apps do their own thing anyway, and most professionals only use two or three apps at most to get their job done. I can certainly envision situations where being able to run a Linux app on your Windows desktop (for example) would be a real advantage. But it’s obviously not going to become the standard way for home users to do things.
On Windows and Linux, for sure; neither platform gives a flying crap about usability. But that isn’t the case for the Mac. Any company that writes software which goes against the standards of the OS is doing its customers a disservice.
I think jails and Xen are complementary; they both address different points on the same curve.
With jails you get truly zero overhead, they’re trivial to deploy and simple to use, plus they provide decent security.
With Xen you get stronger isolation (in terms of security, availability and performance), heterogeneity (run multiple different kernels) and live migration (which is a big win in the enterprise). You also get the ability to sandbox individual device drivers in virtual machines for greater fault tolerance.
My ideal system would do both (plus preferably support electrical hardware partitioning on Big Iron boxes).
At first every app was written directly to the hardware it was running on (after some time standards started to show up, like VGA and similar). Then the OS came around as a layer between the app and the hardware. Now the OS has become an app in its own right, and therefore we get a new layer of code that separates the OS from the hardware. Big whoop.
I would much rather see Xen-like stuff put in chips on the hardware itself, so that you coded toward these chips and they did the translation. Basically, introduce standards for everything from graphics cards to hardware tea-timers, as long as it’s supposed to interface with a computer and an OS.
This way one avoids the stupidity we have now, where some big and almost all of the small hardware companies end up pushing a single company’s OS, as that’s what they release drivers for.
A second solution would be to require that all hardware makers release full specs for their hardware, so that anyone could write a driver…
I think the scenario you describe is quite plausible. MS in particular have their own hypervisor and it would seem foolish not to enhance their performance by making some virtualisation tweaks.
The Open Source world does seem (right now) to be moving towards Xen as a de-facto standard. A Xen port to Apple hardware is planned, although whether it becomes standard in the Mac world will depend on Apple itself. Xen for PPC will be compatible with the guest OS interface that IBM’s hypervisor provides (IBM are doing the port, so you can rely on that ;-).
This leaves us with at least 3 popular hypervisors (MS, VMware and Xen), each capable (possibly only on special hardware) of running unmodified systems, and each capable of enhanced performance with *some* systems.
It doesn’t stop everyone’s OS running on everyone else’s hypervisor, but it does suggest everyone’s OS will run *best* on their *own* hypervisor 🙁 I’m afraid market players will dictate that, regardless of purely technical / user-focused considerations.
You’ll still be faced (for the foreseeable future) with making that tradeoff (although of course, somebody might add the MS interface to Xen…)
Actually, athena had a good point here.
I’m sure many of us would *like* to live in a world where games weren’t OS dependent. That’s still some way off, though. It’ll be years before virtualisation technology is pervasive and it’ll be some way after that before games makers would think about leveraging it like this. So it’s a good idea, albeit some way off.
If you mean that there is a layer on top of every OS which makes them compatible with each other, then it is the same concept as AmigaAnywhere. This is stupid too, because the underlying platform (OS/hardware functions) changes quickly.
Yes, that’s an obstacle. It’s a big obstacle and it may mean that OS-specific games continue to be the preferred delivery method.
However, it’s not insurmountable given the motivation. There’s no reason the virtual devices can’t offer an abstraction that can optimise for certain features where they’re available on the hardware but emulate or omit them where they’re not available.
Even with this capability, one virtual device abstraction won’t last forever, as you rightly point out. However there’s no reason a new virtual device interface couldn’t be defined every year or two… After all, new DirectX / OpenGL APIs appear periodically to take advantage of new hardware. You can still support the old interfaces for older games.
Finally, as an aside, I can see there being a market for competing “GameOS” products that are sold by development houses to game vendors. Such a product would present a minimal “library OS” interface for a game product to link to, which would provide functions to make the development environment more “friendly” (and perhaps even an API like an existing OS gaming API…).
Let every application have its own look & feel in its own environment!
Let applications know nothing of each other, making ancient concepts like drag & drop impossible.
Let the people buy a new OS for every application they want to use.
Speaking as someone who routinely runs applications that use a variety of APIs (OS/2, DOS, Win16, Win32, POSIX/X, and MacOS7) from a single desktop, it really isn’t a big deal.
There are sometimes limitations on the way the various apps deal with each other, but for most applications this is a nonissue — basic clipboard support is usually enough (heck, sometimes the ability to read/write from a common filesystem is all you need), and many dedicated programs don’t even have a need for that level of interoperability most of the time.
With multiple OSes, you might find yourself organizing apps that require tighter integration into logical workgroups based on a single integrated platform, so that will help.
But I see this a lot and it greatly annoys me..
It’s KERNEL. K-E-R-N-E-L!
NOT kernal. Kernal is not even a word!
kernal != kernel
Stop typing it! Learn to spell!
Please!
You’re right… Hardware is low cost…
But imagine this:
You have this server that sits and does almost nothing (but still something) 23 out of 24 hours, but in that last one… wooowww, it eats all it can. Then you have a server that works all the time at a fairly steady load; sometimes it needs some extra power, but it can spare some CPU time when the other server needs it.
Now, in your setup, you need two servers: one that’s over-provisioned most of the time, and one that’s just barely keeping up some of the time.
When using Xen, you have one BIG server. The second server now always has the capacity it needs, and the first server will also have all it needs.
On Windows and Linux, for sure; neither platform gives a flying crap about usability. But that isn’t the case for the Mac.
I wish that were the case, but it simply isn’t true. Take your Mac and start up, for example, Shake, After Effects, and Avid Xpress (to take three apps which you might use together) and spot the complete lack of unity in their look & feel. And you know what, I’ve never heard anybody seriously complain about this situation.
I assume you meant Closed source not Cloned source 😉 I also speculate that you’re using a Dvorak keymap…
Anyhow the answer is yes…and no…and yes 😉
Yes if you have access to the source code. For instance, Xen 1.0 had a WinXP port because the university were kindly given access to the Windows source by MS.
But if you don’t have the source code, then it’s not very straightforward, so only open source OSs are supported right now.
It has been speculated that it *might* be possible to persuade Windows to run on Xen without modifications to the core kernel. This would attempt to fully virtualise the memory and use a Xen-specific HAL dll and Xen-specific device drivers to do the rest. This *might* be possible but would be relatively painful to implement.
The tradeoff that’s most likely to be implemented is as follows: use hardware virtualisation support to allow an unmodified Windows to install. Subsequent to install, add Xen virtual device drivers. This makes the OS partially aware it’s running on Xen, so it should give decent all round performance (although not as good as being Xen native).
Btw, ReactOS is being ported to run on Xen, so that may eventually be another route to high performance virtualisation of Windows software.
Moshe Bar wrote something about running Windows on Xen. Too bad he doesn’t go into details.
http://www.moshebar.com/tech/?q=node/38
That’s a really cool link – thanks 😉
That’s Xen running Win4Lin Pro in a Linux virtual machine, which is based on QEmu and runs in userspace. Another neat thing would be a port of the QEmu accelerator kernel module to XenLinux.
Xen’s full virtualisation support will allow you to boot a Windows virtual machine directly (on machines with the appropriate hardware support). It’ll perform much better than Win4Lin Pro did in that video but it’s still cool that you can run W4LP on Xen.
Xen is definitely interesting technology, I like the idea of being able to offer pseudo-virtual hosting, where clients use shared IPs and are hosted on a single box but each client gets unfettered access to their own Linux install. One of the hosted OSes gets hosed/owned? Just blow it away and restore from a backup image.
And as someone who uses a KVM switch, I can definitely see the end-user benefits to being able to run multiple OSes on the same machine more-or-less natively. It would be interesting to see Xen support in Haiku when it matures, though I’m not sure how feasible that is.
Raise your hand if you actually read the article – or any article about Xen. Sheesh.
Can you run PPC OSes on x86? No.
Benches? – See the documentation portion of their site. They can’t cite specifics about VMware due to licensing, but they do make general remarks about it. They compare against native Linux and User Mode Linux.
Differences between Xen and VMware? Quite a few. Xen does ‘paravirtualization’; that is, the guest OSes can have direct access to the underlying hardware for some things. And Xen requires modifications to the hosted OSes. VMware works somewhat differently according to the version: ESX includes its own host OS, whereas the other versions use a traditional OS (Windows, Linux). It does not require mods to the OS. Xen claims better performance.
Comparison to managed code? How about this for significance: starting, stopping, and migrating entire OS images between cluster machines. Xen is doing live guest-OS migration. Not apps; whole OSes.
VMware and Virtual PC in use by only a few people? $400 million per year from ‘a few people’? Those few people must be using it in big ways, huh? Three words: Data Center Consolidation.
Who will write drivers for Xen? The same folks that write drivers for Linux. They already support a huge amount of hardware out there. And folks like Intel like them.
Why would Microsoft want to modify their OS to run under Xen? They already have – at least partially. Read their papers, including the one where an MSer was a co-author. They couldn’t release their work on that because of licensing limitations.
The end of reliance on End Users OSes is not the goal of Xen.
Which chips support hypervisor? Again, read their docs. Apparently one of the AMD chips already has features which support performance improvements. Intel is working on virtualization features in their new VT chipsets.
Who would want to do this? I agree that Xen use at the desktop level is more of a stretch for me right now, too, but then I do have 3 machines at my desk and 3 more in the house. When I bought my first PC I certainly didn’t expect that many machines. Xen does make a lot of sense for consolidating multiple machines in a data center. Take a look at VMWare’s published numbers on utilization. Take a look at their ROI numbers.
As for VMware vs. Xen:
– VMware supports MS as a host and guest OS. This may make or break which product is used in a particular situation.
– VMware and Xen both have process/OS migration strategies.
– VMware has a management console; Xen appears to be working on one. [ Which is essential. ]
– Xen is free. VMware is not.
– Xen requires host OS mods; VMware does not. Think about this in terms of patch management/security issues.
Just my two cents.
You can run MacOS on an x86 machine… PearPC, I think it’s called.
If you don’t mind having it run like a snail on a turtle’s back…
I thought that was called CherryOS.
Regardless of Xen, desktop virtualisation appears to be on the way and will enable loads of cool stuff as discussed in the article.
Xen is being talked up a lot since it has built an incredible momentum very rapidly. However there are rumours (see http://www.virtualization.info) that MS are planning to introduce a hypervisor into Longhorn’s server version.
Whilst MS aren’t wholly reliable in following through on Longhorn promises, the indications are that multiple major OSes are going to “natively” support some kind of hypervisor. Regardless of which hypervisor (if any) is more popular, it will enable some seriously cool stuff, both on the server and (eventually) in the home.
CherryOS was a closed source GPL infringing copy of PearPC.