Virtualization allows you to have multiple “virtual machines,” each with its own operating system running in a sandbox, shielded from the others, all on one physical machine. Each virtual machine shares a common set of hardware, unaware that the same hardware is also being used by other virtual machines at the same time.
Given the date, I expected a section on Xen and the upcoming hardware acceleration support through VT or Pacifica. Otherwise, it’s a decent introduction to virtualization software.
Yeah, the introduction to this article by OSNews is somewhat misleading, because the article only talks about virtualisation software, not about the hardware side, which is coming soon to a workstation near you.
I think it’s a rather simple introduction, and I expected to see QEMU and Xen 2 as well. Given the date, though, I can understand why Xen 3 and Parallels Workstation weren’t covered; they were just too late to make it in.
Just tonight I downloaded Parallels Workstation 2 and VMware Workstation 5 to compare them with QEMU. So far VMware is fast on my Win2K laptop, and QEMU runs okay (with kqemu), but Parallels refuses to start the virtual PC, giving me an access denied error. Having used Virtual PC on this laptop before, I can say it gives okay performance.
I am using QEMU on OS X, but it is slow as hell. Are there any tips to improve the speed?
Only if there’s a kqemu that runs on OS X, and so far it’s only available for Linux, FreeBSD and Windows. Maybe the FreeBSD version could somehow be compiled for OS X?
No, because it’s closed source. Also no, because it’s a kernel module, and the FreeBSD and OS X kernels are very different. And besides, it only accelerates x86 guests on x86 hosts, so it’s no use until the Intel Macs come out.
But you never know; Fabrice Bellard might be working on an OSX86 version of the kqemu accelerator already.
I don’t know if anyone else noticed, but the Virtual Machine Remote Control that comes with MS Virtual Server uses port 5900, which just happens to be the default port of VNC.
So have MS ripped off VNC? It would make sense, as they couldn’t use their RDP protocol to remote-control the VM: RDP sends Windows GDI calls to the remote desktop client, which wouldn’t work with non-MS guest OSes. So I imagine they are using VNC to capture whatever the VM is displaying to the virtual video hardware.
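One quick way to test the theory, as a minimal sketch: a standard RFB (VNC) server sends a 12-byte version banner as soon as you connect, so you can just look for it. The host name here is hypothetical, and VMRC could still diverge from VNC after the banner, so this only shows whether the first bytes look like RFB:

    # Check whether whatever listens on port 5900 speaks RFB, the VNC wire
    # protocol: a standard RFB server sends its 12-byte version banner first.
    import socket

    host = "virtualserver.example.com"  # hypothetical Virtual Server machine
    with socket.create_connection((host, 5900), timeout=5) as s:
        banner = s.recv(12)             # e.g. b"RFB 003.008\n" for plain VNC
    print("speaks RFB" if banner.startswith(b"RFB ") else "not plain RFB:", banner)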
Just because they use the same port doesn’t mean they use the VNC protocol. Using VNC wouldn’t make much sense anyway, as the MS connection gives us more options. (We use several virtual servers at work with Virtual Server 2005.)
According to this article:
http://msdn.microsoft.com/msdnmag/issues/04/08/VirtualServer2005/de…
It’s an extended version of VNC.
“In addition, the Virtual Service runs the Virtual Machine Remote Control (VMRC) server, which clients access in order to control virtual machines remotely. VMRC is also the name of the protocol used by the server to talk to client applications. The VMRC protocol is an extended form of the standard Virtual Network Computing (VNC) protocol. It uses an enhanced secure version of the Remote Frame Buffer (RFB) interface.
Users access virtual machines by using the rich VMRC client or Web browser via the VMRC ActiveX® control. If a virtual machine’s configuration is such that it can be accessed directly from the network, then it is possible to use Remote Desktop Protocol (RDP) to connect to it and to control it remotely.”
There’s also a sourceforge project to reimplement the VMRC protocol.
http://sourceforge.net/projects/openvmrc/
This topic is a bit out of date
Xen, Parallels and QEMU are what’s hot and interesting to read about now! And the new hardware virtualization ideas: Intel VT-x and AMD Pacifica.
But Virtual PC… even MS has forgotten this forsaken product…
There is an important feature of VMware Workstation that wasn’t in the article: memory that a guest OS isn’t using can be allocated to other processes, so a guest configured with 512MB of memory won’t necessarily use that much. Plus, as of 5.5, VMware supports 64-bit host OSes.
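As a rough illustration (hypothetical numbers): four guests each configured with 512MB, but each actively touching only around 200MB, can coexist on a host with far less physical RAM than the 2GB their configurations add up to, because only the pages the guests actually use need physical backing.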
Also worth mentioning: VMware ESX is its own independent virtualization environment and doesn’t run on a host OS.
Coming from an AS/400 shop, I’m quite familiar with virtualization. AS/400s have had it for years; in the AS/400 world it’s called LPAR (logical partitioning). Although, in our shop, this is something we don’t use.
My question is: other than for testing OSes on, say, VMware, why else would virtualization be used? If you have one physical computer running multiple OSes and services and the hardware fails, then all those OSes and their services will be down. So, from that point of view, it doesn’t seem to be a technology for servers.
So, what would be some real world uses?
If you want redundancy you’ll need VMware’s ESX Server or Xen. Both of them can move a virtual server to another node in a cluster. Depending on how quickly a server goes down, the running virtual machines won’t necessarily have to go down with it.
Use redundant hardware and the chances of failure are slim; when it does go down, it would mean there’s bigger trouble anyway.
Here are some use cases for OpenVZ, I guess it’s mostly similar for other technologies and products: http://openvz.org/documentation/tech/usage-scenarios
I’m a developer. VMware Workstation 4.x allows me to use a single laptop to emulate an entire heterogeneous network, with Fedora Core 3, Windows 2000, Windows XP, Red Hat ES 4.0 and Windows 98, all on a single machine.
This does amazing things for my productivity: I can test many different scenarios, including client/server apps, without ever leaving my single, lilliputian laptop!
Additionally, VMware has a “snapshot” feature: I can take a snapshot of the filesystem for a particular VM (virtual machine), futz with it for a while, and then “roll back” to the snapshot after a virtual reboot. Very, very, very handy for testing software updates and installers, as well as for “running it again”!
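For anyone using QEMU instead (mentioned elsewhere in this thread), qcow2 images support much the same workflow. A minimal sketch in Python, assuming qemu-img is on the PATH, the image file name is made up, and the VM is powered off while snapshotting:

    # Wrap a test run in a qcow2 snapshot so it can be rolled back afterwards.
    import subprocess

    disk = "guest.qcow2"  # hypothetical qcow2 guest image (VM must be off)
    subprocess.run(["qemu-img", "snapshot", "-c", "before-update", disk], check=True)
    # ... boot the VM, run the installer or update under test, shut it down ...
    subprocess.run(["qemu-img", "snapshot", "-a", "before-update", disk], check=True)  # roll back
    subprocess.run(["qemu-img", "snapshot", "-l", disk], check=True)  # list snapshots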
I can imagine virtualization being useful for high-uptime situations, but that just seems a little wasteful to me in most scenarios. Using good-quality hardware and a good-quality OS should result in uptimes exceeding a year without unplanned downtime. Day-to-day performance is a real bottleneck with virtualization: it degrades performance (and thus costs customers wait time) by probably 25% to 50% or more.
In contrast, even without redundancy, using decent-quality equipment and a reliable OS, it’s reasonable to expect less than one day of downtime in two years. That’s about 0.14% downtime, far, far lower than 25%! So virtualization for uptime fits those scenarios where the performance degradation is much less costly than the downtime it avoids. (Not me!)
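The arithmetic behind that comparison, for anyone who wants to check it (using the figures from the two paragraphs above):

    # One day of downtime in a two-year window, vs. the quoted 25% overhead.
    hours_in_window = 2 * 365 * 24            # two years of wall-clock hours
    downtime_pct = 24 / hours_in_window * 100 # one 24-hour outage in that window
    overhead_pct = 25                         # low end of the 25-50% estimate above
    print(f"downtime: {downtime_pct:.2f}%  vs  overhead: {overhead_pct}%")
    # prints: downtime: 0.14%  vs  overhead: 25%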
Good article, and the chap has done well, but it’s very, very Microsoft-biased, since he is from an MS background.
Also, he reviewed an old version of VMware, not the latest 5.5, so it’s unfortunately out of date.
He missed great VMware features like VMware Player, the ability to import competitors’ Virtual PC images, and the cloning of images.
He also passed over the VMware server solutions very quickly.
Still good info though.
One big new question is going to be virtualization vs. blades.
A big machine with virtualization, or lots of low-powered machines in a blade chassis?
> Coming from an AS/400 shop, I’m quite familiar with virtualization. AS/400s have had it for years; in the AS/400 world it’s called LPAR (logical partitioning). Although, in our shop, this is something we don’t use.
Wrong. Partitioning and virtualization are two different kinds of thing. You can virtualize operating systems using either LPARs or VMs, but they work differently.
> My question is: other than for testing OSes on, say, VMware, why else would virtualization be used?
1. Another abstraction layer. Hypervisors can, for example, track I/O accesses from guests, which is pretty important when designing fault-tolerant systems.
2. Costs. Think of webhosters, for example: one PIV serving 50 customers (rough numbers sketched after this list).
3. Better separation of different domains. Hypervisors are not nearly as complex as kernels, so the risk of a fault in the hypervisor is pretty low. Now you don’t separate several processes using the kernel and the MMU, but separate entire operating system instances.
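Back-of-the-envelope math for that webhosting case (all numbers hypothetical, just to show the shape of the saving):

    # 50 customers on one shared box vs. 50 dedicated boxes per year.
    customers = 50
    yearly_cost_per_box = 1500.0  # made-up hardware + colocation figure
    dedicated = customers * yearly_cost_per_box
    consolidated = 1 * yearly_cost_per_box  # one PIV hosting all 50 virtually
    print(f"dedicated: {dedicated:.0f}/yr, consolidated: {consolidated:.0f}/yr, "
          f"saved: {dedicated - consolidated:.0f}/yr")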
Thanks, OSNews!
I have to give a presentation on virtualisation to a few people next week, and this helped.
How shielded are the VMs from one another, really?
These virtual machines have special instructions for talking to the software monitor. How hard is it to break out of a virtual machine and into another one, or into the host?
The more VMs take off, the more interesting it will become to break out of them.
In principle it should be possible to get much better assurance on a hypervisor than on a conventional operating system. Under a Unix-y OS you’ve got a huge body of code and a very wide user-accessible interface.
The interface to a VMM (whether it’s paravirtualised or fully virtualised) is comparatively narrow (roughly a few dozen hypercalls, versus the hundreds of system calls a Unix-y kernel exposes) and so is more easily validated and secured. Hypervisors themselves can also be smaller than a full OS.
Of course, the trusted computing base of the system is really bigger than just the hypervisor: device driver code, privileged management code, etc. all add to the amount of code that can potentially be exploited in some way.
Some work is underway in Xen to minimise the trusted computing base very aggressively, but this is only just getting started. (NB: I work on Xen, so I know about this; there are probably other people working on similar stuff on other platforms.)