Based on functionality alone, VMware’s VMware Server 1.0 would merit serious consideration for inclusion in any developer or system administrator’s tool kit. However, it’s VMware Server’s price – free – that propels this product from merely worth having to practically must-have. During tests, eWEEK Labs found VMware Server 1.0, which was released July 12, to be extremely useful for development, testing and deployment of applications – be they stand-alone or part of a complete operating-system-to-application stack.
I have tried VMWare Server and both MS offerings. MS Virtual Server requires IIS, so it’s out. (I have seen claims that you do not need IIS, but I never got that to work.) MS Virtual PC has been working fine for me for a year, but it is no speed racer. Along comes VMWare Server for free. I have always heard GREAT things about VMWare, so I tried it.
Pros: F A S T, free, easy to use and many more
Cons: uses 35MB of RAM AND has five services running even if you are not using it.
My 2 cents:
If you need to run virtual machines a good deal of the time – VMWare Server is your answer.
If you are sandboxing new applications – MS VPC is the way to go.
That being said – I have switched to VMWare Server for everything now because it is so darn fast.
Enjoy and Thanks VMWare!
What is your reasoning for VPC being better for sandboxing applications? It has noticeably worse performance than VMware.
If you don’t want to keep VMware Server running on your dev machine, use it to create the VMs and then run them with VMware Player.
“Cons: uses 35MB of RAM AND has five services running even if you are not using it.”
Shut them down when you’re not using it. Create simple batch files you can run to do it quickly using the NET START and NET STOP commands.
*Googles for random tutorial* Look here:
http://www.tech-recipes.com/batch_file_programming_tips235.html
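For example, something along these lines would do it. Note the service names below are guesses from a typical VMware Server 1.0 install – check services.msc for the exact names on your machine before relying on this:

```bat
@echo off
rem vmware-stop.bat - shut down the VMware Server background services.
rem Service names are assumptions; verify yours in services.msc.
net stop "VMware NAT Service"
net stop "VMware DHCP Service"
net stop "VMware Authorization Service"
net stop "VMware Registration Service"
net stop "VMware Virtual Mount Manager Extended"
```

A matching vmware-start.bat is the same list with NET START instead.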
My 2 cents:
If you need to run virtual machines a good deal of the time – VMWare Server is your answer.
If you are sandboxing new applications – MS VPC is the way to go.
or on the other front:)
– If you run virtual machines, Xen
– If you’re sandboxing use Solaris zones, FreeBSD jails, UML
I understand about the services, but just set them to manual.
I have been using VMware Workstation on Linux (and a beta on Windows), and VMware Server on Windows.
I’d say both are great pieces of software. I can’t get hardware acceleration of DirectX working in VMware Server; it seems the graphics driver is lacking that.
But comparing the two, I found the Server version more stable and have had fewer problems with it.
I strongly recommend VMware Server.
… one can’t get past the fact that VMWare is not Free (as in speech).
I don’t know why, but after using FOSS for 6 years I feel very uncomfortable using something which I can’t compile on my own (if I wanted).
That’s why I use Xen for virtualization. It’s FAST, fairly stable and free. Sure, setup is a pain, management is not easy, but in the end it’s worth it.
What I would like to see is side-by-side performance comparison of Xen and VMWare Server…
I don’t know why, but after using FOSS for 6 years I feel very uncomfortable using something which I can’t compile on my own (if I wanted).
O_o That doesn’t sound very healthy
“O_o That doesn’t sound very healthy”
I’m sorry, we don’t understand what you’re getting at.
I agree with my FOSS friend on this one. I will use Vmware until Xen can run Windows (damn curse) easily with a nice management interface (which I am prepared to pay for).
“It’s FAST, fairly stable and free. Sure, setup is a pain, management is not easy, but in the end it’s worth it.”
You just described Linux.
Why someone would forgo stability because they don’t have access to source code they probably wouldn’t understand in the first place is beyond me.
The price has made it must-have indeed. We’re going to make more use of virtual machines from now on, especially on the smaller branch offices abroad.
The new setup for servers will be one server with plenty of storage and a backup unit running VMWare Server, hosting one virtual domain controller and one file server. If necessary we can add an Exchange server, depending on how much storage is needed for mail.
Of course only time can tell whether this has been an advantage or not, but I do believe it’ll make deployment and maintenance easier.
Ever heard of putting all of your eggs in one basket? Virtual machines are great, but I wouldn’t stack all of your services onto one system. You have to remember that there is still a host OS and a hypervisor that has every opportunity to mess up, thus taking down not one service, but multiple. Also, what would be the advantage of doing this over having a domain controller also running Exchange and acting as a file server? At least that way, you wouldn’t be limiting the storage of each server to chunks of the total storage. You could use it all if necessary for all of the services.
No no, we’re not going to have the DC also running the file server and Exchange. Each will be running on its own virtual server.
As long as the backup keeps running we should be able to get a totally trashed machine back up and running in a short time. Worst case it means reinstalling Windows and VMWare on the host server, then restoring the virtual servers from tape.
So far we’ve only had a few hardware problems with redundant components like disks and power supplies, so we’re not too worried about a total system failure happening often.
I run several virtual machines daily as part of my job, but I prefer VPC because it gives me separate windows for each machine instead of a tabbed window with everything in it. Having to lose focus on one machine because I need to check something on another is a bother (just like having to use the web interface for Virtual Server).
Granted, I haven’t tried setting up machines in server and then using them with the free player yet. If only the promise of being able to use VPC images in the player (beta) came true (for my machines) I might have taken the time to test the speed of VMWare sooner.
For info, you can use VMWare Server with separate windows if you don’t like using tabs.
Thanks, I thought I heard something like that, but I didn’t see it in the beta I had used for a test.
Time to download the real version and mess around with it. 🙂
– Run multiple OSes without a reboot, good to try out the latest kernel.
– Test out applications and configurations.
– Access 32 or 64 bit apps that normally would require a reboot.
– Test out new features in the kernel, Raid, Firewall, LVM.
– Code kernel modules or Systemtap scripts without worrying about bringing down your system.
– Faster to set up than chroot and jails
– Containment of security breaches and applications
– Allow you to investigate possible viruses.
– Windows for the kids so they can have all their favorite plug-ins.
– Solaris on top of Linux without having to deal with hardware compatibility.
– Test cluster applications
– Easy way to get access to any operating system remotely
– Transferring data from foreign filesystems.
Why wouldn’t I use it? Well, because it is either way too slow or broken on my system. Updating to 1.0 only made matters worse. The Workstation edition, however, works as it should. Probably there is some compatibility problem.
It’s great for Linux kids who try out new distributions every month.
Hi, I work on Xen in case anyone wonders what way I lean 😉 I try to keep a balance in my comments, but I obviously know most about the Xen solution…
> I will use Vmware until Xen can run Windows (damn
> curse)
Curse indeed! I’m sure you’re aware that the newer AMD and Intel processors can handle this. There’s also a tentative plan to leverage QEmu’s CPU emulator for *some* guest code, which would enable Xen to run Windows properly on older systems (it seems unlikely to me this approach would be as fast as VMware, which is optimised for this kind of setup, but it should still be pretty good).
There are some things (e.g. save/restore, migration, improved performance, corner cases to enable more guests to boot and to run better) that need enhancing for fully virtualised guests. 64-bit guests aren’t so well supported either right now – Windows64 won’t boot, although I think Linux will.
Still, things are moving in the right direction for full virtualisation support, it’s just taking a while for it to mature. [side note: Xen was the first OSS project I know of – in fact possibly the first *available* project – that leveraged hardware virtualisation extensions. With Xen 3.0 it was probably the first released virtualisation solution to support hardware extensions, but I haven’t actually checked]
> easily with a nice management interface
> (which I am prepared to pay for).
I think Xensource (http://www.xensource.com) will sell one, but to be honest I’m not terribly clear what’s in the commercial product. Enomalism (http://www.enomalism.com) are working on a free (GPL) interface, which looks very swish to me. Some other open source product announced support for Xen and QEMU the other day, but I can’t remember what it was!
Xen’s currently adding a more appropriate management API which should make setting up management GUIs much nicer.
If you only want to manage one Xen host (VMware Workstation style, or a single server) there’s XenMan (https://sourceforge.net/projects/xenman/) and VirtManager (http://people.redhat.com/berrange/virt-manager/screenshots.html).
I understand VMware have a nice management stack, although I’ve never used it myself – this stuff is important, although given the number of projects providing virtual machine managers that work with multiple VMMs, it seems like their management tools might need to start supporting Xen, VPC, etc. (and vice versa!) in order to fit people’s deployments.
Minor nitpick:
– If you’re sandboxing use Solaris zones, FreeBSD jails, UML
UML is not the same as Solaris Zones and FreeBSD’s Jail programs. The only Linux equivalent of both at the moment is Vserver:
http://linux-vserver.org/
There’s also OpenVZ (open) / Virtuozzo (commercial) for sandboxing on Linux.
I went to ASU and I don’t know why home computers waste CPU transistors on such complicated things as virtual memory and kernel protection mechanisms, when home computers have gigabytes of RAM these days. It’s not like home computers are multiuser mainframes on one meg like the old days. I’d rather Intel give me more processors so I can do multi-processing. Thus the name “losethos”.
http://www.losethos.com
Mark: Does Xen have similar network options as VMware? Bridged, NAT’d, or host only networks? Ability to attach virtual machine to its own physical NIC? These are things I rely on from VMware.
> Does Xen have similar network options as VMware?
> Bridged, NAT’d, or host only networks?
It’s certainly possible to set up all those configurations but some people find it not entirely straightforward…
The bridged setup is designed to work out of the box because it’s the more “transparent” way to arrange things. It generally seems to work OK unless you have strange requirements (like an existing bridging setup).
A routed setup can be selected by just switching a couple of config options, and eliminates some problems that the bridged setup caused for some people.
I’m not sure if there’s a NATed setup by default but you could certainly get this working with the routed setup and a bit of IPtables diddling.
Host only networks could be created by using bridging but not connecting to the outside world. Again you’d need to twiddle the scripts a bit but it’s doable.
Bridging and routing just use the standard Linux mechanisms so it’s also possible to do standard firewalling / bridgewalling in the usual way, along with using some of the Linux traffic shaping tools.
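To make that concrete, here’s a rough sketch. The option names are from a typical Xen 3.0 install; file locations and script names can vary by distribution, and eth0 is an assumption, so treat this as a starting point rather than a recipe:

```
## /etc/xen/xend-config.sxp - pick one pair, then restart xend:
# (network-script network-bridge)   (vif-script vif-bridge)   # bridged (default)
# (network-script network-route)    (vif-script vif-route)    # routed

## For a NAT-style setup on top of the routed one, masquerade guest
## traffic out of dom0's external interface (run as root):
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```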
> Ability to attach virtual machine to its own
> physical NIC? These are things I rely on from
> VMware.
I didn’t know VMware could do that! Well there are two ways you can do this in Xen:
1) Set up a network config that basically bridges a single virtual NIC onto a physical NIC. I’d imagine this is how VMware does it, but maybe in some optimised way… You’d have to make a custom network config for this to work.
2) Dedicate the physical device to the virtual machine directly. In this setup the virtual machine is allowed to drive the physical NIC directly using its own PCI drivers. This requires trusting the virtual machine – current hardware can’t enforce memory protection for devices so it’ll be able to DMA to/from anywhere in RAM – but may give better performance / flexibility than usually possible.
Some large IBM machines already have the facilities to do 2) safely, hopefully it’ll spread to consumer machines eventually (I’m sure it will).
The performance of 1) should be good for many workloads, and the virtual net device is continuing to be optimised. 2) is just quite nifty so lots of people like to play with it 🙂
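For reference, option 2) looks roughly like this with Xen’s pciback mechanism (the PCI address 00:0c.0 is an example – use lspci to find your NIC’s, and note the syntax differs slightly if pciback is built as a module):

```
# dom0 kernel command line: hide the NIC from dom0 so a guest can claim it
pciback.hide=(00:0c.0)

# then in the guest's domain config file:
pci = [ '00:0c.0' ]
```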
2) Dedicate the physical device to the virtual machine directly. In this setup the virtual machine is allowed to drive the physical NIC directly using its own PCI drivers. This requires trusting the virtual machine – current hardware can’t enforce memory protection for devices so it’ll be able to DMA to/from anywhere in RAM – but may give better performance / flexibility than usually possible.
Passing through a device means all the interesting virtualization opportunities go away. No bandwidth throttling or interrupt load balancing, no migration abilities (the VM is now locked to that one physical server), any suspended VMs can only be resumed on the one piece of hardware. And of course it’s insecure, allowing a guest to take over the computer.
The only device I’d ever like to see passed through is a graphics adapter :-). Mmm… fast VM graphics … I wish. (In seriousness, graphics adapters are too complex to pass through. But I can dream!)
My primary OS has neither host nor guest support