Linked by HAL2001 on Thu 23rd Dec 2010 22:49 UTC
Oracle VM VirtualBox enables desktop or laptop computers to run multiple operating systems simultaneously, and supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Version 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners and developers.
RE[2]: Beta program....
by gilboa on Mon 27th Dec 2010 08:21 UTC in reply to "RE: Beta program...."

I am sceptical about most virtualization benchmarks because of the number of unknown factors involved. One they mentioned: by default, VirtualBox caches writes that should sync to disk, inflating the results unsafely. QEMU/KVM can be configured to do this too. Another is that you can't be sure, inside a VM, that a measured second is really a wall-clock second, since clock ticks can get optimised out (this is why the clock slips in VMware when you don't have the tools installed). If you can't reliably measure time, you can't do any xyz/second benchmarks reliably.
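(For reference, and not something from the original posts: the write caching described above maps, on the QEMU/KVM side, to the -drive cache= option. A minimal sketch, with the image path and memory size purely illustrative:)

```python
# Illustrative only: selecting the disk write-cache policy for a QEMU/KVM guest.
# The image path and memory size are made up for the example.
import shlex

def drive_args(image_path, cache_mode="none"):
    """Build -drive arguments for a given cache mode.

    cache=writeback    - uses the host page cache; flushes may be deferred (fast, benchmark-friendly, less safe)
    cache=writethrough - uses the host page cache but syncs every write (safe, slow)
    cache=none         - O_DIRECT, bypasses the host page cache (a common setting for honest I/O benchmarks)
    """
    return ["-drive", f"file={image_path},format=raw,cache={cache_mode}"]

cmd = ["qemu-system-x86_64", "-enable-kvm", "-m", "2048"] + drive_args("/var/lib/vms/guest.img", "none")
print(shlex.join(cmd))  # print instead of launching, so the sketch runs anywhere
```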


I tend to be just as skeptical about benchmarks as you are.
However, in my own experience QEMU/KVM runs circles around VirtualBox once you start adding cores, memory and network devices, and that is without using virtio devices. (I require "real" e1000 devices on my VMs.)
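For illustration only (the TAP interface name and resource sizes are assumptions), picking the emulated NIC model on a QEMU/KVM command line looks roughly like this:

```python
# Illustrative only: how the guest NIC model is chosen on a QEMU/KVM command line.
# Assumes a host TAP device named tap0 already exists; CPU/memory sizes are made up.

def nic_args(model="e1000", ifname="tap0"):
    return [
        "-netdev", f"tap,id=net0,ifname={ifname},script=no,downscript=no",
        "-device", f"{model},netdev=net0",  # "e1000" = full emulation, "virtio-net-pci" = paravirtual
    ]

base = ["qemu-system-x86_64", "-enable-kvm", "-smp", "4", "-m", "4096"]
print(" ".join(base + nic_args("e1000")))           # emulated Intel e1000, as required above
print(" ".join(base + nic_args("virtio-net-pci")))  # virtio alternative, usually faster
```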
As for wall clock vs. guest clock: with NTP enabled I have never experienced any massive clock drift on x86_64 guests, and I rely on having synced host/guest clocks for my software.
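A quick way to sanity-check guest clock drift is to compare the guest's wall clock against an NTP server. A minimal stdlib-only sketch (the server name is an assumption, and network round-trip delay is ignored, so treat it as a rough estimate):

```python
# Minimal SNTP query to estimate how far the guest's wall clock is from an NTP server.
# Rough estimate only: network round-trip delay is not compensated for.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def clock_offset(server="pool.ntp.org", timeout=5.0):
    """Return (server time - local time) in seconds."""
    request = b"\x1b" + 47 * b"\0"  # SNTP v3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        data, _ = sock.recvfrom(512)
    server_secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, integer seconds
    return (server_secs - NTP_EPOCH_OFFSET) - time.time()

if __name__ == "__main__":
    print(f"guest clock offset: {clock_offset():+.2f} s")
```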

Anecdotally, I find QEMU/KVM to be absurdly slow on I/O when using qcow2 and the disk image is mostly unallocated. Once it gets allocated (grows), it's not as bad. An example would be compiling a kernel on a brand-new guest: the first compile allocates a lot of real disk space in the qcow2, pegging the real HDD with metadata updates, etc. The next compile (after rebooting, so nothing is cached) doesn't, because the qcow2 doesn't really have to grow for it.

I have read that the severity might depend on the host filesystem (I use ext4), but I haven't tested this.


I only use raw images, so I can't really confirm or contradict your observation.
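For what it's worth, the grow-on-first-write cost described above can also be reduced by preallocating the qcow2 up front. A hedged sketch (file names and sizes are made up, and qemu-img is assumed to be on the PATH):

```python
# Illustrative only: creating a raw image vs. a qcow2 image with preallocated metadata.
# Preallocation reduces the "first write is slow because the image has to grow" effect.
import subprocess

def create_raw(path, size="20G"):
    subprocess.run(["qemu-img", "create", "-f", "raw", path, size], check=True)

def create_qcow2_preallocated(path, size="20G"):
    # preallocation=metadata writes the qcow2 cluster metadata up front;
    # preallocation=full would also reserve the data (slower to create, faster to use).
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2", "-o", "preallocation=metadata", path, size],
        check=True,
    )

if __name__ == "__main__":
    create_raw("guest-raw.img")              # hypothetical file names
    create_qcow2_preallocated("guest.qcow2")
```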

- Gilboa

Edited 2010-12-27 08:22 UTC
