Xen is a relatively new technology that enables several virtual machines (domUs) to run on one computer. The purpose of this article is to determine which operating system (NetBSD or Linux) should be selected as the domain 0 (dom0) operating system to get the best performance when running several CPU- and disk-intensive virtual machines at the same time.
Everyone already knows that NetBSD is very well tuned. Its performance has improved heavily since the 2.0 release…
Nice to see NetBSD's performance. Great work, NetBSD team…
Anecdotally, two different codebases compile at different speeds. This is a load of crap. Get an app that compiles on both machines and compare that. Linux will kick NetBSD's ass in that case.
> Get an app that compiles on both machines and compare that.
> Linux will kick NetBSD's ass in that case.
uhhh… can you show some benchmarks or proof of that?
“uhhh… can you show some benchmarks or proof of that?”
Useless benchmarks like the above need not be shown.
What? Each virtual machine was running NetBSD, and the test was run on all virtual machines: compiling the NetBSD kernel.
The only thing changing was the Xen host operating system.
The test was to show which host operating system could cope better with the task at hand. Although Xen seems to have some bugs when it comes to *BSD, the test was equal. It’s not like he tried to compile the BSD kernel on a Linux machine. I think you misread the article. A compile stresses many parts of a machine, which makes me think this is quite a good test.
I would love to see the same test with a bug-free Xen, and on Opteron hardware instead.
What I’d like to see is the whole benchmark repeated with Linux as the guest OS.
I am not familiar with Xen, but maybe NetBSD just works better under NetBSD than under Linux…
Yes, a reverse test would be nice too: a test between NetBSD and Linux where each hosts only Linux virtual machines, compiling the 2.6 kernel.
Whatever the results, conclusions will be drawn from them.
If there is a bug in Xen on NetBSD, then why is Martii using the numbers at all? I would have found an alternative, maybe sar. I am curious how he got disk performance results out of commands that don’t measure disk performance; sar -d and sar -u (on most *nix variants) would have given far more pertinent disk performance info than ps -aux.
My bad, ps -axw rather than ps -aux.
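For what it’s worth, here is the kind of sampling I have in mind; a minimal sketch using sysstat’s sar on Linux (flags and availability differ between *nix variants, and the 10-second interval and 6 samples are just example values):

  sar -d 10 6   # per-device transfer rates and request statistics
  sar -u 10 6   # CPU utilisation, including the iowait share

Unlike scraping ps output, this at least measures disk activity directly.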
What this study does is show the Linux people that some improvements need to be made.
Why this person decided to test this particular function I’m not sure, but it does help point out a weakness that can now be corrected.
The Linux loopback driver does have rather bad performance (we recommend using LVM for virtual machine filesystems for this reason). The fact that NetBSD’s seems to perform better suggests that Linux’s loopback could use some optimisation.
I’m curious as to what filesystem was used for the /xen partition, since this would also affect the performance.
It would be interesting to see further benchmarks using e.g. raw partitions for virtual machine filesystems, to eliminate the effect of the loopback driver and /xen filesystem.
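To make that concrete, the storage backend is chosen per guest in the disk line of the Xen domain config file. A rough sketch, assuming a Xen 2.x-style config, a hypothetical image under /xen, and an LVM volume group named vg0 (the exact syntax can differ between Xen releases):

  disk = [ 'file:/xen/domU1.img,sda1,w' ]   # file on the /xen filesystem, via the loopback driver
  disk = [ 'phy:vg0/domU1,sda1,w' ]         # LVM logical volume, bypassing loopback
  disk = [ 'phy:hda3,sda1,w' ]              # raw partition, bypassing loopback

Repeating the compile benchmark with each of these would show how much of the gap is really down to the loopback path and the /xen filesystem.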
It’s definitely useful to see such benchmarks to enable people to make the best possible choices for their Xen deployments.
The author suggests that half the domUs get slower performance due to using the second hyperthread. I’d actually expect the domains on the first hyperthread to get worse performance since:
a) they have to share CPU time with dom0 itself
b) they have to context switch *to* dom0 in order to do IO
It’d be interesting to clarify if this is the case.
These tests show that it’s possible to run several virtual machines at the same time on one physical server.
I already knew that :-) Why do VMware ESX/GSX servers exist?
The Xen console (xm console name) in NetBSD sometimes hangs and it’s impossible to get any response (input or output).
That’s unforgivable in a corporate environment and irritating at all times. It would be nice to see the same test performed on VMware ESX/GSX Server, though. If we accept the test figures as facts, the conclusion could be that Linux scalability isn’t optimal, which I doubt is the case. Interesting test.
First off, thank you very much for the extraordinary work you and others have done on the Xen project. Imho, Xen has the potential to completely reconfigure the commodity hardware market, a true example of a “disruptive technology,” one which is destined to become one of the premier FOSS infrastructure projects alongside the likes of Linux, the BSDs, Apache, and Samba.
I too would like to see a good set of I/O benchmarks comparing all the common storage configurations likely to be used in real-world deployments (e.g., loopback, LVM, raw, iSCSI, NBD, and ???), broken out by common use-case scenarios (e.g., virtual hosting, HTTP clustering, HA clustering databases, etc.).
I think running the Linux kernel on both virtual machines and compiling the Linux kernel would not make much difference in the results between them. NetBSD as host isn’t faster because it was compiling NetBSD. Both virtual machines were running NetBSD, and the results depend mainly on gcc performance and all the pieces related to it.
I am sorry if I am asking dumb questions, but my first question is: if I load this computer up with 10 virtual systems, am I really getting 10 times the work done, or, with 128 MB of RAM for each one, is that an illusion? My second question is: which servers do I want to run on this? Not my backup DNS server, not web servers, etc., because any hardware malfunction and everything I have is down…
Thanks
Also, there was a Linux dom0 performance bug for block devices, which is fixed in the -testing tree (soon to be 2.0.6). This might also account for the discrepancy.
You’re not going to get any more work done by dividing stuff up into virtual machines. Due to the overheads of virtualisation, you’ll actually get slightly less work done overall (although Xen’s overheads are very small).
The big wins for using Xen are:
* partition services between virtual machines for security and availability (compromises / crashes caused by malicious / buggy software in one VM are sandboxed)
* consolidate multiple physical servers into virtual machines on one box, without compromising on isolation
* “live” migrate virtual machines between hosts without stopping them
* suspend virtual machines to disk and resume them later (perhaps on a different host)
With live migration and some kind of network-based storage, you may want to use Xen virtual machines simply for the migration feature (even if there’s only one guest VM per box). What’s safe to consolidate onto one piece of hardware will depend on your local setup.
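For reference, a minimal sketch of the xm side of the last two points, assuming a guest named webvm, a second host named host2 with xend relocation enabled, and a hypothetical state file path (exact flags vary between Xen releases):

  xm save webvm /var/xen/save/webvm.chk    # suspend the domain to a state file on disk
  xm restore /var/xen/save/webvm.chk       # resume it later, possibly on another host
  xm migrate --live webvm host2            # move the running domain with minimal downtime

For resuming on a different host, and for live migration, the guest’s storage has to be reachable from both machines, which is where the network-based storage comes in.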