“Linux-VServer allows you to create virtual private servers and security contexts which operate like a normal Linux server, but allow many independent servers to be run simultaneously in one box at full speed. All services, such as ssh, mail, Web, and databases, can be started on such a VPS, without modification, just like on any real server. Each virtual server has its own user account database and root password and doesn’t interfere with other virtual servers.” A guide for Debian is available here.
Xen seems to do more or less the same, supposedly at very little overhead.
Will this solution come at even less overhead?
Or will it be easier to maintain control over the ‘virtual servers’? (Because Xen doesn’t really provide control beyond limiting/killing, right?)
Cies Breijs.
It’s more like Zones (Solaris) or Jails (FreeBSD) – I wouldn’t run a live box without them.
Xen still takes up memory for additional kernels, base processes, etc., while VServer has only one copy of everything. If you are not running different OSes (or kernels), VServer makes more sense.
Please fix the first link in the news; it points to OSNews. Thank you.
I have been a Jumpline user for about 2 years, and this is exactly how they do it. It is very fast and effective.
VServer is a great project, and it is extremely useful. I thought about using VServer as an intense chroot-type environment for services. Actually, I wrote a paper about using VServer to “jail” Asterisk. VServer takes a bit of tweaking before it’ll run Asterisk. Anyway, the paper is at:
http://www.telephreak.org/papers/vpa/
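For anyone curious what the underlying “jail” primitive looks like, here is a minimal C sketch of a classic chroot jail (illustrative only; VServer contexts do much more than chroot, and the jail path, uid/gid, and service binary below are made-up examples):

/* Minimal sketch of the classic chroot "jail" primitive. VServer
 * contexts add far more (separate process, network, and uid spaces),
 * but this shows the basic idea of confining a service to a subtree.
 * The path, uid/gid, and binary are hypothetical examples. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *root = "/var/jails/asterisk";   /* hypothetical jail root */

    if (chroot(root) != 0) { perror("chroot"); return 1; }
    if (chdir("/") != 0)   { perror("chdir");  return 1; }

    /* Drop root privileges so the service cannot escape the jail;
     * setgid must happen before setuid. */
    if (setgid(1000) != 0 || setuid(1000) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* Exec the confined service; the binary and all its libraries
     * must exist *inside* the jail tree. */
    execl("/usr/sbin/asterisk", "asterisk", "-f", (char *)NULL);
    perror("execl");
    return 1;
}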
Is this something like User Mode Linux?
Systems like VMware and Xen host multiple operating system instances. The main advantage is that you can have different operating systems running on the same server. The main disadvantage is equally obvious: running dozens of operating systems on the same server consumes a lot of resources. These systems also do not come without performance overhead:
Xen overhead analysis:
http://www.hpl.hp.com/techreports/2005/HPL-2005-80.pdf
Systems like VServer and Solaris Containers (Zones + Resource Manager) use far fewer resources, as a single OS is shared between all containers. The performance overhead is claimed to be small to nonexistent:
http://www.usenix.org/events/vm04/wips/tucker.pdf
The main disadvantage is that you cannot use multiple operating systems (unless you run one of these systems within, say, a Xen domain). An interesting twist will come when Linux ABI compatibility is added to Solaris Containers next year.
It really depends on your requirements, you cannot definitively say that one approach will always be better than the other.
(disclaimer: I work on Xen, so you should ruthlessly grill me to justify any claims I make ;-))
For Xen’s performance, I’d also like to reference:
http://www.cl.cam.ac.uk/netos/papers/2003-xensosp.pdf
http://www.cl.cam.ac.uk/netos/papers/2004-oasis-ngio.pdf
These paint a more positive picture than the HP results.
Both papers report correct results, I’m sure. The main pain point for Xen IO virtualisation is GigE line rate with small network packets. This is really a pretty extreme workload and will cause a high CPU load even on native OSes. Of our (smallish) selection of test boxes, the dual 3GHz Xeons coped with this without degradation (as shown in the above papers); nothing else had the horsepower to manage it, although the results were still good. The point I’m really making is that performance will depend on your system, your configuration, and on how evil your workload turns out to be.
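To put a number on how extreme that workload is, here is a back-of-envelope calculation (using the standard minimum Ethernet framing figures; the little program is just illustrative):

/* Back-of-envelope: packet rate at GigE line rate with minimum-size
 * frames. 64-byte frame + 8-byte preamble + 12-byte inter-frame gap
 * = 84 bytes = 672 bits on the wire per packet. */
#include <stdio.h>

int main(void)
{
    const double line_rate_bps   = 1e9;        /* 1 Gbit/s */
    const double bits_per_packet = 84 * 8.0;   /* 672 bits per packet */

    printf("%.0f packets/sec\n", line_rate_bps / bits_per_packet);
    /* ~1.49 million packets/sec -- an interrupt and processing storm
     * that stresses even native OSes, let alone a virtualised IO path. */
    return 0;
}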
Machine virtualisation gets these problems because it has to interpose between the OS and the hardware. The advantage of something like VServers is that it can leverage OS mechanisms directly, with the “virtual machine” abstraction being enforced by extra hooks in the OS rather than by a hypervisor running in a different privilege ring. VServers / Containers / etc. should always give pretty much native performance because of this.
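As a toy illustration of what those “extra hooks” amount to (this is not actual VServer code, just a sketch of the idea): each process is tagged with a context id, and existing kernel paths gain a cheap comparison, e.g. on signal delivery:

/* Toy illustration of OS-level isolation hooks (not real VServer code).
 * Each process carries a context id; kernel paths such as signal
 * delivery check it, so each context behaves like an isolated
 * "virtual server" at essentially native speed. */
#include <stdio.h>

struct task {
    int pid;
    int ctx_id;   /* security context the process belongs to */
};

/* Hook on the signal-delivery path: a process may only signal
 * processes within its own context. */
int may_signal(const struct task *sender, const struct task *target)
{
    return sender->ctx_id == target->ctx_id;
}

int main(void)
{
    struct task web  = { 101, 1 };   /* process in context 1 */
    struct task mail = { 202, 2 };   /* process in context 2 */

    printf("web -> mail allowed? %d\n", may_signal(&web, &mail)); /* 0 */
    return 0;
}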
The flip side is that machine-level virtualisation has a small hypervisor and well-defined narrow OS<->virtual hardware interface, which is easier to validate and secure. The extra isolation also enables things like virtual machine migration, restartable device drivers, etc.
I think a combination of the two is best: use a hypervisor for very high-assurance isolation, for servers belonging to different entities, for servers that need to migrate separately. Use VServers / Containers / Jails to isolate services within those servers.