For those of you looking at playing with Solaris 10, check out the JumpStart Enterprise Toolkit 3.2.2 (http://wwws.sun.com/software/download/products/3f5e55d1.html). It’ll make it quite easy to build a JumpStart server for you to boot and play with Solaris 10.
Have fun.
-Bruno
This looks very cool. It’d be just the ticket for web development, where things often require installing lots of random applications or libs, instead of running a separate physical server for each development team. I’d be curious how well it stacks up against the various Linux and BSD virtual OS offerings; does anyone know?
In terms of stability and performance for zones, Solaris 10 beats most BSDs and Linux distributions (depending on the hardware implementation and who the zealot is). A similar article about Solaris Zones discussed comparable options on Linux, such as VPS/LVS and a few other projects (see that discussion for more details).
My gut feeling is that Solaris would outperform them, as it is quite stable; the Solaris 9 12/03 and later releases have significant performance tweaks, and Solaris itself is quite mature, with excellent SMP and memory management.
-Bruno
Hmmm… VMS did this years ago…and AIX does this as well…and so does HP-UX.
But what advantages do these have over FreeBSD/DragonFly jails? I know that there are Sun people reading here, so if you could please enlighten me, I’d be grateful.
Yes, AIX and HP-UX can virtualize; for AIX it was a relatively recent addition (AIX 5L 5.2), and for HP-UX it was 11. In my last job (where we used AIX and Solaris) you had to use specific models of RS/6000 machines to use LPARs (similar to Dynamic Domains in Solaris). DLPAR is new to AIX 5.2 and also seems to be limited to specific RS/6000 hardware.
Reading documentation from HP, vPars also seem to be hardware specific (at least for now).
I have already duplicated Dennis Clarke’s efforts on a Solaris Intel machine, so with Solaris Zones you are not limited to specific hardware to use virtualization.
AIX LPAR is fundamentally different from Solaris Zones, since it relies on hardware to perform its functions and therefore lacks granularity (as far as I know it allows only up to four logical partitions per processor, while Solaris is theoretically unlimited). Also, the Hypervisor (the driving technology behind LPAR) incurs a significant performance tax, consuming up to 15% of the CPU; Solaris Zones, on the other hand, have very minimal impact on performance.
I don’t see how this is so “innovative”. I read the article, and it doesn’t look that far off from User Mode Linux. Am I missing something here?
My only regret is that I did not instantly grab an AMD Opteron server to compare the Solaris 10 Zones there with the easy install that I had on the UltraSPARC Netra unit. However, I was completely honest when I said that I performed the entire install from a laptop with a modem, over 100 miles away. I had to use the Netra unit because I had remote console access to it and I wasn’t going to wait until Monday to start my experiments with Zones. I will update the article when I have data to report on the AMD Opteron platform.
Pardon my ignorance, but can these “virtual servers” still use all your hardware as if they were the only ones running, such as more than one CPU? Also, everyone keeps mentioning how minimal the performance hit is, which is nice, but how much of a performance hit is there? Under 5%? Still, the trade-off seems very worthwhile for peace of mind and security. 🙂
I hope this is readable here. With two zones running on a single CPU Netra T1 unit :
$ zoneadm list -vc
ID NAME STATUS PATH
0 global running /
3 zone1 running /zone/1
5 zone2 running /zone/2
I see little or no CPU usage at all.
$ vmstat 2
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr f0 s0 s2 — in sy cs us sy id
0 0 0 1224192 191120 3 14 4 0 0 0 0 0 1 1 0 408 64 63 0 0 99
0 0 0 1200432 172280 2 5 0 0 0 0 0 0 0 0 0 402 48 80 0 1 99
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 43 91 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 23 71 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 39 84 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 35 81 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 48 82 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 37 78 0 0 100
0 0 0 1200432 172280 0 0 0 0 0 0 0 0 0 0 0 401 25 71 0 0 100
^C$
Dennis
Christopher,
These are good questions. In general, there are two kinds of performance impact to consider. Let’s break it down. First, how much overhead is there for running a zone? Since a zone comprises a bunch of processes, such as nscd, snmpd, etc., the normal operation of these daemons, even when idle, can chew up a small amount of CPU on your system. And of course, an otherwise idle zone does consume some memory. We usually see about 60MB of RAM in use per zone; of course, many of these processes are idle most of the time, and so can be paged out if the system is under memory pressure.
Secondly (and I think this is what you were asking), for apps running inside the zone, is there some constant-factor overhead? Generally speaking, not much. I would expect *less* than the 5% you quote, although I might get myself in trouble saying more than that. If you hit a serious performance issue with zones, report the problem; that’s the only way that Sun will know that you are being impacted.
We tried hard to make performance monitoring of Zones a reasonably simple task. Try out our ‘prstat -Z’ command and let us know what you think!
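To give a feel for it, something along these lines should show the per-zone picture (the zone name here is just a placeholder, and the ps/pgrep options are from memory, so check the man pages):
# per-zone CPU and memory summary, refreshing every 5 seconds
prstat -Z 5
# list or match only the processes that belong to one particular zone
ps -f -z zone1
pgrep -l -z zone1 nscd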
Lastly, regarding your question about SMP: because Zones are not a virtual machine technology– but rather a “virtual application environment”– they take full advantage of the underlying SMP-ness of the operating system. A bonus is that Zones is integrated with Solaris 10’s sophisticated resource management facilities, so it is possible to limit the overall resource consumption of a zone– for example, you can constrain a zone to 2 CPUs on the machine using the Resource Pools facility. Alternatively (or even in conjunction), you can allocate proportional CPU shares to the zone using the Fair-Share Scheduler (FSS(7)).
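To make that concrete, here is a rough sketch of the two facilities used together, run from the global zone. The pool, pset, and zone names are just placeholders, and I am writing this from memory, so please verify the details against the pooladm(1M), poolcfg(1M), and zonecfg(1M) man pages:
# enable the resource pools facility and save the current configuration
pooladm -e
pooladm -s
# a processor set with exactly two CPUs, and a pool bound to it that uses FSS
poolcfg -c 'create pset web-pset (uint pset.min = 2; uint pset.max = 2)'
poolcfg -c 'create pool web-pool (string pool.scheduler = "FSS")'
poolcfg -c 'associate pool web-pool (pset web-pset)'
pooladm -c
# bind the zone to that pool and give it 10 CPU shares under FSS
# (the zonecfg changes take effect the next time the zone boots)
zonecfg -z zone1
zonecfg:zone1> set pool=web-pool
zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> commit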
Disclaimer: Opinions expressed are my own. They are in no way related to the opinions of my employer, Sun Microsystems.
I believe this is not too far from what you can achieve with User Mode Linux. We’ve been using similar technology in Unix classes at school, using UML.
There are, however, a few differences:
1.) Solaris zones access the host filesystem directly, while in User Mode Linux you have to provide a file or block device containing a disk image for the guest to use. This is quite bad, because you have to preallocate space for each guest. There is a project that aims to allow direct host filesystem access, but I don’t know how usable it is. You could of course work around this by putting the root FS on NFS with DHCP and letting the guest OS mount the host’s partition via NFS, but that would probably have quite a significant performance overhead. A filesystem inside a filesystem is not very optimal either.
2.) It is not that easy to set up. This could be fixed with a few scripts; I would love Debian and possibly other distros to ship scripts that instantly create the guest’s filesystem. Preferably, it would allow some sharing (e.g. creating hard links to the original data, with the kernel transparently unlinking and copying when the guest wants to write: some equivalent of the copy-on-write seen in memory management). A rough sketch of a partial workaround follows this list.
3.) The networking is not so easy to set up. This could also be part of the script.
4.) Linux does not have resource allocation as well developed as Solaris’s, so the guest kernel would have to be able to limit itself (e.g. not use more than 30% of CPU time). Is it possible to do precise resource allocation under Linux (maybe using some kernel patch, or something like that)?
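Regarding points 1 and 2, part of this can already be approximated with sparse files and UML’s copy-on-write block devices. A rough sketch (the ubd syntax is how I remember it from the UML documentation, so treat it as an assumption rather than a recipe):
# create a 2 GB backing image as a sparse file -- no space is actually preallocated on the host
dd if=/dev/zero of=backing_fs bs=1M count=0 seek=2048
mkfs.ext3 -F backing_fs
# each guest then boots with only a small COW overlay on top of the shared backing file,
# so writes stay private per guest while the common data is stored once
linux ubd0=guest1.cow,backing_fs mem=128M
linux ubd0=guest2.cow,backing_fs mem=128M
It is still a filesystem inside a file, though, so it does not really answer the first point.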
It’s a pity that this is not implemented yet, because the core technology is mostly there; I believe it would not be too difficult to do even before Solaris 10 is officially released.
This is more like Linux VServer (http://www.linux-vserver.org) than UML. VServer virtualizes the kernel and provides <1% overhead. It is missing some of the QoS features, though.
After looking at :
http://user-mode-linux.sourceforge.net/UserModeLinux-HOWTO.html
and
http://usermodelinux.org/
I think that UML suffers a number of performance hits from running a separate Linux instance. I don’t think there is measurable data on Solaris Zones that shows any real loss. The Solaris Zones capability is built in from the very beginning, all the way down to the kernel. It is not a patch.
Also, see :
http://user-mode-linux.sourceforge.net/help-kernel-v1.html
There are a lot of fixes that need to be applied to Linux to get UML to work. On Solaris 10 all you need to do is boot it. Also, networking is simple, the entire graphical front end to the zone is in place, and you can use dtlogin to access the virtual server. It is really a separate instance of the Solaris server concept, except it isn’t separate in the hardware. Everything that you need is there from the moment that you install. UML seems very experimental, whereas Solaris Zones are going into production soon. I currently have a developer building a bootstrap GCC compiler in a zone and nothing has halted or shown a problem. UML crashes with Apache/MySQL/PHP.
I don’t see how the two can be compared other than to say that UML exists and it is somewhat like a cheap version of the Solaris 10 Zone. If you tweak it and fiddle with it long enough it will probably work. Sort of. It is not enterprise class technology.
Dennis
After making that last post I had to go back and perform another experiment. How long does it take, really, to create a third virtual server on a simple single-CPU 1U rack machine running Solaris 10?
At 21:29 Hrs I create a new filesystem.
# newfs -v -b 8192 -f 2048 -i 2048 -m 5 /dev/rdsk/c0t1d0s3
newfs: construct a new file system /dev/rdsk/c0t1d0s3: (y/n)? y
At 21:30 Hrs I config the third zone :
# mount -F ufs -o logging /dev/dsk/c0t1d0s3 /zone/3
bash-2.05b# zonecfg -z zone3
zone3: No such zone configured
Use ‘create’ to begin configuring a new zone.
zonecfg:zone3> create
zonecfg:zone3> set zonepath=/zone/3
zonecfg:zone3> set autoboot=true
zonecfg:zone3> add net
zonecfg:zone3:net> set address=192.168.35.212
zonecfg:zone3:net> set physical=hme1
zonecfg:zone3:net> end
zonecfg:zone3> verify
zonecfg:zone3> commit
zonecfg:zone3> ^D
bash-2.05b# zoneadm list -vc
ID NAME STATUS PATH
0 global running /
3 zone1 running /zone/1
5 zone2 running /zone/2
– zone3 configured /zone/3
At 21:31 Hrs I install software onto the new virtual server
# zoneadm -z zone3 install
Preparing to install zone <zone3>.
Creating list of files to copy from the global zone.
Copying <2521> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <808> packages on the zone.
Initializing package <147> of <808>: percent complete: 18%
.
. I then wait a bit ..
.
Initializing package <294> of <808>: percent complete: 36%
.
. I wait a bit more ..
.
Initialized <808> packages on zone.
Successfully initialized zone <zone3>.
At 21:39 Hrs, finally, I boot the new virtual server
bash-2.05b# zoneadm -z zone3 boot
Finally I log in to the console of my new virtual server and
perform basic setup: things like the hostname and name service configuration. Then I am done. I can register a new user account on it and roll it out.
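For anyone following along, the console step is just zlogin; from memory it looks like this, using the zone name from above:
# attach to the new zone's console to answer the initial identification questions
zlogin -C zone3
# later, an ordinary (non-console) root shell inside the zone
zlogin zone3
The console session detaches with ~. much like tip.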
What could be more convenient for an educational situation or a production situation? No mess. No fuss. Ten minutes and a coffee and you have a new server ready to go.
Dennis
PS: Sorry, it took me twelve minutes, but I was slowed down by making this post, and I went and got a Coke from the fridge during the install phase.
Dennis,
I couldn’t agree more. I was looking over some of the data on both of the UML links you posted, and it seems that the UML folks would like to believe that they can do what Zones do today, but it’s not true.
On the Community UML site, they point out that in comparing UML to Zones, UML still appears to need:
1) The ability to access some part of the host’s filesystem (as with chroot), so the root filesystem would not live on a device or in a file but as plain files under e.g. /zone/1 (see the article). There’s a project working on this.
2) A script to autoconfigure networking (on the host and in the guest kernel).
3) Scripts to create new zones similar to the Solaris ones.
This is in addition to the point you raise: that UML is really a kernel patch for Linux. If you read those sites you will also notice that there are different levels of patches applied to the 2.4 and 2.6 Linux kernels, and I find that odd.
While I wouldn’t call Zones 100% complete, it is without a doubt a much more mature implementation, even in Solaris Express today as it is.
UML has nothing like the administration and configuration tools Sun provides for Zones, so I fail to see how it can compare to Zones at this point in time.
From the write-up you put on Blastwave, I think the proof is in the pudding, as they say: you had it set up and going in minutes. The average person couldn’t even patch and build a Linux kernel in that amount of time.
Regards,
Alan DuBoff
Dennis, this is actually not the case. You can get UML running in a few minutes. For example, if you use Debian, you just install a Debian package containing a UML kernel, choose a root filesystem, and go. And it’s not that experimental; we’ve been using this in production for a long time (as have several hosting companies). Actually, Solaris Zones is the new feature here; UML has been around for quite some time and is quite well tested.
Alan, you don’t need to patch a Linux kernel. As in Solaris, where you did not compile a new kernel for the Zone, you don’t need to do so here either.
An example, on my totally unpatched fresh Debian Sarge system:
apt-get install rootstrap user-mode-linux
cp /etc/rootstrap/rootstrap.conf .
# now edit it, I only changed IP address
tunctl -u juraj
ifconfig tap0 192.168.10.1
# set up masquerading, of course I could assign it a public ip address and
# just set routing
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -j MASQUERADE
# I want to run this as a regular user, better to put him in a group
chown juraj /dev/net/tun
su - juraj # don't need to do this as root
rootstrap uml.image
Boot it up and go. Nothing difficult.
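The boot itself is then one command, roughly like this (the exact ubd0/eth0 parameters are from memory and the UML HOWTO, so double-check them):
# boot the UML kernel against the image rootstrap just built, attached to the tap0 device from above
linux ubd0=uml.image eth0=tuntap,tap0 mem=128M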
—–
So as you can see, the only thing that is really “difficult” is setting up the networking.
And you can also boot from a plain directory structure using hostfs (that’s also in Debian).
So I can’t see any special functionality in Zones that could not be emulated with hostfs.
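For example, inside a running UML guest you can mount a host directory directly, or even boot from one. Roughly (syntax as I remember it from the hostfs documentation, so treat it as a sketch):
# inside the guest: mount the host's /zone/1 directly, with no disk image involved
mount none /mnt -t hostfs -o /zone/1
# or boot with the root filesystem taken straight from a directory on the host
linux root=/dev/root rootflags=/path/to/guest-root rootfstype=hostfs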