IBM has revamped its entry-level Unix servers with new processors and an Integrated Virtualisation Manager aimed at the SME market, and launched a vitriolic attack on rival Sun Microsystems. Launching four new servers, three of them based on the new Power5+ processor, along with a 16-way system for the high end, IBM claimed leadership in power and performance in the low-end Unix systems market.
Sun has some pretty impressive deals on their low-end systems and great price/performance ratios. IBM doesn’t seem to be claiming anything in the price realm or even the price/performance realm. To me, that says you’re paying a premium for IBM equipment. Is it worth it?
First, Sun’s new Galaxy systems blow Power5+ away – completely off the table, at ONE THIRD the price. Second, Niagara looks as if it’s going to obliterate Power on throughput-oriented network apps – so someone at IBM obviously isn’t watching the competition. (The fact that you can’t run AIX anywhere but IBM’s proprietary kit is also a big ding, but you can forgive IBM for thinking only about hardware.)
Oh, come on.
Anyone who buys Sun or IBM has money to burn. Both are like the Apple of the server market.
But I have yet to see any benchmarks to back up those anonymous performance claims. Anyone got some facts?
As the previous poster said — Niagara will rock IBM. I think this is just a pre-emptive strike on IBM’s part, to counter the market share they will lose when Niagara is available. And Sun equipment, to an enterprise, is not that expensive (at least at the low end). High end, I agree — but that is what HA is all about. I’d like to know which company based on x86 processors can give you HA like Sun, IBM, or HP.
Lots of Sun astroturfers still around, I see. Tell us about Niagara when it can actually be purchased. Then we’ll listen, mmk?
Companies would be smart to avoid IBM like the plague, lest you get locked into their extremely expensive hardware cycle. Like other posters have said, there are no claims on price; my guess is that, like most of IBM’s hardware, it’s way overpriced.
http://www-03.ibm.com/servers/intellistation/power/
$8,999 for the 1 way IS POWER 285 Express.
It’s not that expensive.
For the eServer p5 510 with 1 CPU, 1 GB of RAM, 2 10/100/1000 Ethernet ports and 2 73 GB disks, $4,799.00 base configuration: (http://www-03.ibm.com/servers/eserver/pseries/hardware/entry/510_91…)
SunFire X4100 server with a single AMD Opteron, 1 GB of RAM, 4 10/100/1000 Ethernet ports and 2 73 GB disks $3,185.00 base + disks:
(http://store.sun.com/CMTemplate/CEServlet?pm=101437:106bbbbc7a3:-69…)
Looks to me like the standard IBM (outrageous) pricing.
And the IBM base system includes the ability to do Logical Partitioning, i.e. running multiple independent copies of AIX or Linux. Seems pretty cheap to me.
As long as you are willing to pay for it and use AIX or SuSE Linux. If you are a Red Hat shop you can’t take advantage of the DLPAR features. I just don’t see the benefit: I have 3 SunFire 4800s that I can split into two Dynamic Domains each, and other than clustering for performance or failover there isn’t much need for the feature.
Considering most people concern themselves with the price tag more than the features, IBM is going to have to come down in price big time to be competitive.
Can someone please explain why someone (or even some corporation) would pay $9,000 for something that spec-wise doesn’t seem to be all that hot? I understand that it’s IBM and it’s targeted at the pro market, but $9,000?
So what’s so good about this workstation?
> Can someone please explain why someone (or even some corporation) would pay $9,000 for something that spec-wise doesn’t seem to be all that hot?
If you have over $10,000 worth of AIX applications and you want them to run faster, it’s cheaper to buy a POWER5 than to re-buy all your applications for AMD64. Also, POWER5 has better floating-point performance than Opteron or Xeon.
Some more details in another thread:
http://www.realworldtech.com/forums/index.cfm?action=detail&PostNum…
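The migration argument can be sanity-checked with back-of-envelope arithmetic; here is a minimal sketch using the prices quoted elsewhere in this thread (the $10,000 application figure is the poster's assumption, not a real quote):

```python
# Break-even check: stay on POWER vs. migrate to AMD64.
# Prices are the base configurations quoted in this thread; the
# application-stack value is an assumed round number.
p5_510_price = 4799.00      # IBM eServer p5 510 base configuration
x4100_price = 3185.00       # Sun Fire X4100 base + disks
aix_app_value = 10000.00    # value of existing AIX applications (assumed)

# Staying on POWER costs only the hardware premium; moving to AMD64
# means re-buying the application stack on top of the cheaper box.
power_premium = p5_510_price - x4100_price
stay_on_power = p5_510_price
move_to_amd64 = x4100_price + aix_app_value

print(f"POWER hardware premium: ${power_premium:,.2f}")
print(f"Stay on POWER: ${stay_on_power:,.2f} vs migrate: ${move_to_amd64:,.2f}")
```

With those numbers the hardware premium is $1,614, far less than re-purchasing a $10,000 application stack, which is the whole point of the argument above.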
> And the IBM base system includes the ability to do Logical Partitioning, i.e. running multiple independent copies of AIX or Linux. Seems pretty cheap to me
Micropartitioning is not as good a feature as IBM claims. It is not usable beyond just a few partitions unless you’re willing to take a pretty significant performance hit. According to IBM’s own Redbook “p5 Servers Architecture and Performance Considerations” (p. 116), the performance hit breaks down like this as more uPARs are added:
4-way SMP (4 cores): 1.00 relative performance
4 dedicated partitions (1 core each): ~1.05 relative performance
4 uPARs (2 cores each): ~0.96 relative performance
2 uPARs (4 cores each): ~0.90 relative performance
4 uPARs (4 cores each): ~0.75 relative performance
Page 115 of the same IBM reference states the reasons for this overhead, summarized as follows:
* The inability to guarantee sets of physical CPU cores to the virtualized uPAR space, which is dispatched at 10-millisecond intervals
* Cache thrashing when multiple virtual threads compete for cache lines
* SMP lock contention on dispatching boundaries
* Simultaneous Multi-Threading (SMT) only makes this worse, which partially explains why POWER5’s SMT is turned off in some benchmarks
IBM’s own reference strongly suggests that POWER5 systems with more than several uPARs, each with several cores, may carry so much overhead that system performance drags well below 50% of full SMP mode. Extrapolating IBM’s chart from 4 to 8 cores in 4 uPARs results in a relative performance of 35%.
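The extrapolation can be reproduced numerically. Here is a rough sketch fitting a line through the two 4-uPAR data points from the Redbook chart above; the linear trend is my assumption, not IBM's:

```python
# Linear extrapolation of relative performance vs. total virtual cores,
# using the two 4-uPAR points from the Redbook chart (linear trend assumed).
points = [(8, 0.96), (16, 0.75)]   # (total virtual cores, relative perf)

(x0, y0), (x1, y1) = points
slope = (y1 - y0) / (x1 - x0)      # performance lost per additional virtual core

# 4 uPARs with 8 cores each -> 32 virtual cores
predicted = y1 + slope * (32 - x1)
print(f"predicted relative performance at 4 x 8-core uPARs: {predicted:.2f}")
```

This yields roughly 0.33, consistent with the ~35% figure claimed above, though a straight-line fit to two points is obviously the crudest possible model.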
If you really look through all the IBM marketing FUD surrounding p5, you should see that Power5 is not really that powerful (at least not as much as IBM would want you to think).
Anyone have similar numbers for Solaris Containers?
Check this link out and look at the right side of the page:
http://blogs.sun.com/roller/page/jclingan
An Ultra 10 is not a “speed demon” by any stretch of the imagination. A V880 is a pretty beefy box and can easily handle more than what is configured here. This goes along with what I have read about uPARs; we didn’t have the hardware or the time to “play” with this when AIX 5L 5.2 hit the street (at that time it was hardware dependent).
Yeah, it’s pretty amazing that he can have 190 Containers on an Ultra 10, but his table doesn’t give any insight into performance. The table:

Host                                 | Container Count
-------------------------------------+---------------------------------------------
Ultra 10 (1 CPU, 1GB RAM, 31GB disk) | 190 Containers
v880 (4 CPUs, 8GB RAM, 342GB disk)   | 600 Containers booted with Apache web server

Are all of these Containers active and doing real work? I’m sure IBM could create the max number of uPARs on a box. If they put all but 1 to “sleep”, their “relative performance” numbers wouldn’t look so bad.
I’d like to see something similar to this:
http://blogs.sun.com/roller/resources/jclingan/global_create.gif
for the above boxes under load.
You’re right though, Solaris will always scale to a larger number of virtual operating system instances. It’s just inherent in their implementation…they have more “sharing”.
I can answer that question. Performance wasn’t the goal of those tests. The tests were purely driven by curiosity. On the Ultra 10, the limitation was RAM. The CPU utilization was pretty darn negligible IIRC.
I bumped up to a v880 that was less than half configured and wanted to run something users might run, so I put Apache in each zone. Again, RAM was the limitation, with very little CPU utilization. I don’t recall the exact numbers.
The overhead of zones is nothing more than the overhead of a set of Unix processes. There is also only one Solaris kernel running on the box. Each zone has its own init daemon and the standard processes that tend to spawn from that. I stripped out all non-essential-for-my-test processes and got RAM overhead down to about 30MB/zone. On the CPU side, a box with 190 idle zones is still relatively idle, resulting in extremely low CPU overhead. When using sparse zones (which is what I was doing), all similar processes (across zones) share the same text segment mapped in from disk (at least I’m pretty darn sure that’s the case). Stack and heap are local to the zones. So each instance of “nscd”, for example, shares the same text segment, maximizing RAM utilization.
Sizing a server with 190 zones from a disk/network I/O perspective and a CPU perspective is not that different from sizing a box running one OS with 190 web servers. It depends on the load you are going to serve.
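That sizing logic can be sketched as simple arithmetic. All numbers here are illustrative assumptions derived from the figures in this thread (the base-OS footprint and effective per-zone overhead are my guesses; sparse-zone text sharing pushes the effective per-zone number well below the ~30MB stripped-down figure):

```python
# Back-of-envelope zone capacity estimate: RAM, not CPU, is the
# limiting resource for mostly-idle zones. Numbers are illustrative.
total_ram_mb = 8 * 1024   # v880 with 8GB RAM (from the table above)
base_os_mb = 512          # assumed global-zone / kernel footprint
per_zone_mb = 13          # assumed effective per-zone RAM overhead
                          # (shared text segments across sparse zones)

max_zones = (total_ram_mb - base_os_mb) // per_zone_mb
print(f"estimated zone capacity: ~{max_zones} zones")
```

With these assumptions the estimate lands near the 600 zones actually booted on the v880, but the real answer depends entirely on the workload inside each zone, as noted above.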
Hope this helps.
John Clingan
Sun Microsystems
http://blogs.sun.com/jclingan
> Anyone have a similiar numbers for Solaris Containers?
Solaris Containers/Zones do not have any overhead even if you run hundreds of Containers. If there is some overhead, it is pretty much negligible.
Umm, Solaris Containers are glorified chroot jails. The IBM scheme is a true virtualized CPU context, with registers and the like; you might say it’s like VMware supported in hardware. Solaris Containers are not any of that.
Glorified jails is a bit of an over-simplification, but zones were modeled after jails. You are correct that Solaris Containers have to be positioned appropriately. There are many different virtualization technologies. Containers are lightweight but don’t provide as much failure isolation as domains for example. Combining virtualization technologies (containers with domains, vmware, Xen (some day), clustering containers, etc) gives you the best of both worlds.
As a side note, what I like most about Containers is that they functionally work the exact same way on SPARC and X86, on old hardware and new hardware. That gives end users the ability to deploy containers using existing assets, and choice based on what their availability needs/cost structure allows.
John Clingan
Sun Microsystems
http://blogs.sun.com/jclingan
> Solaris Containers/Zones do not have any overhead even if you run hundreds of Containers. If there is some overhead, it is pretty much negligible.
Performance is relative. Real numbers for Power5, and a generalization of “negligible” for Solaris isn’t a very fair comparison. In real life, nothing is free, especially hundreds of something.
I’m not partial to either side, but the original post was pretty one sided. I figured I’d challenge it.
Does anyone have similar numbers for Solaris?
<Quote>
For the eServer p5 510 with 1 CPU, 1 GB of RAM, 2 10/100/1000 Ethernet ports and 2 73 GB disks, $4,799.00 base configuration: (http://www-03.ibm.com/servers/eserver/pseries/hardware/entry/510_91…..)
SunFire X4100 server with a single AMD Opteron, 1 GB of RAM, 4 10/100/1000 Ethernet ports and 2 73 GB disks $3,185.00 base + disks:
(http://store.sun.com/CMTemplate/CEServlet?pm=101437:106bbbbc7a3:-69…..)
Looks to me like the standard IBM (outrageous) pricing.
</Quote>
Hard to imagine that you guys are comparing an x86-platform server vs. a POWER-processor server.
How price-competitive do you want it to be? If you want to be fair, why not compare it against the xSeries range or the AMD-based class of servers?
POWER is in the same league as Sun’s SPARC processors, which contain features that x86-based processors do not have.
IBM being able to sell their POWER-based servers somewhere near the price point of an x86 platform is quite price-competitive to me.
Are you the real fluffybunny, or did you just grab the name? Because if I remember correctly, fluffybunny is in jail for hacking various web sites.
I am curious if IBM is just eating the extra cost of the POWER processor on the lower end offerings just to sell them over Sun or Dell machines. A quick look at store.sun.com shows that a minimally configured V210 or V240 is competitive with IBM’s offerings so it really doesn’t make any difference which ones I mentioned.
Further an article posted here a couple of days ago points out some of the shortcomings of DLPARs on Linux, where if I use Zones and Containers it doesn’t make any difference what hardware I use as long as Solaris 10 supports it.
> POWER is in the same league as Sun’s SPARC processor which contain features that x86 based processors do not have. IBM being able to sell their POWER processor based servers somewhere near the price point of an x86 platform is quite price competitive to me
That is true, the Power systems are in the same league as Sun UltraSPARC machines, but only when running AIX and not Linux. First off, Linux on Power is still a half-assed piece of crap compared to AIX — a whole host of features is not available, and I would be damned if I chose Linux over AIX on Power machines.
If we’re talking prices and AIX is in the picture, then throw in another $750 per processor and about $500 per uPAR you want to run with AIX on the box. Licensing alone for AIX can make a difference if we’re talking about price-sensitive buyers. Compare that to Solaris being completely free regardless of how many hundreds of Containers you run under it.
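Using the per-processor and per-uPAR figures above, a quick sketch of how AIX licensing scales with a box's configuration (the 4-CPU/8-uPAR example is mine, purely illustrative):

```python
# AIX licensing cost for a given configuration, using the per-unit
# figures quoted above: $750 per processor, $500 per uPAR running AIX.
AIX_PER_CPU = 750
AIX_PER_UPAR = 500

def aix_license_cost(cpus: int, upars: int) -> int:
    """Rough AIX licensing total; Solaris 10 would be $0 regardless."""
    return AIX_PER_CPU * cpus + AIX_PER_UPAR * upars

# Example: a 4-CPU box carved into 8 AIX uPARs
print(f"AIX licensing: ${aix_license_cost(4, 8):,}")
```

Even on a modest 4-CPU box, carving out 8 AIX uPARs adds $7,000 in licensing on top of the hardware, which is exactly the kind of line item a price-sensitive buyer notices.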
> Umm, Solaris Containers are glorified chroot jails. The IBM scheme is a true virtualized CPU context with registers and the like, you might say it’s like VMWare supported in the hardware. Solaris Containers is not any of that.
So? Functionally it is still the same thing — you’re able to virtualize numerous OS instances with either approach. Only Solaris Containers are much, much more efficient and usable at it — you can squeeze a lot more out of the same hardware using Solaris Containers than with either the Power hypervisor or VMware. Administering Solaris Containers is also a no-brainer that can save you real dollars in admin costs; neither Power hypervisor virtualization nor VMware can claim that — you still end up maintaining completely disjoint OS instances, and the only thing you save is hardware, which is peanuts compared to admin expenses. Given a choice, my vote goes to Solaris Containers any time of the day.
> If you have over $10,000 worth of AIX applications and you want them to run faster, it’s cheaper to buy a POWER5 than to re-buy all your applications for AMD64.
Speaking of applications, did you know that there are only about 200 *certified* applications that can run on AIX 5.3 and Power5? Compare that with a few thousand for Solaris and SPARC.
<Quote>
> If you have over $10,000 worth of AIX applications and you want them to run faster, it’s cheaper to buy a POWER5 than to re-buy all your applications for AMD64.
Speaking of applications, did you know that there are only about 200 *certified* applications that can run on AIX 5.3 and Power5? Compare that with a few thousand for Solaris and SPARC.
</Quote>
Your explanation pitting *AIX 5.3* with only 200 *certified* applications against *Solaris and SPARC* is kind of lame. You forgot to restrict it to *Solaris 10* only, since we are talking about the latest iteration of operating systems, aren’t we?
Furthermore, please post the links where you got that information instead of just mentioning it, because just saying something without anything to back it up doesn’t give you credibility, Anonymous.