Linked by Thom Holwerda on Fri 29th Jan 2010 16:08 UTC
Oracle and SUN "Several of the concerns about Oracle's acquisition of Sun have revolved around how Unix technologies led by Sun would continue under the new ownership. As it turns out, Solaris users might not have much to worry about, as Oracle executives on Wednesday affirmed their commitment to preserving the efforts. In the case of Solaris, Oracle had already been a big supporter of the rival Linux operating system. Oracle has its own Enterprise Linux offering, based on Red Hat Enterprise Linux. For Oracle CEO Larry Ellison, the idea that Linux and Solaris are mutually exclusive is a false choice."


Actually Linux scales far better than both Solaris and AIX. The largest single-image computers are either old Irix boxes or new NUMA Intel boxes running Linux. In fact, most of the NUMA scheduling and cell-migration work from Irix was ported to Linux a long time ago.

Not correct. I reiterate: Linux scales well on large clusters, a bunch of PCs on a fast network. But on a single machine with many CPUs, Linux does not scale well.

A while ago, Linux scaled to 2-4 CPUs. Here are Linux scalability experts dispelling the FUD that Linux does not scale well: ,289142,s...
"Greenblatt: Linux has not lagged behind in scalability, [but] some [Unix] vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux's] horizontal scaling. The National Oceanographic and Atmospheric Administration (NOAA) has had similar results in predicting weather. In the commercial world, Shell Exploration is doing seismic work on Linux that was once done on a Cray [supercomputer].
Terpstra: Accusations have been made that Unix and Windows scale to far greater numbers of processors than the Linux 2.4 kernel can. While this is true, a bare claim like this makes little sense unless it is placed within the context of deployment [needs]. Today, Linux kernel 2.4 scales to about four CPUs. Still, one should consider whether a four-CPU server is needed for departmental file and print serving in the average company. After all, there are an average 45 users per server."
They talk about how well Linux scales; at the end they say:
"With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling."
To scale up to many CPUs in a single machine takes decades; it is not easily done. Linux DOES scale well on a large cluster, which is exactly what Google, Amazon, etc. use. That is HORIZONTAL scaling. At VERTICAL scaling (one big computer), Linux is poor.
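The hard part of vertical scaling is the serial fraction: locks, cache-line contention, and other work that cannot be parallelized. A minimal sketch of Amdahl's law makes the point; the 95% parallel fraction below is an illustrative assumption, not a measured figure from any system in this thread:

```python
# Amdahl's law: maximum speedup on n CPUs when a fraction p of the
# work is parallelizable. Illustrative numbers only.

def speedup(p: float, n: int) -> float:
    """Upper bound on speedup with parallel fraction p on n CPUs."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a small serial fraction caps vertical scaling hard:
for n in (4, 16, 48, 64):
    print(f"{n:3d} CPUs -> {speedup(0.95, n):.1f}x")  # 64 CPUs give only ~15.4x
```

This is why adding sockets to one machine gives diminishing returns, while adding whole machines to a cluster (horizontal scaling) sidesteps the shared serial bottleneck.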

Look at this new SAP benchmark, which uses 48 cores. Linux utilizes 87% of all CPUs, while Solaris utilizes 99%. That is more evidence that Linux does not scale well even today, and it is the reason Solaris scores higher on SAP (despite using slower CPUs and slower RAM):

Linux 87% CPU

Solaris 99% CPU
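Taken at face value, the utilization gap translates directly into effective cores. This is plain back-of-the-envelope arithmetic on the figures quoted above, not a model of the SAP benchmark itself:

```python
# Effective compute implied by the quoted CPU utilization figures
# (48 cores, 87% vs 99%). Simple arithmetic, nothing more.

def effective_cores(cores: int, utilization: float) -> float:
    """Cores' worth of useful work at a given average utilization."""
    return cores * utilization

linux = effective_cores(48, 0.87)    # ~41.8 effective cores
solaris = effective_cores(48, 0.99)  # ~47.5 effective cores
print(f"Linux: {linux:.1f}, Solaris: {solaris:.1f}, gap: {solaris - linux:.1f}")
```

On identical hardware, that gap is roughly six cores' worth of work left on the table.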

Linux may EXIST on old Irix-era SGI boxes with lots of CPUs, but what do you think those boxes do? Serve thousands of users, or calculate stuff? The mere existence of Linux on such hardware is not proof that Linux scales well. Most probably, the scaling is bad, according to the links above.

Here we see that as recently as Linux v2.6.27, Linux had severe problems with 64-socket machines; only now is Linux no longer 250 times slower in some circumstances.

I wonder how many problems Linux has not fixed yet?

The thing that both AIX and Solaris have going for them is that they both have their own proprietary integrated platforms (SPARC and POWER systems) which provide most of the "magic" regarding fault tolerance, and other enterprise-like facilities.

No, that is not true. Solaris has self-repair functionality in software, with ~40% fewer hardware problems, which is what the collected data says. And functionality in software always beats functionality in hardware: it is easier to patch software, easier to add new functionality, and so on. With software you can easily add 100 MB of new code; in hardware, you have to swap parts and so on.

But from a processing-scalability perspective, sorry, neither AIX nor Solaris can hold a candle to Linux. However, as I said, in other enterprise-centric features both platforms are far more mature than Linux, but that is mostly due to the specialized hardware they run on... rather than just the software itself.

Ehrm. That new SAP benchmark I showed is on x86, not on specialized SPARC hardware. So, wrong again.

Here is a Linux zealot who switched to Solaris, because Linux does not cut it when moving to large-scale workloads:
"What got us using OpenSolaris was Linux’s (circa 2005) unreliable SCSI and storage subsystems. I/Os erroring out on our SAN array would be silently ignored (not retried) by Linux, creating quiet corruption that would require fail-over events. It didn’t affect our customers, but we were going nuts managing it. When we moved to OpenSolaris, we could finally trust that no errors in the logs literally meant no errors. In a lot of ways, Solaris benefits from 15 years of making mistakes in enterprise environments. Solaris anticipates and safely handles all of the crazy edge cases we’ve encountered with faulty equipment and software that’s gone haywire."
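The behavior that quote praises is simple to state: an I/O layer should retry transient errors and loudly surface persistent ones, never drop them silently. A hypothetical sketch of that policy (the `read_block` function and retry counts are invented for illustration, not actual Solaris or Linux driver code):

```python
import errno
import logging
import os
import time

def read_block(fd: int, size: int, retries: int = 3) -> bytes:
    """Read from fd, retrying transient I/O errors; raise rather than
    silently returning bad data if the error persists."""
    for attempt in range(1, retries + 1):
        try:
            return os.read(fd, size)
        except OSError as e:
            if e.errno != errno.EIO:
                raise  # not a transient I/O error; surface immediately
            # Log every failure so "no errors in the logs" means no errors.
            logging.warning("I/O error on fd %d (attempt %d/%d)",
                            fd, attempt, retries)
            time.sleep(0.1 * attempt)  # simple backoff before retrying
    raise OSError(errno.EIO, f"persistent I/O error after {retries} retries")
```

The point of the quote is exactly the last line: after retries are exhausted, the error must propagate instead of being swallowed.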
Here is another guy who switched to Solaris and, on the same hardware, saw lots of improvement:

BTW, some of the largest enterprise systems, like Amazon... run almost exclusively on Linux: from web front ends and load balancers to even the DB backends, with some sprinkles of Solaris/Oracle at the very, very deep backend. Granted, computers are just tools, and for plenty of applications Solaris and AIX are far better suited than Linux. But in the same sense, Linux may not have some of the specific capabilities of those systems. Labeling Linux as immature or not ready for the enterprise is just silly.

Yes, they use a bunch of computers on a network. Linux is run at low utilization, because otherwise it crashes.

Maybe you have heard about Linux being buggy and bloated? What does Linus T actually mean when he says that Linux is bloated? What does Andrew Morton mean when he says that "the code quality is declining"? What does Dave Jones mean when he says that "the kernel is going to pieces"? What does Alan Cox mean when he says that "the kernel should be fixed"? I mean, several Linux kernel developers are talking about bloat and bugs and declining quality. I wonder what they actually mean?

Do you realize that those supercomputers have some of the largest and most complex I/O subsystems, file serving, and database/mining applications?

Where do you think the globs of data come to generate all those globs of flops? Pixie dust?

Yes, those supercomputers also have many more CPUs than big iron. But you must understand that supercomputers and big iron are very different. Supercomputers have a simple structure. They only do one thing: calculate. That is easy to do.

And yes, those supercomputers are considered big iron. In fact, there is no bigger iron than a supercomputer. Heck, the largest IBM z10 system looks puny compared to a large Cray, which, yes, runs AMD processors and Linux in all its glory.

*sigh* Read here, and see that supercomputers and big iron are very different.
"[Super computers] tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks."
The difference is similar to a GPU vs a CPU. A CPU is general purpose and much more complex than a GPU, which is extremely fast at one thing. You do realize that GPUs and CPUs are different; likewise, supercomputers and big iron are different.

Some more information.
