On Thursday, Oracle executives talked up the planned Solaris 11 release due in 2011, with the Unix OS upgrade offering advancements in availability, security, and virtualization. The OS will feature next-generation networking capabilities for scalability and performance, said John Fowler, Oracle executive vice president of systems, at a company event in Santa Clara, Calif. “It’s a complete reworking of [the] enterprise OS,” he said. Oracle took over Solaris when the company acquired Sun Microsystems early this year.
In less than a year Oracle has performed a “complete” rework of Solaris? Sure.
Sun was already doing a lot of work before the acquisition, so it's not so much that Oracle did the work; it's that the work was already in flight and Oracle is delivering it.
Is that what he said?
Yes, according to the article it is exactly what he said:
“It’s a complete reworking of [the] enterprise OS,” he said.
Are you blind? Complete rework of Solaris? Complete rework of the enterprise OS? See any different words here?
Oracle didn’t need to rework Solaris … it’s been evolving nicely on its own (transparency/OSS issues aside). I look forward to maintaining it at many of my customer sites over the next decade. It’s a solid enterprise OS selection and will be for the foreseeable future.
BTW, Solaris 11 Express is both production ready and Oracle supportable, as corrected in the parent article.
Maybe you should call up John Fowler, Oracle executive vice president of systems, and tell him he is wasting money because Solaris is so damn fantastic it needs no rework or whatever.
What a rude response. All that ctl alt delete said was that it has been progressing nicely and in a methodical fashion.
I think he is referring to Solaris 10. Solaris 11 is a complete rework compared to S10.
For instance, S11 scales much better now, up to thousands of threads and cpus. S10 scaled very well earlier, the best of any OS – for instance, it could handle 512 cpus (each thread is treated as an individual cpu) in a Sun T5440 without problems.
But Oracle will release a Solaris server in 2015 that has 16,384 threads, which requires a rework of the scaling code, and it will be driven by S11. No OS can scale to that many threads/cpus today.
For instance, IBM recently released their POWER7 cpu. Their biggest POWER7 server, released a couple of months ago, has something like 256 cores. The mature IBM enterprise Unix, AIX, could not handle that many cores. AIX had to be rewritten (according to several articles on IT web sites, such as theregister.co.uk) to handle the biggest POWER7 server with as few as 256 cores. The step from 256 cores up to 16,384 threads is huge and needs a complete rework.
Linux supercomputers are basically a bunch of PCs on a fast network. That is a different kind of scaling, which is easy to do. But scaling to thousands of threads in one single server is difficult. No one can scale to that many cpus today.
IMHO IBM does not use AIX on its heavy boxes; really big iron is controlled by the specifically written z/OS.
The big System z is basically an array of smaller computers, with one z/OS on top that commands those smaller computers. Each small box, however, may run its own separate OS (AIX or Linux, if you will).
Now Solaris is targeted at the same level as z/OS, so it can handle lots of resources.
And then there is HP-UX, and that's pretty much it for the mainframe-class business.
IBM uses z/OS on their mainframes, yes. But the mainframe cpus are dog slow, much, much, much slower than x86 cpus.
In IBM’s fastest boxes they use POWER7 (which happens to be a really fast cpu in some workloads).
It seems a bit premature to proclaim as fact that Solaris 11 scales up to “thousands of threads and CPUs” when there aren’t yet any 1000+ core servers on which Solaris will run (unless someone’s gotten Solaris x86 to boot on an Altix UV – that’d be a very cool thing to see benchmarked).
Further, at least according to Oracle’s product page, the largest configuration for the T5440 has 256 threads. Do you have a link to any benchmarks that show scaling up to 256-way SMP on a single Solaris instance? It’d be really interesting to see.
While MOST Linux supercomputers listed on the Top500 are indeed clusters, the Altix 3000 and 4000 series from SGI are not clusters but large NUMA machines (the same “breed” of system as the big boxes from Oracle, IBM, etc.) and scaled beyond 1,000 CPUs years ago. Setups of up to 4096 CPUs in a single instance were sold by SGI as far back as 2006.
Linux scaling _used to be_ rather sad compared to the high end UNIXes and compared to Solaris and IRIX in particular. That rapidly started changing once SGI got into the game and decided to replace IRIX with Linux. Today, both Solaris and Linux scale very well. I believe Solaris still has the edge in scaling linearly up to the largest systems a single instance will run on, but Linux is supported on considerably larger setups, thanks to SGI.
Solaris 10 scales up to hundreds of cpus. Each thread is treated as a cpu by Solaris, so if you see 256 threads, it is the same thing as 256 cpus.
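To illustrate what "each thread is a cpu" means in practice (a minimal sketch of my own, not anything from the article): both Solaris and Linux report every hardware thread as one logical CPU, which you can see with a one-liner:

    #include <stdio.h>
    #include <unistd.h>   /* sysconf() */

    int main(void)
    {
        /* Each hardware thread shows up as one logical CPU, so on a box
         * with 256 hardware threads this prints 256, no matter how many
         * sockets or cores those threads are spread across. */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        printf("logical CPUs online: %ld\n", ncpus);
        return 0;
    }

On Solaris, running psrinfo gives you the same information, one line per virtual processor.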
Earlier there was at least one large Solaris server, weighing 1,000 kg, with 144 cpus, sold by Sun. Maybe it was the Enterprise 10000 model? I can't remember. But that was over a decade ago, driven by an old Solaris version which even back then scaled to hundreds of cpus.
In 2015, Solaris 11 will drive this new server with 16,384 threads. That is one of the reasons the S11 kernel was reworked: to scale further up, from several hundreds of cpus to many thousands of cpus (threads).
When I talked about large Linux servers just being clusters, I was actually thinking of SGI's Altix servers – they are the large clusters I was referring to.
If you look at the impressive Altix benchmarks, you will see that it is the same kind of workload run on a cluster.
http://www.c0t0d0s0.org/archives/6751-Thank-you,-SGI.html
“Perhaps those benchmarks published by SGI finally deliver a few nails for the coffin of the reasoning of some fanboys that Linux scales better than Solaris, because there are systems with thousands of cores out there. Linux scales on this system exactly like a cluster. And that means for non clusterable tasks in one word: Not.”
Regarding Linux scaling: here we have a bunch of Linux scaling experts who, in an article, "dispel the FUD from Unix vendors that Linux does not scale well":
http://searchenterpriselinux.techtarget.com/news/929755/Experts-For…
“Linux has not lagged behind in scalability, [but] some vendors do not want the world to think about Linux as scalable. The fact that Google runs 10,000 Intel processors as a single image is a testament to [Linux’s] horizontal scaling. Today, Linux kernel 2.4 scales to about four CPUs.”
“With the 2.6 kernel, the vertical scaling will improve to 16-way. However, the true Linux value is horizontal scaling [that is: clusters].”
Q: Two years from now, where will Linux be, scalability-wise, in comparison to Windows and Unix?
A: “It will be at least comparable in most areas.”
According to the Linux experts, Linux scales to 10,000 cpus in one single image in the current v2.4, and in Linux 2.6 the kernel will improve to 16-way? Isn’t that a bit strange?
The Altix machine sold in 2006 with 4096 cores was using Linux v2.6 (which had only been released two years earlier). I find it extremely hard to believe that Linux scaled badly in v2.4 (2-4 CPUs) and two years later suddenly scales to 4096 cores in the Altix machine. It takes decades to scale well. The only conclusion is that the Altix machine is a cluster; otherwise Linux would have had no chance to scale to 4096 cores in two years.
Linux scales well on large clusters, yes. But that is not Big Iron. When people say Linux scales well (which it does), they are talking about clusters – that is scaling horizontally.
In other words: Linux scales well HORIZONTALLY, but is still not good at VERTICAL scaling (which Solaris excels at on the large Solaris servers).
The Linux kernel developers only have access to their desktop computers with 1-2 cpus. They have no access to big servers weighing 1,000 kg with 100s of cpus and TB of RAM (who would pay the many millions of USD?), so it is difficult for them to test Linux on large configurations. Everything in Linux is tailored to 1-2 cpus and a few GB of RAM – it is a desktop OS.
This is taken from the main ext4 developer:
http://thunk.org/tytso/blog/2010/11/01/i-have-the-money-shot-for-my…
“Ext4 was always designed for the ‘common case Linux workloads/hardware’, and for a long time, 48 cores/CPU’s and large RAID arrays were in the category of ‘exotic, expensive hardware’, and indeed, for much of the ext2/3 development time, most of the ext2/3 developers didn’t even have access to such hardware. One of the main reasons why I am working on scalability to 32-64 nodes is because such 32 cores/socket will become available Real Soon Now, and high throughput devices such as PCIe attached flash devices will start becoming available at reasonable prices soon as well.”
He says that 32-64 cores will soon become available (in a couple of years), so he has started working on scaling as high as 32-64 cores.
From 32 cores, it is a long stretch to 100s of cpus or cores. Let us not even start to talk about the Altix's 4096 cores.
The only big servers I know of are old and mature Unix machines: Solaris with 144 cpus, and big IBM POWER servers with 64 cpus. None of the big Unix vendors has ever offered servers bigger than those. It sounds weird that SGI, in two years' time, went from Linux on 2-4 cores up to Linux servers with 4096 cores. Something is strange. Or it is just a cluster.
Another reason Linux has problems is that Linux cuts corners and cheats just to win benchmarks. Linux does not obey the relevant Unix standards, which means your data can be corrupted. This is a bad thing:
http://milek.blogspot.com/2010/12/linux-osync-and-write-barriers.ht…
“This is really scary. I wonder how many developers knew about it especially when coding for Linux when data safety was paramount. Sometimes it feels that some Linux developers are coding to win benchmarks and do not necessarily care about data safety, correctness and standards like POSIX. What is even worse is that some of them don’t even bother to tell you about it in official documentation.”
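To make the contract being argued about concrete, here is a minimal sketch of my own (not code from the blog post), assuming an ordinary POSIX system: an application that cares about durability opens the file with O_SYNC and/or calls fsync(), and expects the data to be on stable storage when those calls return. The complaint quoted above is that on Linux this guarantee was, in practice, weaker than it looks.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "critical record\n";

        /* O_SYNC: each write() should not return until the data is on
         * stable storage, per POSIX. */
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_APPEND | O_SYNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, msg, strlen(msg)) != (ssize_t)strlen(msg)) {
            perror("write");
            return 1;
        }

        /* Belt and braces: fsync() flushes outstanding data (and, on a
         * correct implementation, the drive write cache) on demand. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }

        close(fd);
        return 0;
    }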
Nice. We have to wait until 2015 to verify your claims. Vaporware!
The more realistic explanation is that some data structures were reworked to accommodate a system with 16k threads. Then marketing ran with the big number.
This statement from the article is so nebulous that the great improvement might be as dull as support for RoCEE. Woo-hoo, another driver that is meaningless without buying a ridiculously expensive PCIe adapter.
Your arguments would be more convincing if people could read your references. Registering is a nonstarter.
You’re mixing things up again. 10,000 cpus is for a cluster of machines. 16-way is for a single machine.
2.4 and 2.6 were developed in parallel for many years. Wikipedia has a nice timeline:
http://en.wikipedia.org/wiki/Linux_kernel
2.4 continues to be developed today. 2.5 development started in 2001, and that work became 2.6, which was released in December 2003. So your two-years number is pretty meaningless.
Also, one of the biggest improvements in 2.6 was the work to get rid of the ‘big kernel lock’, which helped scalability immensely.
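To show what that means in practice, here is a toy user-space sketch of my own (not actual kernel code): with a single "big lock" every CPU serialises on the same mutex even when touching unrelated data, whereas per-bucket locks let unrelated updates proceed in parallel. The 2.6-era scalability work was largely about moving the kernel from the first pattern toward the second.

    #include <pthread.h>

    #define NBUCKETS 64

    static long counters[NBUCKETS];

    /* Pattern 1: "big lock" style - one mutex guards everything,
     * so every CPU queues on the same lock. */
    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    void bump_biglock(int bucket)
    {
        pthread_mutex_lock(&big_lock);
        counters[bucket]++;
        pthread_mutex_unlock(&big_lock);
    }

    /* Pattern 2: fine-grained - one mutex per bucket, so updates to
     * different buckets never contend with each other. */
    static pthread_mutex_t bucket_lock[NBUCKETS];

    void init_bucket_locks(void)
    {
        for (int i = 0; i < NBUCKETS; i++)
            pthread_mutex_init(&bucket_lock[i], NULL);
    }

    void bump_finegrained(int bucket)
    {
        pthread_mutex_lock(&bucket_lock[bucket]);
        counters[bucket]++;
        pthread_mutex_unlock(&bucket_lock[bucket]);
    }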
Ext2 was developed in 1993. Of course Linux developers didn't have access to such hardware; it was only a few hobbyists back then. Ext3 was developed in 2001, which is about the time the corporate world started embracing Linux. So Ted is right: the ext2/3 developers didn't have access to such hardware.
Today things are different. There are many paid engineers. The story was here on OSNews:
http://www.osnews.com/story/23636/Who_Really_Contributes_the_Most_t…
Weren’t you paying attention? There is a nice table here:
http://news.cnet.com/8301-13505_3-10288910-16.html
Even your beloved Sun and Oracle are on the list. Obviously they have access to such hardware.
[q]From 32 cores, it is a far stretch to 100s of cpus or cores. Let us not even start to talk about Altix 4096 cores.
The only big servers I know of, are old and mature Unix, Solaris with 144 cpus, and IBM big POWER servers with 64 cpus. None of the big Unix vendors has ever …[/q]
You know, on the public Oracle roadmaps there are servers with 16,384 threads and 64 TB RAM. I don't think such servers would be on the roadmap without a reason. Solaris has been doing scalability for decades – 100s of cpus and threads for decades. Now it is time to move up to thousands of cpus and threads. No other cpu vendor has that many threads, except Oracle with the Niagara cpus. That is why Oracle sees the need to rework Solaris now for their next-gen servers. You never know what Oracle has in its labs today.
Oh, and how about BTRFS? The ZFS killer? Have you read the mailing lists about all the bugs and data corruptions? Not many, if any, of the ZFS features have been implemented in BTRFS yet. We will have to wait until 2020 to verify the BTRFS developers' claims. Vaporware!
Anyway, I posted quotes where the Linux experts claim that v2.6 is going to scale to 16 cores, up from v2.4 with 2-4 cores. I don't see how Linux will scale up to thousands of cores in two years – especially as Linux developers don't have access to such big servers.
I am not mixing anything up. I am quoting the Linux experts. I also explained that Linux scales well in a cluster (horizontal scaling) but badly vertically (single big servers). I wrote that. I don't mix anything up, but Linux fans do.
They say that Linux scales to 10,000 cores (for instance at Google) and conclude that Linux scales excellently. That is the wrong conclusion, I say, because Linux scales well on a cluster. But there is no single server with 10,000 cores. Again, Linux scales well on a cluster. But on a single server, Linux scales well up to 16-32 cores. Today. As Ts'o confirms below.
How is my "two years" number meaningless? 2.6 development started in early 2004 according to a graphical timeline I saw. Two years later the Altix servers were sold. It is impossible to scale well from 16 cores up to 4096 cores in two years.
Sure. And when did they remove the BKL? Wasn't it two months ago, in 2010? So maybe Linux scales better today. How can the Linux experts claim Linux scaled excellently back in 2003, when the interview was made?
"Yes, Linux scales excellent! It scales to 10,000 cores at Google. It also scales to 2-4 cores in v2.4 and to 16 cores in v2.6. Yes, Linux scales perfect! 16 cores is remarkably good! Linux scaling FTW!!"
Of course, if the Linux experts think that 16 cores is excellent scaling, then I understand they think that 32 cores today is more than excellent.
Linux fans think "Linux has never crashed on my desktop – it must be perfectly stable. It also scales well on my dual-cpu PC – it scales perfectly" and then go out and rant about how well it scales.
So how can Linux scale well if they don't have access to big servers?
I don't see how you make the jump:
“There are paid developers => they have access to such hardware”
At my company, not many have access to big servers, even though we have such servers.
Many Linux devs are concerned with user stuff, lag, etc.
I'm sure the above posters are right about a lot of it being a continuation of Sun's own developments prior to being acquired, but who's to say it's only been a year on Oracle's part? Lots of the relevant code has been available to Oracle to play with since long before they absorbed Sun. It's entirely possible they were working on tweaks to the platform for their own ends, much like they've tailored some of RHEL's sources as a platform for their products.
When Oracle releases the source code, the Solaris distros from the community can catch up. As can FreeBSD (with ZFS and DTrace), Apple (DTrace), etc.
Me too
I wonder how many of these features were planned but put on the back burner due to a lack of resources; now that Oracle has the resources to put into many of these ideas, we're seeing such projects come back to the front burner. From what I have heard, they're hiring more programmers and engineers, which hopefully will lead to many of the ideas that were put in the background now coming forward.
I am disappointed that two projects, the tickless kernel and the migration away from HAL, haven't received the attention they need. There was much talk about moving to upower/udisks around a year ago, but nothing has actually evolved beyond talk on mailing lists and blogs. Yes, I understand that Solaris is primarily focused on big servers, but developers also use Solaris and want not only a robust desktop environment but also something that is pleasant to use, with the niceties that Mac OS X and Windows have, such as good power management for laptops.