Linked by Thom Holwerda on Tue 17th Sep 2013 22:04 UTC, submitted by garyd

ZFS is the world's most advanced filesystem, in active development for over a decade. Recent development has continued in the open, and OpenZFS is the new formal name for this open community of developers, users, and companies improving, using, and building on ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos communities, including Matt Ahrens, one of the two original authors of ZFS, the OpenZFS community brings together over a hundred software developers from these platforms.

ZFS plays a major role in Solaris, of course, but beyond that, has it found other major homes? And while we're at it, how is Solaris doing anyway?

Permalink for comment 572668
RE[4]: Solaris is doing well
by Kebabbert on Sat 21st Sep 2013 11:05 UTC in reply to "RE[3]: Solaris is doing well"
Kebabbert
Member since:
2007-07-27

"Do you know if it's *really* a linux problem instead of an x86 SMP scalability problem? I honestly don't think x86 can scale efficiently beyond 8 cores under any OS."

You mean to say that x86 does not scale beyond 8 sockets, not 8 cores. True, there are no x86 SMP servers larger than 8 sockets today, and there never have been. The SGI UV1000 is actually a NUMA cluster with hundreds of x86 sockets, but it is an HPC cluster, so it is ruled out of this discussion: we are discussing SMP servers, not HPC clusters.
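
For what it is worth, here is a small sketch (my own illustration, assuming a Linux box that exposes sysfs - not something from the benchmarks below) that lists the NUMA nodes and the CPUs attached to each. On a machine like the UV1000 you would see many nodes, each with its own local memory, rather than one flat memory domain:

#!/usr/bin/env python3
# Illustrative sketch (my addition): list NUMA nodes and the CPUs attached
# to each one by reading Linux sysfs. One node holding all CPUs looks like a
# flat SMP box; many nodes means memory is split up across the machine.
import glob
import os

def numa_topology():
    """Return {node_name: cpulist_string} from /sys, if exposed."""
    topo = {}
    for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        try:
            with open(os.path.join(node, "cpulist")) as f:
                topo[os.path.basename(node)] = f.read().strip()
        except OSError:
            pass
    return topo

if __name__ == "__main__":
    topo = numa_topology()
    if not topo:
        print("No NUMA topology exposed in sysfs on this machine.")
    for node, cpus in sorted(topo.items()):
        print(f"{node}: CPUs {cpus}")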

Regarding Linux scalability: look at the SAP benchmarks. On an 8-socket server with 2.8 GHz Opterons and fast RAM, Linux reaches 87% CPU utilization, which is quite poor. Solaris, on the same Opteron family but clocked lower at 2.6 GHz and with slower RAM, reaches 99% CPU utilization and beats Linux on the SAP benchmark - slower hardware, and it still wins. Eight sockets is also the largest configuration Linux has been tested on, and Linux does not handle 8 sockets well. On HP's 64-socket "Big Tux", experimental benchmarks from HP showed around 40% CPU utilization, so HP could not sell it. So Linux sits at 87% utilization at 8 sockets and roughly 40% at 64 sockets; my guess is around 70% at 16 sockets, falling off rapidly from there. Linux has not been tested or optimized for 16 sockets - how could it scale to 16 sockets well when the hardware does not exist?
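
To make that extrapolation explicit, here is a crude back-of-the-envelope sketch (my own, assuming utilization falls roughly linearly per doubling of the socket count, fitted only to the two data points above - 87% at 8 sockets and ~40% at 64):

# Crude sketch (my assumption: CPU utilization drops roughly linearly per
# doubling of socket count, fitted to the two data points quoted above:
# 87% at 8 sockets and ~40% at 64 sockets on Big Tux).
import math

def estimated_utilization(sockets, points=((8, 87.0), (64, 40.0))):
    """Linear interpolation/extrapolation in log2(socket count)."""
    (s1, u1), (s2, u2) = points
    slope = (u2 - u1) / (math.log2(s2) - math.log2(s1))  # % lost per doubling
    return u1 + slope * (math.log2(sockets) - math.log2(s1))

for n in (8, 16, 32, 64):
    print(f"{n:2d} sockets: ~{estimated_utilization(n):.0f}% CPU utilization")
# Prints roughly 87%, 71%, 56% and 40% -- the ~70% guess for 16 sockets
# above falls out of this simple model.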


"My understanding is that linux will run on the same Sparc architectures that Solaris does:"

Yes, it will. But how well? HP tried Linux on their 64-socket server, with awful results. I believe Linux would stutter and perform very badly on 96-socket SPARC servers.


"Do you have a benchmark of an apples to apples comparison between solaris and linux on the same processors (ignoring that such processors are not being sold with linux)?"

There are benchmarks of Linux and Solaris on the same or similar x86 hardware. With a few CPUs, Linux tends to win; on larger configurations, Solaris wins. That is expected, because Linux kernel developers sit at home with 1-2 socket PCs, and not many of them have access to 8-socket servers. Linux vs. Solaris on the same hardware:
https://blogs.oracle.com/jimlaurent/entry/solaris_11_outperforms_rhe...

https://blogs.oracle.com/jimlaurent/entry/solaris_11_provides_smooth...



"Mind you solaris *could* be better than linux for high end deployments."

High-end Linux deployments do not exist and never have. So for high-end deployments you have no choice but to go to Unix servers with 32 sockets or more, from IBM, Oracle or HP. I would be very, very surprised if Solaris were not better than Linux there. Unix kernel developers have spent decades testing and tailoring their kernels for 32 sockets and above - of course Unix must handle large servers better?


"I'm genuinely curious about it, and if you have any evidence (benchmarks & case studies) that would be very informative to me."

People have routinely run Unix on 32-socket and larger servers for decades, so I suspect Unix runs large servers comfortably, without much effort.


"For that matter, I'm very curious about the scalability of 64 core shared memory systems in general regardless of OS. Correct me if I'm wrong, but it seems to me that it would scale badly unless it were NUMA (or it had so much cache that it could effectively be used as NUMA)."

The canonical example of a large SMP workload is running databases in large configurations. A kernel developer explains this when discussing NUMA, SMP, HPC and so on:
http://gdamore.blogspot.se/2010/02/scalability-fud.html
"....First, one must consider the typical environment and problems that are dealt with in the HPC arena. In HPC (High Performance Computing), scientific problems are considered that are usually fully compute bound. That is to say, they spend a huge majority of their time in "user" and only a minuscule tiny amount of time in "sys". I'd expect to find very, very few calls to inter-thread synchronization (like mutex locking) in such applications...

...Consider a massive non-clustered database. (Note that these days many databases are designed for clustered operation.) In this situation, there will be some kind of central coordinator for locking and table access, and such, plus a vast number of I/O operations to storage, and a vast number of hits against common memory. These kinds of systems spend a lot more time doing work in the operating system kernel. This situation is going to exercise the kernel a lot more fully, and give a much truer picture of "kernel scalability"..."
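
As a rough illustration of the user/sys split the quote is talking about, here is a small sketch of my own (not from the linked post) that times a compute-bound loop against a syscall-heavy loop and reports where the time went:

# Illustrative sketch (my addition, not from the linked blog post): compare
# where time goes for a compute-bound loop (almost all "user") versus a
# syscall-heavy loop (much more "sys"), the distinction drawn above between
# HPC-style work and kernel-bound database work. Unix-only (uses getrusage).
import os
import resource

def usage_delta(fn):
    """Run fn() and return (user_seconds, system_seconds) it consumed."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    fn()
    after = resource.getrusage(resource.RUSAGE_SELF)
    return (after.ru_utime - before.ru_utime,
            after.ru_stime - before.ru_stime)

def compute_bound():
    total = 0
    for i in range(5_000_000):   # pure arithmetic, no kernel involvement
        total += i * i
    return total

def syscall_heavy():
    for _ in range(200_000):     # each stat() is a trip into the kernel
        os.stat("/")

for name, fn in (("compute-bound", compute_bound),
                 ("syscall-heavy", syscall_heavy)):
    user, sys_ = usage_delta(fn)
    print(f"{name:14s} user={user:.3f}s  sys={sys_:.3f}s")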

Reply Parent Score: 2