Linked by Thom Holwerda on Tue 17th Sep 2013 22:04 UTC, submitted by garyd
General Development

ZFS is the world's most advanced filesystem, in active development for over a decade. Recent development has continued in the open, and OpenZFS is the new formal name for this open community of developers, users, and companies improving, using, and building on ZFS. Founded by members of the Linux, FreeBSD, Mac OS X, and illumos communities, including Matt Ahrens, one of the two original authors of ZFS, the OpenZFS community brings together over a hundred software developers from these platforms.

ZFS plays a major role in Solaris, of course, but beyond that, has it found other major homes? In fact, while we're at it, how is Solaris doing anyway?

Permalink for comment 572698
RE[5]: Solaris is doing well
by Alfman on Sat 21st Sep 2013 16:00 UTC in reply to "RE[4]: Solaris is doing well"
Alfman
Member since:
2011-01-28

Kebabbert,

"You mean to say x86 does not scale beyond 8 sockets, not 8 cores. Sure, there are no larger SMP servers than 8 sockets x86 today, and has never been."

Actually I meant this in the context of SMP versus NUMA. You said "All 32 socket Unix servers share some NUMA features, but they have very good RAM latency, so you treat them all as a true SMP server". I'd really like to know the difference between x86 NUMA and "Unix server true SMP", since, as far as I know, SMP requires NUMA in order to scale efficiently above 4-8 cores without very high memory contention. Saying that Solaris servers are different sounds an awful lot like marketing speak, but maybe I'm wrong. Can you point out a tangible technical difference?
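To make concrete what I mean, here's a toy model (my own illustrative numbers, not measurements of any real server): whether a NUMA box "feels like" flat SMP comes down to the remote/local latency ratio, so a good interconnect blurs the distinction you're drawing.

```python
# Toy model of average memory latency on a NUMA machine, assuming
# accesses are spread uniformly across nodes. All numbers are
# illustrative assumptions, not benchmark data from any real server.

def avg_latency_ns(local_ns, remote_ns, nodes):
    """Expected latency when 1/nodes of accesses hit local memory
    and the rest go over the interconnect to a remote node."""
    local_fraction = 1.0 / nodes
    return local_fraction * local_ns + (1 - local_fraction) * remote_ns

# A 32-node box whose interconnect keeps remote latency close to local
# behaves almost like a flat SMP machine...
print(avg_latency_ns(80, 100, nodes=32))   # -> 99.375

# ...while a high remote/local ratio makes memory placement dominate.
print(avg_latency_ns(80, 400, nodes=32))   # -> 390.0
```

Under this model, a "true SMP" 32-socket server and a well-interconnected x86 NUMA server differ only in degree, which is why I'd like to see the actual latency figures rather than the marketing label.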

"There are benchmarks with Linux and Solaris on same or similar x86 hardware. On few cpus, Linux tends to win. On larger configurations, Solaris wins."

"Linux vs Solaris on same hardware:"


I thank you for looking these up. I really wish they were using *identical* hardware and only switching a single variable between tests (instead of switching the OS AND the hardware vendor).

Still, the benchmarks are interesting.

This chart shows what appears to be a glaring scalability problem: compared to the Solaris chart on the same page, RHL falls off as cores are added.
http://blogs.oracle.com/jimlaurent/resource/HPDL980Chart.jpg

However another chart on a different blog post (on different hardware) doesn't show the scalability problem under RHL.
http://blogs.oracle.com/jimlaurent/resource/HPML350Chart.jpg

So was the problem Red Hat Linux itself, the hardware, the rest of the software stack, or the number of cores? We really don't know. Surely any engineer worth his salt would have run the benchmarks in an apples-to-apples hardware/software configuration, so why weren't those results posted?

As before, I'm not asserting that Solaris isn't better (it very well may be), but it would be naive to take Oracle sources at face value.


"Consider a massive non-clustered database. In this situation, there will be some kind of central coordinator for locking and table access, and such, plus a vast number of I/O operations to storage, and a vast number of hits against common memory."

I'd think this design is suboptimal for scalability. A scalable design would NOT have a single central coordinator; there should be many (i.e. one per table or shard) running in parallel even though the system isn't clustered. To be optimal on NUMA, software should be specifically coded for it. However, you are probably right that vendors treat NUMA machines as clustered nodes instead; they haven't gotten around to rewriting their database engines to take advantage of NUMA specifically.
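A minimal sketch of what I mean by "many coordinators": give each table/shard its own lock, so writers to different shards never contend on one central coordinator. The names here (ShardedStore and friends) are made up for illustration; no real database engine works exactly like this.

```python
import threading

class ShardedStore:
    """Toy key-value store with one lock per shard instead of one
    global lock. Writers to different shards proceed in parallel."""

    def __init__(self, num_shards=8):
        self.shards = [dict() for _ in range(num_shards)]
        self.locks = [threading.Lock() for _ in range(num_shards)]

    def _shard(self, key):
        # Deterministic key-to-shard mapping.
        return hash(key) % len(self.shards)

    def put(self, key, value):
        i = self._shard(key)
        with self.locks[i]:          # serializes only this shard
            self.shards[i][key] = value

    def get(self, key):
        i = self._shard(key)
        with self.locks[i]:
            return self.shards[i].get(key)

store = ShardedStore()
threads = [threading.Thread(target=store.put, args=("row%d" % n, n))
           for n in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.get("row42"))  # -> 42
```

On a NUMA box you'd additionally want each shard's memory allocated on the node whose threads access it, but even this naive version removes the single point of contention the quoted design describes.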



Can you disclose whether you are connected to Oracle?

Edited 2013-09-21 16:06 UTC

Reply Parent Score: 2