Bart Smaalders of Sun Microsystems has open sourced libmicro, a portable set of microbenchmarks designed to measure the performance of basic system calls and library functions. This framework proved invaluable during Solaris 10 development as a way of identifying areas where performance was lagging behind other operating systems, as well as regressions between Solaris builds. You can join the libmicro discussion at the OpenSolaris forums.
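For those who want to try it, a typical libmicro session looks roughly like this (a sketch only; script names and exact invocations may differ between releases):

```shell
# Build the suite and run it on each machine, saving one result file per box
tar xzf libMicro.tar.gz && cd libMicro
make
./bench > solaris10.out        # run on the Solaris 10 box
./bench > suse9.out            # run on the Linux box

# Compare the two runs side by side; multiview ships with libmicro
# and emits an HTML table highlighting the relative differences
./multiview solaris10.out suse9.out > compare.html
```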
good to see the solaris people again taking a leadership stance.
Huh? How is this taking a leadership stance? Rudimentary low-level OS performance analysis tools like this have been available for nearly 10 years, since the release of lmbench (and there were no doubt many before then too).
Aside, I’d love to see a comparison of Solaris 10 against Linux.
Here is a comparison using MySQL to benchmark both (and other) OSes.
Solaris is fast, almost as fast as Linux.
http://software.newsforge.com/article.pl?sid=04/12/27/1243207&tid=7…
Firstly: that article has several flaws pointed out in the comments. Secondly: Super Smack results are not present for Solaris due to porting issues, so with regard to Solaris it does not carry much weight.
> Aside, I’d love to see a comparison of Solaris 10 against Linux.
What’s there to compare? Solaris 10 kicks Linux (RedHat/SuSE) on pretty much every count: performance-wise, technology/feature-wise, and even price-wise (Solaris 10 is almost two times cheaper to license and support than RedHat or SuSE on the same hardware). Solaris 10 is the undisputed leader in the OS space right now.
> What’s there to compare?
Err.. is this a trick question? … performance? Or did you think this libMicro compares license costs?
> Solaris 10 kicks Linux (RedHat/SuSE) on pretty much
> every count. Performance wise
So “they” keep telling me. But apart from a few rigged benchmarks from Sun, I haven’t seen the pudding. That’s why I’m interested (not that these microbenchmarks would do much to prove overall performance either way, especially not things like network stack performance or SMP scalability).
Feel free to do the benchmarks yourself. You can download Solaris 10 for free.
I would but it doesn’t work on my HW (dual Nocona Xeon or dual PPC64 G5).
Oh, and the plethora of Linux benchmarks aren’t rigged? I read a document recently that Novell wrote about the “advantages” of SuSE Linux over Solaris 10, and what I found interesting is that every time Solaris had a clear advantage, the paragraphs were real short. In other words, SuSE Linux cannot compete with Solaris in areas such as virtualization (for example). Novell then conflates Solaris 10 GA and OpenSolaris in the vain hope of confusing the less informed reader into believing that SuSE Linux is somehow better than Solaris.
Quite frankly I don’t care about benchmarks because the vast majority of them do not disclose the kind of performance data that I would want to see to determine the merit of the test. If a particular benchmark test consumes 90% of the CPU on one OS and 8% on another OS, then you can weigh the benchmark and the performance results together and come to a much better conclusion.
Where I work we get serious visibility from RedHat, and I think they are clearly aware that we are not going to be a pushover to sell to. They are aware that we are a Sun shop (Solaris/Trusted Solaris), and in the trusted OS space no Linux distro can touch Trusted Solaris. I have personally been in three seminars with RedHat employees, and their “anti-Sun” tone has changed considerably. If you want to sell Linux to a Solaris administrator, you had better be prepared to answer some deep questions about how a Linux distro is going to do something equal to or better than Solaris. We are not interested in the “Linux or nothing” crap; either show us distinct advantages over Solaris or go away.
Libmicro has definite value as a tool to measure performance of code written for Solaris. And which performance tools are you referring to that have been around for “nearly 10 years”?
outperformed on every count by Suse 9 in our own tests, with some common benchmarks and some of our own code. this is trying both gnu and sunone studio compilers on solaris, and icc/pgicc/gcc/pathcc on linux.
solaris is now, and always has been, utter junk. a nightmare to manage and maintain, with none of the performance desired here in the real world. only sun’s bullshit marketing keeps it afloat. if we had trusted our sun reps and bought into their equipment recently (with solaris on it), i’d probably be out of a job.
What’s there to compare? Solaris 10 kicks Linux (RedHat/SuSE) on pretty much every count.
I agree. But there have always been better alternatives than Linux (like the BSDs), if one considers design / engineering / technological merit.
It just doesn’t make sense why big companies go Linux. I’ve sincerely tried, read, and put in a lot of effort to make it work at some point. It’s simply impossible to have a stable and sensible Linux system that just runs. Try to upgrade any of the kernel/glibc/gcc, or install an RPM package that is not for your distro, and see. After a while you’re just content that it is actually running and never bother with tuning.
In the past I worked on Solaris and Digital UNIX machines, and they ran flawlessly. The documentation was perfect. I used them with admiration and awe as a computer science student.
I’m hoping Solaris would bring back what it means to have a real UNIX machine to computing enthusiasts, and at a great cost (like the new Sun workstation series).
…install a rpm package that is not for your distro and see.
And how does Solaris deal with random Linux RPMs? It doesn’t. The choices for software are: pkg-get with its limited library, sfw with an even more limited library, NetBSD’s pkgsrc which works better on the BSDs or Linux, or installing from source. When I had trouble compiling the quik bootloader for my oldworld Mac, I used rpm2targz to harvest a binary from one of those random RPM packages, and it works great. What would I have done on Solaris? Oh yeah, recent versions don’t support PPC. I should buy one of those “great cost” Sun workstations and have myself “a real UNIX machine”. And yes, home usage does matter. Why learn yet another UNIX OS? New admins will use what they know. How do you think Windows got so popular?
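For the record, the rpm2targz trick mentioned above is about as simple as it gets (the package filename here is made up for illustration):

```shell
# Convert a foreign RPM into a plain tarball, no RPM database required
rpm2targz quik-2.1-1.ppc.rpm        # produces quik-2.1-1.ppc.tar.gz

# Extract it and harvest just the binary you need
tar xzf quik-2.1-1.ppc.tar.gz
ls usr/sbin/
```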
I’m not trying to troll. All systems have strengths and weaknesses. Solaris is great for high availability, backwards compatibility, performance tuning (DTrace), and same-OS virtualization (Zones). Right now Linux has more software available and from what I’ve seen is more popular for embedded systems and inexpensive HPC. The BSDs have security (Open), portability (Net), and performance without sacrificing stability (Free, DragonFly) covered. I tend to ignore the benchmarks and FUD and use the best tool for the job, even if that means running Windows sometimes.
Thought the article was going to be about NewOS.
“I have personally been in three seminars with RedHat employees and their “anti-Sun” tone has changed considerably.”
Probably because OpenSolaris has muted their main “we’re the OSS good guys, big bad closed-source Sun is evil” talking point.
Solaris is the only modern system from the original UNIX family tree to be open source, which really levels the playing field with Linux, and, coincidentally, Linux and Solaris together have buried Microsoft’s relevance going forward. These are interesting times.
“Solaris is the only modern system from the original UNIX family tree to be open source…”
Replying to myself to thwart the flames: yes, I do know about Free/Open/NetBSD, but we’re talking about systems with vast vast install bases and enormous mindshare among businesses.
And FreeBSD is what, then? Last I read, it’s the most popular operating system for web servers, more so than even Linux.
Time to make a call to “NewOS” develo… I mean, Haiku developers… ;]
Best bit of the blog was about the jitter bug.
not that I understood most of it, but the approach used looked rather cool.
children children… stop fighting
we can live in a world with linux and solaris
honestly, I have never seen anything definitive that gave linux or solaris 10 an advantage over the other… they have their strengths and weaknesses, but they mostly even out for the vast majority of people.
now… on to those benchmark results. one major beef about those results: those were Gentoo benchmarks. nobody uses Gentoo in a production environment (well, maybe some), and Gentoo runs a hell of a lot faster than redhat/novell if the person installing it knows what they are doing
point is.. these should have been redhat and or novell used for benching
My point exactly: who is going to use Gentoo in a production environment? My other beef with this set of benchmarks is the lack of information about how the operating systems were installed. It is difficult at best to repeat or independently verify a test with no information on how the various operating systems were installed. In the case of Solaris 10, which install cluster was used? Was X running on any of the OSes during the testing? Too many questions and not enough answers.
I have never seen anything definitive that gave linux or solaris 10 an advantage over the other… they have their strengths and weaknesses, but they mostly even out for the vast majority of people.
You probably think DOS 6.22 evens out too.
Wake up people! Solaris is so far ahead of everybody else in the OS race that you either show some hard proof that $FAVORITE_OS is close to Solaris or STFU.
>now… on to those benchmark results, one major beef
>about those results.. those were Gentoo Benchmarks,
>nobody uses Gentoo in a production environment (well
>maybe some) and Gentoo runs a hell of a lot faster than
>redhat/novell if the person installing it knows what
>they are doing
Err, no.
RedHat and Novell both employ scores of kernel and other core (eg. glibc, gcc) developers, and performance is one of their top priorities.
It is very unlikely that Gentoo will run measurably faster, if at all. Do you think that if compiling things with a set of flags would make it “a hell of a lot faster”, that they would not do that?
So, my major beef about those results is that they cause Solaris zealots to make up all sorts of silly excuses about why they got beat
Is what I said an excuse? Well, I don’t think so. If using an unsound test methodology (comparing apples and oranges) is what it takes to get Linux to “win” a benchmark test, then as a Linux supporter I would be concerned, very concerned. I could very easily see (and have seen) Linux zealots cry foul when their OS lost. And as a Solaris “zealot” I am not pointing out just that Solaris lost, but that the test was carried out incorrectly. Just check out the commentary on Slashdot by the BSD users on how the performance of NetBSD could have been improved.
If you are going to test an operating system, then test it correctly using sound scientific methodology. That means the conditions of the test are exactly the same for all players. Which means no compiling of kernels if you cannot do it for all of them, because that could potentially give the operating systems you compiled an unfair advantage (whether you planned it that way or not) and change the conditions of the test. Also, selecting an operating system in beta (Solaris) and one marked unstable (BSD) doesn’t fit any testing methodology I would use (and I have 15 years of testing experience behind me). So the test (and I have said it before) is invalid on its face because you are not comparing the same thing.
If Linux users want people (like me) to look at Linux more seriously, then they should also be pointing out the mistakes in this testing and the questionable conclusions drawn by it. Otherwise people like me are just going to raise the “Bullshit flag”, and continue to do so until tests are done that are based on seeing how the OS responds based on a set of criteria which can be independently verified and repeated.
>Is what I said an excuse
Yes.
>If using an unsound test methodology (by comparing apples
>and oranges) is what it takes to get Linux to “win” a
>benchmark test, then as a Linux supporter I would be
>concerned, very concerned. Because I could very easily
>(and have seen it) where the Linux zealots cried foul
>when their OS lost. And as a Solaris “zealot” I am not
>pointing out just that Solaris lost, but that the test
>was carried out incorrectly.
No it was not. Actually, the guy went to the *most* trouble tuning the Solaris install – even contacting the performance guys at Sun.
The fact that he missed a NetBSD option does not mean the test was incorrect. He described all the tuning settings made, and for those settings, the results are valid. Considering the Solaris install was heavily tuned, and Linux was using defaults, I conclude that Linux is faster on that test. You would have to be unusually dull to contend that.
Aside: he redid the NetBSD tests with tuning directions from NetBSD developers, and things improved slightly, but not much.
Lastly, you would really have to be on death’s door before you went to Slashdot readers for advice about anything.
>If you are going to test an operating system, then
>test it correctly using sound scientific methodology.
>That means the conditions of the test are exactly the
>same for all players. Which means no compilation of
>kernels if you cannot do it to all of them, because
>that could potentially give the operating systems you
>compiled an unfair advantage (whether you planned it
>that way or not)
Bzzt. Wrong. You can install whatever you want and do whatever you like, so long as you describe the test setup in a way that the results are reproducible by a reader with access to the same equipment. This is the scientific method.
What’s wrong with compiling a kernel anyway? Just because you can’t compile S10’s kernel doesn’t mean that other systems have to stick to that.
>Also seelcting an operating system in Beta (Solaris)
>and unstable (BSD) doesn’t fit any testing methodolgy
>I would use (and I have 15 years of testing experience
>behind me). So the test (and I have said it before) is
>invalid on its face because you are not comparing the
>same thing.
Bzzt, wrong again. The test is valid for what was tested. And why you would want to compare the same thing seems absurd to me – I can already tell you the results will be the same.
>If Linux users want people (like me) to look at Linux
>more seriously,
No actually after that last comment, I don’t want you to look at Linux more seriously. You seem like a raving lunatic with a sub-chimpanzee IQ. So what I want you to do is stay the hell away from it please.
I don’t know where you got your education, but before you start saying that someone has a “sub-chimpanzee IQ” I would be very careful. Every scientific test I have either seen or read about prepares the test subjects in the same manner, so that no particular sample gets an unfair advantage in the test.
The MySQL benchmark test, regardless of what you say, is invalid on its face. As soon as the tester picked Gentoo Linux as the distribution to test and started compiling, the test was invalid, since he could not do the same to Solaris 10. That would be like me preparing a film test (my background is in photographic sciences) and processing one sample at 85 degrees and the other at 104 degrees and trying to say they are both the same; they are not, and the test is invalid. The test is also not repeatable because the specifics of how the individual operating systems were installed are not available.
As far as the “raving lunatic” comment goes, well, if that were the case I wouldn’t have lots of happy people at http://www.sysadmintalk.com whom I took time out of my day to help. Are you just another Linux zealot who can’t stand it when somebody is right, and instead of coming up with an argument on technical merit you resort to insults?
Anytime you want to come out from behind your anonymizer and talk scientific testing (and resort to something more intelligent than your lame attempts at insults) with me, I am ready.
>The MySQL benchmark test regardless of what you say is
>invalid on its face. As soon as the tester picked Gentoo
>Linux as the distribution to test and started compiling
>the test was invalid, since he could not do the same to
>Solaris 10.
Sorry, that’s utter nonsense.
That’s like saying the test is invalid because Linux is using the Linux kernel, which is faster than the Solaris one (for example).
If you don’t like some condition of the test that doesn’t make it invalid as much as you would like to think so. The author stated the exact test setups involved, and presented results according to the measured performance of those setups. This is the scientific method. Anybody in the world can take that document and reproduce the results.
>The test is also not repeatable because specifics of
>how the individual operating systems were installed is
>not available.
I believe the author stated that defaults were used unless otherwise specified, and that he was willing to clarify any point that was unclear. You can’t get much better than that.
>Anytime you want to come out from behind your
>anonomizer and talk scientific testing (and resort to
>something more intelligent than your lame attempt at
>insults) with me, I am ready.
No thanks. I’m sure your slashdot buddies can cater to your “science” discussions.
Which install cluster was used to install Solaris 10 build 69? You only have five choices. And what is a “default” install? You see, lots of questions and not a whole lot of answers. If you are so confident that the results can be reproduced, reproduce them for us. I would be very interested in seeing the results.
You must subscribe to the school of “let’s rig the test so that the sample we want to win wins”. If I remember correctly, the NetBSD option was not so trivial (either SMP or thread related). That is what I call junk science. Another poster pointed out that compiling Gentoo with various options can make it significantly faster than other Linux distributions.
Some pretty intelligent people post on Slashdot; too bad you don’t think so. But considering what you call proper science, you would be “bitchslapped” into oblivion if you posted there. It must be nice to sit in your “ivory tower” and insult the world with your half-baked bullshit. Maybe that is why you don’t want to come out from behind your anonymizer.
>Which install cluster was used to install Solaris 10
>Build 69? You only have five choices. And what is a
>default install?
Dude, you’re the only one seeing conspiracy theories here. I’m sure the guy would be very happy to provide the answer to any of your questions if he has omitted them.
>You see lots of questions and not a
>whole lot of answers. If you are so confident that
>the results can be reproduced, reproduce them for us.
>I would be very interested in seeing the results.
I told you, S10 neither runs on my Nocona nor my dual G5.
But you are the one contending the results, ergo you provide the counterexample.
>You must subscribe to the school of “lets rig the test
>so that the sample we want to win wins”.
No I don’t think so.
>If I remember
>correctly the NetBSD option was not so trivial (either
>SMP or thread related). That is what I call junk
>science.
Huh? The NetBSD option he “failed” to use was an undocumented and off-by-default value that supposedly made threaded apps use more than one CPU.
The fact that he documented the process well enough for the netbsd guys to catch this and suggest a retest was testament to his good process (not that the rerun helped them much).
>Another poster pointed out that compiling Gentoo with
>various options can make it significantly faster than
>other Linux distributions.
Well if you read the benchmark, gentoo was not compiled with “various options” of the type that might trade performance for stability.
So what does that have to do with what we’re talking about? You’re creating all these straw men in your desperate attempt to “win” this little argument.
Trying to “win”, I don’t think so. But here are the posts in question from Slashdot:
Useless Benchmarks (Score:4, Informative)
by Anonymous Coward on Friday February 11, @02:13AM (#11639061)
From the article: I used the GENERIC configurations unmodified, except for above-mentioned changes and adding SMP support.
FreeBSD’s GENERIC kernel config is for i486. If he’d commented out two lines, he could’ve tested for i686, which is what a P3 is. As it is, these benchmarks aren’t helpful at all, because the optimizations assume a machine inferior to what’s actually being used. He failed to eliminate enough variables for these to be meaningful.
Re:Useless Benchmarks (Score:4, Insightful)
by setagllib (753300) Alter Relationship on Friday February 11, @03:00AM (#11639293)
Entirely right, and some user-space optimization could have gotten the final few percent in too. He installed stock BSDs and recompiled their kernels straight, didn’t tweak any options that weren’t necessary to run the suite, and compared to a Linux optimized from the ground up (Gentoo + his knowledge of Linux itself). Real clever benchmark.
That NetBSD performed worse than FreeBSD for disk IO is really strange. I have never seen this happen in any of the machines I’ve tried both on (hint: a lot), so either he has a very exotic disk controller which isn’t supported properly (weird) or there’s a disturbance in the force. Members of the mailing lists are talking it over with him now, and a follow-up should arrive eventually.
I would have liked to see results of FreeBSD 5-STABLE too, because he compared a refined Linux and a solid NetBSD to a FreeBSD release that was deemed not-ready-for-benching-let-alone-production on day 0, which gave it little chance. It’s interesting to see if the claims 5.4 will be much better hold water.
Procedural problem with NetBSD multiprocessor (Score:5, Informative)
by Bushcat (615449) Alter Relationship on Friday February 11, @03:08AM (#11639321)
It seems like the performance of NetBSD will be re-evaluated, so expect the results to be recast in the next few days.
See the message thread titled “NetBSD performance” at http://software.newsforge.com/article.pl?sid=04/12/27/1243207 [newsforge.com]: an anonymous reader asks “Did you enable PTHREAD_CONCURRENCY? You have to set that variable to the number of CPUs in your system, else you won’t be able to run more than one thread at a time, even if you have more than one…”. He replies “Sunofa. The $PTHREAD_CONCURRENCY environment variable wasn’t set, as I had no idea it was an option. … It could very well be the issue. In the next few days I’ll re-run the NetBSD tests with that set.”
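The fix described in that thread amounts to a one-line environment setting before the server starts (a sketch; the mysqld path and CPU count here are illustrative):

```shell
# NetBSD 2.0: allow a threaded process to actually use more than one CPU.
# The value should match the number of CPUs in the machine.
PTHREAD_CONCURRENCY=2
export PTHREAD_CONCURRENCY

# Start the server under that environment
/usr/local/libexec/mysqld --user=mysql &
```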
So I’m not the only one questioning the results. My guess would be that the tester did not spend a great deal of time researching the various options under the BSDs. Or are you going to dismiss these posts as “irrelevant” because they came from Slashdot? Bottom line: the test at best is questionable.
Conspiracy theories? If the test was “fully” documented as you say it is, I would not have to ask the questions I am asking, because those questions would already have been answered.
Oh yeah by the way, nice cop out “I told you, S10 neither runs on my Nocona nor my dual G5.”
>So I’m not the only one questioning the results. My
>guess would be the tester did not spend a great deal
>of time researching various options under the BSDs.
>Or are you going to dismiss these posts as “irrelevant”
>because they came from Slashdot? Bottom line is, the
>test at best is questionable.
There is nothing wrong with the results because they are the result of the tests applied to the systems described.
Also, the author made attempts to get in contact with the various BSD (and Solaris) teams in order to tune performance, as well as rerunning the test with suggestions from the NetBSD guys.
Secondly, I can show you FreeBSD and NetBSD mailing list posts where kernel developers from both camps concede that Linux runs MySQL faster than their operating systems.
Lastly, the main point in contention is that Linux is faster than Solaris here. Why do you keep bringing up BSD results?
I might agree that one is not able to make a statement such as “Linux is faster than FreeBSD” with these results, precisely because of the tuning reasons (also, Linux wasn’t tuned at all). Not that this makes the results invalid. They are valid for what they test.
But, what I can say is that Linux is faster than the Solaris 10 build tested, because the guy went to Sun and worked through performance tuning.
Let’s examine the install instructions for Solaris 10 in regards to this test:
Solaris Express (build 69)
For Solaris 10, the same kernel was used for both tests. The file system used was UFS with logging enabled (ZFS will not be available for a few more months).
The GCC 3.3.2 package I pulled off of the Sun Freeware site had been built under an older beta release of Solaris Express. It had several header issues and wouldn’t compile anything, and Solaris does not ship with a compiler. I found and used a fix from the Sun forums (it’s also now posted on the Freeware site), and after that, MySQL compiled cleanly.
Getting Super Smack compiled, however, was not a productive endeavour. Super Smack uses flock(), which isn’t very well supported in Solaris (and only then through BSD compatibility). Attempts to use libucb failed along with everything else I tried, so I currently do not have any Super Smack results for Solaris 10.
To compile software on Solaris you have to install the SUNWprog (Developers Support) install cluster to get make and the required libraries. If I am not mistaken, GCC shipped with build 69, along with a number of other GNU binaries, and to install it you would have to install Solaris using the Full Distribution or SUNWCall install cluster. Where in that article does he specify which install cluster was used? So explain to me how easy it is going to be to reproduce the test.
He didn’t contact Sun until he witnessed a performance issue:
The Solaris issue
I ran into a strange issue with Solaris 10 for the 10M row SysBench tests. While Solaris had done very well for the 1M tests, it did extremely poorly in the 10M tests, getting the lowest score by far — roughly 3.6 transactions per second, which is lower than even NetBSD 2.0’s results. This was roughly one-seventh the Linux scores, and didn’t seem to make any sense. I checked with Peter Zaitsev, and he put me into contact with Jenny Chen of Sun, and we proceeded to try to figure out what the cause of the bad performance was.
Chen recommended mounting the file system without logging, and to set the sticky bit (chmod +t) to the InnoDB data files and logs. None of those steps seemed to help, so I explored some more.
The final answer I found in the legendary book Sun Performance Tuning: Java and the Internet by Adrian Cockcroft and Richard Pettit. The solution was mounting the data partition with the forcedirectio option. This prevents the file system from being cached at all by Solaris. When I ran the test, the swap drive was completely idle, and the results were dramatic: 21 transactions per second versus 3.6 transactions per second. Oddly enough, setting the sticky bit actually hurts performance by about 1 transaction per second. I get about 20 with +t, and about 21 without +t. It doesn’t make any difference in these tests if the file system is mounted with logging enabled or disabled either. The noatime option also had no effect.
Although forcedirectio isn’t mentioned in the MySQL documentation, I see that the directio option is specifically recommended on page 161 of Sun Performance Tuning for situations with a large data file (the InnoDB data file was 2.6 GB for the 10M row test). While the book is from 1998 and only covers up to Solaris 2.6, it covers expertly the principles of performance tuning which would apply to all operating systems.
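For anyone wanting to try it, forcedirectio is just a UFS mount option, set either on the command line or in /etc/vfstab (the device and mount point below are examples, not from the article):

```shell
# Remount the InnoDB data partition with direct I/O,
# bypassing the UFS page cache entirely
umount /data
mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /data

# Or make it permanent via the mount-options field in /etc/vfstab:
# /dev/dsk/c0t1d0s6  /dev/rdsk/c0t1d0s6  /data  ufs  2  yes  forcedirectio
```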
Among the operating systems tested, this caching scenario seems to be unique to Solaris 10. None of the other operating systems, including NetBSD 2.0 (which also had a bad showing), saw any swap activity during the 10M row tests.
So here is where my questions start: were the Solaris tests re-run after making the forcedirectio change? My guess would be no.
My point is simply this: I have found multiple problems with the methodology used (pitting operating systems still in development (Solaris, NetBSD) against “production ready” operating systems, for starters). That by itself makes the test invalid. The lack of documentation as to the installation method employed further complicates any hope of repeatability. So based on this I say the test is invalid; whether you agree or not is not important.
I don’t see where you have backed up anything you have said with anything more than your opinion. Or is this a case of “don’t confuse the issue with facts”? And it was you who started hurling the insults, so who has the problem? Am I not supposed to get upset when someone insults me because “they didn’t get their way”? I call it as I see it, and whether you like what I say is irrelevant. So if you care to have an intelligent discussion on the merits of this benchmark test, fine. If you are going to resort to your childish nonsense, don’t waste my time!
Oh, and another thing.
>Some pretty intelligent people post on Slashdot, too
>bad you don’t think so but considering what you
>call proper science, you would be “bitchslapped” into
>oblivion if you posted on Slashdot.
Groan. I might have guessed you’re a slashdot weenie. Yeah yeah whatever buddy, I’m sure you and all your friends would bitchslap me etc etc. Now why don’t you pack up your shit and take yourself and your science back there where you belong, eh?
>It must be nice to sit in your “ivory tower” and
>insult the world with your half-baked bullshit.
Funny. You get very upset when your unfounded assertion that the benchmark test is invalid is completely torn apart. You make a lot of noise, but you can’t actually dispute my reasoning.
Must be the mating call of the loser.
>Maybe that is why you don’t want to come out from your
>anonymizer.
What the hell do you keep going on about anonymizer for? What anonymizer? Where are the black helicopters?
“So, my major beef about those results is that they cause Solaris zealots to make up all sorts of silly excuses about why they got beat ”
I don’t see why it matters TO ME AS A USER, because Linux can be as damn fast as it wants to be; it’s still a piece of shit to me, since there are no good Linux dists and the whole thing is developed in such an unstable way. (I don’t mean version x.y.z of Linux can’t be stable; I mean the whole development path makes it unstable, too unstable for me at least.)
Well, if you consider how bad most/all Linux distributions really are, Gentoo isn’t that bad for a Linux production server. It’s probably less trouble than most others, with Debian-based dists as the possible exception.
lol, i use gentoo in a production environment but i know i am far from the norm, i don’t know anybody else who does
anyway….. i just don’t want anyone to think i am a solaris troll.. i don’t even use it although i have in the past.
“And FreeBSD is what then? Last I read, it’s the most popular, more so than even Linux, operating system for webservers.”
Nearly every hosting company I checked out had two options: Windows and Linux. Given those options, of course I chose Linux. Solaris would have been a nice alternative, though.
> It’s simply impossible to have a stable and sensible
> Linux system that runs.
WTF? Are you from another planet?
That statement is sheer stupidity.
We’ve been here before; do we really need to go over it again?
That benchmark showed that a pre-FCS Solaris was out-performed by a Linux distro. I find it interesting that the comparison was not run *after* FCS (and one of the filesystem throughput inhibitor bugs was fixed).
Really folks, the idea behind the release of libmicro was “hey, we found this really useful while developing Solaris 10, maybe you might find it useful in your own development (be it OpenSolaris, *BSD, Linux, …)”.
alan.
“Wake up people! Solaris is so far ahead of everybody else in the OS race that you either show some hard proof that $FAVORITE_OS is close to Solaris or STFU.”
Actually, what Solaris is doing is adopting the best stuff from the FOSS world and integrating it with the Solaris kernel and userland. This means that Sun’s hundreds of millions of $$$ in R&D also has the FOSS R&D added to it…Solaris will _always_ be ahead of the game.
For example: Solaris 10/OpenSolaris has SMF, Zones, etc. etc., but also has/will have GNOME, GRUB, GCC, etc.
It’s the best of both worlds.
“RedHat and Novell both employ scores of kernel and other core (eg. glibc, gcc) developers, and performance is one of their top priorities.
It is very unlikely that Gentoo will run measurably faster, if at all. Do you think that if compiling things with a set of flags would make it “a hell of a lot faster”, that they would not do that?”
Actually, their top priority is decent performance but with acceptable stability, too. There are lots of compiler flags that improve performance but are risky. Some break IEEE standards to improve performance, others make assumptions about memory management that the kernel might not do well with, etc. I’ve personally seen performance flags cause applications to _crash_, probably due to very hard to find bugs either in the app or in the compiler.
For distros that care about not throwing their userbase a big fat curve ball, they _do_ hold back on the compiler flags. What Gentoo does is allow people to choose whether to take risks, and some people take those risks successfully… and others don’t.
That’s why people can make Gentoo run demonstrably faster than Red Hat/SuSE.
One just can’t help but wonder where these Solaris fanboy goons are coming from… with statements like “Solaris is so far ahead of * it isn’t funnay!!!”
Most people using Solaris are trained bank, telco and big-business systems administrators… and I’m willing to bet real money that less than 1/10th of a percent of them give a crap about anything OSNews has to say.
Well, the guy who wrote the comment you quoted happens to be a FreeBSD zealot. So it is probably a case of sour grapes that Linux is so far ahead of FreeBSD these days.
If I recall correctly, the initial topic was about Sun releasing (under an OSI approved license – CDDL) a benchmarking suite that we found useful in developing Solaris 10 and that we are still using.
It’s turned into a “my OS is better than your OS” pissing competition.
For crying out loud people, all that Sun has done is to release something that we found useful in case anyone else might also find it useful.
The code is there, use it if you find it useful, don’t if you don’t. It’s as simple as that.
Can we *please* get back on topic???
Isn’t this what open source is supposed to be about?
alan.
Come on, this is OSNews, where you don’t have to be on topic, technically correct, or even right – just post! I would love to see intelligent discussions here, rather than the drivel that is posted by some.
Unfortunately, comments like yours either get ignored or drowned out by the trolls who, despite their lack of knowledge or experience, seem to think their comments are more on topic than the actual discussion. Someday that might happen, but I would not expect it anytime soon.
Libmicro looks really cool. Is this something that can be used with earlier versions of Solaris, or is it a Solaris 10 and up tool only?
So your vote is for Solaris?!
So your vote is for solaris?
In the context of the initial post I fail to see what my operating system preference has to do with anything.
Bart released a test suite that he found useful, and the only comments I am seeing here are about who thinks their OS is best and old benchmarks.
Here’s a novel idea…
Why don’t folks download the toolset, give it a try, and then make constructive comments about how it might be improved?
We are all interested in playing with open source aren’t we?
Or are folks more interested in taking what should be a good technical forum and turning it into a pissing competition? (That’s a rhetorical question. I’m not sure I want to know the answer)
alan.
Interesting…
…
OpenSolaris Performance Principles
We feel that it is important to state major principles that guide the development and discussion. For the OpenSolaris performance work we suggest the following set of guiding principles:
If another major system is faster than OpenSolaris, it is a bug.
…
http://opensolaris.org/os/community/performance/
(I didn’t have time to read every comment here, although I skimmed them all. Pardon me if any of this is redundant)
– The Solaris team is working hard to make things auto-tuning and fast-out-of-the-box as much as possible. We’re not done yet!
– It is possible to run Super-Smack on Solaris; and I’ve worked with the article’s author since its publication on that. See my ‘blog entry on this topic: http://blogs.sun.com/roller/page/dp?anchor=smacking_super_smack_int…
– libmicro is portable – it is designed to compile under different compilers and on different systems. I’m sure patches would be appreciated by the author if certain tests do not compile or work properly on your system.
– Some have noted that libmicro might not be useful given the existence of lmbench, etc. I disagree. Hopefully Bart will write a detailed article about all of the good things. If you care about benchmarking, download it, try it, and see for yourself. It should compile and run smoothly on Linux systems, AFAIK.