DTrace developers Bryan Cantrill and Adam Leventhal have posted presentations covering Advanced DTrace Tips and Tricks and The DTrace pid provider on their blogs over the last few days. Update: RHEL benchmarks against Solaris.
Just as I knew: Solaris 10 beats Linux on the server.
First, RHEL != “linux”
Secondly, are you looking at the same benchmarks as I am?
32-bit:
Trans/sec: RHEL beats Solaris by 20%
Max open TCP connections/sec: Solaris beats RHEL by 1%
Max TCP connections/sec: RHEL beats Solaris by 18%

64-bit:
Trans/sec: Solaris beats RHEL by 4%
Max open TCP connections/sec: Solaris beats RHEL by 0.3%
Max TCP connections/sec: Solaris beats RHEL by 3%
Nice, Solaris posts marginal performance gains on 64-bit machines, gets obliterated on 32-bit, and you call that beating "linux"?
Nice, Solaris posts marginal performance gains on 64-bit machines, gets obliterated on 32-bit, and you call that beating "linux"?
Maybe you didn't get the memo: the world moved to 64 bits ages ago, and enterprises are most interested in 64-bit deployments. Just look at the OSNews front page for the story on 64 bits in the PC industry.
In case you don't remember, the MySQL benchmarks that you and your fellow Linux advocates were touting had margins just as thin and were run against a pre-beta Solaris 10 build. That didn't stop you from bringing up those benchmarks, though.
Solaris Express 2/05 also has even more performance enhancements for amd64. I expect Solaris 10 update 1 to include them; should be interesting.
Solaris went from "Slowlaris" to beating the so-called fastest enterprise Linux distro.
I thought Solaris 10 was soooo much faster? Where are the fanboys?
These benchmarks confirm that the 2.6 Linux kernel is on par with or better than the latest commercial offerings – excellent.
But something tells me these benchmarks aren’t that helpful.
If you're servicing over 2000 TCP/IP connections per second then you're probably moving over 4 terabytes of traffic a day (assuming ~20KB per connection). Think Google. Your internet connection and bank account are likely to become the bottleneck at those rates.
Let's see some massively parallel disk benchmarks – ZFS vs. Reiser[3/4], XFS, and ext3. Let's see some 2D and 3D benchmarks; have ATI and Nvidia released optimized drivers for Solaris? Let's see some 100+ multi-threaded CPU-bound benchmarks. Let's see some massive NFS and gig-E network throughput tests.
Actually, Sun published a filesystem performance paper ages ago, comparing several filesystems, with Solaris 9 4/04 on x86 vs. SuSE with the 2.6.5 kernel, which were about equivalent at the time. It was done on identical hardware configurations.
It might be a bit long for the Linux zealots to read and accept though, being a 21-page white paper rather than a four-line OSNews or Slashdot post.
See http://www.sun.com/software/solaris/whitepapers.xml and look for the File System Performance Paper
It details all the setups, tunings, etc., and could be replicated if you had the hardware (which I seriously doubt any of the slashkiddies have).
Let me predict the counter-arguments – they should have run it on 2.6.11 with x, y, and z patches applied (yes, that's what enterprises use as well), and it's not a real paper because it came from Sun. Covered everything?
> If you’re servicing over 2000 TCP/IP connections per second then you’re probably moving over 4 terabytes of traffic a day (assuming ~20KB per connection). Think google.
google already use Linux (RHEL) 🙂
> google already use Linux (RHEL) 🙂
No, Google uses a highly modified and stripped-down version of Red Hat with custom kernel patches, as mentioned in the recent articles.
What's more curious is that Google sells an appliance using these kernel modifications, but doesn't seem to have returned all the code to the open-source community. Is this not a violation of the GPL?
In case you don't remember, the MySQL benchmarks that you and your fellow Linux advocates were touting had margins just as thin and were run against a pre-beta Solaris 10 build. That didn't stop you from bringing up those benchmarks, though.
Two wrongs don’t make a right… but three lefts do
Actually, Sun published a filesystem performance paper ages ago, comparing several filesystems, with Solaris 9 4/04 on x86 vs. SuSE with the 2.6.5 kernel, which were about equivalent at the time. It was done on identical hardware configurations.
Read the PostMark script they used. The file sizes range from 10KB to 20KB, and there is a maximum of 20,000 files. That's a working set of 300MB on a 1GB machine. As a result, these operations were done in memory. It really doesn't prove the performance of the filesystem; it just shows that Solaris has particularly fast transaction routines. Ext2 would post numbers like these all the time for benchmarks that operated entirely in memory – it was limited only by CPU time. When it hit the disk, however, its performance came in line with other filesystems.
First, you really don’t need to hide behind a proxy.
Next, on page 19 of fs_performance.pdf:
<QUOTE>
A file system and its performance do not exist in a vacuum. The file system performs only as well as the hardware and operating system infrastructure surrounding it, such as the virtual memory subsystem, kernel threading implementation, and device drivers.
</QUOTE>
No doubt UFS is excellent, but the performance of an FS alone in peak condition doesn't translate into real-world performance of an OS.
> Next, on page 19 of fs_performance.pdf:
It's Sun's paper, guy!
> First, you really don’t need to hide behind a proxy.
Actually I do; I could get fired from my job for making pro-Sun comments from a work IP address.
Do you have benchmarks comparing SLES 9 SP1 to RHEL 4? Didn't think so. I guess you like to throw things around without backing them up.
Did you read the article???
“Solaris 10 kept pace with the record-breaking numbers”
Interesting that you ignored the rest of my post. So I assume you agree with me and don't want to admit it?
<<Did you read the article???>>
Of course I did; how else did I calculate and post the performance percentages?
<<“Solaris 10 kept pace with the record-breaking numbers”>>
Then I ask you: show me the SLES 9 SP1 benchmarks. Where are these "records"?
Read the PostMark script they used. The file sizes range from 10KB to 20KB, and there is a maximum of 20,000 files. That's a working set of 300MB on a 1GB machine. As a result, these operations were done in memory. It really doesn't prove the performance of the filesystem; it just shows that Solaris has particularly fast transaction routines.
Your explanation would be correct if UFS worked the way you say it does. UFS uses an 8-16MB highwater mark before flushing to disk. UFS doesn't work like ext2. Sun is very paranoid about data integrity; they would sacrifice performance over data loss. Sorry, the whole dataset could not possibly have stayed in memory for logging UFS.
Then I ask you: show me the SLES 9 SP1 benchmarks. Where are these "records"?
Email the author of the article with that request. We are discussing the article; my post was made in the context of the article.
Notice I called it the "so-called" fastest.
<<Email the author of the article with that request. We are discussing the article; my post was made in the context of the article.>>
I see, you want me to find possibly non-existent benchmarks to prove your point? Um, no, sorry, it doesn't work that way. You see, you make a comment and back it up, and I make a rebuttal and back it up.
Simple, eh?
<<Notice I called it the "so-called" fastest.>>
<QUOTE>
Solaris went from "Slowlaris" to beating the so-called fastest enterprise Linux distro.
</QUOTE>
Ah, got me there. But Solaris still doesn't clearly beat "linux" or RHEL 4.
You're still ignoring the rest of my post! >:(
"No, Google uses a highly modified and stripped-down version of Red Hat with custom kernel patches, as mentioned in the recent articles."
While what you are talking about is true for some caching systems, the rest of them use the stock RHEL 3 product, some with GFS enabled.
I see, you want me to find possibly non-existent benchmarks to prove your point? Um, no, sorry, it doesn't work that way. You see, you make a comment and back it up, and I make a rebuttal and back it up.
What is there to back up? I backed what I said with the article.
Red Hat is touting RHEL 4 as the release to displace Solaris from the enterprise. People have always claimed Solaris is slow compared to Linux on x86. This benchmark goes to show that Solaris is on par with or better than the Linux on x86 that is supposed to have the performance enhancements and features to displace Solaris from the enterprise.
What's more, running on Sun's x86 hardware it is even faster.
> Sun is very paranoid about data integrity; they would sacrifice performance over data loss.
ext3 too:
http://www.redhat.com/archives/fedora-test-list/2004-July/msg00034….
Extract:
– "The test is really pretty simple. It is hooked to a machine that cycles the power. It runs for 5 minutes and then the power is turned off for 1 minute (to simulate the plug being pulled). (…) In the rc.local file it starts the fsstress test http://ltp.sf.net/nfs/fsstress.tgz and the three scripts below (to simulate writing into a log file) on each of the SATA drives.
(…)
The ext3 drives have an almost perfect record with the write cache off: I have run over 300 cycles on the two drives and only had two corrupted lines in the output files. So out of 600 total cycles on the two drives there were only two lines with bad data; I think that is a pretty good record."
Your point? Did you see ext3 mentioned in Rayiner's discussion?
Solaris by default turns off the write caches on SCSI drives and has for the longest time. Therefore Solaris filesystems have been notorious for being slow.
"This benchmark goes to show that Solaris is on par with or better than the Linux on x86 that is supposed to have the performance enhancements and features to displace Solaris from the enterprise."
You guys don't know how to read, or what?
<<What is there to back up? I backed what I said with the article.>>
You still haven't shown me the benchmarks proving that the author's comment of "record-breaking" is valid. You're citing an incomplete source. And even if that were valid, it still doesn't show Solaris surpassing "linux"/RHEL 4, as you have already said.
<<Red Hat is touting RHEL 4 as the release to displace Solaris from the enterprise.>>
Considering the actions that Sun has taken, namely trying to open-source Solaris, and the strange/contradictory verbal attacks coming from Sun's management, RH seems to be succeeding.
<<People have always claimed Solaris is slow compared to Linux on x86.>>
And it has been.
<<This benchmark goes to show that Solaris is on par with or better than the Linux on x86 that is supposed to have the performance enhancements and features to displace Solaris from the enterprise.>>
Hate to break it to you, but performance isn't the only attribute of an OS.
You remember Netware, right? You remember how Win NT came in and cannibalized Netware's market share? The fact that NT was easier to administer was the reason MS succeeded.
Administering Solaris is incredibly painful unless you're a UNIX guru; compare this to the sexiness of RH's redhat-config tools, sleek GUI installer, among other things. Sun just doesn't get it. They're trying to target a shrinking minority: hardcore UNIX admins.
Unless of course you think Netware and NT 4.0 were equal in stability, reliability, and performance?
—-
Alright, I really have to go to bed now. Good night, I’ll post tomorrow.
> Solaris by default turns off the write caches on SCSI drives and has for the longest time.
That's not the point. With the write cache enabled, SCSI should survive a power-down. That's not the case for IDE (which also does not always respect write order).
Btw, Linux >= 2.6.8 uses "cache flushes" when available.
Hate to break it to you, but performance isn't the only attribute of an OS.
I think Solaris trumps RHEL 4 in features as well.
<<I think Solaris trumps RHEL 4 in features as well.>>
Hey, I’m still here.
You just skipped 5+ lines of my reply 😮
<QUOTE>
You remember Netware, right? You remember how Win NT came in and cannibalized Netware's market share? The fact that NT was easier to administer was the reason MS succeeded.
Administering Solaris is incredibly painful unless you're a UNIX guru; compare this to the sexiness of RH's redhat-config tools, sleek GUI installer, among other things. Sun just doesn't get it. They're trying to target a shrinking minority: hardcore UNIX admins.
Unless of course you think Netware and NT 4.0 were equal in stability, reliability, and performance?
</QUOTE>
That's not the point. With the write cache enabled, SCSI should survive a power-down. That's not the case for IDE (which also does not always respect write order).
Btw, Linux >= 2.6.8 uses "cache flushes" when available.
Surprise, Solaris does that too!!! Not only that, it did it 4 years ago and backported it to Solaris 2.6 and 2.5.1.
http://sunportal.sunmanagers.org/pipermail/summaries/2001-November/…
You just skipped 5+ lines of my reply 😮
Because it was irrelevant!!! GUI administration tools don't make an OS. Solaris has SunMC, SMC, and many GUI admin tools too.
Have you ever used Solaris?
Sun does get it. Solaris 10 has a new installer. Solaris also has Predictive Self-Healing and SMF, all of which make administering a box much easier.
> Surprise, Solaris does that too!!! Not only that, it did it 4 years ago and backported it to Solaris 2.6 and 2.5.1.
No no.
http://sunportal.sunmanagers.org/pipermail/summaries/2001-November/…
This bug is related to bug 4337637, which results in write-data still in the disk cache not being flushed as a result of a shutdown.
This problem is only seen on IDE disks, since Sun doesn't support disk write-caching for SCSI drives.
The solution involves writing an entry point in the IDE driver (dad) to explicitly issue a disk cache flush command (devo_reset dev_ops).
This also requires changes in the kernel to call this entry point upon shutdown.
Linux has synced/flushed at shutdown for many, many years.
<<Because it was irrelevant!!! GUI administration tools don't make an OS.>>
Yes they do! NT 4.0 vs. Netware is an excellent example of why.
<<Solaris has SunMC, SMC, and many GUI admin tools too.>>
All of which are inferior to the redhat-config toolset and Novell's YaST.
<<Have you ever used Solaris?>>
Yes I have! I had to go and borrow a friend's HP because Solaris 10 would not install on my Cicero (Futureshop housebrand) or my '99 Compaq!
<<Sun does get it. Solaris 10 has a new installer.>>
I'm starting to think you haven't used Solaris now. The Sol 10 installer is terrible.
<<Solaris also has Predictive Self-Healing and SMF, all of which make administering a box much easier.>>
*sigh* Goes back to my NT 4.0/Netware argument. The "superior" OS lost.
Administering Solaris is incredibly painful unless you're a UNIX guru; compare this to the sexiness of RH's redhat-config tools, sleek GUI installer, among other things. Sun just doesn't get it. They're trying to target a shrinking minority: hardcore UNIX admins.
Funny point of view. Are Debian and Slackware less of an OS because they don't have sexy GUI installers and admin tools? What about Caldera? They had a sexier GUI installer and admin tools than Red Hat did back in the day; they even put Tetris into the installer. Where are they now?
So if I put a sexier GUI installer on DOS, would that make it a better OS? Let's see: you have run out of points and are rambling.
Solaris 10 is serious competition for Linux on x86, in terms of performance and features. It is also cheaper for enterprises to run, and it is going to be fully open-sourced. End of story.
<<Funny point of view. Are Debian and Slackware less of an OS because they don't have sexy GUI installers and admin tools?>>
No, they are less of an OS because there is no certification or commercial support. We're trying to compare apples to apples here, buddy.
As a side note, Debian is much easier to maintain than Solaris, and the new Sarge installer is excellent as well.
<<What about Caldera? They had a sexier GUI installer and admin tools than Red Hat did back in the day; they even put Tetris into the installer. Where are they now?>>
I don't see the point of your argument. A fancy GUI doesn't guarantee a company revenue; it's only a selling point.
<<Let's see: you have run out of points and are rambling.>>
Sure I have; is that why you still haven't fully replied to my post on the first page?
<<Solaris 10 is serious competition for Linux on x86, in terms of performance and features.>>
I never said Sol10 would not be competition for Linux. The benchmarks IMO show that RH can hold their own against Sun. I’m waiting to see what happens.
<<It is also cheaper for enterprises to run, and it is going to be fully open-sourced.>>
Buddy, I'm waiting. Good luck to Sun if they succeed with their products; I doubt it.
> So if I put a sexier GUI installer on DOS, would that make it a better OS?
Sexier? Who cares?
Is it possible to install Solaris with the root partition in LVM?
With RHEL (or Fedora) it's very easy (and sexy, but I don't care).
> It is also cheaper for enterprises to run
Bullshit. RHEL is provided with proxy, web, and mail servers (unlimited license), development tools, LVM, etc…
How much does Veritas cost for Solaris?
RHEL doesn't need Veritas.
Your analysis of the link I posted is incorrect. IDE drives flush their write cache every few ms. However, the system can shut down before the drive has automatically flushed the data in its cache.
After this fix, Solaris flushes the cache on shutdown. Solaris always syncs the filesystems before shutdown.
The sequence is this:
sync filesystems
pull power.
Syncing the filesystem puts data in the drive's write cache; if the power is pulled before the automatic flush, the data can be corrupted. The patch below fixes that by explicitly flushing the cache before pulling power.
This bug is related to bug 4337637, which results in write-data still in the disk cache not being flushed as a result of a shutdown.
This problem is only seen on IDE disks, since Sun doesn't support disk write-caching for SCSI drives.
The solution involves writing an entry point in the IDE driver (dad) to explicitly issue a disk cache flush command (devo_reset dev_ops).
This also requires changes in the kernel to call this entry point upon shutdown.
Which should be done in the following routine:
http://access1.sun.com/patch.public/cgi-bin/readme2html.cgi?patch=1…
The above is related to bug ID
4435428: EIDE disk with write-cache enabled should be flushed before power-off
The Solaris IDE driver has flushed the write cache before power is pulled from the drive for 4 years now. SCSI write caches are disabled in Solaris.
Goodnight all, this time I really do have to go.
Bullshit. RHEL is provided with proxy, web, and mail servers (unlimited license), development tools, LVM, etc…
How much does Veritas cost for Solaris?
RHEL doesn't need Veritas.
Solaris ships with a volume manager called DiskSuite and has for a long time.
Solaris 9 4/04 improved UFS logging performance, and coupled with DiskSuite it has removed the need to use Veritas on Solaris.
Solaris 10 ships with development tools like gcc, plus sendmail, Apache, StarOffice, Mozilla, and the Sun Java Enterprise System (which has all the things you listed).
"Solaris ships with a volume manager called DiskSuite and has for a long time."
I suspect you haven't used RHEL 4. Understand how LVM works there by default first.
I suspect you haven't used RHEL 4. Understand how LVM works there by default first.
I haven't used LVM under Red Hat, but I fail to see how this has anything to do with the Veritas discussion, or the "Solaris doesn't ship with a volume manager" argument.
Solaris does ship with a logical volume manager called DiskSuite. I don't recall any discussion about feature comparisons of RHEL 4 LVM vs. DiskSuite.
I am curious now: what are the major differences between LVM and DiskSuite?
Does RHEL 4 ship with stuff similar to DTrace, Zones, SMF, Predictive Self-Healing?
> The Solaris IDE driver has flushed the write cache before power is pulled from the drive for 4 years now.
As Linux has done since the beginning.
Note that ext3 does not have a problem with the write cache enabled as long as the write order is guaranteed.
With SCSI, Linux provides write barriers, done with SCSI ordered tags.
For IDE (which does not provide write-order guarantees), Linux/ext3 needs "cache flushes":
http://lwn.net/Articles/21923/
The attached patch implements write barrier operations in the block layer and for IDE, specifically. The goal is to make the use of write back cache enabled ide drives safe with journalled file systems.
No problem with the write cache and ext3/Linux.
It's an old (and outdated: 20 July 2000) document but very, very interesting:
http://olstrans.sourceforge.net/release/OLS2000-ext3/OLS2000-ext3.h…
if we can set that in the block device layer so that the Linux internal device reordering queues and the disk’s reordering queues all observe and honor that barrier operation, then we can keep the pipeline going to the disk, streaming the data to the disk at full speed, without waiting synchronously for the completion of these transactions.
"I am curious now: what are the major differences between LVM and DiskSuite?"
Flexibility. Device-mapper works way better here, including multipath, and it is enabled by *default*.
"Does RHEL 4 ship with stuff similar to DTrace, Zones, SMF, Predictive Self-Healing?"
SMF is a replacement for SysV init. RHEL 4 uses SysV init mechanisms and chkconfig, which are far simpler than SMF.
RHEL 4 has DProbes, kprobes, LTT, and OProfile instead of DTrace.
Zones functionality, which is similar to FreeBSD's jail, is provided by cpusets. Admittedly it's not marketed as well as Zones.
http://www.bullopensource.org/cpuset/
Moreover, LSM has a bsdjail module that's shipped with RHEL 4 too.
"Predictive Self-Healing?"
I haven't actually understood what this marketing term means in terms of practical functionality, but it might be similar to Red Hat HA.
There are other features, like Red Hat GFS and the cluster suites, which lack equivalents in Solaris 10:
http://www.redhat.com/software/rha/gfs
http://www.redhat.com/software/rha/cluster
At least Red Hat people don't lie about stuff like Jonathan from Sun does – claiming Red Hat is a "proprietary" OS and not LSB compliant, both of which are blatant untruths.
As usual somebody (regardless of publication) posts benchmarks of a limited set of OS functionality and the "war" starts. Also as usual, the benchmark is surprisingly short on details as to how it was set up, which makes me question the results. So before anyone gets fired up: the whole point of any kind of test should be that the test can be repeated by anyone, not just posting results to start a flame war.
If you were able to assemble the same hardware and software and conduct the tests in a similar fashion, the results should be similar. Benchmark tests that do not have "full disclosure" I ignore, because there are too many opportunities to "juice" the results one way or the other.
As a Solaris administrator, I am not concerned about the results. Benchmarks and production workloads are entirely different matters. The lack of GUI tools doesn't bother me either, since for the most part I am not allowed to use a GUI. And as far as GUI tools go, both Red Hat and Sun have a long way to go before they beat IBM (smit and smitty for AIX).
> I am curious now: what are the major differences between LVM and DiskSuite?
With LVM (LVM2) you don't need DiskSuite or Veritas.
In fact, it's device-mapper (providing RAID[0-5] support) plus LVM2.
Comparison of DiskSuite vs. Volume Manager (Veritas):
http://www.eng.auburn.edu/pub/mail-lists/ssastuff/sdsvxvm.html
LVM2:
http://sources.redhat.com/lvm2/
LVM HOWTO:
http://www.tldp.org/HOWTO/LVM-HOWTO/index.html
Device-mapper:
http://sources.redhat.com/dm/
GFS:
http://sources.redhat.com/cluster/gfs/
Your explanation would be correct if UFS worked the way you say it does. UFS uses an 8-16MB highwater mark before flushing to disk. UFS doesn't work like ext2. Sun is very paranoid about data integrity; they would sacrifice performance over data loss. Sorry, the whole dataset could not possibly have stayed in memory for logging UFS.
That's just rubbish, and was probably coined as some sort of excuse. It usually is.
There are many factors that influence when a system flushes to any sort of filesystem, and flushing in 8-16MB chunks does not increase data integrity (not familiar with UFS really). It's just downright crap. Statistically, if you flush to disk regularly then there is more chance of the filesystem being in an inconsistent state and data being inconsistent or damaged. If you can flush intelligently, especially on a running server, then you make sure things are done atomically and make sure things are consistent. You're never going to make data loss go away totally under such circumstances, so the priority should then be to make sure it doesn't affect anything else.
Let’s just face it – Sun handles this in a pretty crappy way.
This thread was all but empty until the benchmark story got posted. Wow.
You know, if real people in the real world cared so much about these benchmarks, then no real work would ever get done. Instead of setting up real servers for real people, they would be patching and upgrading and rewriting everything to get on the right side of the fanboy flames, because anything else would destroy their little egos.
Someone earlier mentioned that this benchmark is equivalent to serving terabytes a day in bandwidth. In short, this benchmark applies to no one, except perhaps the few hundred people in the world running networks that require it.
Meanwhile, I'm still using a five-year-old PC and an even older Sun workstation… because they work, benchmarks be damned.
I think benchmarks bring out the worst in people, especially the ones from SPEC.
So I've joined this one a bit late – kind of glad of it. Out of curiosity (I'm not going to go into the silly filesystem or benchmark debate – it's pointless; it doesn't matter who is right or wrong, neither side will say "I got it wrong"), did anyone actually read the DTrace presentations?
In particular, Adam's pid provider presentation should be read by everyone, developer or sysadmin. It's an excellent insight into an amazing piece of tech.
Exactly. Show me some long-term performance data on a set of machines running an OS and an application for a year, not something cooked up in a few days in a test environment.
[i]There are many factors that influence when a system flushes to any sort of filesystem, and flushing in 8-16MB chunks does not increase data integrity (not familiar with UFS really). It's just downright crap.[/i]
If you are not familiar with UFS, please refrain from commenting on it. It doesn't lend you any credibility.
I never mentioned that was the only way UFS flushes to disk. You assumed I did. My bringing up the watermark was to refute the point that the entire working set was cached in memory. That is all; it was not meant to be a definitive whitepaper on UFS flushing.
You and other Linux trolls decided to get into a pissing contest over Linux features.
One even brought up a non-existent write-ordering problem that only affects Linux's driver implementation as if it were a general IDE problem. I happen to know a thing or two about IDE.
The others started a comparison of meaningless features such as GUI sexiness and LVM ease of use.
Like others have said, this is a waste of time.
Exactly. Show me some long-term performance data on a set of machines running an OS and an application for a year, not something cooked up in a few days in a test environment.
Agreed. I know I said I'd try not to feed the trolls; it was late and I was tired.
Yeah, it's OK. I see this stuff and think "boy, is this going to fire some people up". An example of what I would like to see some data on is the web site that had the streaming video of the Global Flyer coming in for a landing. According to the guy at work watching it (new guy, no clearance), 80 gigabits a second was being transmitted from the servers used for that event! Now there would be an interesting story on server performance, regardless of OS.
Note that ext3 does not have a problem with the write cache enabled as long as the write order is guaranteed.
Please stop posting links without understanding how things work first.
The problem Linux is trying to solve is on devices that support tagged queuing of commands.
To my knowledge, tagged queuing was introduced to IDE land with the advent of SerialATA. SerialATA should be referred to as such by name in discussions so as to not confuse people in reference to ATA.
Different protocols, different terminology. SerialATA drives in legacy ATA mode do not support tagged queuing, and the ATA protocol surely doesn't support it, so write ordering is not a problem there.
With SCSI, Linux provides write barriers, done with SCSI ordered tags.
SCSI has supported tagged queuing for a while. Cache ordering doesn't matter on Solaris because the write cache is turned off.
SMF is a replacement for SysV init. RHEL 4 uses SysV init mechanisms and chkconfig, which are far simpler than SMF.
Does chkconfig support automatic fault recovery of services, parallel service startup at boot, and service dependency enforcement?
RHEL 4 has DProbes, kprobes, LTT, and OProfile instead of DTrace.
I believe a Red Hat engineer has stated that none of them are equivalent to DTrace.
Zones functionality, which is similar to FreeBSD's jail, is provided by cpusets. Admittedly it's not marketed as well as Zones.
Sorry, cpusets are not the same thing as Zones. Solaris has had processor sets for a while:
http://developers.sun.com/solaris/articles/solaris_processor.html
I’ll take mine with mustard and chili and relish and ketchup
There are other features, like Red Hat GFS and the cluster suites, which lack equivalents in Solaris 10:
http://www.redhat.com/software/rha/gfs
http://www.redhat.com/software/rha/cluster
These are add-on products, not shipping with RHEL 4 in the price of admission.
Sun has had a clustering solution for a while as well.
Ease of use for the LVM won't be a concern for Solaris after ZFS is released.
"Because it was irrelevant!!! GUI administration tools don't make an OS"
Can't you just use Webmin? It works on both Linux and Solaris.
You’re kidding, right?
I don’t think you really understand the operation of Postmark. Postmark does lots of transactions on a relatively small number of files at a time. Even with a 16MB watermark, that means that many files will be created, edited, and deleted long before 16MB of writes is queued up. That’s why logging UFS shows such a speedup over non-logging UFS — it doesn’t have to touch the disk nearly as often, because most metadata writes are canceled before they need to be written to disk. The point is that by using such a small dataset, you’re testing mostly the journaling implementation, not the filesystem itself. The results of the posted benchmark are meaningless if you’re dealing with real datasets which are larger than memory.
I wasn’t suggesting that UFS operated entirely in memory like ext2. What I was pointing out was the similarity between these graphs and graphs for benchmarks involving ext2.
The point is that by using such a small dataset, you’re testing mostly the journaling implementation, not the filesystem itself. The results of the posted benchmark are meaningless if you’re dealing with real datasets which are larger than memory.
I still don't follow. If your dataset is larger than available memory, and you are constantly touching the disk, aren't you measuring I/O interface performance?
You might be able to measure the quality of the on-disk allocation schemes but not get the entire picture on FS performance.
PostMark measures a particular application of filesystems in network workload scenarios. I don't see a problem with it as a filesystem benchmark.
Let me ask it this way:
How is going to disk a complete measure of filesystem performance?
I don’t think you really understand the operation of Postmark. Postmark does lots of transactions on a relatively small number of files at a time. Even with a 16MB watermark, that means that many files will be created, edited, and deleted long before 16MB of writes is queued up.
I don’t think you understand it either. Here is the description of PostMark.
http://www.netapp.com/tech_library/3022.html#section2
PostMark generates an initial pool of random text files ranging in size from a configurable low bound to a configurable high bound. This file pool is of configurable size and can be located on any accessible file system.
Once the pool has been created (also producing statistics on continuous small file creation performance), a specified number of transactions occurs. Each transaction consists of a pair of smaller transactions:
Create file or Delete file
Read file or Append file
……
http://www.netapp.com/tech_library/3022.html#section4
Large numbers should be selected for the initial file pool to provide a realistic working set and to prevent caching effects from hiding performance deficiencies. Baseline results were obtained at the 1,000 and 20,000 file levels. The number of transactions should also be large (10,000+) to allow the system to attain a state of equilibrium.
From the benchmark whitepaper, the initial pool size would be 200MB-300MB, and then 25,000-500,000 transactions are done on the pool.
Once the initial pool has been created, the UFS watermark will be exceeded and the files will be committed to disk.
I still don't follow. If your dataset is larger than available memory, and you are constantly touching the disk, aren't you measuring I/O interface performance?
Partially, but the whole point of the filesystem is that it organizes data in a way that makes maximum use of what I/O interface performance is available.
You might be able to measure the quality of the on-disk allocation schemes but not get the entire picture on FS performance.
What you’re really interested in is how good the filesystem is at allocating data, how much it has to touch disk to do metadata operations, the extent to which its data structures allow it to quickly retrieve specific portions of files. If your dataset fits in memory, you’re not really testing these things. To the extent that benchmarks are useful in gauging the performance of real tasks, these benchmarks aren’t really useful. For example, if you’re running something like a squid server, file server, or media server, your datasets aren’t going to fit in memory, and these benchmarks won’t really tell you how the system will perform.
PostMark measures a particular application of filesystems in network workload scenarios.
PostMark is a good measure of filesystem performance. It’s just that the script they used had a relatively small dataset.
Once the initial pool has been created, the UFS watermark will be exceeded and the files will be committed to disk.
Yes, the filesystems will touch the disk when the pool is initially created. But that’s a fraction of the duration of the benchmark. The bulk of the time is spent doing the transactions themselves, and as described in the quote you listed, many of the operations are create/delete pairs. In a good journaling implementation, if you delete a file soon after creating and appending to it, the writes queued up from those two operations will be canceled and thus not count towards the watermark. Thus, you can do a lot more than 16MB of transactions without hitting a 16MB watermark.
What you’re really interested in is how good the filesystem is at allocating data, how much it has to touch disk to do metadata operations, the extent to which its data structures allow it to quickly retrieve specific portions of files. If your dataset fits in memory, you’re not really testing these things. To the extent that benchmarks are useful in gauging the performance of real tasks, these benchmarks aren’t really useful. For example, if you’re running something like a squid server, file server, or media server, your datasets aren’t going to fit in memory, and these benchmarks won’t really tell you how the system will perform.
Are you questioning PostMark's validity as a benchmark, or just filesystem benchmarks in general?
You still haven't convinced me that while running PostMark the majority of transactions complete without I/O hitting the disk.
Please point me to a PostMark run that has no difference in numbers regardless of the I/O interface in use.
Yes, the filesystems will touch the disk when the pool is initially created. But that’s a fraction of the duration of the benchmark. The bulk of the time is spent doing the transactions themselves, and as described in the quote you listed, many of the operations are create/delete pairs. In a good journaling implementation, if you delete a file soon after creating and appending to it, the writes queued up from those two operations will be canceled and thus not count towards the watermark. Thus, you can do a lot more than 16MB of transactions without hitting a 16MB watermark.
I don't think PostMark is as simplistic as you claim it to be.
Are you questioning PostMark's validity as a benchmark, or just filesystem benchmarks in general?
I'm not questioning the validity of PostMark; I'm questioning the validity of the particular settings chosen for the run used in the paper. Aside from maybe a web server, I can't think of a heavy-duty application that would have such a small dataset. Say you've got a server holding home accounts for thousands of users. You're going to have much more than 300MB of data active on such a server.
You still haven't convinced me that while running PostMark the majority of transactions complete without I/O hitting the disk.
Just do the math. UFS + logging posted a score of 22500 transactions per second. The Sun StorEdge 3310 disk array is capable of, at best, a few thousand transactions per second. Ergo, the majority of transactions must have been completed in memory.
Postmark is essentially a metadata benchmark, because every other operation is a create or delete. With a small dataset, it's almost the ideal thing to show off a good journaling implementation. However, it does not necessarily describe performance under a wide range of conditions. If you get the chance, grab a copy of Dominic Giampaolo's book "Practical File System Design". If I remember correctly, he has a bunch of PostMark runs in there, which should put the meaning of these benchmarks in context.
Please point me to a PostMark run that has no difference in numbers regardless of the I/O interface in use.
Eh? This makes no sense to me.
I think you're both working toward good answers.
So long as you're working with a group of files that the kernel can jam into the data cache, it's ultimately up to the filesystem whether it actually writes that data to the physical disk.
I recall XFS behaves similarly to what raptor mentions – if you create a small file, add and modify some data inside it, and then delete it, XFS doesn't actually touch the physical disk.
On the other hand, if you've got 4GB of memory and are only using a PostMark dataset of 300MB – yeah, the bulk of those operations are going to take place in memory and you'll be stressing the VM and filesystem cache performance more than the disk or the filesystem's ability to efficiently lay out files on the physical disk. This is a perfectly valid benchmark if those are the parameters in which you work.
Likewise, I'm also interested in how efficiently a filesystem can physically write or read data to the disk given hundreds of concurrent requests. The only way to achieve this is by working with data sizes roughly 2x the amount of free memory.
For example:
I have roughly 300 Opterons w/ 4GB at my disposal, along with some fat gigE NFS-attached storage devices.
Unmount and mount a local EXT3 partition. Copy a 220MB kernel source tree residing on the EXT3 mount to a new location on the same mount. This operation will go about 5% faster than if you used an XFS partition (noatime enabled).
Now NFS-mount EXT3- and XFS-based partitions (residing on two identical RAID arrays) to 64 machines, each connected via gigE.
Have each of the 64 machines copy a local 220MB kernel source tree to unique directories on the respective mounted NFS partition. Keep in mind that this creates roughly 12GB of data on the NFS server, which is roughly 3x the machine's memory.
You'll find that the jobs writing to the XFS-based partition consume less than half the time (wall time) of the jobs writing to the EXT3-based partition.
It really says that we have a horse race, folks. Both Sun and Red Hat are fighting for turf in the same backyard, and they stack up to one another fairly well on a bullet-point comparison. Neither one is glaringly superior to the other.
Thus it comes down to a matter of taste, marketing, and price. If you're a committed RHEL customer, have RH deployed throughout your company, have RH admins and whatnot, there's little to no reason to switch to Solaris.
But if you're coming off of a Windows shop and looking for a Unix solution, it would be foolish not to consider both companies' offerings.
And this is the key. Whereas before, RH was pretty much the de facto solution in the Unix space on x86, Sun has stepped up and is offering a very competitive product. Sun WILL take market share away from Red Hat, as Sun's mousetrap is pretty much just as good.
Anyone in the middle of making this kind of decision should check whether their applications run on either or both of RH and Solaris and give both options considerable thought.
I'm just saying that performance can differ greatly if your data size exceeds or falls below the free memory (dcache) size; both tell interesting and valuable stories.
And I'd like to see ZFS put through similar paces, on the same disks as XFS, Reiser, and EXT3.
Frankly, whether the TCP/IP stack can handle 108,000 vs. 109,000 TCP/IP connections is not going to change my mind about buying RHEL or Solaris. I'm sure someone else cares about these things.
Let's focus on today's bottlenecks and see how well the OS can alleviate them.
I love how maybe one comment in this entire thing has actually had anything to do with the main subject of the article (namely DTrace). Haven’t you guys hashed benchmarks to death in enough other threads already?
I know of projects that would take into consideration changing the OS if that alone would boost performance by 5%.
Luckily, DTrace can help you boost performance even more, potentially much more.
Just do the math. UFS + logging posted a score of 22500 transactions per second. The Sun StorEdge 3310 disk array is capable of, at best, a few thousand transactions per second. Ergo, the majority of transactions must have been completed in memory.
Reread the paper; I think the graph in the introduction has a typo. Read chapter 4 for the detailed results.
UFS+logging posts 3,500 trans/sec at a load of 25,000 and degrades to 2,250 trans/sec at 500,000.
I think they accidentally added two extra zeros to the performance numbers at the highest loads to get the simplified view.
Check out the detailed results, it’s so obvious.
Reread the paper; I think the graph in the introduction has a typo. Read chapter 4 for the detailed results.
Oh, I see. It's not a typo; it's just a poorly done graph. Whoever made up the graph accumulated the scores at each load and reported it as an "aggregate" score. Unfortunately, he didn't realize that it's fairly nonsensical to report an "aggregate" that has units of "transactions per second". The way the graph reads, you'd think Solaris could handle 22,500 transactions per second. The lower numbers of 3,500 to 2,700 make a lot more sense. I'd still like to see benchmarks with larger datasets, for the simple reason that my experience with PostMark shows that at larger loads, the sort of deltas you're seeing here become much less pronounced.
Sorry Linux fanboys, but in this particular case RHEL has absolutely no business case against Solaris – RHEL is absolutely uncompetitive with Solaris on pretty much any ground: take performance, features, pricing, TCO, security, stability, maturity, you name it. Solaris is a much more technically advanced product that offers a number of groundbreaking features promising to save some real dollars for customers, and Solaris does it for significantly less than RHEL; I don't see how any sane customer would choose RHEL over Solaris. Red Hat's rumblings about taking market share from Solaris are just wishful thinking. Again, Red Hat does not have a good enough product on its hands to compete with Solaris – RHEL 4 is a serious yawn inducer; they should have called it RHEL 3.1 and certainly not RHEL 4, as there are no improvements significant enough to warrant a major version change. The competitive gap between RHEL and Solaris 10 is so wide it is not funny. Red Hat will more than likely lose this battle.
Oh, I see. It's not a typo; it's just a poorly done graph. Whoever made up the graph accumulated the scores at each load and reported it as an "aggregate" score.
It seems unlikely that aggregating the scores would lead to exactly the numbers at the highest load * 100. I am too lazy to add them up now; eyeballing it makes it look like a typo.
Or it was scaled by 100 to exaggerate the performance difference. I personally think the graph should be corrected, but this is the first time I have seen this paper.
It’s definitely an accumulation. If it was just performance at highest load x 100, then ReiserFS would be at 6500, instead of about 7600. I added up the numbers for ext3, and ended up with 9910, which is just about what is listed in the graph. The same is true of UFS nolog.
For a lot of things, there are still solid advantages to Solaris. First, everything in Solaris is integrated well and it works out of the box. Everything they advertise is there, aching to be activated and used. Second, the documentation for Solaris is unparalleled. It is possible to pass Sun's certification exams with just their documentation and a system or two to play with. Third, they are honest about their hardware support, and categorize hardware into support levels that people can take seriously.
For professionals who have to get work done and don't want to dick around with patches, hunting down "works for me" anecdotes in mailing lists, or wondering if a FAQ is obsolete, Solaris is hard to beat, IMO.
> RHEL is absolutely uncompetitive with Solaris on pretty much any ground: take performance, features, pricing, TCO, security, stability, maturity, you name it.
Blah blah blah…
> and Solaris does it for significantly less than RHEL
Only Solaris Express is free for non-commercial use!
Solaris Express does not provide updates! Only fresh installs are supported.
I prefer Fedora or Debian or Mandrake or anything but Solaris.
> Red Hat will more than likely lose this battle.
Sun already lost. Their pathetic OpenSolaris (patents inside; what do you expect with DTrace only?) proves that.
> Only Solaris Express is free for non-commercial use!
> Solaris Express does not provide updates! Only fresh installs are supported.
First off, Solaris 10 has a free binary RTU license – for commercial and non-commercial use; please see
http://www.sun.com/software/solaris/binaries_program.xml
Solaris 10 is free – you pay for support if you so wish.
Solaris Express is an early-access version of Solaris. The common argument used is that Fedora is an early-access version of Red Hat; I don't believe an upgrade path from Fedora to RHEL exists either.
You can, however, upgrade from Solaris 9 to Solaris 10, which, while possible from RHEL 3 to RHEL 4, is actually not recommended by Red Hat themselves; see https://www.redhat.com/docs/manuals/enterprise/RHEL-4-Manual/x8664-m…
> I prefer Fedora or Debian or Mandrake or anything
> but Solaris.
That's perfectly fine, and it's your choice – use whatever you find most useful; no one is telling you to use Solaris. But don't say that everyone has to use what you want to use; diversity and choice are necessary.
It depends on the benchmarks. In some benchmarks Solaris wins, in some Linux wins.
ODDLY ENOUGH… most of the benchmarks that Linux wins involve software that is tailor-made for Linux and not for Solaris.
Kinda unfair. Btw, who cares about 32-bit x86 nowadays?
> Solaris 10 is free – you pay for support if you so wish.
You are right for Solaris 10.
But wrong for Solaris Express. Sorry for the confusion (my confusion).
btw, RHEL is free.
You can download it :
http://www.redhat.com/software/rhel/eval/
And free(dom) :
http://www.centos.org/
> First off, Solaris 10 has a free binary RTU license – for commercial and non-commercial use
Like RHEL. But RHEL provides proxy, web, and mail servers, all development tools, and more.
If you need a proxy for Solaris 10 there is no more "free binary RTU license":
http://globalspecials.sun.com/dr/sat4/ec_MAIN.Entry16?SP=10024&PN=2…
From the Solaris license :
(d) Commercial Use. You may use Software internally for your own commercial purposes.
(e) Service Provider Use. You may make Software functionality accessible (but not by providing Software itself or through outsourcing services) to your end users in an extranet deployment, but not to your affiliated companies or to government agencies.
…
(d) Unless enforcement is prohibited by applicable law, you may not decompile, or reverse engineer Software.
(f) You may not publish or provide the results of any benchmark or comparison tests run on Software to any third party without the prior written consent of Sun.
(h) Unless otherwise specified, if Software is delivered with embedded or bundled software that enables functionality of Software, you may not use such software on a stand-alone basis or use any portion of such software to interoperate with any program(s) other than Software.
RHEL doesn't have such things. Btw, Fedora and RHEL have pretty much the same EULA/license. The difference is the license for RHN (support).
> I don't believe that an upgrade path to RHEL exists from Fedora either.
And you cannot upgrade from Solaris (SPARC) to Solaris (x86), or Solaris 10 to Solaris Express, etc…
What do you mean?
You have errata with Fedora, and you can update from Fedora N to Fedora M (M>N).
Btw, where can I get errata for Solaris 10?
> while possible with RHEL3 to RHEL4 is actually not recommended by Redhat themselves
Call this honesty.
Hmmm, I didn't make any comments on the price of RHEL, but there you go. I would point out that the free URL you have above is a 30-day free evaluation. But I'm nitpicking.
I'm confused as to what you mean by proxy server – are you referring to Squid in RHEL or something else? Squid is available on the Solaris companion CD if you require it – no one is asking you to buy the Sun ONE proxy server.
You could also download it from Blastwave or Sunfreeware.
Upgrading from SPARC to x86 doesn't quite make sense, as they are different architectures. Upgrading from Solaris 10 to Solaris Express (the new Nevada train) can be done – errata, not sure, I haven't consulted the release notes.
anyway, theatre beckons, time to go watch things about Francis Bacon ( http://www.ak05.co.nz )
Can't you just use Webmin? It works on both Linux and Solaris.
Webmin is great… if you understand and plan for Webmin's one serious drawback: it can wreck your configuration. I don't mean to say that Webmin will wreck your config each and every time, though you should plan for that chance.
Also, if your admins don't know how to do something, they also might not know why they would want (or not want) to do it. That alone can cause problems with any config program.
> Upgrading from Solaris 10 to Solaris Express (the new Nevada train) can be done
Upgrading from Fedora 3 to RHEL can be done (but, like Solaris, not supported).
> Upgrading from Fedora 3 to RHEL can be done (but, like Solaris, not supported).
Ah okay, Solaris Express and Fedora are pretty similar to each other in that way then.
Six months from now, I'm pretty sure I and most others will see Solaris perform about 25-50% better than Red Hat. Assuming the current pace of development, that is a likely future at least.
good job Sun…
If you are not familiar with UFS please refrain from commenting on it. It doesn’t lend you any credibility.
Please don't use that as cover for not knowing what you're talking about – and I'm talking generally there. Yep, I admitted I didn't know a lot about how UFS handles things, but I responded to the way you said it handled things. If what you wrote was crap, please let us know.
I never mentioned that was the only way UFS flushes to disk. You assumed I did.
What were you talking about then?
You and other Linux trolls decided to get into a pissing contest over Linux features.
Blah, blah, blah.
I don't see how any sane customer would choose RHEL over Solaris. Red Hat's rumblings about taking market share from Solaris are just wishful thinking.
Where've you been for the past five years!? They've already done it, which is why Sun has been in quite a bit of financial trouble.
Sun's money woes are not just the result of Linux "taking over the world", and anybody who thinks otherwise needs to take a look around. Cisco, if I remember correctly, took a hit as well, and they have survived despite competition from companies such as Extreme Networks.
What I think the Linux crowd tends to ignore is the following:
Seasoned administrators with years of experience with Sun hardware and software are readily available in many areas. This reduces training costs, because you only have to "brush up" on new skills based on changes to various products. An example of this is the 6320 disk array we have (Fibre Channel); managing it is similar to managing T3 arrays, using the same commands.
Now bring in a whole bunch of new hardware and a different operating system. Will you get "better" performance? I don't know. That will depend on how sharp your administrative staff is, and how quickly they adapt to the new environment. My educated guess is that it would take at least a year or more before you start to see any benefit from the changes, assuming you "rip out" an existing Sun environment (architecture independent) and "switch" to Linux. There are many not so subtle differences between the two operating systems. Just as I don't expect a Red Hat admin to totally understand Solaris, I don't expect a Solaris admin to totally understand Red Hat Linux.
I still say the acid test is whether or not a company that "dumps" Sun, or IBM, or HP continues to use Linux after five years (usually the timeframe for a hardware refresh). If they do and it works for them, great. My money is on "they will not", because the performance is not there, despite the numerous benchmarks being quoted. As I have also stated before, benchmarks and production workloads are entirely different.
I have RHEL 4 on one of my machines at home now for evaluation. Based on some of the numbers I am getting using iozone and checking system performance with sar, I am not impressed. Admittedly my system is not SCSI, but it definitely makes me wonder how our systems at work (RHEL 3 ES) would perform under similar tests on the SCSI hardware we have there.
[i]Please don't use that as cover for not knowing what you're talking about – and I'm talking generally there. Yep, I admitted I didn't know a lot about how UFS handles things, but I responded to the way you said it handled things. If what you wrote was crap, please let us know.[/i]
I know what I am talking about, and I know for a fact you don't.
I already let you know what you wrote was crap.
[i]What were you talking about then?[/i]
Blah Blah.
[i]Where've you been for the past five years!? They've already done it, which is why Sun has been in quite a bit of financial trouble.[/i]
Blah Blah.
David is the worst kind of Linux troll on OSNews; I would avoid further discussions with him. You will get nowhere.
btw, RHEL is free.
You can download it :
http://www.redhat.com/software/rhel/eval/
Sure, RHEL is free… my foot!
Free 30-day Evaluation Subscription
to Red Hat Enterprise Linux 4
“Sure, RHEL is free… my foot! ”
RHEL is NOT a software product but a subscription with SUPPORT.
NOBODY provides support for free.
OpenSolaris might be free; Solaris costs for support too.
[i]RHEL is NOT a software product but a subscription with SUPPORT.
NOBODY provides support for free.
OpenSolaris might be free; Solaris costs for support too.[/i]
That's not the point of the debate. The debate is: can a company download RHEL for free and use it to run their business with the 30-day evaluation license, regardless of support?
"That's not the point of the debate. The debate is: can a company download RHEL…"
Hello? A software subscription obviously cannot be used outside the SLA. If you want a product, go download CentOS(.org).
CentOS is similar to OpenSolaris.
Solaris with support is similar to RHEL.
Got the point?
Hello? A software subscription obviously cannot be used outside the SLA. If you want a product, go download CentOS(.org).
CentOS is similar to OpenSolaris.
Solaris with support is similar to RHEL.
Got the point?
Apparently you didn't. Solaris is free for commercial use as well, and a company can go and download and use Solaris 10 without an SLA. That makes it more free than RHEL, the whole point under debate.
"Apparently you didn't. Solaris is free for commercial use as well, and a company can go and download and use Solaris 10 without an SLA."
So is CentOS, which is exactly the same as RHEL if you just want the product and not support.
"So is CentOS, which is exactly the same as RHEL if you just want the product and not support."
It is not RHEL and not Red Hat branded. Sorry, you are way off here.
"It is not RHEL and not Red Hat branded. Sorry, you are way off here."
The software is the same. Just because they remove trademarked images doesn't mean I am way off.
raptor is the worst kind of Solaris troll on OSNews; I would avoid further discussions with him. You will get nowhere.
People, Solaris has always had great performance and scalability on SPARC. It's just better now.
Sun greatly boosted their x86 (and especially AMD64) performance so that they can compete better with Linux on low-end boxes. Yes children, low-end – we are considering the quad Opterons since they're so darn cheap compared to the other biggies we normally buy.
It is obvious that most people posting here have never worked in a serious enterprise environment, and I know I will get flamed for this, but it's true. Once you can appreciate the considerations enterprises have, then you can make informed decisions.
Pissing contests where one or the other is a bit faster mean nothing. You need to test them with YOUR workload. You also need to see if they have the reliability and support you need.
The new features in 10 are fantastic, and this is coming from someone that used to do Linux kernel development and has used pretty much every major OS out there.
Oh, it positively flies on SPARC.
I urge everyone to reserve judgement until ZFS comes out.
Linux zealots: you need to give it a chance.
Solaris zealots: give RHEL (or SuSE) a chance.
They are both very decent products but Sun has my vote in the serious enterprise space.
One problem of course is that there is VERY little third-party vendor support for x86. We need NetBackup, but it's not coming any time soon.
D
Nice, couldn't come up with anything original, I presume. I don't think you know the meaning of troll.