Linked by Thom Holwerda on Tue 1st May 2007 18:09 UTC, submitted by ghen
Talk, Rumors, X Versus Y "There are many factors which affect Website availability and performance from end user perspective, namely ISP Internet connection, server location, server parameters, programming language, application architecture and implementation. One of the critical parameters is a selected Operational System (OS). Most users often need to select between Linux and Windows, two popular choices for web servers. By providing free monitoring service, we at collected large amount of data to perform a unique analytical research examining OS correlation with uptime and performance."
Interesting read
by Bit_Rapist on Tue 1st May 2007 18:32 UTC

I found this to be an interesting article. I like the fact that they state that downtime can equal money without making outlandish claims as to the amount.

Good read.

Now let the OS pissing matches begin in the forums. ;)

Reply Score: 3

RE: Interesting read
by butters on Tue 1st May 2007 19:16 UTC in reply to "Interesting read"

This is pretty much what I would have expected. I might not have guessed that NetBSD had the fastest average response times, but I would have put it at the top for average uptime in a blanket statistical survey. Interesting article, indeed, but the title is click-bait. NetBSD clearly dominated this analysis, probably due to its common usage in serving static pages. Linux and Windows are more commonly used in serving dynamic content, hence more software to patch and more ways to fail.

Reply Score: 5

OSX Explanation?
by TLB_TDR on Tue 1st May 2007 18:38 UTC

Any explanation for OSX's poor performance?
Not so many security updates (compared to Windows).
Was it OSX Server?

Anyway, nice 'true' BSD performance !

Edited 2007-05-01 18:40

Reply Score: 1

RE: OSX Explanation?
by kaiwai on Tue 1st May 2007 20:58 UTC in reply to "OSX Explanation?"

There are no kernel differences between the server and desktop editions; the differences are in the middleware layer.

As for the performance issues, it's been covered several times across several forums; it basically comes down to crappy implementation in the kernel of things such as malloc, threading, and numerous others.

The problem is that the kernel has been designed for 'teh snappy' first, which is the noticeable thing on the desktop, with throughput as a second priority.

Ultimately it's a combination of a design decision and a "we can't be stuffed fixing it" issue. With that being said, however, given the limited number of engineers they have available, they're limited in the scope of what they can do.

Reply Score: 4

RE[2]: OSX Explanation?
by TLB_TDR on Tue 1st May 2007 21:13 UTC in reply to "RE: OSX Explanation?"

The performance problem of OSX is well known, and I'm not surprised by those results.

I just wondered why the uptime is quite low.
My personal experience gives better results for OSX, but, as usual, the better you know an OS, the better uptime you get...

Edit: new security update for QuickTime, and yes, you need to reboot ;-)

Edited 2007-05-01 21:22

Reply Score: 1

RE[3]: OSX Explanation?
by Johann Chua on Wed 2nd May 2007 03:19 UTC in reply to "RE[2]: OSX Explanation?"

Seriously speaking, is there any need for QT on a server install of OS X?

Reply Score: 2

RE[4]: OSX Explanation?
by Kroc on Wed 2nd May 2007 09:29 UTC in reply to "RE[3]: OSX Explanation?"

Quicktime Streaming Server? Quicktime is a core part of the OS, handling pretty much all video and media processing.

Reply Score: 3

RE[4]: OSX Explanation?
by coolestuk on Thu 3rd May 2007 15:13 UTC in reply to "RE[3]: OSX Explanation?"

Just FYI -- on OS X, if there is no monitor attached, the windowing system is simply not loaded. This even applies when the OS is not OS X Server. So whatever hard disk space is taken up by QuickTime, it's not going to affect performance (unless, of course, that server is actually using QuickTime).

Reply Score: 1

RE: OSX Explanation?
by manix on Wed 2nd May 2007 00:25 UTC in reply to "OSX Explanation?"

As far as I know, the OS X kernel is a microkernel. Minix also has a microkernel.

The concept of the microkernel is great. Each time the kernel needs to access a device, it calls a user-mode device driver. This way a bad device driver shouldn't make the kernel crash.

However, this also means that each time the kernel accesses a device, it has to do what's called a context switch. For each context switch the system has to save the context (the register values) of the current process. This is very expensive in terms of CPU time. That's probably why Linux and the *BSD systems use traditional monolithic kernels and thus perform better.
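The cost of crossing the user/kernel boundary can be glimpsed even from Python. A full context switch into a user-mode driver process is considerably more expensive than a single system call, but the minimal sketch below (not from the article; absolute numbers vary wildly by machine and interpreter) shows the same kind of overhead: a call that stays in user space versus one that has to enter the kernel.

```python
import os
import time

def avg_cost_ns(fn, n=100_000):
    """Average cost of calling fn, in nanoseconds."""
    start = time.perf_counter_ns()
    for _ in range(n):
        fn()
    return (time.perf_counter_ns() - start) / n

def user_space_noop():
    pass  # never leaves user space

call_ns = avg_cost_ns(user_space_noop)
syscall_ns = avg_cost_ns(os.getpid)  # one user/kernel crossing per call

print(f"plain call: {call_ns:.0f} ns, getpid syscall: {syscall_ns:.0f} ns")
```

A microkernel pays a full process-to-process switch (save registers, change address space) on top of this per driver interaction, which is the overhead the comment is describing.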

Reply Score: 2

RE[2]: OSX Explanation?
by anduril on Wed 2nd May 2007 02:06 UTC in reply to "RE: OSX Explanation?"

Mach really isn't a microkernel. It's kind of a bastardization of microkernel and monolithic designs (I can pull up other links too if needed). The issue with OSX is more an issue with threading, which has been shown in more than a few comparative benchmarks. OSX was designed to be a user OS first, and a server OS a distant second.

Reply Score: 1

RE: OSX Explanation?
by anduril on Wed 2nd May 2007 02:08 UTC in reply to "OSX Explanation?"

What would be interesting to see is how many of the sites hosted on Macs are run by everyday users on normal home connections. That alone could cause the uptime issues (hardly the most stable connections). Also, how many hosting sites using Macs are running Mac minis or the like instead of Xserves?

Personally, I really wouldn't want to host on OSX because of the threading and performance issues it has vs. Linux/BSD/Solaris, and its somewhat slower patch times, but it is a great user OS, no doubt.

Reply Score: 1

Not useful for selecting OS
by unoengborg on Tue 1st May 2007 18:39 UTC

There are many factors that determine the uptime of a system. One of them is the selected OS, but I think things like admin skills, selected hardware, and hardware configuration are more important.

Reply Score: 3

RE: Not useful for selecting OS
by Filip on Tue 1st May 2007 18:51 UTC in reply to "Not useful for selecting OS"

Indeed, that's why they used these sample sizes. Unless there is a correlation between selected OS and the variables you name, the comparison is a useful one.

Now, I can imagine such correlations exist. But that only raises another question. Why would Windows attract incompetent admins? Or why would Windows be installed on flimsy hardware?

Reply Score: 5

OSNews'ed
by diegocg on Tue 1st May 2007 19:44 UTC

Is there a mirror somewhere?

Reply Score: 2

RE: OSNews'ed
by sbergman27 on Tue 1st May 2007 19:55 UTC in reply to "OSNews'ed"

Is there a mirror somewhere?

Shoulda used NetBSD! ;-)

Reply Score: 3

Wrong title!
by mefisto on Tue 1st May 2007 20:20 UTC

The only thing I noticed was: only NetBSD has stable performance!

Reply Score: 2

Sure Linux is better than Windows, but...
by amadensor on Tue 1st May 2007 20:29 UTC

NetBSD ROCKS!!! Too bad my site is on FreeBSD.

Reply Score: 1

Minix performance
by Invincible Cow on Tue 1st May 2007 20:48 UTC

I think the results for Minix are interesting. It is very new as a real-world operating system, yet its uptime comes close to that of the "big boys".

Also, the response time is interesting. It's very close to the others (in fact better than OS X!), and if they used ACK (the default compiler), the response time would probably beat Windows if GCC were used instead.

Reply Score: 1

RE: Minix performance
by Earl Colby pottinger on Tue 1st May 2007 22:35 UTC in reply to "Minix performance"

I come to just the opposite conclusion. Minix's downtime is huge compared to the others. Why? Updates?

Comparing Minix, which is always at the bottom of the list, to the next-best OS on the same list:

Week 14:
Mac OS X 96.70 = 3.3% downtime.
Minix 88.70 = 11.3% downtime or over 3x greater.

Week 15:
Mac OS X 97.82 = 2.18% downtime.
Minix 94.77 = 5.23% downtime or over 2x greater.

Week 16:
Solaris 97.88 = 2.12% downtime.
Minix 95.04 = 4.96% downtime or over 2x greater.

Minix has a long way to go.
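The downtime multiples quoted above can be re-derived directly from the availability percentages; a quick check (figures copied from this comment):

```python
# Weekly availability figures (percent) quoted above: the comparison OS
# versus Minix for each week.
weeks = [
    ("Week 14", "Mac OS X", 96.70, 88.70),
    ("Week 15", "Mac OS X", 97.82, 94.77),
    ("Week 16", "Solaris",  97.88, 95.04),
]

for week, rival, rival_up, minix_up in weeks:
    rival_down = 100 - rival_up   # percent downtime of the rival OS
    minix_down = 100 - minix_up   # percent downtime of Minix
    ratio = minix_down / rival_down
    print(f"{week}: Minix {minix_down:.2f}% down vs {rival} "
          f"{rival_down:.2f}% down -> {ratio:.1f}x greater")
```

This reproduces the stated multiples: roughly 3.4x for week 14 and a bit under 2.4x for weeks 15 and 16.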

Reply Score: 1

Too many unknowns to be very useful
by galvanash on Tue 1st May 2007 20:56 UTC

Hate to dis this article, as I'm sure they had good intentions, but these numbers are very shaky. Particularly the ones showing Net/OpenBSD and Solaris as having the top uptimes and lowest response times.

The fact of the matter is that the typical website served on those 3 OSes is VERY different from the typical site served on Linux/Windows/OSX...

The vast majority of BSD and Solaris sites are either completely static websites or mostly static websites with a bit of CGI or Perl thrown in. In a nutshell, most are old and simple. Solaris will of course have a significant number of JSP sites, but even still there are a LOT of old static websites still chugging along on older Sun hardware - and most of them are static.

Linux/Windows/OSX sites, however, are almost always fully dynamic, using PHP/ASP/ASP.NET/JSP/etc. These are of course just stereotypical observations, but even if they are wrong, the numbers given simply don't take this type of variable into account, and the results will obviously be heavily skewed because of it.

I'm not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt...

Reply Score: 5

by sanctus

I would like to see some data backing that up. My experience shows me quite different usage than yours, and I'm sure both are valid.

Im not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt...

Well, that goes for every benchmark/compare.

Reply Score: 1

by anduril

Any survey like this is going to have to be taken with a grain of salt. Unless they query each host for exact OS versions, patches, software running, pages served, etc. it'll never be "accurate." However, it serves as a rough estimation.

It's still valid though as a stepping stone (one of many) towards a determination of what OS to use.

Reply Score: 1

by jayson.knight

I normally don't agree with 'casual' observations such as the ones you make above, but in this case I'm inclined to agree.

Dynamic websites are almost always going to be put on a reputable web server, since they are usually some sort of revenue-generating entity (e-commerce, corporate sites, etc.) and thus will almost always be run on either IIS or Apache (Windows and Linux respectively; yes, I realize Apache runs on almost anything, but it's almost always Linux for anything critical, as that's the "native" platform).

As the previous poster said, with dynamic sites come issues of deploying A) updates to the site itself (which in the case of JSP/ASP.NET means a recompilation, which causes downtime) and B) updates to the underlying infrastructure (JVM/.NET updates, OS updates, etc.), which together should account for the ±0.5% differences seen in the charts.

Lastly, it's hard to take something this general too seriously. I think the reports should be drilled down further into the underlying technology used for the site (CGI/JSP/ASP.NET etc.) as well as the type of site. If it's a simple 100-page static HTML site, it'll run forever. Web applications (i.e. something that uses a database) are more complex, and therefore more prone to uptime swings.

Reply Score: 3

by Soulbender

"The vast majority of BSD and Solaris sites are either completely static websites or mostly static websites with a bit of CGI or perl thrown in. In a nutshell most are old and simple."

Really. And you know this how? If you're going to dis someone's research, at least do your own research better.

"Im not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt."

Just like you should with EVERY benchmark.

Reply Score: 4

the conclusions are barely useful
by ssamak on Wed 2nd May 2007 00:20 UTC

Their conclusion:
statistically Linux based servers provide better availability and response speed than Windows servers.

What variables did they leave out that can change this conclusion?
1. Whether a site is overloaded or underloaded by visits - this obviously affects response times, and it's not an OS issue if proper capacity planning is not in place.

2. What work does each site do? As someone mentioned before, dynamic vs. static pages.

3. What is a 'site'? A single machine? A load-balanced set of machines? An IP address? Etc.

4. Is it a critical website or a site that is used during business days only? How admins treat downtime will affect how much of it they allow.

They are comparing apples to oranges when they ask which OS is better suited to be a hosting OS without setting up the same scenario for each web server.

Personally, I found this a useless analysis.

Reply Score: 3

Interesting - but data too limited
by anomie on Wed 2nd May 2007 00:50 UTC

Actually I think this is an interesting analysis (keeping in mind - as others have mentioned - that many variables are unaccounted for), but I'd be interested in seeing a lot more than three weeks of data.

If, within that three week period, there were critical patches issued for [insert OS] that required a reboot, the numbers may be artificially low for that OS. If this could be averaged over twelve months, vendors with frequent critical patches would still take a beating, but that would be more legitimate than having the misfortune of issuing such a patch during their brief monitoring period.
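The skew from a short monitoring window is easy to quantify. A sketch with hypothetical numbers (the single 45-minute outage is an assumption for illustration, not a figure from the article):

```python
# Hypothetical: one critical patch forces 45 minutes of downtime.
# The same outage looks very different against different windows.

def availability(window_days, downtime_min):
    """Uptime percentage over a monitoring window of the given length."""
    total_min = window_days * 24 * 60
    return 100 * (1 - downtime_min / total_min)

print(f"3-week window:   {availability(21, 45):.3f}%")   # ~99.851%
print(f"12-month window: {availability(365, 45):.4f}%")  # ~99.9914%
```

One unlucky reboot costs roughly 0.15 percentage points of availability over three weeks but under 0.01 over a year, which is exactly why a longer averaging period would be fairer to vendors that happened to ship a critical patch during the survey.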

Reply Score: 1


How do they measure response time, exactly? That's as much a function of the distance and the number of hops between the monitoring location and the server as anything else, so if a given group of servers happened to be closer, it would give different results. There was no mention of whether they adjusted for that.
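The distance contribution alone can be bounded with a back-of-the-envelope calculation; a sketch using the common rule of thumb that signals travel through fiber at roughly two-thirds the speed of light (the figure and the distances are illustrative, not from the article):

```python
# Rule of thumb: ~200,000 km/s propagation speed in optical fiber.
# Hop processing and queuing delays only add to this lower bound.
FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km):
    """Physical lower bound on round-trip time, ignoring hops entirely."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# A monitoring station ~6000 km from a server can never see a response
# faster than about 60 ms, regardless of which OS the server runs.
print(f"{min_rtt_ms(6000):.0f} ms")
```

So unless the survey's probes were equidistant from all hosts, part of the measured "response time" difference is geography, not the operating system.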

Also, which language is used has little to do with stability: the quality of the implementation of the language and the software written in it is what matters for uptime and response time, and not the language itself, with exceptions for languages that aren't likely to be used for such a problem domain to start with. Besides, that claim that the language matters was not backed up at all by the article: it mentioned it at the top, but didn't do anything with that claim afterwards.

Also, uptime figures are partially affected by the number of patches and the time required to install them. A system that isn't updated often only truly means (by itself) that the system isn't updated often, not that there aren't many patches available for it. It may simply be that systems which aren't taken down for patches are left unpatched; how do you properly account for that without knowing what patches were put out and whether or not they were installed on the appropriate systems?

Also, for all they can tell, a system may have been down (non-responsive) for reasons unrelated to OS patches: it might have been taken down for database maintenance, backups, hard drive or other hardware failures, etc., and not all servers have the same level of hardware redundancy to fail over gracefully. So in effect a lot of these numbers have little practical meaning unless all that information is factored in correctly.

Reply Score: 2

Meaningless test
by ssa2204 on Wed 2nd May 2007 02:25 UTC

Altogether a worthless article, as the survey they conducted is lacking variables such as location, infrastructure, type of business, type of website hosted, etc. I think it is pretty clear that the average mainstream SMB is not going to be using NetBSD - more likely Windows or Linux.

The proper method to conduct such a test would of course be to test each server side by side under the same infrastructure conditions. In tests I have seen in the past, OSX did not do as badly, nor did IIS do as well.

In short, I would not rely on this so-called test any more than I would rely on Microsoft's own marketing department to help me decide. In the end, I think many professionals will end up using what they are comfortable with, so long as it does not have an impact.

Reply Score: 1

Odd selection
by Yoke on Wed 2nd May 2007 10:44 UTC

What an odd survey. It has almost as many Minix hosts as Solaris, NetBSD, and OpenBSD hosts put together.

Reply Score: 1

by xmv_ on Wed 2nd May 2007 14:33 UTC

It seems that they measure the uptime between reboots. In that case, a high uptime means you haven't upgraded your kernel. Given the number of security updates, Linux should be far behind - either they simply didn't apply updates, or they are using e.g. a 2.4 kernel...
Or, if by uptime they mean that the server always answers: well, my server runs for months, and besides kernel updates it hasn't had a single SECOND of downtime.
But then, so did my non-Linux servers.

Also, the response time measurement is rather vague...

Benchmarking and statistics are really not easy things to deal with ;)
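The distinction drawn here, kernel uptime versus the server always answering, is exactly what an external monitor can and cannot see. A minimal sketch of that computation, with made-up probe data:

```python
# An external monitor only sees whether each probe got an answer; it
# cannot distinguish a kernel-upgrade reboot from any other outage.

def availability_pct(probe_results):
    """Share of probes (True = server answered) as a percentage."""
    return 100 * sum(probe_results) / len(probe_results)

# One probe every 5 minutes for a week = 2016 probes; a quick reboot
# for a kernel update might cost only a couple of them.
week = [True] * 2014 + [False] * 2
print(f"{availability_pct(week):.2f}% available")  # ~99.90%
```

Under this kind of measurement, a regularly patched server can still score near 100%, while "uptime between reboots" would rank it poorly - the two metrics answer different questions.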

Reply Score: 1

by Nicram on Thu 3rd May 2007 09:44 UTC

It's all pathetic - where is VMS?!?!?!

Reply Score: 1

New forum
by ayeomans on Fri 4th May 2007 09:38 UTC
A forum has been started to debate this further:

Reply Score: 1