“There are many factors which affect website availability and performance from the end user’s perspective, namely the ISP’s Internet connection, server location, server parameters, programming language, application architecture, and implementation. One of the critical parameters is the selected operating system (OS). Most users need to select between Linux and Windows, two popular choices for web servers. By providing a free monitoring service, we at mon.itor.us have collected a large amount of data to perform a unique analytical research project examining OS correlation with uptime and performance.”
Linux vs. Windows: OS Impact on Uptime, Speed
About The Author
Follow me on Twitter @thomholwerda
2007-05-01 7:16 pm by butters
This is pretty much what I would have expected. I might not have guessed that NetBSD had the fastest average response times, but I would have put it at the top for average uptime in a blanket statistical survey. Interesting article, indeed, but the title is click-bait. NetBSD clearly dominated this analysis, probably due to its common usage in serving static pages. Linux and Windows are more commonly used in serving dynamic content, hence more software to patch and more ways to fail.
Any explanation for OS X’s poor performance?
Not as many security updates (compared to Windows)?
Was it OS X Server?
Anyway, nice ‘true’ BSD performance!
Edited 2007-05-01 18:40
2007-05-01 8:58 pm by kaiwai
There are no kernel differences between the server and desktop editions, just the middleware layer.
As for the performance issues, it’s been covered several times over several forums; it basically comes down to crappy implementations in the kernel, such as malloc, threading, and numerous other things.
The problem is that the kernel has been designed for ‘teh snappy’ first, which is the noticeable thing on the desktop, with throughput as a second priority.
Ultimately it’s a combination of a design decision and a basic “we can’t be stuffed to fix it” attitude. That being said, however, with the limited number of engineers they have available, they’re limited in the scope of what they can do.
2007-05-01 9:13 pm by TLB_TDR
The performance problems of OS X are well known, and I’m not surprised by those results.
I just wondered why the uptime is quite low.
My personal experience gives better results for OS X but, as usual, the better you know an OS, the better uptime you get…
Edit : new security update for Quicktime, and yes, you need to reboot 😉
Edited 2007-05-01 21:22
2007-05-02 3:19 am by Johann Chua
Seriously speaking, is there any need for QT on a server install of OS X?
2007-05-02 9:29 am by Kroc
Quicktime Streaming Server? Quicktime is a core part of the OS, handling pretty much all video and media processing.
2007-05-03 3:13 pm by coolestuk
Just FYI — on OS X, if there is no monitor attached, the windowing system is simply not loaded. This applies even when the OS is not OS X Server. So whatever hard disk space is being taken up by Quicktime, it’s not going to affect performance (unless of course the server is actually using Quicktime).
2007-05-02 12:25 am by manix
As far as I know, the OS X kernel is a microkernel. Minix also has a microkernel.
The concept of the microkernel is great. Each time the kernel needs to access a device, it calls a user-mode device driver. This way a bad device driver shouldn’t make the kernel crash.
However, this also means that each time the kernel accesses a device, it has to do what’s called a context switch. For each context switch the system has to save the context (register values) of the current process. This is very expensive in terms of CPU time. That’s probably why Linux and the *BSD systems use traditional monolithic kernels and thus perform better.
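The extra cost of crossing a process boundary is easy to demonstrate from user space. Below is a rough sketch (plain Python, nothing OS X-specific; all names are made up for illustration) comparing a direct in-process call with the same request bounced through a pipe to a second process, which forces context switches on every round trip:

```python
# Sketch: compare a direct function call with an IPC round trip to a
# separate "driver" process. The pipe round trip forces the kernel to
# switch contexts, which is the overhead a microkernel pays per request.
import multiprocessing as mp
import time

def echo_server(conn):
    # Stand-in for a user-mode device driver: answer requests until told to stop.
    while True:
        msg = conn.recv()
        if msg is None:
            break
        conn.send(msg)

def direct_call(x):
    # Stand-in for an in-kernel (monolithic) code path: just a function call.
    return x

def bench(n=2000):
    parent, child = mp.Pipe()
    proc = mp.Process(target=echo_server, args=(child,))
    proc.start()

    t0 = time.perf_counter()
    for i in range(n):
        direct_call(i)
    in_process = time.perf_counter() - t0

    t0 = time.perf_counter()
    for i in range(n):
        parent.send(i)   # request to the "driver" process
        parent.recv()    # wait for its reply
    via_ipc = time.perf_counter() - t0

    parent.send(None)
    proc.join()
    return in_process, via_ipc

if __name__ == "__main__":
    in_process, via_ipc = bench()
    print(f"direct: {in_process:.5f}s, via IPC: {via_ipc:.5f}s")
```

On a typical system the IPC loop is far slower than the direct calls, which is the effect described above; real microkernels use much cheaper message passing than a Python pipe, so treat the gap as illustrative only.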
2007-05-02 2:06 am by anduril
Mach really isn’t a microkernel. It’s kind of a hybrid of both: http://www.roughlydrafted.com/0506.mk3.html (I can pull other links too if needed). The issue with OS X is more an issue with threading, which has been shown in more than a few comparative benchmarks. OS X was designed to be a user OS first, with server use a distant second.
2007-05-02 2:08 am by anduril
What would be interesting to see is how many of the sites hosted on Macs are hosted by your everyday Joe on a normal home connection. That alone could cause the uptime issues (hardly the most stable connections), as could how many hosting sites using Macs are running Mac minis or the like instead of Xserves.
Personally, I really wouldn’t want to host on OS X just because of the threading and performance issues it has vs. Linux/BSD/Solaris, and its somewhat slower patch turnaround, but it is a great user OS, no doubt.
There are many factors that determine the uptime of a system; one of them is the selected OS, but I think things like admin skills, selected hardware, and hardware configuration are more important.
2007-05-01 6:51 pm by Filip
Indeed, that’s why they used these sample sizes. Unless there is a correlation between selected OS and the variables you name, the comparison is a useful one.
Now, I can imagine such correlations exist. But that only raises another question. Why would Windows attract incompetent admins? Or why would Windows be installed on flimsy hardware?
Is there a mirror somewhere?
2007-05-01 7:55 pm by sbergman27
Is there a mirror somewhere?
Shoulda used NetBSD! 😉
The only thing I noticed was: only NetBSD has stable performance!
NetBSD ROCKS!!! Too bad my site is on FreeBSD.
I think the Minix results are interesting. It is very new as a real-world operating system, yet its uptime comes close to that of the “big boys”.
Also, the response time is interesting. It’s very close to the others (in fact better than OS X!), and if they used ACK (the default), the response time would probably be better than Windows’ if GCC were used instead.
2007-05-01 10:35 pm by Earl Colby pottinger
I come to just the opposite conclusion. Minix’s downtime is huge compared to the others. Why? Updates?
Compare Minix, which is always at the bottom of the list, to the next best OS on the same list:
Mac OS X 96.70 = 3.3% downtime.
Minix 88.70 = 11.3% downtime or over 3x greater.
Mac OS X 97.82 = 2.18% downtime.
Minix 94.77 = 5.23% downtime or over 2x greater.
Solaris 97.88 = 2.12% downtime.
Minix 95.04 = 4.96% downtime or over 2x greater.
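For what it’s worth, the poster’s arithmetic checks out; downtime is just 100 minus the reported availability percentage. A quick sketch using the figures quoted above:

```python
# Recompute the downtime ratios from the availability figures above.
def downtime(avail_pct):
    return round(100.0 - avail_pct, 2)

comparisons = [
    ("Mac OS X", 96.70, "Minix", 88.70),   # first comparison
    ("Mac OS X", 97.82, "Minix", 94.77),   # second comparison
    ("Solaris", 97.88, "Minix", 95.04),    # third comparison
]
for best_name, best, worst_name, worst in comparisons:
    ratio = downtime(worst) / downtime(best)
    print(f"{worst_name} {downtime(worst)}% vs {best_name} "
          f"{downtime(best)}%: {ratio:.1f}x more downtime")
```

This yields ratios of roughly 3.4x, 2.4x, and 2.3x, consistent with the “over 3x” and “over 2x” figures quoted.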
Minix has a long way to go.
Hate to dis this article as I am sure they had good intentions, but these numbers are very shaky. Particularly the ones showing Net/Open BSD and Solaris as having the top uptimes and lowest response times.
The fact of the matter is that the typical website served on those 3 OSs is VERY different from the typical site served on Linux/Windows/OSX…
The vast majority of BSD and Solaris sites are either completely static websites or mostly static websites with a bit of CGI or Perl thrown in. In a nutshell, most are old and simple. Solaris will of course have a significant number of JSP sites, but even so there are a lot of old static websites still chugging along on older Sun hardware, and most of them are static.
Linux/Windows/OSX sites, however, are almost always fully dynamic, using PHP/ASP/ASP.NET/JSP/etc. These are of course just stereotypical observations, but even if they are wrong, the numbers given simply don’t take this type of variable into account, and the results will obviously be heavily skewed because of it.
I’m not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt…
2007-05-01 10:03 pm by sanctus
I would like to see some data backing that up. My experience shows me quite different use than yours and I’m sure they’re both valid.
I’m not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt…
Well, that goes for every benchmark/compare.
2007-05-02 2:11 am by anduril
Any survey like this is going to have to be taken with a grain of salt. Unless they query each host for exact OS versions, patches, software running, pages served, etc. it’ll never be “accurate.” However, it serves as a rough estimation.
It’s still valid though as a stepping stone (one of many) towards a determination of what OS to use.
2007-05-02 3:25 am by jayson.knight
I normally don’t agree with ‘casual’ observations such as the ones you make above, but in this case I’m inclined to agree.
Dynamic websites are almost always going to be put on a reputable web server since they are usually some sort of revenue generating entity (e-commerce, corporate sites, etc) and thus will almost always be run on either IIS or Apache (Windows and Linux respectively. Yes I realize Apache runs on almost anything, but it’s almost always Linux for anything critical as that’s the “native” platform).
As the previous poster said, with dynamic sites come issues of deploying A) updates to the site itself (which in the case of JSP/ASP.NET means a recompilation, which causes downtime) and B) updates to the underlying infrastructure (JVM/.NET updates, OS updates, etc.), which together could account for the ±0.5% differences seen in the charts.
Lastly, it’s hard to take something this general too seriously. I think the reports should further be drilled down into the underlying technology being used for the site (CGI/JSP/ASP.NET etc) as well as the type of site. If it’s a simple 100 page static HTML site, it’ll run forever. Web applications (i.e. something that uses a database) are more complex, and therefore more prone to uptime swings.
2007-05-02 6:31 am by Soulbender
“The vast majority of BSD and Solaris sites are either completely static websites or mostly static websites with a bit of CGI or perl thrown in. In a nutshell most are old and simple.”
Really. And you know this how? If you’re going to dis someone’s research, at least do your own research better.
“I’m not saying that their numbers are completely without merit, but they should be taken with a few bucketloads of salt.”
Just like you should with EVERY benchmark.
statistically Linux based servers provide better availability and response speed than Windows servers.
What variables did they leave out that can change this conclusion?
1. Whether a site is overloaded or underloaded by visits: this obviously affects response times, and it’s not an OS issue if proper capacity planning is not in place.
2. What work does each site do? As someone mentioned before, dynamic vs. static pages.
3. What is a ‘site’? A single machine? A network-load-balanced set of machines? An IP address? Etc.
4. Is it a critical website or a site that is used during business days only? How admins treat downtime will affect how much of it they will allow.
They are comparing apples to oranges when they compare what OS is more suited to be a hosting OS, without setting up the same scenario for each web server.
Personally, I found this a useless analysis.
Actually I think this is an interesting analysis (keeping in mind – as others have mentioned – that many variables are unaccounted for), but I’d be interested in seeing a lot more than three weeks of data.
If, within that three week period, there were critical patches issued for [insert OS] that required a reboot, the numbers may be artificially low for that OS. If this could be averaged over twelve months, vendors with frequent critical patches would still take a beating, but that would be more legitimate than having the misfortune of issuing such a patch during their brief monitoring period.
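To put a number on that: a single reboot is a much bigger fraction of a three-week window than of a year. A quick sketch (the 30-minute reboot duration is an assumed figure, not from the article):

```python
# Uptime percentage for one outage of a given length over a measurement window.
def uptime_pct(downtime_min, window_days):
    total_min = window_days * 24 * 60
    return 100.0 * (total_min - downtime_min) / total_min

three_weeks = uptime_pct(30, 21)    # one 30-min patch reboot in 3 weeks
one_year = uptime_pct(30, 365)      # the same reboot averaged over a year
print(f"3 weeks: {three_weeks:.3f}%  1 year: {one_year:.4f}%")
```

The same reboot costs about 0.1% availability over three weeks but under 0.01% over a year, so one unlucky patch cycle can visibly move these charts.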
How do they measure response time, exactly? That’s as much a function of the distance in space as well as hops from the starting location and back as anything else, so if a given group of servers was closer, it would give different results. There was no mention of that at all, if they adapted for that or not.
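The article doesn’t explain its methodology, but a naive probe presumably looks something like the sketch below, which times a complete HTTP GET (against a toy local server here so it is self-contained). The measured figure bundles connection setup, network round trips, and server processing, which is exactly why distance and hop count can swamp any OS effect:

```python
# Time a full HTTP GET; everything on the wire counts toward "response time".
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SlowHandler(BaseHTTPRequestHandler):
    delay = 0.05  # simulated server-side processing time (seconds)

    def do_GET(self):
        time.sleep(self.delay)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the demo quiet

def response_time(url, timeout=10):
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return time.perf_counter() - t0

def demo():
    server = HTTPServer(("127.0.0.1", 0), SlowHandler)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    elapsed = response_time(f"http://127.0.0.1:{server.server_port}/")
    server.shutdown()
    return elapsed
```

Even on loopback the measurement exceeds the simulated 0.05 s of processing; over a WAN, the transport share grows with every hop between the monitoring station and the server.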
Also, which language is used has little to do with stability: the quality of the implementation of the language and the software written in it is what matters for uptime and response time, and not the language itself, with exceptions for languages that aren’t likely to be used for such a problem domain to start with. Besides, that claim that the language matters was not backed up at all by the article: it mentioned it at the top, but didn’t do anything with that claim afterwards.
Also uptime issues are partially affected by the number of patches and the time required to install them: a system that isn’t updated often only truly means (by itself) that the system isn’t updated often, not that it doesn’t have many potential patches that may be available. It may simply be that those systems that aren’t down for patches are left unpatched, so how do you properly account for that without knowing what patches were put out and whether or not they were installed for the appropriate systems? Also, for all they can tell, the system had downtime (non-responsive) for reasons unrelated to OS patches: they might have needed to take a system down for database maintenance, backup of other data, hard drive or other hardware failures, etc. and not all servers have the same level of hardware redundancy to fail over gracefully, so in effect a lot of these numbers have little practical meaning unless all that information is factored in correctly.
Altogether a worthless article, as the survey they conducted is lacking variables such as location, infrastructure, type of business, type of website hosted, etc. I think it is pretty clear that the average mainstream SMB is not going to be using NetBSD; more likely Windows or Linux.
The proper method to conduct such a test would of course be to test each server side by side under the same infrastructure conditions. In tests I have seen in the past, OS X did not do as badly, nor did IIS do as well.
In short, I would not rely on this so-called test any more than I would rely on Microsoft’s own marketing department to help me decide. In the end I think many professionals will end up using what they are comfortable with, so long as it will not have an impact.
What an odd survey. It has almost as many Minix hosts as Solaris, NetBSD, and OpenBSD hosts put together.
It seems that they measure the uptime between reboots…
In that case, if you have a high uptime it means you haven’t upgraded your kernel. Given the number of security updates, Linux should be far behind, or they simply didn’t apply updates, or they are using e.g. a 2.4 kernel…
Or if by uptime they mean that the server always answers: well, my server runs for months and besides kernel updates it hasn’t had a single SECOND of downtime.
But then, so did my non-Linux servers.
Also the response time measurement is rather vague…
Benchmarking and statistics are really not easy to deal with.
It’s all pathetic. Where is VMS?!?!?!
Mon.itor.us have started a forum to debate this further:
I found this to be an interesting article. I like the fact that they state that downtime can equal money without making outlandish claims as to the amount.
Now let the OS pissing matches begin in the forums.