This article documents the test results and analysis of the Linux kernel and other core OS components, including everything from libraries and device drivers to file systems and networking, all under some fairly adverse conditions, and over lengthy durations. The IBM Linux Technology Center has just finished this comprehensive testing over a period of more than three months and shares the results of their LTP (Linux Test Project) testing with developerWorks readers.
I’d have been more interested if they had tested the latest stable kernel, 2.6.
And how could such comprehensive tests over a 3 month span be done on something which was released last week???
Hmmm, perhaps they could have tested the test versions instead then. I mean 2.4.19 is old. Way too old. I doubt potential enterprises will be interested in how 2.4.19 handles under load. 2.4.22 would have been more informative. Or better yet, the test versions of Linux, prior to its stable release.
Yep. That’s a great idea. Test beta software for stability. Brilliant! Seriously, though, 2.4 will be around for at least another couple of years. No enterprise in their right mind will move to 2.6 until at least a few patch releases have gone by. Enterprise users are generally conservative, waiting for the bugs in new software to be ironed out before switching. As for 2.4.19 vs 2.4.22, note that the changes between the two versions are relatively minor, and aside from back-ported security patches, most installations are probably running earlier 2.4 kernels anyway. Certainly, barring major regressions, 2.4.22 should be at least as stable as 2.4.19.
“Success rate: 95.12 percent
Zero critical system failures”
Not bad for the 2.4.19 kernel, considering the 2.4 branch is already at 2.4.23.
Did you try a test kernel 3 months ago? I did, and it was far from stable on every computer… It was even slower on mine ’till 2.6.0-test7.
I believe 2.4.22 was not available when they started the tests. It came out a little bit more than 3 months ago (08/25); the report is dated 12/17. Finally, I’ve heard the disk I/O performance with 2.4.20 was terrible, so I guess that’s why they finally decided to use 2.4.19. Then again, maybe they just used SuSE’s stock kernel…
“if they had tested the latest stable kernel, 2.6.”
Everybody is eagerly waiting for 2.6 stable but there is no such version on the horizon, just rumors.
“Test beta software for stability. Brilliant! Seriously, though, 2.4 will be around for at least another couple of years. No enterprise in their right mind will move to 2.6 until”
For serious production systems, 2.4.19 will be around for a couple of years.
But that does not mean everyone with a production system has to run the most proven 2.4 version. You could see a performance gain with the (pre-stable) 2.6 for some specific purposes (disk I/O, dual CPU, RAM, etc.) and get away with some minor 2.6 kernel errors. So it comes down to the specific task being deployed, IMO. The test only mentions heavy networking load. OTOH, the IBM/PPC Linux kernel is not the most heavily tweaked kernel out there.
Many people know that the workload for plain file/print serving is different from, say, database query and Java workloads.
As for memory management … Linux 2.4 is not the best kernel out there (FreeBSD is better).
A surprise comes from the swap percentage not growing any further after the first 21 days!
That’s a big surprise for me, personally; I would have expected the kernel to fail to clean up RAM properly without a reboot, and so make ever greater use of swap! Nice, SuSE.
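For what it’s worth, the swap percentage the article tracks is easy to compute yourself. Here’s a minimal sketch that parses the standard SwapTotal/SwapFree fields of Linux’s /proc/meminfo; the sample text below is made up for illustration, and on a live box you’d read the real file instead.

```python
def swap_used_percent(meminfo_text):
    """Return the percentage of swap in use, given /proc/meminfo contents."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key] = int(rest.split()[0])  # values are reported in kB
    total = fields.get("SwapTotal", 0)
    if total == 0:
        return 0.0  # no swap configured
    used = total - fields.get("SwapFree", 0)
    return 100.0 * used / total

# Hypothetical snapshot; on a real system use open("/proc/meminfo").read()
sample = "SwapTotal:  2097148 kB\nSwapFree:   1048574 kB\n"
print(f"{swap_used_percent(sample):.1f}% of swap in use")  # 50.0% of swap in use
```

Sampling this every few minutes over a multi-day run is all it takes to reproduce the kind of swap-usage curve the article reports.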
“It came out a little bit more than 3 months ago (08/25).”
Arr, I should go back to school. That’s 4 months ago. Anyway, perhaps it was unavailable when they started the tests… They didn’t say when they began.
Would’ve been better if they’d run the tests on x86 instead of PowerPC. After all, most Linux machines are x86, and more readers would have found the results relevant. *shrugs*
Umm, 2.6.0 stable was released a few days ago. So, not rumors, but facts.
“Would’ve been better if they’d run the tests on x86 instead of PowerPC. After all, most Linux machines are x86, and more readers would have found the results relevant. *shrugs*”
Note: it was a dual-processor 64-bit POWER4 machine running the 64-bit SLES distro. IBM’s aim was obviously to examine stability in a 64-bit enterprise-type environment.
What I missed in the article was any indication of context: how do these results compare with what would be expected from a Unix OS, for instance?
How do they compare with other OSes? QNX for instance?
These are stability numbers. They aren’t supposed to be compared to other operating systems. They show that Linux is stable over long periods of time, nothing more.
How does Linux compare on these same tests with HP-UX, Solaris, or AIX? This is information that an IT manager needs to make judgments about deploying Linux.
1) These results are meaningful. The server was put to 99% load for 60 days. The fact that Linux handled it without any kernel errors shows that it is stable. Testing on another UNIX OS would have been kind of stupid, because the testsuite was the *Linux* Test Project — presumably, the test cases exercise some Linux-specific functionality in addition to the general POSIX functionality.
2) This was supposed to be a serious test on serious hardware. They used a big Power4 system for the test, because that’s the kind of hardware you have when CPU load is pegged at 99% over a period of two months.
3) They used the stock SuSE kernel, as they say in the article. This isn’t just any old 2.4.19, it’s a heavily patched, SuSE-customized 2.4.19. SuSE and RedHat kernels are generally finely tuned for their workload, and contain backports and bug-fixes from more recent kernels, even from the 2.5.x series. On this kind of hardware, with this kind of high-end distro, you’re not going to be replacing the kernel for a (most likely unsupported) custom one.
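Point 1) is worth unpacking: the headline “success rate” from a long-duration driver like LTP’s is just a pass/fail tally accumulated over the whole run. A minimal sketch of that idea — the two test cases below are trivial stand-ins, since real LTP cases exercise syscalls, filesystems, and networking:

```python
import time

def run_suite(testcases, duration_s):
    """Loop over the test cases until the time budget is spent,
    tallying passes and failures like a long-duration stress driver."""
    passed = failed = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        for fn in testcases:
            try:
                fn()
                passed += 1
            except Exception:
                failed += 1
    total = passed + failed
    rate = 100.0 * passed / total if total else 0.0
    return passed, failed, rate

def always_ok():
    pass  # stand-in for a passing test case

def always_fails():
    raise RuntimeError("simulated test failure")  # stand-in for a failing case

# 50 ms here stands in for the article's 30/60/90-day runs.
p, f, rate = run_suite([always_ok, always_fails], duration_s=0.05)
print(f"passed={p} failed={f} success rate={rate:.2f}%")
```

With one always-passing and one always-failing case, the tally converges on exactly 50%, which is why a figure like 95.12% over months of runtime says something about how often individual cases flaked, while “zero critical system failures” is the separate claim that the kernel itself never went down.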
Does anyone here actually know someone who runs Linux on a p650 (or higher), and if so, what the hell do they do with it? I’ve always wondered; this seems like the right thread to ask. 🙂
Well, I have a friend with a 733MHz P3 that he uses as a file server (Slackware 9.0). He just slapped in a bigger hard drive (15GB -> 160GB) and he was good to go.
Also, I have an AMD K6-2 450MHz that I use as a mail server (Debian). It’s pretty sweet.
LOL! I don’t think he meant p650 as in P3-650MHz, but rather the p650, a mid-range IBM Power4 server. In response to Tyr: I don’t know anyone personally who uses a p650, but IBM has sold a couple of thousand of them so far, all of them with the capability to run Linux or AIX (or both at the same time). Generally, these things are used to run big databases or business back-end applications.
From our experience at work we found the first very stable 2.4.x kernel to be 2.4.19. Most of our machines run that, we have a few with 2.4.23 which do some firewire tasks. Everything before 2.4.19 was pretty squirrely, probably due to the VM, and 2.4.20 again had VM issues (I had problems with the OOM killer).
I’m not surprised that 2.4.19 was the kernel they chose. IBM is trying to sell to people who want machines that stay up forever with high load, not machines that have the absolute highest throughput.
I would be surprised to see 2.6.0 being widely deployed on high-traffic, high-reliability machines for at least 4-6 months. There’s going to need to be a lot of work done to test how well SMP machines are able to handle high I/O and memory bandwidth demands.
I wish I could play with this at work but I’m afraid we’ll probably be stuck using redhat 7.3 with kernel 2.4.19 for a very very long time since it currently ain’t a broke combination.
LOL, my bad. It did look like P3-650.
“1) These results are meaningful. The server was put to 99% load for 60 days. The fact that Linux handled it without any kernel errors shows that it is stable. Testing on another UNIX OS would have been kind of stupid, because the testsuite was the *Linux* Test Project — presumably, the test cases excercise some Linux-specific functionality in addition to the general POSIX functionality.”
They show it is stable, but does that tell us anything? What if all OSes are stable?
What level of stability is considered acceptable? What level is usual in server OSes? How challenging are these tests? Are there OSes that would fail them? Or Linux kernel versions that would fail them?
As it is, this is no more significant than the fact that the power supply on the server being tested was still working after 90 days.
“Does anyone here actually know someone who runs Linux on a p650 (or higher) and if so what the hell they do with it ? I’ve always wondered this seems like the right thread to ask it in :-)”
I run it on an Athlon 1800+; it’s a wonder for desktop usage. I wouldn’t code on Windows, save when I have to (.NET).
Currently we run AIX on our p620s; Oracle runs quite well on them.
We ran RedHat on one of the boxes a year ago. The RedHat operating system ran stably on the hardware as long as you didn’t try to use software RAID; with software RAID it went down if you looked at it wrong. Further, application support is actually better on AIX than Linux, and compiling applications under RedHat was a bit problematic. Support for 64-bit PowerPC was less than stellar at the time; it may have improved with their latest release. But I still don’t understand what Linux offers you over AIX currently. There is a much larger AIX community to rely on, and not many people are running Linux on PPC64. Additionally, applications still have to be compiled for PPC64 unless they come with the installation; if you want to update a particular package you will need to re-compile, and that wasn’t an easy task.
But if you like Linux better, or you have to have an application that is Linux-only, it does work – plus the latest RedHat release probably has a much better toolchain. Then again, perhaps I should have tried SuSE; it may have worked better. But the cost is not trivial for an enterprise edition of SuSE or RedHat – it costs more to use Linux on a pSeries system, since our eServer boxes came with AIX.
You can get an IBM-supported install of Linux with the pSeries servers these days. I would presume it’s no more expensive than getting the machine installed with AIX. As for why anyone would want to do so: IBM has indicated that Linux will eventually replace AIX on IBM hardware. It’ll be years down the road, of course, but IBM is moving in that direction.