Automating software testing allows you to run the same tests over a period of time, ensuring that you are really comparing apples to apples and oranges to oranges. In this article, Linux Test Project team members share their methodology and rationale, as well as the scripts and tools they use to stress-test the Linux kernel.
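The core idea is easy to sketch: run an identical battery of tests against every kernel build and keep timestamped, comparable logs. A rough illustration of that loop in Python follows; the test script names and log layout are invented for illustration and are not the LTP's actual tooling.

```python
#!/usr/bin/env python3
"""Toy harness: run the same battery of tests against each kernel build
and keep timestamped logs, so results from different builds stay
comparable. The test scripts named below are placeholders, not the
actual LTP suite."""
import subprocess
import time
from pathlib import Path

TESTS = ["./stress_fs.sh", "./stress_mm.sh", "./stress_sched.sh"]  # hypothetical
LOG_DIR = Path("results")

def run_battery(kernel_version: str) -> None:
    LOG_DIR.mkdir(exist_ok=True)
    log = LOG_DIR / f"{kernel_version}-{int(time.time())}.log"
    with log.open("w") as out:
        for test in TESTS:
            proc = subprocess.run(test, shell=True, capture_output=True, text=True)
            status = "PASS" if proc.returncode == 0 else f"FAIL({proc.returncode})"
            out.write(f"{test}\t{status}\n")

if __name__ == "__main__":
    # Same battery, different builds: that is what makes the
    # comparison apples to apples.
    run_battery("2.6.0-test9")
```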
A few years of this and I’m sure the Linux kernel will be the most solid thing on the block while still being highly functional and scalable. It’s already very stable, IMO, but I’m sure that, thanks to the nature of open source, it will become even more refined and perfected.
What’s described here is clearly test-lab work. That can be useful data for a big e-commerce company, but for ordinary home users it’s nothing more than an indicator of the potential of a Linux 2.6 system.
The guy in the street is more likely to be bothered by compiler bugs that leave a miscompiled driver broken, and thus useless, or by bad BIOS implementations causing weird lockups every now and then. There’s more flaky than rock-solid hardware out there, and the number of combinations of equipment, distros, program versions, and tools used to build them is practically infinite.
So a reliable kernel comes not from good software design or lab testing alone, but just as much from real-world experience: knowing what works and what gives varying results in the field (and avoiding the latter!). Your computer isn’t mine, and there are millions of different machines out there, none of them working absolutely flawlessly (maybe close, though).
I still prefer a full day of heavy 3D gaming as a stress test for newly built hardware/software configurations.
I think the stability/usability of the Linux kernel could be improved further if someone put together something resembling Microsoft’s Windows WHQL tests for Linux drivers.
As if WHQL certification ever meant anything.
For Matrox drivers, for example, it just meant “official” drivers took forever to be released, and usually I just downloaded the “latest” drivers, which were much faster and more stable.
WHQL means a lot, since it gives developers a standard way to test drivers. Even if they are unsigned, most released drivers were still tested with the WHQL test program.
Linux could definitely use some kind of standard test suite for its drivers which everybody can run. That would guarantee some “base level” of quality.
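To make the suggestion concrete, here is a minimal sketch in Python of what a published, run-anywhere driver checklist might look like. Every check name, the interface, and the module name are entirely hypothetical; this is only the shape of the idea, not a real certification suite.

```python
#!/usr/bin/env python3
"""Sketch of a standardized driver checklist: every driver goes through
the same published set of checks, so a pass means the same thing
everywhere. Every check and module name here is hypothetical."""
from typing import Callable, Dict

def check_loads_cleanly(module: str) -> bool:
    # A real suite would insmod/rmmod the module and scan dmesg for
    # oopses; this stub just stands in for that.
    return True

def check_survives_suspend(module: str) -> bool:
    # Likewise a stand-in for a real suspend/resume cycle.
    return True

CHECKLIST: Dict[str, Callable[[str], bool]] = {
    "loads and unloads cleanly": check_loads_cleanly,
    "survives suspend/resume": check_survives_suspend,
}

def certify(module: str) -> bool:
    """Run the whole checklist and report a single pass/fail verdict."""
    ok = True
    for name, check in CHECKLIST.items():
        passed = check(module)
        print(f"{module}: {name}: {'PASS' if passed else 'FAIL'}")
        ok = ok and passed
    return ok

if __name__ == "__main__":
    certify("example_driver")  # hypothetical module name
```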
It’s amazing to see the effort they are putting into trying to increase the ‘quality’ of the Linux kernel.
It seems that 2.6 is already very, very stable, and the fork to 2.7 should not be far off…
—
Paolo
http://www.cafeshops.com/paoloc
> How can you ever trust the stability of the Linux kernel
> if they have to make some “tests” (read: bias-laden
> opinion pieces) to say how “good” it is.
WTF? How can you measure how “good” a piece of software is?
—
Paolo
http://www.cafeshops.com/paoloc
> WTF? How can you measure how “good” a piece of software is?
I don’t know. But I don’t do those tests anyway. Usually people complain later that the test was done “wrong” in some way, so I guess the testers don’t know either.
> I don’t know. But I don’t do those tests anyway. Usually people complain later that the test was done “wrong” in some way, so I guess the testers don’t know either.
You are mistaken. Tests (done by the developers themselves, and by QA labs) are an integral part of finding real bugs and building software in just about any project today.
Any sizeable software project written without testing is basically going to be rubbish.
Egads, man! Testing, from unit to user to stress and regression, is the most important part of development (next to good requirements and specifications).
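To give one concrete flavor of that: a regression test pins down a bug after it’s fixed, so it can never silently return. A tiny Python example (both the helper function and the “earlier bug” are made up for illustration):

```python
import unittest

def parse_size(text: str) -> int:
    """Toy helper: parse '4k' / '2m' style size strings into bytes."""
    units = {"k": 1024, "m": 1024 ** 2}
    if text and text[-1].lower() in units:
        return int(text[:-1]) * units[text[-1].lower()]
    return int(text)

class ParseSizeRegression(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_size("512"), 512)

    def test_kilobyte_suffix(self):
        # Regression test pinning a (hypothetical) earlier bug where
        # "4k" was parsed as 4 instead of 4096.
        self.assertEqual(parse_size("4k"), 4096)

if __name__ == "__main__":
    unittest.main()
```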
Nice to see a regular battery of tests being updated and run against the kernel. I am still not sure, however, what they are measuring.
I guess just the progressive improvement of the “numbers”?
Mike
Why would you trust something that’s not been properly tested?
Perhaps my context wasn’t made clear (again, the enterprise argument). If I am a decision-making SA for a large corporation deciding whether or not to consider Linux in my environment, do I use an established, mature commercial Unix product with a proven track record in performance and stability, or a fledgling OS in the early stages of development where stability is in question, given the need for a special battery of tests to stress the system?
Are you just a janitor at IBM?
Does it matter? Do you have any idea how big IBM is? Do you think that maybe my opinions of Linux have nothing to do with my employment at IBM? When I was working as a contractor for the US Army with a kuwait.army.mil address, did it matter what I did for them? What you’re saying, indirectly, is that you make character judgments based on posters’ domain names. How inane.
So according to the article, Linux testing is up to each distro? Well, I thought the open-source crowd had everything formalized: bug fixes, testing, general quality control, etc…
How long has Linux been around, and they are only now getting around to formal testing procedures? This is really shameful and very embarrassing, to say the least.
How do you (the Linux community) expect us (users) to trust the quality, scalability, and reliability of Linux without an open-source, community-based, standards-based approach to testing? How can we be certain of the level of quality?
Apparently you have no idea.
The kernel (as well as all open-source programs) is very vigorously tested.
Shows how ignorant you are.
> Perhaps my context wasn’t made clear (again, the enterprise argument). If I am a decision-making SA for a large corporation deciding whether or not to consider Linux in my environment, do I use an established, mature commercial Unix product with a proven track record in performance and stability, or a fledgling OS in the early stages of development where stability is in question, given the need for a special battery of tests to stress the system?
Every operating system is stress-tested, regardless of its maturity.
Are you saying Sun, IBM, and HP don’t test?
> So according to the article, Linux testing is up to each distro? Well, I thought the open-source crowd had everything formalized: bug fixes, testing, general quality control, etc…
> How long has Linux been around, and they are only now getting around to formal testing procedures? This is really shameful and very embarrassing, to say the least.
> How do you (the Linux community) expect us (users) to trust the quality, scalability, and reliability of Linux without an open-source, community-based, standards-based approach to testing? How can we be certain of the level of quality?
Go away. I was trolling here first. And I don’t try to sound like I’ve got a stick up my behind.