Linked by Thom Holwerda on Tue 6th Dec 2005 13:18 UTC, submitted by cpina
Linux "Although having this reputation, my question was: was a RAID 1 system too slow? Was it slower than not having any RAID? In fact, there are people that say that it is the other way round - these people say that having software RAID 1 is faster than having just one hard disk drive. In any case, who is right?"
Thread beginning with comment 69671
Brilliant article, original research
by AndrewZ on Wed 7th Dec 2005 14:53 UTC
AndrewZ Member since:
2005-11-15

I applaud the author for taking the initiative to perform experiments that the rest of us just talk about. These are very interesting conclusions. I never knew RAID1 in software was so slow.

I wish more people would make the effort to find things out for themselves instead of just carping.

Reply Score: 1

Robert Escue Member since:
2005-07-08

Unfortunately his tests only cover Linux; they do not look at Solstice DiskSuite and Solaris Volume Manager (Solaris), Veritas Volume Manager (multi-platform), or the volume managers used in AIX and HP-UX.

So his tests give only a general indication of the performance of the Linux volume manager on Debian, on his particular hardware.

Reply Parent Score: 1

nharring Member since:
2005-12-07

I can't applaud the author, since he obviously failed to understand what he was actually benchmarking. Read the detailed description of the tests the author invented.
For example: "read the copied files from /dev/null and return them to /dev/null." This doesn't test anything except the efficiency of your bit bucket. Either he explained it wrong or he doesn't understand what he's testing. Either way it damages credibility.
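For contrast, a read test that actually exercises the array would drop the page cache first and then stream real files into the bit bucket; a rough sketch (paths hypothetical; /proc/sys/vm/drop_caches requires root and a 2.6.16 or later kernel):

    sync                                      # flush dirty pages so the cache drop is clean
    echo 3 > /proc/sys/vm/drop_caches         # discard cached pages so reads hit the disks
    time cat /mnt/raid/linux.tar > /dev/null  # the measured time now reflects the array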
"Then, we unpack using tar -xf linux.tar inside the RAID (it is almost written because it is copied and the RAM of the system is enough to fit the file)." There's no such thing as "almost written". He means that its written to cache and scheduled for disk write, which will happen either when dirty buffers exceed a threshold or when a timer expires. With a write of this size that threshold will almost certainly be hit and the write will begin streaming to disk almost immediately.
/dev/null is also a bad data source for disk writes, since reads from it return end-of-file immediately and produce empty files rather than real writes. /dev/urandom would be a much better test source, since it's nonblocking and outputs data that can't be optimised away.
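A minimal sketch of the kind of write test that implies (file names hypothetical; conv=fdatasync is a GNU dd option that forces the data to disk before dd reports its timing):

    dd if=/dev/zero of=/mnt/raid/zeros.img bs=1M count=512 conv=fdatasync     # easily-optimised zeros
    dd if=/dev/urandom of=/mnt/raid/random.img bs=1M count=512 conv=fdatasync # incompressible data

One caveat: on some systems /dev/urandom is itself slower than the disks, so pre-generating a random file once and copying it avoids benchmarking the random number generator instead of the array.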

While the concept of the article was good, the author seems not to have fully understood what he was testing and therefore ended up putting out more flawed benchmark results.

Reply Parent Score: 1

cpina Member since:
2005-12-05

Hi,

Thanks for your suggestions/criticisms (and thanks to the other people who read the article and liked it too).

About /dev/null: it is a mistake in the article! Where I wrote /dev/null, I should have written /dev/zero, of course (in some places). dd from /dev/null to a file just gives a "new empty file" - my mistake. I will correct it now, or at least add a note at the top of the article.
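The difference is easy to demonstrate (file name hypothetical):

    dd if=/dev/null of=test.img bs=1M count=100   # /dev/null returns EOF at once, so test.img is 0 bytes
    dd if=/dev/zero of=test.img bs=1M count=100   # /dev/zero actually supplies 100 MB of zeroes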

About tar -xf linux.tar: some of it was written to disk, that is 100% sure. Maybe not the whole kernel tree, but some parts, yes (I checked using vmstat, etc.). Otherwise, how could the time depend on which RAID we are using?
In "real life" we will run tar -xf kernel.tar and it will take more or less time depending on which RAID we have. Maybe the explanation is not perfect; next time I will unmount the RAID device and add that time to the tar time (to be sure that everything is on disk), as in the sketch below.

Thanks a lot,

Reply Parent Score: 1