Linked by theosib on Sun 14th Feb 2010 10:45 UTC
Linux

Recently, I bought a pair of those new Western Digital Caviar Green drives. These new drives represent a transitional point from 512-byte sectors to 4096-byte sectors. A number of articles have been published recently about this, explaining the benefits and some of the challenges that we'll be facing during this transition. Reportedly, Linux should be unaffected by some of the pitfalls of this transition, but my own experimentation has shown that Linux is just as vulnerable to the potential performance impact as Windows XP. Even though this issue has been known about for a long time, the basic Linux tools for partitioning and formatting drives have not caught up.
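For what it's worth, you can check how a drive advertises its sector sizes to the kernel through sysfs (this assumes a recent kernel, roughly 2.6.32 or later, which exposes these attributes; note that some early 4K drives report 512 for both values):

cat /sys/block/sdc/queue/physical_block_size   # 4096 on a drive that reports its true sector size
cat /sys/block/sdc/queue/logical_block_size    # usually still 512, kept for compatibility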

Alter your output buffer size
by pagerc on Sun 14th Feb 2010 15:12 UTC

So using cp is about as braindead as rm -rf /* for testing disk I/O. It's all about the block size that's read and written, which in the case of cp is one character at a time. Something like dd or tar would provide a better metric for streaming writes:

tar -cpf - some_path/ | tar -xpf - -C /path/to/final/destination
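To put rough numbers on the difference (a quick sketch; the paths are placeholders, and bash's time keyword is assumed since it times a whole pipeline):

# Drop the page cache between runs (needs root) so neither test gets a warm-cache advantage.
sync && echo 3 > /proc/sys/vm/drop_caches
time cp -a some_path/ /path/to/final/destination/
sync && echo 3 > /proc/sys/vm/drop_caches
time tar -cpf - some_path/ | tar -xpf - -C /path/to/final/destination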

Or you can use dd, which lets you slice and dice and adjust block sizes trivially; then you can write to a raw block device and see what it can do sans any filesystem crap.
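For instance, a single streaming write straight to the raw device looks like this (destructive, obviously; it assumes /dev/sdc is a scratch disk you can wipe, and GNU dd's oflag=direct bypasses the page cache so you measure the disk rather than RAM):

# Writes 1 GiB of zeros over /dev/sdc and prints the achieved throughput.
dd if=/dev/zero of=/dev/sdc bs=4096 count=256k oflag=direct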

An interesting test would use block sizes of 512, 768, 1024, 2048, 4096, 8192, and 16384. The 768 run shows what an odd (non-power-of-two) block size does, and the rest show performance one and two power-of-two steps above and below the new 4096-byte sector size. Just to show how braindead a block size of 1 is, I'm throwing that in too.

# WARNING: this writes directly over /dev/sdc and will destroy whatever is on it.
for BS in 1 512 768 1024 2048 4096 8192 16384 ; do
    for SEEK in 0 1 2 4 8 ; do
        # seek offsets the write by $SEEK output blocks, to test alignment
        dd if=/dev/zero of=/dev/sdc bs=${BS} seek=${SEEK} count=1024k
    done
done
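And if you'd rather collect the numbers than eyeball dd's output, GNU dd prints its summary line on stderr, so you can grab the rate from there (a sketch assuming GNU dd, with a smaller count to keep the runs short):

for BS in 512 768 4096 ; do
    for SEEK in 0 1 ; do
        RATE=$(dd if=/dev/zero of=/dev/sdc bs=${BS} seek=${SEEK} count=64k 2>&1 | tail -n 1)
        echo "bs=${BS} seek=${SEEK}: ${RATE}"
    done
done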
