Linked by Thom Holwerda on Thu 27th Sep 2012 19:36 UTC
I bought a brand new iMac on Tuesday. I'm pretty sure this will come as a surprise to some, so I figured I might as well offer some background information about this choice - maybe it'll help other people who are also pondering what to buy as their next computer.
Alfman Member since:
2011-01-28

WereCatf,

"Also, as you can see even with two 7200RPM drives in striped RAID the system barely manages 200MB/s.

Does this answer your question? Also, I expect atleast a thank you :/ "

Yes, thank you for performing the experiment!

Were you surprised to see that much jitter in your graph? For all I know it could be normal, but I expected the HDDs to do better. My own drives average 150MB/s at their outer edges, which would theoretically yield 300MB/s in a RAID. I'm guessing your disks are slower due to a lower bit density and/or fewer platters. Or maybe the RAID itself is the bottleneck?


With 15K RPM disks the bandwidth should roughly double, assuming no other component bottlenecks, so in theory 500+MB/s would be achievable with a RAID of high-performance drives.
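As a back-of-the-envelope check (ideal striping, ignoring controller and bus overhead, so real numbers will land lower):

```python
# Ideal RAID-0 reads: striped drives simply add their bandwidth together.
def raid0_throughput(per_drive_mb_s, n_drives):
    return per_drive_mb_s * n_drives

print(raid0_throughput(150, 2))   # my drives: ~300MB/s in theory
print(raid0_throughput(125, 2))   # your drives: ~250MB/s ideal, vs ~200MB/s measured

# A 15K RPM drive moves data roughly 15000/7200 ~ 2x faster than a
# 7200RPM one at the same areal density, hence the 500+MB/s estimate:
print(raid0_throughput(150 * 15000 / 7200, 2))   # ~625MB/s
```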

Of course, it's futile for an HDD to compete with an SSD on seek latency, which in my opinion is THE killer feature of SSDs; HDD spindle speed is an inherent bottleneck. That said, I've often blamed operating systems for over-relying on small files during bootup. If these were packed into a 300MB archive, the OS could read everything into RAM easily within 3-4s. The OS would rebuild the archive automatically based on self-assessed dependencies... this is totally tangential to our discussion, sorry.
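Rough numbers, purely as an illustration (the file count and rates below are assumptions, not measurements):

```python
# One long sequential read vs. thousands of scattered small-file reads.
archive_mb    = 300    # hypothetical packed boot archive
seq_mb_s      = 100    # conservative single-HDD sequential rate
n_small_files = 3000   # assumed number of files touched during bootup
seek_ms       = 10     # typical 7200RPM access time per scattered file

print(archive_mb / seq_mb_s)            # ~3s to slurp the whole archive
print(n_small_files * seek_ms / 1000)   # ~30s lost to seeks alone
```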

I'll close by acknowledging your point, which is that SSDs are a win on performance.

Score: 2

WereCatf Member since:
2006-02-15

"Were you surprised to see that much jitter in your graph?"


The jitter is there because the disks are in use; the ones that aren't in use show slightly less of it, but still plenty. Jitter is indeed perfectly normal and there is no way to avoid it -- it is simply a side effect of the drives being mechanical devices.

"My own drives average 150MB/s at their outer edges, which would theoretically yield 300MB/s in a RAID. I'm guessing your disks are slower due to a lower bit density and/or fewer platters. Or maybe the RAID itself is the bottleneck?"


They do around 125MB/s individually, but I can't test that now without breaking up the RAID, and that would mean relocating all the data somewhere else first. Not worth it.

"With 15K RPM disks the bandwidth should roughly double, assuming no other component bottlenecks, so in theory 500+MB/s would be achievable with a RAID of high-performance drives."


Aye, that is true. But 15K disks aren't cheap; you'd save money just by buying two el cheapo SSDs and setting them up for RAID mirroring so that if one dies the other will still function. Replacing the broken one wouldn't cost much, and you'd get the benefit of less heat, less noise and lower seek times. Also, as I said, even 15K disks have seek times of around 2-3 milliseconds, whereas SSDs generally manage around 0.02ms.
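Turn those latencies into random reads per second and the gap is obvious. A crude upper bound, ignoring queueing and transfer time:

```python
# Ceiling on random reads if every read pays the full access latency.
def max_random_iops(latency_ms):
    return 1000.0 / latency_ms

print(max_random_iops(12.0))   # ~83 IOPS: ordinary 7200RPM drive
print(max_random_iops(2.5))    # ~400 IOPS: 15K RPM drive
print(max_random_iops(0.02))   # 50000 IOPS: SSD, latency-bound
```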

"If these were packed into a 300MB archive, the OS could read everything into RAM easily within 3-4s."


That is how many live USB/CD systems work: they load the image into memory and only then execute its contents. That one sequential read makes boot-up dramatically faster, although it requires more memory.
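In rough Python terms the trick is nothing more than this (the image path is hypothetical):

```python
import io

# Pull the whole image off the device in one long sequential pass...
with open("/path/to/live-image.squashfs", "rb") as f:   # hypothetical path
    image = io.BytesIO(f.read())

# ...after which all the scattered small reads that bootup performs
# are served from RAM, so device seek latency simply disappears.
image.seek(4096 * 1000)   # jump anywhere: no mechanical seek involved
block = image.read(4096)
```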

"The OS would rebuild the archive automatically based on self-assessed dependencies... this is totally tangential to our discussion, sorry."


No need to be sorry, and I agree with you. Alas, I am not aware of any OS that is actually capable of doing that. It also doesn't seem like a feature worth putting much effort into anymore, as flash-based storage is here to stay. It's much more likely that future mechanical drives will incorporate some flash memory for OS files and/or cache, which would completely eliminate the need for such an archive.

Score: 2

Alfman Member since:
2011-01-28

WereCatf,

"Aye, that is true. But 15K disks aren't cheap you'd save money just by buying two el cheapo SSDs and setting them up for RAID mirroring so that if one dies the other one will still function."

I'd personally be worried about common failure modes of SSDs in a RAID. It would be unusual for HDD components to die at the same time, but with SSDs the cells could exhaust their lifetimes in parallel, especially if they're mirroring the exact same write patterns. Once uncorrectable data errors start appearing on one, it's very likely that severe data errors are already occurring on the other too. Maybe combining an old SSD with a new one could help avoid simultaneous failure?
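A toy simulation of that worry, with made-up endurance and failure-rate numbers just to illustrate the correlation:

```python
import random
random.seed(1)

TRIALS = 10000

def close_fraction(gaps, window=0.05):
    # How often does the surviving drive die within 5% of the first?
    return sum(g < window for g in gaps) / len(gaps)

# Mirrored SSDs see an identical write stream, so only manufacturing
# variation (assumed ~5% here) separates their wear-out points.
ssd_gaps = []
for _ in range(TRIALS):
    e1 = random.gauss(3000, 150)   # assumed P/E endurance, drive 1
    e2 = random.gauss(3000, 150)   # assumed P/E endurance, drive 2
    ssd_gaps.append(abs(e1 - e2) / min(e1, e2))

# Mechanical failures modelled as independent (assumed 5-year mean).
hdd_gaps = []
for _ in range(TRIALS):
    t1 = random.expovariate(1 / 5.0)
    t2 = random.expovariate(1 / 5.0)
    hdd_gaps.append(abs(t1 - t2) / min(t1, t2))

print(close_fraction(ssd_gaps))   # ~0.5: the mirror often dies almost together
print(close_fraction(hdd_gaps))   # ~0.02: HDD failures rarely coincide
```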

From a technology point of view, I find it ironic that we'd be moving towards MLC and TLC NAND chips, which explicitly give up reliability in favour of doubling and tripling capacity, and then turn around and use these less reliable chips in a RAID environment with the explicit intent of giving up capacity in favour of gaining reliability.

I'd be more tempted to use an SSD as a cache on top of the HDD. I know some manufacturers will sell a bundle like this at a premium, but I'm not sure whether you can take a generic HDD and a generic SSD and set them up that way on Linux or Windows.
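Conceptually it's just an LRU cache of hot blocks in front of the slow disk. A toy sketch of the logic, not a real block-device driver:

```python
from collections import OrderedDict

class HybridDisk:
    """Toy model: an LRU cache of hot blocks (the 'SSD') sitting in
    front of a slow backing store (the 'HDD'). Purely illustrative."""

    BLOCK = 4096

    def __init__(self, cache_blocks=1024):
        self.cache = OrderedDict()   # block_no -> data, kept in LRU order
        self.cache_blocks = cache_blocks
        self.hdd = {}                # stand-in for the platters

    def read(self, block_no):
        if block_no in self.cache:                 # "SSD" hit: ~0.02ms
            self.cache.move_to_end(block_no)
            return self.cache[block_no]
        data = self.hdd.get(block_no, b"\0" * self.BLOCK)   # "HDD" miss: ~10ms
        self.cache[block_no] = data                # promote the hot block
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)         # evict the coldest block
        return data
```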


"Replacing the broken one wouldn't cost much, and you'd get the benefit of less heat, less noise and lower seek times. Also, as I said, even 15K disks have seek times at around 2-3 milliseconds, whereas SSDs generally have something around 0.02ms."

Of course ;)

Score: 2