Linked by Thom Holwerda on Thu 27th Sep 2012 19:36 UTC
I bought a brand new iMac on Tuesday. I'm pretty sure this will come as a surprise to some, so I figured I might as well offer some background information about this choice - maybe it'll help other people who are also pondering what to buy as their next computer.
Thread beginning with comment 536918
WereCatf
Member since:
2006-02-15

Were you surprised to see that much jitter in your graph?


The jitter is there because the disks are in use. There's slightly less jitter on the disks that aren't in use, but there is still plenty of it. Jitter is indeed perfectly normal and there is no way to avoid it -- it is simply a side-effect of the drives being mechanical objects.

My own drives do an average of 150MB/s at the edges, which would theoretically yield 300MB/s bandwidth in a RAID. I'm guessing your disks could be slower due to a lower bit density and/or fewer platters. Or maybe your RAID itself is a bottleneck?


They do around 125MB/s individually, but I can't test them individually now unless I break up the RAID, and that would mean relocating all the data somewhere else and all that. Not worth it.
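
Just to put rough numbers on it, here's a quick back-of-the-envelope sketch in Python -- the drive speeds are only the figures we've been throwing around, not actual measurements:

    # Theoretical RAID0 sequential throughput: stripes are read in parallel,
    # so bandwidth scales with the number of drives (ignoring controller,
    # bus and filesystem overhead).
    def raid0_bandwidth(per_drive_mb_s, drives):
        return per_drive_mb_s * drives

    print(raid0_bandwidth(150, 2))  # ~300 MB/s with your 150 MB/s drives
    print(raid0_bandwidth(125, 2))  # ~250 MB/s with my 125 MB/s drives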

With 15K disks, the bandwidth should double, assuming nothing else becomes a bottleneck. So in theory 500+MB/s would be achievable using a RAID of high-performance drives.


Aye, that is true. But 15K disks aren't cheap; you'd save money just by buying two el cheapo SSDs and setting them up for RAID mirroring so that if one dies the other one will still function. Replacing the broken one wouldn't cost much, and you'd get the benefit of less heat, less noise and lower seek times. Also, as I said, even 15K disks have seek times at around 2-3 milliseconds, whereas SSDs generally have something around 0.02ms.
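
To give an idea of how big that seek-time gap really is, here's a tiny sketch -- the millisecond figures are just the rough ones quoted above, not benchmarks:

    # Rough upper bound on random operations per second from access time
    # alone (ignores transfer time, queueing and command overhead).
    def max_iops(access_time_s):
        return 1.0 / access_time_s

    print(max_iops(0.003))    # ~333 ops/s for a ~3 ms 15K disk
    print(max_iops(0.00002))  # ~50000 ops/s for a ~0.02 ms SSD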

If these were packed in a 300MB archive, the OS could read everything into RAM easily within 3-4s.


That is how many liveUSB/CD systems work: they load the image into memory and only then execute its contents. This improves boot-up by several orders of magnitude, although it requires more memory.
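
Here's a rough sketch of why one big archive wins over thousands of small files -- the file count and seek time are made-up assumptions, only meant to show the shape of the problem:

    # Reading one contiguous archive vs. the same data scattered across many
    # small files, each costing a seek. All figures below are assumptions.
    seq_read_mb_s = 100.0   # sequential throughput of a typical HDD
    seek_time_s   = 0.010   # ~10 ms average seek
    total_mb      = 300.0
    file_count    = 5000    # hypothetical number of files read at boot

    archive_time   = total_mb / seq_read_mb_s
    scattered_time = total_mb / seq_read_mb_s + file_count * seek_time_s

    print(archive_time)     # ~3 s, in line with the 3-4 s estimate above
    print(scattered_time)   # ~53 s once every file costs a seek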

The OS would rebuild the archive automatically based on self-assessed dependencies... this is totally tangential to our discussion, sorry.


No need to be sorry, plus I agree with you. Alas, I am not aware of any OS that is actually capable of doing that. It also doesn't seem like a feature worth putting much effort into anymore, as flash-based storage media are here to stay. It's actually much more likely that in the future all mechanical drives will also incorporate some flash media for OS files and/or cache, which would completely eliminate the need for such an archive.

Reply Parent Score: 2

Alfman Member since:
2011-01-28

WereCatf,

"Aye, that is true. But 15K disks aren't cheap you'd save money just by buying two el cheapo SSDs and setting them up for RAID mirroring so that if one dies the other one will still function."

I'd personally be worried about common failure modes of SSDs in a RAID. It would be unusual for HDD components to die at the same time, but with SSDs the cells could exhaust their lifetimes in parallel, especially if they're mirroring the exact same writing patterns. Once uncorrectable data errors start appearing on one, it's very likely that severe data errors are already occurring on the other too. Maybe combining an old SSD with a new one could help avoid simultaneous failure?
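
To make that concrete, here's a toy model -- the endurance and write-volume figures are invented, purely for illustration:

    # Toy model: in a RAID1 mirror both members receive every write, so their
    # wear advances in lockstep and they approach end-of-life together.
    # All figures below are invented for illustration only.
    endurance_tb_written = 75.0   # hypothetical rated endurance per SSD (TB)
    daily_writes_tb      = 0.05   # hypothetical host writes per day (TB)

    days_until_worn = endurance_tb_written / daily_writes_tb
    print(days_until_worn)  # ~1500 days -- and it's the same for BOTH mirror
                            # members, unlike mechanical failures, which tend
                            # to be independent events.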

From a technology point of view, I find it ironic that we'd be moving towards MLC and TLC NAND chips, which explicitly give up reliability in favour of doubling and tripling capacity, and then we'd turn around and use these less reliable chips in a RAID environment with the explicit intent of giving up capacity in favour of gaining reliability.

I'd be more tempted to use an SSD as a cache on top of the HDD. I know some manufacturers will sell a bundle like this for a high price, but I'm not sure if you can take a generic HDD and a generic SSD and set them up this way on Linux or Windows.


"Replacing the broken one wouldn't cost much, and you'd get the benefit of less heat, less noise and lower seek times. Also, as I said, even 15K disks have seek times at around 2-3 milliseconds, whereas SSDs generally have something around 0.02ms."

Of course ;)

Reply Parent Score: 2

WereCatf Member since:
2006-02-15

It would be unusual for HDD components to die at the same time, but with SSDs the cells could exhaust their lifetimes in parallel, especially if they're mirroring the exact same writing patterns.


That is possible, but buying from two different manufacturers would most likely prevent simultaneous breakdown. Especially so if the cells themselves come from different factories and/or batches.

I'd be more tempted to use an SSD as a cache on top of the HDD. I know some manufacturers will sell a bundle like this for a high price, but I'm not sure if you can take a generic HDD and a generic SSD and set them up this way on Linux or Windows.


You're in luck: http://bcache.evilpiepirate.org/ might interest you! It's a Linux-based implementation of a block-layer cache that, according to benchmarks, works exceedingly well; it's also easy to set up, and there are a lot of settings you can tweak for your particular uses. As the wiki implies, pairing bcache with e.g. a RAID6 setup overcomes the hefty penalty involved with random writes on RAID6, and you get the best of both worlds simultaneously -- of course, there are much simpler use cases for it, too, but this goes to show how powerful it can be when put to use.
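
If you want to try it, the basic setup is only a couple of commands; here's a rough sketch wrapping them in Python (the device names are placeholders and I'm going from memory, so double-check everything against the bcache wiki before running it):

    # Rough sketch of a basic bcache setup. /dev/md0 (the backing RAID array)
    # and /dev/sdX (the caching SSD) are placeholders -- adjust to your own
    # devices and verify the steps against the bcache wiki first.
    import subprocess

    def run(cmd):
        print("running:", cmd)
        subprocess.check_call(cmd, shell=True)

    run("make-bcache -B /dev/md0")   # format the backing device
    run("make-bcache -C /dev/sdX")   # format the SSD as a cache device
    run("echo /dev/md0 > /sys/fs/bcache/register")
    run("echo /dev/sdX > /sys/fs/bcache/register")
    # After registering, attach the cache set to the backing device by writing
    # its UUID to /sys/block/bcache0/bcache/attach, then mkfs and mount
    # /dev/bcache0 as usual.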

There are some caching solutions for Windows, too, but I can't remember them. I only remember that all the ones I saw were quite inferior to bcache, which is why I never bothered to bookmark them. If you find a high-quality one I'd be interested in a link myself.

Reply Parent Score: 2