Linked by Thom Holwerda on Thu 27th Sep 2012 19:36 UTC
Apple I bought a brand new iMac on Tuesday. I'm pretty sure this will come as a surprise to some, so I figured I might as well offer some background information about this choice - maybe it'll help other people who are also pondering what to buy as their next computer.
Thread beginning with comment 537011

"My point was that for most people, you're unlikely to hit the FLASH p/e limit even with 3000 p/e cycles."

I don't disagree - for many people with typical usage, flash will outlive the device - but it all depends on your write patterns. I stand by my opinion that if you regularly update very large datasets, the lifespan of these newer 3K P/E NAND chips can be consumed quickly. Mind you, I'm not trying to imply that everyone is at such high risk. The most troubling common use case that's been mentioned is using NAND for swap "just in case a badly behaving app exceeds available memory" - well, that badly behaving app could be trashing the NAND's cells for no good reason.
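To put rough numbers on that, here's a back-of-envelope sketch. Every figure in it - capacity, P/E rating, daily write volume, write amplification factor - is an illustrative assumption, not a spec from any particular drive:

```python
# Back-of-envelope lifespan estimate. Every figure here is an
# illustrative assumption, not a spec from any particular drive.

def drive_lifespan_days(capacity_gb, pe_cycles, daily_writes_gb,
                        write_amplification=2.0):
    """Days until the rated P/E budget is exhausted, assuming the
    controller spreads wear perfectly evenly across the drive."""
    total_budget_gb = capacity_gb * pe_cycles           # total writable GB
    effective_daily_gb = daily_writes_gb * write_amplification
    return total_budget_gb / effective_daily_gb

# A 256 GB drive rated for 3000 P/E cycles under light desktop use:
print(drive_lifespan_days(256, 3000, 20))     # 19200.0 days, roughly 53 years
# The same drive churning huge datasets or heavy swap:
print(drive_lifespan_days(256, 3000, 2000))   # 192.0 days, about 6 months
```

Even with crude assumptions, the asymmetry is the point: typical desktop use never comes close to the budget, while heavy churn can consume it in months.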

That might not seem so bad if you've only got a small swap file. However, the flash controller will be busy re-provisioning healthy, under-utilized cells (where static files reside) and replacing them with highly active swap pages. This is done to extend the overall average lifetime, but it implies even more writes than the swapping alone, and it puts the static data at additional risk by moving it to older cells. We mustn't overlook the controller's own writes for its page tables, either. Unlike an HDD whose heads click like crazy under this kind of load, with an SSD you might not even notice.
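For anyone curious what that re-provisioning costs, here's a toy simulation of static wear leveling. It's my own drastic simplification, not any real controller's algorithm, and the block count, threshold, and write counts are all made up:

```python
import random

# Toy static wear-leveling model - my own drastic simplification, not any
# real controller's algorithm; block count, threshold and write counts are
# made up. The host hammers a few "swap" blocks; the controller
# periodically remaps them onto lightly-worn blocks, which costs an extra
# write to migrate the static data living there.

class ToyFTL:
    def __init__(self, n_blocks, leveling=True, threshold=16):
        self.wear = [0] * n_blocks            # erase counts per physical block
        self.map = list(range(n_blocks))      # logical -> physical mapping
        self.leveling = leveling
        self.threshold = threshold
        self.extra_writes = 0                 # controller-induced, not host-requested

    def host_write(self, logical):
        phys = self.map[logical]
        self.wear[phys] += 1
        if not self.leveling:
            return
        cold = min(range(len(self.wear)), key=self.wear.__getitem__)
        if self.wear[phys] - self.wear[cold] > self.threshold:
            # Remap the hot logical block onto the cold physical block;
            # the static data sitting on the cold block must move first.
            other = self.map.index(cold)
            self.map[logical], self.map[other] = cold, phys
            self.wear[phys] += 1              # the migrated static data
            self.extra_writes += 1

random.seed(0)
plain = ToyFTL(64, leveling=False)
leveled = ToyFTL(64, leveling=True)
for _ in range(20000):
    block = random.randrange(4)               # host only ever writes 4 hot blocks
    plain.host_write(block)
    leveled.host_write(block)

print(max(plain.wear), max(leveled.wear), leveled.extra_writes)
```

Leveling caps the worst-case wear dramatically, but note extra_writes: every migration is an erase/program cycle the host never requested, landing on cells that were previously sitting quietly.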

I think people may not realise just how flaky these MLC NAND chips can be. Not to scare anyone, but just to provide some insight, here is a screen shot of the NAND dump for the last flash case I worked on, which was one of the more popular brands.

Here we read in 5 pages, each page being read 4 times and repeated on screen. (We're only seeing the first 100 or so bytes of the full ~6K page.) The colors highlight inconsistent read errors on each page (write errors aren't shown). This is perfectly normal in newer chips, where the controller's ECC is designed to compensate for on the order of 30 bit errors per 1KB. It's fine as long as the errors don't exceed what the ECC was engineered to handle.

Nevertheless, due to the probabilistic nature of some of these bits when read, it becomes non-trivial to deterministically calculate how many bits are bad by reading them once, as the controller does. It's therefore conceivable that the controller will write data to a page *thinking* it's still correctable via ECC, only to find the data actually contains more bit errors than its ECC can compensate for. Of course the engineers should anticipate this and mark the page bad even while there are correction bits to spare; however, unless the flash is heavily over-provisioned, the controller has to be conservative so as not to prematurely use up all its spares. It's a delicate balance resulting from the conflicting goals brought up earlier.
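The single-read undercounting is easy to see with a toy binomial model. All the numbers here - marginal-cell count, flip probability, ECC limit - are illustrative assumptions, not datasheet values:

```python
from math import comb

# Toy binomial model of flaky reads. The marginal-cell count, flip
# probability and ECC limit are illustrative assumptions, not datasheet
# values: n marginal cells each misread with probability p, and the ECC
# can correct up to `limit` bit errors per 1 KB page.
def p_read_correctable(n=40, p=0.6, limit=30):
    """Probability that a single read shows at most `limit` bit errors."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(limit + 1))

single = p_read_correctable()    # ~0.98: the controller's one check read passes...
many = 1 - single ** 10000       # ...but over thousands of reads, exceeding the
                                 # ECC budget at least once is a near-certainty
print(f"single read correctable: {single:.4f}, eventual uncorrectable read: {many:.6f}")
```

So a page can look comfortably inside the ECC budget on the one read the controller performs, while being a near-certain uncorrectable error over its lifetime of reads.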

Anyway, I hope the main takeaway is: enjoy the real benefits of SSD, but please remember to keep a backup.

Reply Parent Score: 2

WereCatf:

I was thinking about this last night, and it would be an exceedingly interesting project to try to measure which kinds of workloads yield which kinds of results. It's easy enough to automate such testing under Linux, for example: all you need to do is collect enough information about the different kinds of workloads, accelerate them, and then, when the drive goes bust, do the math on how much real-world time it would've taken. I'm actually formulating a way of collecting said information right now, and how to apply it to a benchmark. Too bad I have no financial means of buying a bunch of SSDs and burning through them until they die.
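A minimal sketch of that accelerate-then-extrapolate idea might look like this. The device path and the real-world write rate are placeholder assumptions, and it should only ever be pointed at a sacrificial drive:

```python
import errno
import os

# Sketch of the accelerate-then-extrapolate approach. TARGET and the
# workload rate are placeholder assumptions - point this only at a
# sacrificial device you're prepared to destroy.
TARGET = "/dev/sdX"                   # hypothetical sacrificial SSD
BLOCK = 4 * 1024 * 1024               # 4 MiB per write
REAL_WORKLOAD_GB_PER_DAY = 20         # measured from the workload you care about

def burn_until_failure(target=TARGET, block_size=BLOCK):
    """Hammer the device with incompressible writes until it errors out;
    return the total bytes written before failure."""
    written = 0
    buf = os.urandom(block_size)      # random data defeats compressing controllers
    with open(target, "wb", buffering=0) as dev:
        while True:
            try:
                dev.write(buf)
                written += block_size
            except OSError as e:
                if e.errno == errno.ENOSPC:
                    dev.seek(0)       # reached end of device: wrap and keep burning
                else:
                    return written    # a genuine I/O error: the drive gave up

def extrapolate_days(total_bytes_written, gb_per_day=REAL_WORKLOAD_GB_PER_DAY):
    """Scale accelerated wear back to the real workload's write rate."""
    return total_bytes_written / (gb_per_day * 1e9)

# e.g. if the drive died after 768 TB written, a 20 GB/day workload
# would have taken about 38400 days (~105 years) to do the same damage:
print(extrapolate_days(768e12))
```

A real benchmark would of course replay the recorded workload's block sizes and access patterns rather than sequential 4 MiB writes, since the write pattern is exactly what's being studied here.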

Reply Parent Score: 2

Alfman:

Yeah, my geeky curiosity couldn't justify that experiment. "Donor disks" are useful for another purpose, though: reverse engineering the encoding/mapping algorithms used by a particular controller, which is immensely helpful when reconstructing sector data from a raw flash dump in a data recovery scenario.

But instead of talking about it I'll just drop this link.

Reply Parent Score: 2