Linked by Thom Holwerda on Mon 18th Jan 2010 16:57 UTC, submitted by wanker90210
Hardware, Embedded Systems ACM's latest journal had an interesting article about RAID which suggested it might be time for triple parity raid. "How much longer will current RAID techniques persevere? The RAID levels were codified in the late 1980s; double-parity RAID, known as RAID-6, is the current standard for high-availability, space-efficient storage. The incredible growth of hard-drive capacities, however, could impose serious limitations on the reliability even of RAID-6 systems. Recent trends in hard drives show that triple-parity RAID must soon become pervasive."
Permalink for comment 404962
by gilboa on Tue 19th Jan 2010 17:35 UTC in reply to "RE[4]: RAID Z"

A. I wasn't trolling. I have nothing against ZFS or OpenSolaris (I use them both... not in production, though). I am -very- much against the people who automatically post a "use ZFS instead" message in response to each and every storage-related news piece.

B. As for your question, I see ZFS' lack of layer separation as a major issue due to the following problems:

1. We have >30 years of experience dealing with FS and volume manager errors. In essence, even if your FS has completely screwed up huge chunks of its tables (no matter how many copies of said tables the FS stores), in most cases the data is still salvageable.

2. We have >20 years of experience in getting screwed by RAID errors. If something goes wrong at the array level and you somehow lose the array data/parity mapping or parts of it, the data is doomed. Period.

3. As such, I'm less afraid of trying new FSes: ext3, ext4, btrfs, ZFS. As long as I can access the on-disk data when everything goes to hell, I'm willing to take the chance. (Of course, as long as I don't get silent corruption that goes undetected for years...)

4. On the other hand, I want my RAID to be tested and tested again, and I want it to use as little code as humanly possible. (E.g. Linux SW RAID [1])

5. ZFS is relatively new, and it combines three layers that I personally prefer to be separate. A simple bug in one of the bottom layers (say, the pool management layer) can spell an end to your data in an unrecoverable way. And with a file system as complex and relatively immature as ZFS (compared to, say, ext2/3 or NTFS), this is a -major- flaw.
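To illustrate point 2 above: a single-parity scheme like RAID-5 stores the XOR of the data blocks, so any one missing block can be rebuilt, but only while you still know which physical block holds data and which holds parity. A minimal Python sketch (block contents and the 4-disk layout are made up for illustration):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks on a hypothetical 4-disk RAID-5 stripe.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Disk 1 dies: rebuild its block from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]

# But if the data/parity mapping is lost, you no longer know which
# surviving blocks to XOR together -- the redundancy is useless
# without the map, which is exactly why array-level metadata errors
# are fatal.
```

The same arithmetic scales to a whole array, which is why losing the array's layout metadata dooms the data even though every data bit may still be physically intact on disk.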
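On the silent-corruption worry in point 3: one argument ZFS advocates make is that its end-to-end checksumming catches exactly this; on a traditional FS you would have to layer something similar on top yourself. A rough Python sketch of the idea (the block contents here are hypothetical):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of a block -- ZFS-style end-to-end metadata."""
    return hashlib.sha256(data).hexdigest()

# Record a checksum when the block is written...
block = b"important payload"
recorded = checksum(block)

# ...and verify it on every read. A traditional FS hands back whatever
# the disk returned; without a check like this, one silently flipped
# character goes unnoticed for years.
corrupted = b"important pay1oad"
assert checksum(block) == recorded
assert checksum(corrupted) != recorded
```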

C. Last but not least, while ZFS is -a- solution for improving the resiliency of RAID arrays, in my view the OS lock-in, the patent issues (which prevent other OSes from implementing ZFS), and the less-than-ideal implementation make ZFS far from an ideal solution.

- Gilboa
[1] $ cat /usr/src/kernels/linux/drivers/md/*raid*.[ch] | wc -l

Edited 2010-01-19 17:47 UTC
