Linked by Thom Holwerda on Mon 18th Jan 2010 16:57 UTC, submitted by wanker90210
Hardware, Embedded Systems
ACM's latest journal had an interesting article about RAID that suggested it might be time for triple-parity RAID. "How much longer will current RAID techniques persevere? The RAID levels were codified in the late 1980s; double-parity RAID, known as RAID-6, is the current standard for high-availability, space-efficient storage. The incredible growth of hard-drive capacities, however, could impose serious limitations on the reliability even of RAID-6 systems. Recent trends in hard drives show that triple-parity RAID must soon become pervasive."
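
As a rough illustration of the parity progression (the device names below are hypothetical), double parity is what Linux md calls level 6; a six-drive RAID-6 array survives any two simultaneous drive failures:

$ mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
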
Thread beginning with comment 404924
RE[4]: RAID Z
by computeruser on Tue 19th Jan 2010 15:18 UTC in reply to "RE[3]: RAID Z"
Member since: 2009-07-21

"It cannot replace hardware RAID controllers."

For some applications, ZFS can be a better (cheaper) fit than hardware RAID.

"Some people (like me) have major misgivings about the lack of separation between the disk level (e.g., hardware/software RAID) and the FS layer (e.g., NTFS, ext4, etc.)."

If you don't like it, then why don't you explain why you think it is a bad idea instead of trolling?

"And last but not least, *Solaris is also an OS. And you don't select an OS just because it has a shiny file system."

No one said "Everyone should use Solaris because it has ZFS." The first post mentioning ZFS simply noted that ZFS now has triple parity in OpenSolaris.
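
For reference, triple parity in OpenSolaris is exposed as the raidz3 vdev type; a minimal sketch (the pool and disk names are hypothetical) of creating such a pool:

$ zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0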

Reply Parent Score: 1

RE[5]: RAID Z
by gilboa on Tue 19th Jan 2010 17:35 UTC in reply to "RE[4]: RAID Z"
Member since: 2005-07-06

A. I wasn't trolling. I have nothing against ZFS or OpenSolaris (I use them both... not in production, though). I am -very- much against the people who automatically post "use ZFS instead" messages as a response to each and every storage-related news piece.

B. As for your question, I see ZFS' lack of layer separation as a major issue due to the following problems:

1. We have >30 years of experience dealing with FS and volume manager errors. In essence, even if your FS completely screwed up huge chunks of its tables (no matter how many copies of said tables the FS stores), in most cases the data is still salvageable.

2. We have >20 years of experience in getting screwed by RAID errors. If something goes wrong at the array level and you somehow lose the array data/parity mapping or parts of it, the data is doomed. Period.

3. As such, I'm less afraid of trying new FSes: ext3, ext4, btrfs, ZFS. As long as I can access the on-disk data when everything goes to hell, I'm willing to take the chance. (Of course, as long as I don't get silent corruption that goes undetected for years...)

4. On the other hand, I want my RAID to be tested and tested again, and I want it to use as little code as humanly possible. (e.g., Linux SW RAID [1])

5. ZFS is relatively new, and it combines three layers that I personally prefer to be separate. A simple bug in one of the bottom layers (say, the pool management layer) can spell an end to your data in an unrecoverable way. And with a file system as complex and relatively immature as ZFS (compared to, say, ext2/3 or NTFS), this is a -major- flaw.

C. Last but not least, while ZFS is -a- solution to improving the resiliency of RAID arrays, in my view, the OS lock-in, patent issues (that prevent other OSes from implementing ZFS), and less-than-ideal implementation make ZFS a far from ideal solution.

- Gilboa
[1] $ cat /usr/src/kernels/linux/drivers/md/*raid*.[ch] | wc -l
13660

Edited 2010-01-19 17:47 UTC

Reply Parent Score: 2

RE[6]: RAID Z
by Laurence on Tue 19th Jan 2010 18:27 UTC in reply to "RE[5]: RAID Z"
Member since: 2007-03-26

"C. Last but not least, while ZFS is -a- solution to improving the resiliency of RAID arrays, in my view, the OS lock-in, patent issues (that prevent other OSes from implementing a kernel version of ZFS), and less-than-ideal implementation make ZFS a far from ideal solution."

For the last time: there is no OS lock-in. I've been patient with you, but you keep on spouting this BS.

You keep moaning about choice and how people should be open to other file systems, but so far all I can see is you blithering on about how ZFS won't run on your favourite OS.

In fact, you're starting to come across as the type of person many of the ZFS engineers at Sun were fighting against when deciding what license to apply to their file system: the sort of person who expects everyone to bend over and kiss the holy grail of the GPL as if it were the second coming.

I mean, seriously - you've made two good points, and the rest of your posts have been a self-indulgent CDDL rant loosely masquerading as a scientific argument (and your rant is particularly worthless given that high-end virtualisation costs nothing these days).


Yes, you don't like the joined-up layers of ZFS - but that's opinion. There's no "right" or "wrong" way - just a preferred way.
Yes, you'd like to see more universal standards.
But let's not overstate the facts just so you can get some attention while standing there on your soapbox.

Reply Parent Score: 3

RE[6]: RAID Z
by computeruser on Tue 19th Jan 2010 18:35 UTC in reply to "RE[5]: RAID Z"
Member since: 2009-07-21

"We have >30 years of experience dealing with FS and volume manager errors. In essence, even if your FS completely screwed up huge chunks of its tables (no matter how many copies of said tables the FS stores), in most cases the data is still salvageable."

Have you ever heard of a backup?

"We have >20 years of experience in getting screwed by RAID errors. If something goes wrong at the array level and you somehow lose the array data/parity mapping or parts of it, the data is doomed. Period."

Here is a link to the Wikipedia page on backups, in case you are not familiar:
http://en.wikipedia.org/wiki/Backup

"3. As such, I'm less afraid of trying new FSes: ext3, ext4, btrfs, ZFS. As long as I can access the on-disk data when everything goes to hell, I'm willing to take the chance. (Of course, as long as I don't get silent corruption that goes undetected for years...)"

Why take a chance when you can make backups?

"ZFS is relatively new, and it combines three layers that I personally prefer to be separate. A simple bug in one of the bottom layers (say, the pool management layer) can spell an end to your data in an unrecoverable way. And with a file system as complex and relatively immature as ZFS (compared to, say, ext2/3 or NTFS), this is a -major- flaw."

So your complaint has nothing to do with the actual design itself, but just that it's new.

"OS lock-in"

FreeBSD also has ZFS, and there is a Mac OS X port. OpenSolaris is open source; even if it were the only OS with ZFS, the resulting lock-in would be minimal.
In any case, with virtualization it doesn't matter.
And besides: md and ext4 are Linux-only.

"patent issues (that prevent other OSes from implementing ZFS)"

What patents in ZFS prevent reimplementation? Why do you keep repeating this? Are you sure you are not trolling?

Reply Parent Score: 2

RE[6]: RAID Z
by Kebabbert on Wed 20th Jan 2010 10:17 UTC in reply to "RE[5]: RAID Z"
Member since: 2007-07-27

Gilboa, you are hilarious. Are you sure you are not SEGEDUNUM? He keeps repeating weird stuff, like the claim that ZFS requires several GB of RAM to start - even though several people have explained that it is not true, he keeps claiming it.

Regarding the "rampant layering violation" that Linux kernel developer Andrew Morton called ZFS (now, why would a Linux developer call ZFS something like that?): I've heard that btrfs is doing something similar with its layers - does someone know more about this layering violation in btrfs?

I don't really understand why you have something against a superior product just because of a different design. If you see a product with a different design that is the best on the market, versus a product with a standard design that is inferior - which do you choose? Do you refuse to use a database application if it does not use the standard three-layer model (DB, logic, GUI)? If a product does not use standard programming languages but something esoteric such as Erlang - do you refuse to use it, even though it is the best on the market? I would understand your reasoning if ZFS were inferior, but almost everyone agrees that ZFS is the best filesystem out there. So, if a product is the best but has a different design - how can the design matter to you? It is only the result that matters to most people. If something is best, it is best - no matter the design, the price, or whatever.

ZFS has tried to get rid of old assumptions and redesign the filesystem from scratch, targeted at modern devices. And that is a bad thing? When the first chip was invented, with performance superior to discrete transistors, would you have refused to use chips because they were different?

The main ZFS architect, Jeff Bonwick, explains why ZFS has a different layer design; his point is that you can optimize a layer away if you are clever enough, and if you are not, you continue to use the standard solution:
http://blogs.sun.com/bonwick/entry/rampant_layering_violation

Regarding your "ZFS is OS lock-in, patent issues" etc.: first of all, ZFS is not OS lock-in. There are other OSes than Solaris that use ZFS. Wrong again.

Second: how can you say ZFS is lock-in when the code is open and out there? If Sun goes bankrupt, we still have access to the ZFS code and can continue development. What happens if your hardware RAID vendor goes bankrupt? Do you expect your hardware RAID to continue being developed?

Which is more lock-in: a hardware RAID controller (which needs device drivers for your OS - you cannot move your disks to another controller, nor to another OS) or ZFS (you can compile the open ZFS code on every OS you want and move your disks freely between those OSes, even with different endianness)? Hardware RAID disks cannot be moved to different OSes, and if the OSes use different endianness, you are screwed. ZFS can move disks between Apple Mac OS X, FreeBSD, Solaris, OpenSolaris, and every other OS that compiles ZFS - even between CPU architectures with different endianness! You cannot do this with hardware RAID. Hardware RAID is lock-in: you are forced to wait for drivers for your OS, and you cannot do anything until the vendor makes something. ZFS is not lock-in, but hardware RAID is.
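
As a rough sketch of that portability (the pool name is hypothetical), moving a ZFS pool between OSes is just an export on the source system and an import on the destination:

$ zpool export tank   # on the source OS, e.g. FreeBSD
$ zpool import tank   # on the destination OS, e.g. OpenSolaris, after moving the disks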

Man, you are just plain wrong about almost everything. The things you say are not even factually correct. It is like saying "In my opinion, that 2m guy is shorter than the other 1.5m guy" - it is simply not true, factually. You can say "I don't like ZFS" - but your reasons for it are false. Hardware RAID is the most lock-in there is: you cannot do shit, only the vendor can do something. You don't have access to the BIOS code; you have nothing. If the vendor goes bankrupt, you can ditch your card.

And hardware RAID is less safe than ZFS. A friend of mine, the CTO of a small company, lost two RAID arrays due to bugs in the hardware RAID BIOS. The vendor confirmed the bug but was not releasing patches yet. That was one year ago; I don't know what has happened since then.

Reply Parent Score: 2