Linked by John Finigan on Mon 21st Apr 2008 19:00 UTC
When it comes to dealing with storage, Solaris 10 provides admins with more choices than any other operating system. Right out of the box, it offers two filesystems, two volume managers, an iSCSI target and initiator, and, naturally, an NFS server. Add a couple of Sun packages and you have volume replication, a cluster filesystem, and a hierarchical storage manager. Trust your data to the still-in-development features found in OpenSolaris, and you can have a Fibre Channel target and an in-kernel CIFS server, among other things. True, some of these features can be found in any enterprise-ready UNIX OS. But Solaris 10 integrates all of them into one well-tested package. Editor's note: This is the first of our published submissions for the 2008 Article Contest.
RE[4]: Comment by agrouf
by jwwf on Wed 23rd Apr 2008 02:21 UTC in reply to "RE[3]: Comment by agrouf"

"I too am impatient with the state of Linux storage management.

I don't see many people sharing your impatience, in all honesty. The software RAID subsystem within Linux is pretty good, and has been tested exceptionally well over a number of years. You need to have spent a reasonable amount of money on a pretty decent hardware RAID set-up if you really want something better. That extends to ZFS as well, as software RAID by itself can only take you so far.

The only perceived problem is that you don't get volume management, RAID and other storage management features for free in one codebase. If distributions started partitioning by using LVM by default, created userspace tools that had a consistent command line interface, as well as GUI tools that made LVM and software RAID much more visible and usable, then you'd see them more widely used on a wider variety of systems.

...but I find LVM and MD to be clumsy and fragile.

You're going to have to qualify that one, because LVM and MD software RAID were stable and being used before ZFS was even a glint in Bonwick's eye. Indeed, ZFS has yet to be proven in the same way.
"

Clumsy: See your comments about consistent userland tools. Actually I think LVM is ok, but I am not a fan of MD's userland tools, and I am not convinced that the separation of the two subsystems is optimal. I believe this is for historical reasons, anyway. Regardless, this is a matter of taste.

Fragile: I can crash Ubuntu 6.06 pretty easily like this:

1. LVM snap a volume
2. Mount the snap read only
3. Read the snap, write to the parent, let a couple of hours pass.
4. Unmount the snap.
5. Sleep a little while.
6. Destroy the snap.
7. Panic.

The workaround? Run sync between each of steps 4 through 6. This is not the only weirdness I have seen, but it stands out. Humorously, I was trying to use LVM snapshots in this case to reduce backup-related downtime.
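For concreteness, the sequence above corresponds roughly to the following LVM commands (the volume group, LV, and mount-point names are invented for illustration; these need root, a real LVM setup, and exact flags vary by LVM version):

```shell
# Hypothetical names: volume group "vg0", origin LV "data".
# 1. Snapshot the volume; --size must be big enough to hold all writes
#    made to the origin while the snapshot exists.
lvcreate --snapshot --size 2G --name data_snap /dev/vg0/data

# 2. Mount the snapshot read-only.
mount -o ro /dev/vg0/data_snap /mnt/snap

# 3. ...read from /mnt/snap while the origin keeps taking writes...

# 4. Unmount the snapshot.
umount /mnt/snap
sync    # the workaround: flush between each of steps 4-6

# 5. Let things settle, then flush again.
sync

# 6. Destroy the snapshot.
lvremove -f /dev/vg0/data_snap
```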

The problem with hardware RAID is that most of them aren't very good, and the ones that are are really expensive. To approximate ZFS features like clones and checksumming, not to mention well integrated snapshots, you really need a NetApp or, less elegantly, block storage from EMC. The price per gig of these is about 10 times that of the dumb storage you would use with a software solution. I find it really ironic to hear that one establishment Linux perspective is "just use expensive, closed, proprietary hardware".
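For comparison, the ZFS features mentioned above are each a single command (pool, device, and dataset names here are hypothetical; these need root on a Solaris or OpenSolaris box):

```shell
# Create a checksummed, software-RAID (RAID-Z) pool from three disks.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0

# Take a constant-time snapshot, then a writable clone of it.
zfs snapshot tank/data@backup
zfs clone tank/data@backup tank/data_clone

# Walk the pool and verify every block against its checksum.
zpool scrub tank
```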

And before anyone says that if you have the needs, you'll have the budget, remember that overpriced storage is a problem with any budget, since it causes more apps to be crammed onto a smaller number of spindles with correspondingly worse performance. Plus, what is good about the little guy being stuck with a mediocre solution?


RE[5]: Comment by agrouf
by segedunum on Wed 23rd Apr 2008 09:15 in reply to "RE[4]: Comment by agrouf"

Clumsy: See your comments about consistent userland tools.

Well yer, but you hardly need to create a whole storage system to achieve that. One reasonable userland tool and multiple subsystems would do it. It probably hasn't been done because there's not much demand, and many people are using tools on top of the normal command line tools.

Fragile: I can crash Ubuntu 6.06 pretty easily like this: 1. LVM snap a volume 2. Mount the snap read only 3. Read the snap, write to the parent, let a couple of hours pass. 4. Unmount the snap. 5. Sleep a little while. 6. Destroy the snap. 7. Panic.

Yer. You're going to need to allocate enough space to the snapshot to take into account divergence between the original logical volume and the snapshot. Which part of that did you not understand? In reality, the logical volume should just be disabled, so I don't know what you've encountered there.
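In practical terms, that headroom can be checked and corrected at run time rather than guessed up front (names hypothetical, continuing the earlier example; needs root):

```shell
# The Snap%/Data% column shows how full the snapshot's
# copy-on-write area is.
lvs /dev/vg0/data_snap

# Grow the snapshot before it fills up and is invalidated.
lvextend --size +1G /dev/vg0/data_snap
```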

This is the same for ZFS. Regardless of the space-saving methods employed, snapshots and clones are not as free as proponents of ZFS make out (certainly not if you're writing to a snapshot, or if you expect to keep a read-only snapshot while the original diverges from it). The only time they will be is when we get Turing-style storage, but then all of our problems will be solved. Do you see what I mean about polishing a turd? To go any further you need to solve fundamental problems with storage hardware.

Humorously, I was trying to use LVM snapshots in this case to reduce backup-related downtime.

Humorously, I do this all the time, and I don't need an all-new file and storage system from Sun.

The problem with hardware RAID is that most of them aren't very good, and the ones that are are really expensive.

This debate of software versus hardware RAID has been done to death, and I'm sure you can find adequate information via Google. ZFS is not adding anything to the debate, and software RAID in ZFS has been around for an awful lot less time than other software RAID implementations.

...you really need a NetApp or, less elegantly, block storage from EMC. The price per gig of these is about 10 times that of the dumb storage you would use with a software solution.

Yer, but if I need those features the data within them is worth more than any cost. Additionally, ZFS is still experimental and has been around for far less time than any of those solutions.

I find it really ironic to hear that one establishment Linux perspective is "just use expensive, closed, proprietary hardware".

I'm not entirely sure you understand why people use hardware RAID, and if you don't understand that then you've adequately demonstrated why ZFS is trying to solve problems that don't really need solving. Trying to label it as 'proprietary' to try to create a sensible usage scenario for ZFS is pretty desperate.

I have a choice of software RAID and various hardware RAID options, some better than others. Proper hardware RAID allows you to do sensible hot-swapping of drives and abstracts the RAID storage away from the system. Linux and Solaris support lots of 'proprietary hardware', but we're not tied specifically to any of it. That just seems like a pretty flimsy argument. "Oh, but it's proprietary!" is the usual last-resort argument Sun themselves use, which is ironic in itself considering that various roadblocks make additional implementations of ZFS difficult.

If I'm worried about universally moving RAID arrays between machines then I have software RAID for that, and it doesn't stop me getting the data off and moving it off somewhere else.

And before anyone says that if you have the needs, you'll have the budget, remember that overpriced storage is a problem with any budget, since it causes more apps to be crammed onto a smaller number of spindles with correspondingly worse performance.

The market for that scenario is so small as to be insignificant. Overpriced storage is certainly not a problem today. People are not going to switch to ZFS because they think they're running out of space.

Plus, what is good about the little guy being stuck with a mediocre solution?

Nothing, but the problem is that the little guy has little need for all of the features available. He might use a volume manager and he might well use some form of RAID, but that's pretty much it. He's not going to get excited about ZFS if he's already using that stuff, and more importantly, he won't switch.

As I said in the previous post, when the little guy requires storage management of the kind provided by ZFS (and most of it will have to be entirely transparent) then storage hardware will have advanced far beyond any requirement for ZFS.


RE[6]: Comment by agrouf
by Arun on Wed 23rd Apr 2008 23:24 in reply to "RE[5]: Comment by agrouf"

We get it: you don't like ZFS, or just plain don't understand it.
