Linked by Joel Dahl on Sun 25th Apr 2010 19:25 UTC
Today Jeff Roberson committed his patches for adding journaling to UFS to FreeBSD 9. No more background fsck after unclean shutdowns! This is a major landmark in the history of UFS, with 11,000 new lines of code (and about 2,000 removed). Much of the work was done in collaboration with Kirk McKusick, the original author of FFS and Softupdates, under sponsorship from Yahoo!, Juniper and iXsystems. Jeff's blog contains quite a lot of technical information about his work, and there is more information on the FreeBSD mailing lists.
RE[3]: UFS?
by aargh on Mon 26th Apr 2010 07:46 UTC in reply to "RE[2]: UFS?"
aargh Member since:
2009-10-12

I modded you down as a troll because, without providing any examples or links, that's exactly what you are.

Reply Parent Score: 4

RE[4]: UFS?
by Mr.Manatane on Mon 26th Apr 2010 13:12 in reply to "RE[3]: UFS?"
Mr.Manatane Member since:
2010-03-19

You want proof?

OK, bastard (I call you that because you moderated me down without knowing ZFS)...

First, try using it under a high level of I/O, not on the computer under your bed...

Now:

- You CAN'T remove a disk. No problem, you say? I say it is. Try to migrate your LUN from one bay to another and you are doomed. Try to consolidate your LUNs into fewer, bigger ones? You can't. And don't tell me it's coming; they've been saying that for five years now...

- Copy-on-write fragments the filesystem over time, and guess what? You can't do anything about it. There is no tool for it (and prefetch doesn't help under high I/O).

- ZFS just kills your SAN: our storage team told us it was eating 30% of the total I/O for a single host.

- Mirroring with ZFS is crap. ZFS only knows that a mirror is corrupted when it tries to access the data, and it will only correct the corrupted data it actually accesses. (Great, no? What happens if you then lose the good copy?)

The only way to detect that a disk is corrupted, if you don't access that data, is to run a scrub, which kills your disks / SAN (see above). And yes, it happens that you lose a good disk and it then comes back (i.e. you lose a path on the SAN). Now, to resync your disk you have to remove it and add it back, but it then has to resync everything from scratch...

And I'm not even talking about the ZFS cache problems we had, or the problems we ran into with those crappy Solaris zones...

Reply Parent Score: -1

RE[5]: UFS?
by marcp on Mon 26th Apr 2010 19:56 in reply to "RE[4]: UFS?"
marcp Member since:
2007-11-23

The fact is that ZFS eats a whole bunch of memory. I don't know about disk I/O; that might be true in some cases.
I really don't think your comment should be modded down just because you added "ZFS sucks" to it. It's obvious that it sucks for YOU. Maybe you shouldn't have used such strong language, or maybe the guy who modded you down should reconsider his participation in public discussions.

Regards to all

Reply Parent Score: 3

RE[5]: UFS?
by Kebabbert on Mon 26th Apr 2010 20:36 in reply to "RE[4]: UFS?"
Kebabbert Member since:
2007-07-27

"first, try to use it with high level of I/O and not you the computer under your bed ..."

You have to configure ZFS differently than other file systems. ZFS is not perfect, but you need to know what you are doing.

It is well known that all RAID solutions have trouble achieving high IOPS, but it is possible to work around that problem with ZFS. One way is to use SSDs as a ZFS cache. Then you can reach good performance, for instance ~3 GB/sec over CIFS on a 40 Gbit NIC and 300,000 IOPS. That ZFS server is basically a bunch of 7200 RPM SATA drives, SSDs and quad-core CPUs running OpenSolaris + ZFS:

http://blogs.sun.com/brendan/entry/iscsi_before_and_after
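To make the caching idea concrete, here is a rough Python sketch of the concept behind using an SSD as a second-level read cache: a small RAM cache in front of a larger SSD cache in front of the slow pool disks. This is only an illustration with made-up names, not ZFS's actual implementation.

# Conceptual sketch (not ZFS code): a two-level read cache, RAM in front of SSD.
from collections import OrderedDict

class TieredReadCache:
    def __init__(self, ram_blocks, ssd_blocks, read_from_disk):
        self.ram = OrderedDict()              # small, fast cache (ARC-like)
        self.ssd = OrderedDict()              # larger, slower cache (L2ARC-like)
        self.ram_blocks = ram_blocks          # capacities in blocks
        self.ssd_blocks = ssd_blocks
        self.read_from_disk = read_from_disk  # fallback to the spinning disks

    def read(self, block_id):
        if block_id in self.ram:                    # RAM hit: cheapest path
            self.ram.move_to_end(block_id)
            return self.ram[block_id]
        if block_id in self.ssd:                    # SSD hit: still far cheaper than disk
            data = self.ssd.pop(block_id)
        else:
            data = self.read_from_disk(block_id)    # miss: pay the full disk I/O cost
        self._promote(block_id, data)
        return data

    def _promote(self, block_id, data):
        self.ram[block_id] = data
        if len(self.ram) > self.ram_blocks:
            old_id, old_data = self.ram.popitem(last=False)
            self.ssd[old_id] = old_data             # evicted RAM blocks spill to the SSD
            if len(self.ssd) > self.ssd_blocks:
                self.ssd.popitem(last=False)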



Sure, you cannot remove a disk. That may be a problem, but many enterprise companies are mostly interested in adding storage capacity.



Copy-on-write fragments disks, yes, but ZFS takes measures to minimize it. It collects data until 7/8 of RAM is full, or 30 seconds have passed, and then writes it all out in one go, which minimizes fragmentation. Hence lots of RAM is preferable for a ZFS file server. ZFS is mostly targeted at enterprise use; if you have less RAM and smaller needs, other file systems may serve you better.
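As a rough illustration of that batching (a sketch only; the thresholds below are placeholders, not the real ZFS tunables), writes are accumulated in memory and flushed as one large burst:

# Conceptual sketch (not ZFS code): buffer dirty data and flush it in one go,
# the way ZFS groups writes before committing them to disk.
import time

class WriteBatcher:
    def __init__(self, flush, max_bytes=64 * 1024 * 1024, max_age=30.0):
        self.flush = flush          # callable taking a list of (offset, data) pairs
        self.max_bytes = max_bytes  # size threshold (stand-in for "7/8 of RAM")
        self.max_age = max_age      # time threshold (stand-in for the ~30 second timer)
        self.pending = []
        self.pending_bytes = 0
        self.opened = time.monotonic()

    def write(self, offset, data):
        if not self.pending:
            self.opened = time.monotonic()
        self.pending.append((offset, data))
        self.pending_bytes += len(data)
        too_big = self.pending_bytes >= self.max_bytes
        too_old = time.monotonic() - self.opened >= self.max_age
        if too_big or too_old:
            self.flush(self.pending)   # one large write instead of many scattered ones
            self.pending = []
            self.pending_bytes = 0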



The only way to detect that a disk is corrupted is when ZFS tries to access the data - how is that bad? How could a filesystem possibly know that data is corrupted without reading it? Of course you need to access the data first to see that it is corrupted; otherwise it would be magic. This applies to ZFS mirrors and ZFS RAID alike: both have to read the data before corruption can be detected. So if you think ZFS mirrors suck on that account, then ZFS RAID sucks too, for exactly the same reason.
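Here is a small Python sketch of the principle (illustrative only, with invented names; this is not how ZFS is implemented): every block carries a checksum, a read verifies it and repairs the bad mirror copy, and a scrub is nothing more than reading and verifying every block - which is exactly why it costs I/O.

# Conceptual sketch (not ZFS code): checksummed reads from a mirror, with self-healing.
import hashlib

def checksum(data):
    return hashlib.sha256(data).digest()

def read_block(mirror_a, mirror_b, block_id, expected_sum):
    """Return good data for a block, repairing the corrupt copy if one side is bad."""
    data_a = mirror_a[block_id]
    data_b = mirror_b[block_id]
    if checksum(data_a) == expected_sum:
        if checksum(data_b) != expected_sum:
            mirror_b[block_id] = data_a          # self-heal the bad copy
        return data_a
    if checksum(data_b) == expected_sum:
        mirror_a[block_id] = data_b              # self-heal the bad copy
        return data_b
    raise IOError("both copies of block %r are corrupt" % block_id)

def scrub(mirror_a, mirror_b, checksums):
    # A scrub simply reads and verifies every block; there is no way to find
    # silent corruption without reading the data.
    for block_id, expected_sum in checksums.items():
        read_block(mirror_a, mirror_b, block_id, expected_sum)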



I get the impression that you are not handling ZFS correctly. I mean, you talk about small issues, but you don't even mention ZFS's biggest advantage, which no other mainstream file system offers. That is the single biggest reason to use ZFS. I am quite sure you don't even know which advantage I am talking about...

Reply Parent Score: 3

RE[5]: UFS?
by phoenix on Tue 27th Apr 2010 16:44 in reply to "RE[4]: UFS?"
phoenix Member since:
2005-07-11

You CAN'T remove a disk. No problem, you say? I say it is. Try to migrate your LUN from one bay to another and you are doomed. Try to consolidate your LUNs into fewer, bigger ones? You can't. And don't tell me it's coming; they've been saying that for five years now...


Very few RAID controllers allow you to remove disks from an array (I haven't personally seen one that does). Why should software RAID be any different?

Copy-on-write fragments the filesystem over time, and guess what? You can't do anything about it. There is no tool for it (and prefetch doesn't help under high I/O).


Not yet. But there is work underway to fix this.

ZFS just kills your SAN: our storage team told us it was eating 30% of the total I/O for a single host.


And ... can you prove 100% that it is ZFS doing the I/O and not the applications running on top of it?

Mirroring with ZFS is crap. ZFS only knows that a mirror is corrupted when it tries to access the data, and it will only correct the corrupted data it actually accesses. (Great, no? What happens if you then lose the good copy?)


A hardware RAID controller doesn't know data is bad until it reads the data or runs an auto-verify in the background - just like ZFS, which can run a scrub in the background. In fact, nothing in the world can tell that a block of data is corrupt without first trying to read it. At least with ZFS, the corrupted data can be detected and re-written automatically; a hardware RAID array just notes that the sector is bad.

The only way to detect that a disk is corrupted, if you don't access that data, is to run a scrub, which kills your disks / SAN (see above).


And the only way for a hardware RAID controller to detect that a disk is corrupt is to run an auto-verify in the background, which either kills your I/O or takes forever, depending on the controller's settings.

And yes, it happens that you lose a good disk and it then comes back (i.e. you lose a path on the SAN). Now, to resync your disk you have to remove it and add it back, but it then has to resync everything from scratch...


No, it only re-silvers the data. ZFS doesn't re-silver empty space. Hardware RAID controllers, in comparison, have to sync every block of the new drive regardless of whether it's in use, because the controller knows nothing about the data on the disk.
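In rough Python terms (a sketch with invented names, not actual ZFS or controller code), the difference looks like this:

# Conceptual sketch: rebuild only the blocks the filesystem says are in use
# (ZFS-style resilver) versus copying every block because the controller knows
# nothing about the data on the disk (hardware-RAID-style rebuild).
# Disks are modelled here as dicts mapping block number to block data.

def resilver(source_disk, target_disk, allocated_blocks):
    """ZFS-style rebuild: copy only the blocks that are actually allocated."""
    for block_id in allocated_blocks:
        target_disk[block_id] = source_disk[block_id]

def raid_rebuild(source_disk, target_disk, total_blocks, block_size=512):
    """Hardware-RAID-style rebuild: copy every block, used or not."""
    for block_id in range(total_blocks):
        target_disk[block_id] = source_disk.get(block_id, b"\x00" * block_size)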

And I'm not even talking about the ZFS cache problems we had, or the problems we ran into with those crappy Solaris zones...


Can't comment on Solaris Zones. Never used Solaris.

However, it sounds to me like you don't really understand how ZFS works and have been trying to use it like a hardware RAID setup, which is just wrong.

Reply Parent Score: 2