Linked by Thom Holwerda on Fri 23rd Oct 2009 21:13 UTC, submitted by poundsmack
Mac OS X John Siracusa, the Mac OS X guru who writes those insanely detailed and well-written Mac OS X reviews for Ars Technica, once told a story about the evolution of the HFS+ file system in Mac OS X - he said it was a struggle between the Mac guys who wanted the features found in BeOS' BFS, and the NeXT guys who didn't really like these features. In the end, the Mac guys won, and over the course of six years, Mac OS X reached feature parity - and a little more - with the BeOS (at the FS level).
What's not to like?
by Vanders on Fri 23rd Oct 2009 21:26 UTC
Vanders
Member since:
2005-07-06

it was a struggle between the Mac guys who wanted the features found in BeOS' BFS, and the NeXT guys who didn't really like these features


What features in BFS could you not like?

Reply Score: 5

RE: What's not to like?
by Thom_Holwerda on Fri 23rd Oct 2009 21:32 UTC in reply to "What's not to like?"
Thom_Holwerda Member since:
2005-06-29

They had a UNIX philosophy. You can't hold that against them.

Reply Score: 2

RE[2]: What's not to like?
by DittoBox on Fri 23rd Oct 2009 22:50 UTC in reply to "RE: What's not to like?"
DittoBox Member since:
2005-07-08

By "they" you mean the NeXT devs?

Creeptastic avatar, by the way. I like it.

Reply Score: 2

I really wish they wouldn't
by Bill Shooter of Bul on Fri 23rd Oct 2009 21:35 UTC
Bill Shooter of Bul
Member since:
2006-07-14

I just fear, absolutely fear, Apple homegrown protocols, file systems, standards, etc. Too much marketing smoke and reality distortion accompanies them, with too few details. The most useful homegrown thing they've ever, ever done is FireWire.

Reply Score: 6

RE: I really wish they wouldn't
by Burana on Sat 24th Oct 2009 07:53 UTC in reply to "I really wish they wouldn't"
Burana Member since:
2009-01-26

I just fear, absolutely fear, Apple homegrown protocols, file systems, standards, etc. Too much marketing smoke and reality distortion accompanies them, with too few details. The most useful homegrown thing they've ever, ever done is FireWire.


Absolutely. Until they get something serious, they will hack something together, put a "bling-bling" interface on it, and call it "innovation".

And boy, will they patent it!

Reply Score: 1

RE: I really wish they wouldn't
by Tuishimi on Sat 24th Oct 2009 16:18 UTC in reply to "I really wish they wouldn't"
Tuishimi Member since:
2005-07-06

Do they usually work?

Reply Score: 2

RE: I really wish they wouldn't
by subterrific on Sun 25th Oct 2009 19:10 UTC in reply to "I really wish they wouldn't"
subterrific Member since:
2005-07-10

While not entirely homegrown inside Apple, I think Zeroconf is a good example, possibly the only example, of a good protocol to come out of Apple.

Link-local addressing
Multicast DNS
DNS Service Discovery

All good standards. Though we have yet to see Stuart Cheshire's vision of Ethernet and/or TCP becoming the standard protocol for all devices (TCP keyboard and mouse anyone?), I think Zeroconf is still a huge success.

The bad example I come up with is resource forks. While adding structured data storage to files isn't a bad idea, the way it was implemented at the file system level made it very difficult to exchange files with non-HFS systems.

This might have been what the UNIX guys at NeXT didn't like about BeFS: it could store extended metadata attributes attached to a file, and those attributes would be lost in a copy to another file system. If I remember correctly, the address book in an early version of BeOS was implemented using file attributes, which was cool in your walled-off BeOS corner of the universe; but if you copied a "Person" file from BeFS to another file system, or even tried to send one to another BeFS system using a protocol that didn't support copying file attributes, you'd end up with a useless 0-byte file. However, BeFS did support HFS resource forks by storing the raw fork data as a file attribute.
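That failure mode can be sketched in a few lines (a purely hypothetical model, not BeOS or HFS code): treat a file as a byte stream plus an attribute dictionary, and watch a "copy" that only understands byte streams silently drop the attributes.

```python
# Hypothetical model of a file with BFS-style extended attributes.
class FSFile:
    def __init__(self, data=b"", attrs=None):
        self.data = data          # the ordinary byte stream
        self.attrs = attrs or {}  # extended attributes attached to the file

def naive_copy(src):
    """Copy via a filesystem/protocol that only understands byte streams."""
    return FSFile(data=src.data)  # attributes are silently dropped

# A BeOS "Person" file kept everything in attributes; its data fork was empty.
person = FSFile(data=b"", attrs={"META:name": "Jane Doe",
                                 "META:email": "jane@example.com"})
copied = naive_copy(person)
print(len(copied.data), copied.attrs)  # -> 0 {}  (a useless 0-byte file)
```

The attribute names here are made up for illustration; the point is only that once the transport loses the attribute dictionary, nothing of the record survives.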

Reply Score: 2

RE[2]: I really wish they wouldn't
by jgagnon on Mon 26th Oct 2009 14:46 UTC in reply to "RE: I really wish they wouldn't"
jgagnon Member since:
2008-06-24

I've often wondered why Ethernet couldn't be more pervasive as an interconnect for devices. It would simplify so many things from an IT standpoint, and it is plenty fast enough for the majority of devices, especially since you can run 10/100/1000 on the same lines (not sure about 10GbE). Heck, 10 Gb Ethernet is faster than many (most?) hard drives can keep up with.

Reply Score: 1

Not surprising
by diegocg on Fri 23rd Oct 2009 21:53 UTC
diegocg
Member since:
2005-07-08

While ZFS is great, its advantages are targeted mainly at servers; from the user's POV it's just POSIX + snapshots/volume management. It doesn't bring new things to the desktop (with Time Machine, Apple doesn't even need snapshots). Apple is the kind of company that could want to go beyond POSIX and bring new ideas to the desktop...

Reply Score: 0

RE: Not surprising
by Erunno on Fri 23rd Oct 2009 22:04 UTC in reply to "Not surprising"
Erunno Member since:
2007-06-22

It doesn't bring new things to the desktop (with Time Machine, Apple doesn't even need snapshots).


Wouldn't Time Machine profit greatly, speed-wise, from ZFS if only the changed blocks between two snapshots had to be sent to the backup disk instead of whole files (which is especially painful with large ones)? I don't own a Time Capsule, but I've read that larger backups can be quite painful over the air. Plus, the sometimes long calculation of the changes would also disappear.
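The idea can be sketched roughly like this (an illustrative sketch only, not Apple's or ZFS's actual mechanism): hash fixed-size blocks of each version and ship only the blocks whose hashes changed.

```python
# Sketch of block-level incremental backup: split data into fixed-size
# blocks, hash each, and transfer only blocks that changed since the
# previous snapshot.
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old: bytes, new: bytes):
    old_h, new_h = block_hashes(old), block_hashes(new)
    # Blocks whose hash differs (or which are brand new) must be sent.
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]

old = b"a" * BLOCK * 100                                   # a 400 KiB "file"
new = old[:BLOCK * 50] + b"b" * BLOCK + old[BLOCK * 51:]   # one block edited
print(changed_blocks(old, new))  # -> [50]: send 4 KiB, not 400 KiB
```

A file-granularity backup (like the hardlink scheme Time Machine used) would re-copy all 400 KiB for that one-block edit; block granularity sends only the delta.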

Reply Score: 5

RE[2]: Not surprising
by diegocg on Fri 23rd Oct 2009 22:19 UTC in reply to "RE: Not surprising"
diegocg Member since:
2005-07-08

Wouldn't Time Machine profit greatly speed-wise from ZFS

Sure, but just that - speedups (which could probably be hacked around in many ways). I think Apple would want to go beyond all that - like presenting applications with something other than a path and a stream of bytes.

Reply Score: 2

RE[3]: Not surprising
by Bill Shooter of Bul on Sat 24th Oct 2009 04:59 UTC in reply to "RE[2]: Not surprising"
Bill Shooter of Bul Member since:
2006-07-14

With the same hardware, the only way to achieve ZFS-like speeds for snapshots is to adopt ZFS-like methods. Waving your hand and using the word "hack" doesn't change that. Apple probably wants ZFS-like features (for the speed and reliability) but to use it like HFS+, with the pseudo-BFS features and legacy Mac-isms.

But that would take a lot of work, and frankly Apple doesn't give a crap about painless, effortless snapshots, so no ZFS. They'd rather have something that took forever but was easy to use than something instantaneous with a more difficult UI.

Reply Score: 4

RE[4]: Not surprising
by darknexus on Sat 24th Oct 2009 05:14 UTC in reply to "RE[3]: Not surprising"
darknexus Member since:
2008-07-15

They'd rather have something that took forever but was easy to use than something instantaneous with a more difficult UI.


And I suspect that, for the most part, non-geeks would agree with Apple. That being said, I'd rather have something instantaneous and also have an easy UI.

Reply Score: 2

RE[5]: Not surprising
by Bill Shooter of Bul on Sat 24th Oct 2009 06:25 UTC in reply to "RE[4]: Not surprising"
Bill Shooter of Bul Member since:
2006-07-14

I wasn't trying to judge Apple, but it's wrong to expect them to pursue performance over usability. They'll select usability every time. Linux will most likely select performance, with flexibility.

Unix: Do one thing and do it well.
Apple: Be easy to use for most people.

Different philosophies. It sort of annoys me when Apple's decisions result in a lack of compatibility/extendability, or when Unix is just difficult to explain to other people (find . -name '*.txt' | xargs grep -i "bob"). Of course, now that OS X is Unixy you can do some of both. But any new stuff they do is not going to be Unixy.

Reply Score: 3

RE[3]: Not surprising
by boldingd on Mon 26th Oct 2009 19:18 UTC in reply to "RE[2]: Not surprising"
boldingd Member since:
2009-02-19

In my never-humble opinion, you pretty much wouldn't want to present application developers with a representation of a file much more complicated than a path and a stream of bytes. I doubt most application developers want much more than that from a file; for them, neat new features are just added complexity.

The stream-of-bytes-at-a-path model of a file has lived for so long and changed so little because it's a good model that works very well for its intended purpose. It's not likely that someone will come up with some new-and-better metaphor for permanently stored data on a disk that supplants the filesystems of today.

Edited 2009-10-26 19:23 UTC

Reply Score: 2

RE: Not surprising
by tobyv on Fri 23rd Oct 2009 22:19 UTC in reply to "Not surprising"
tobyv Member since:
2008-08-25

Apple is the kind of company that could want to go beyond of POSIX and bring new ideas to the desktop


Apple is also the kind of company where the NIH mindset is very strong.

I can't see Apple engineers willingly embracing Sun/Oracle technology.

Reply Score: 5

RE[2]: Not surprising
by haus on Fri 23rd Oct 2009 22:27 UTC in reply to "RE: Not surprising"
haus Member since:
2009-08-18

"Apple is also the kind of company where the NIH mindset is very strong."

That's simply not true anymore. Their focus in the past was to develop everything in house to maintain control. Now their motivation is to ship the best product they can and differentiate themselves wherever possible. Often that means including open source; other times it's meant licensing third-party technologies; and yet other times it means creating those technologies themselves.

Reply Score: 3

RE[2]: Not surprising
by Zifre on Fri 23rd Oct 2009 22:35 UTC in reply to "RE: Not surprising"
Zifre Member since:
2009-10-04

Apple is also the kind of company where the NIH mindset is very strong.


Then why would they use the Mach kernel, the BSD userland, CUPS, etc.?

Reply Score: 1

RE[3]: Not surprising
by tobyv on Fri 23rd Oct 2009 23:10 UTC in reply to "RE[2]: Not surprising"
tobyv Member since:
2008-08-25

Then why would they use the Mach kernel, the BSD userland, CUPS, etc.?


They were acquisitions: Mach from NeXT, and CUPS when they hired the team that wrote it.

I'm sure Apple would have no qualms with ZFS had they acquired Sun :-)

Reply Score: 1

RE[4]: Not surprising
by Thom_Holwerda on Fri 23rd Oct 2009 23:16 UTC in reply to "RE[3]: Not surprising"
Thom_Holwerda Member since:
2005-06-29

Crazy wild unsubstantiated drivel from me:

Maybe Apple wanted to buy Sun, but Oracle beat them to it (better offer, faster negotiations, whatever)?

Reply Score: 3

RE[5]: Not surprising
by tobyv on Sat 24th Oct 2009 03:11 UTC in reply to "RE[4]: Not surprising"
tobyv Member since:
2008-08-25

Maybe Apple wanted to buy Sun, but Oracle beat them to it (better offer, faster negotiations, whatever)?


There were merger rumors for years. A disaster for both companies IMHO; AOL Time-Warner all over again.

Apple should have bought SGI, if anything, for the XFS devs and graphics tech.

Reply Score: 1

RE[4]: Not surprising
by badtz on Sat 24th Oct 2009 12:03 UTC in reply to "RE[3]: Not surprising"
badtz Member since:
2005-06-29

DTrace... a Sun product.

Reply Score: 1

RE[3]: Not surprising
by alcibiades on Sat 24th Oct 2009 07:25 UTC in reply to "RE[2]: Not surprising"
alcibiades Member since:
2005-10-12

Then why would they use the Mach kernel, the BSD userland, CUPS, etc.?

Because they were desperate!

Edited 2009-10-24 07:26 UTC

Reply Score: 4

RE: Not surprising
by azrael29a on Fri 23rd Oct 2009 22:56 UTC in reply to "Not surprising"
azrael29a Member since:
2008-02-26

While ZFS is great, its advantages are targetted mainly to servers, from the user POV it's just POSIX + snapshots/volume management. It doesn't brings new things to the desktop

How about "no more filesystem checking"?
There is no fsck for ZFS. End to end checksumming does it all.

Reply Score: 6

RE[2]: Not surprising
by phoenix on Sun 25th Oct 2009 21:23 UTC in reply to "RE: Not surprising"
phoenix Member since:
2005-07-11

"While ZFS is great, its advantages are targetted mainly to servers, from the user POV it's just POSIX + snapshots/volume management. It doesn't brings new things to the desktop

How about "no more filesystem checking"?
There is no fsck for ZFS. End to end checksumming does it all.
"

There is no separate, offline fsck. But there still is the online, background "fsck" known as scrubbing. And it's recommended that you do that at least once a month.

Reply Score: 2

RE: Not surprising
by Tuxie on Fri 23rd Oct 2009 23:11 UTC in reply to "Not surprising"
Tuxie Member since:
2009-04-22

It would bring low-level checksumming with error correction (protection against bit flips) and super-flexible support for multiple disks.

They could use it to implement a much better Time Machine with short snapshot intervals, requiring a fraction of the I/O usage and storage space of the current hardlink implementation.

They could also use its support for SSD caches, which means you could add a small, expensive-per-GB but superfast SSD to your storage pool and have your most frequently used files automatically and transparently hosted on the SSD, while the less common files sit on your large and cheap 3.5" SATA disks.
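For contrast, the hardlink implementation mentioned above works roughly like this (a sketch, not Apple's actual Time Machine code): each snapshot directory hardlinks files that haven't changed, so they share one inode and cost no extra space.

```python
# Demonstrate hardlink-based snapshots: an unchanged file in a new
# snapshot is a second name for the same inode, not a second copy.
import os
import tempfile

root = tempfile.mkdtemp()
src = os.path.join(root, "document.txt")
with open(src, "wb") as f:
    f.write(b"unchanged contents")

# "Snapshot" directory: unchanged files are hardlinked, not copied.
snap = os.path.join(root, "snapshot-1")
os.mkdir(snap)
os.link(src, os.path.join(snap, "document.txt"))

linked = os.path.join(snap, "document.txt")
# Both names point at the same inode; the data exists on disk only once.
print(os.stat(src).st_ino == os.stat(linked).st_ino)  # -> True
print(os.stat(src).st_nlink)                          # -> 2
```

The catch is granularity: rewrite one byte of a 500 MB project file and the next snapshot needs a complete new 500 MB copy, which is exactly the I/O a block-level snapshot scheme avoids.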

Reply Score: 4

RE[2]: Not surprising
by segedunum on Sun 25th Oct 2009 11:21 UTC in reply to "RE: Not surprising"
segedunum Member since:
2005-07-06

While I'm sure we're all excited by the technical advantages of ZFS, the fact is that Apple thinks those reasons are not enough to use it - and they now have experience with using it.

Reply Score: 2

RE: Not surprising
by Burana on Sat 24th Oct 2009 07:51 UTC in reply to "Not surprising"
Burana Member since:
2009-01-26

While ZFS is great, its advantages are targetted mainly to servers, from the user POV it's just POSIX + snapshots/volume management. It doesn't brings new things to the desktop (with time machine apple doesnt even need snapshots). Apple is the kind of company that could want to go beyond of POSIX and bring new ideas to the desktop...


I'm running OpenSolaris as my primary desktop.

Doing risk-free OS upgrades with snapshots is the best invention since sliced bread.

There are many more features that are nice on the desktop (cloning, compression, etc.).

Reply Score: 3

RE[2]: Not surprising
by Karrick on Mon 26th Oct 2009 02:19 UTC in reply to "RE: Not surprising"
Karrick Member since:
2006-01-12

I'm running OpenSolaris as my primary desktop.


What hardware are you using?!

I've been trying to find a standard machine to do this with for a long time - a computer where I don't have to spend extra hours loading a driver from a CD-ROM burned on a different computer just to get the wired network interface working. That is utter nonsense. Do you have a recommendation? I love OpenSolaris, except for that pain.

Reply Score: 1

RE[3]: Not surprising
by Kebabbert on Mon 26th Oct 2009 10:46 UTC in reply to "RE[2]: Not surprising"
Kebabbert Member since:
2007-07-27

I am also running OpenSolaris as my primary desktop. It works fine. I just use an Intel 9450 CPU, an ATI 4850 (no 3D driver, only 2D), and ordinary SATA discs. Here is a hardware compatibility list, so you can see components that work with OpenSolaris:
http://www.sun.com/bigadmin/hcl/data/os/

Reply Score: 2

RE: Not surprising
by Laurence on Sun 25th Oct 2009 17:09 UTC in reply to "Not surprising"
Laurence Member since:
2007-03-26

While ZFS is great, its advantages are targeted mainly at servers; from the user's POV it's just POSIX + snapshots/volume management. It doesn't bring new things to the desktop (with Time Machine, Apple doesn't even need snapshots). Apple is the kind of company that could want to go beyond POSIX and bring new ideas to the desktop...

Actually, ZFS would be ideal for media professionals who use OS X.

Think about musicians or sound engineers who deal with files that are hundreds of megabytes in size. Rather than having dozens of copies of the same file taking up gigabytes of disk space, ZFS would just store the differences.
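The "just store the differences" behaviour comes from copy-on-write block sharing. A toy sketch (illustrative only, nothing like ZFS's real on-disk format) keyed on block hashes:

```python
# Toy content-addressed block store: identical blocks are stored once,
# so N byte-identical copies of a big sample cost roughly one copy.
import hashlib

BLOCK = 4096
store = {}  # hash -> block contents, shared by every "file"

def write_file(data: bytes):
    """Store a file as a list of references into the shared block store."""
    refs = []
    for i in range(0, len(data), BLOCK):
        chunk = data[i:i + BLOCK]
        key = hashlib.sha256(chunk).digest()
        store.setdefault(key, chunk)  # identical blocks stored only once
        refs.append(key)
    return refs  # a "file" is just a list of block references

# A 1 MiB "audio sample" made of 256 distinct blocks...
sample = b"".join(bytes([i]) * BLOCK for i in range(256))
write_file(sample)   # the original
write_file(sample)   # a byte-identical working copy
print(len(store))    # -> 256 blocks stored, not 512
```

Editing a few blocks of the copy would then add only those new blocks to the store, which is the saving the comment is describing.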

Plus, support for software RAID would make recording ultra-high-quality audio a breeze, where currently latency is often an issue.

ZFS also has native support for compression (which is ideal when you're a media professional frequently handling data that can't be lossily compressed).

ZFS also doesn't require defragging or scandisk/fsck'ing - which is in line with Apple's whole philosophy (as in "it just works").

And let's not forget the improvements to Time Capsule (as already mentioned).


ZFS could have been an awesome addition to OS X and a valued asset for media professionals who regularly work with high-resolution samples.

Reply Score: 3

HFS was designed for Floppy Disks
by jrash on Fri 23rd Oct 2009 22:17 UTC
jrash
Member since:
2008-10-28

HFS+ makes filesystem geeks like myself have nightmares; the way it stores its data and resource forks is ridiculous, and it makes FAT32 look like XFS. Dominic Giampaolo (author of BFS and the excellent book "Practical File System Design with the Be File System") works for Apple now. Why doesn't Steve pick up the phone and be like, "Hey, remember that really neat filesystem you whipped up in 9 months for Be back in the day? Here are 10 more engineers. I'll be in touch."

Reply Score: 8

redshift Member since:
2006-05-06

I thought they hired Giampaolo in 2002? If he is still there, he probably has something impressive cooking in the skunkworks.

Edited 2009-10-23 22:36 UTC

Reply Score: 2

DittoBox Member since:
2005-07-08

8 years is a hell of a development cycle for an FS.

Reply Score: 2

jrash Member since:
2008-10-28

I think he added journaling to HFS+, then worked on Spotlight after that.

Reply Score: 1

Tuishimi Member since:
2005-07-06

Yes, that is what I heard.

Reply Score: 2

chrish Member since:
2005-07-14

Pretty sure resource forks are deprecated in Mac OS X (and have been for a couple of versions).

dbg has been working on HFS+ for quite a while; where do you think the journalling and FSEvents stuff came from? These are direct descendants of BFS features.

Reply Score: 1

Wise timing, Apple. I see what you did there
by kragil on Fri 23rd Oct 2009 22:31 UTC
kragil
Member since:
2006-01-04

used the Win7 buzz to kill promising tech without too many people taking notice

Reply Score: 5

haus Member since:
2009-08-18

Ya, because Linus is the be-all and end-all of what is good and what is not.

Reply Score: 2

Tuishimi Member since:
2005-07-06

Is it me, or has Linus not changed much in 25 years (appearance-wise)? It's kind of creepy. I think I will dress up as Linus for Halloween.

Reply Score: 8

thewolf Member since:
2007-12-27

It's a wig! ;)

Reply Score: 1

kaiwai Member since:
2005-07-06



Neither of them mentioned HFS+. When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors - then I'll give the slightest toss what people like you think of Apple or Microsoft.

Reply Score: 1

Bill Shooter of Bul Member since:
2006-07-14

WTF? What are big name vendors? Why is their software superior to open source software? Is IBM a small name? What about Sun? What the heck does that have to do with file systems?

If you want to know what a good file system is, you ask people who care about file systems. Microsoft has no choice: NTFS or FAT32 is a no-brainer. Apple: HFS+ or UFS, a no-brainer. Linux: ext4 vs. reiserfs vs. XFS vs. btrfs. That's competition that requires people to evaluate file systems based on independent benchmarks, as opposed to blind OS fanaticism.

Reply Score: 1

dragSidious Member since:
2009-04-17

Out of sight out of mind.

The Linux install base probably outnumbers OS X 100 to one. People who think that Linux is small-time compared to Apple really have no clue at all what they are talking about.

Reply Score: 0

Bill Shooter of Bul Member since:
2006-07-14

True, absolutely true. It's used much more in high-performance servers, where file system performance and resiliency are more important than they are on desktops.

Reply Score: 2

Doc Pain Member since:
2006-10-08

If you want to know what a good file System is, you ask people who care about file systems.


Some years ago, I read in a German forum that someone "does not like the Linux file system because the pictures are too big" - obviously referring to some icons in some file manager. :-)

This shows one thing: users aren't interested in file systems per se; in most cases they don't even know what a file system is, or they confuse the term with something else. Users are just interested in what benefits a particular file system gives them, and those who "give them" the file systems (along with the operating systems they sell) should promote the advantages of the file systems they use according to what they mean to their specific target audience. Apple's Mac OS X is primarily targeted at the home market and the professional workstation market, not at server farms or heavy virtualization sites.

Microsoft has no choice NTFS or FAT32 is a no brainier.


Legacy.

Apple HFS+ or UFS no brainier.


I was always fine with UFS2, but do honestly prefer ZFS as its successor in BSD and Solaris. But for Mac OS X, it's highly debatable whether ZFS or UFS2 is the best choice, remembering that the target audience's interests primarily indicate what to develop (and to sell), given the specific characteristics of the hardware used, as well as the settings in which it is used.

Linux ext4 vs reiserfs vs XFS vs btrfs. That's competition that requires people to evaluate file systems based on independent benchmarks as opposed to blind os fanaticism.


Blind OS fanaticism seems to be a result of excellently working marketing. This applies to the same folks who demand MICROS~1 "Office" on every platform and who cry for "Photoshop" on Linux. Often, the same folks have pirated copies of everything they use. :-)

Reply Score: 6

SterlingNorth Member since:
2006-02-21



Neither of them mentioned HFS+. When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors - then I'll give the slightest toss what people like you think of Apple or Microsoft.
"

Um, yes, the first one does mention HFS+.

To quote:
"I don't think they're equally flawed - I think Leopard is a much better system," he said. "(But) OS X in some ways is actually worse than Windows to program for. Their file system is complete and utter crap, which is scary."

Reply Score: 2

twitterfire Member since:
2008-09-11

. When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors - then I'll give the slightest toss what people like you think of Apple or Microsoft.


Maybe it's hard to believe for an Apple fan[atic], but Linux already has "better hardware and software selection from big name vendors" than Apple and their OS.

Reply Score: 2

FellowConspirator Member since:
2007-12-13

... or Microsoft and their OS. Linux has hardware support in the bag. As far as being a popular platform for desktop productivity apps, the point is a fair one: the big players in commercial desktop applications don't seem to pay any attention to Linux.

Reply Score: 1

darknexus Member since:
2008-07-15

Linux has hardware support in the bag, eh? Funny you should say that, especially since there are loads of devices that work in Windows or even OS X but have poor drivers or no drivers in Linux. If your hardware follows a standard or has a good open source driver, then Linux has your hardware in the bag; otherwise you're more than likely SOL, because as a driver development platform Linux is simply awful (kernel versions and such nonsense).

Reply Score: 4

jgagnon Member since:
2008-06-24

The big players in commercial desktop applications don't seem to pay any attention to Linux.


If Linux really wants (and I hope they do) the "big players" to create applications for their OS, then they had better get to work on standardizing an API to work with in as many areas as they can. It is pretty damn hard to "write an application for Linux" and have it work on most distributions without recompilation or installing every dependent library along with it. It is much easier for an application developer to target a given "product" like Ubuntu 9.04 than to target Linux as a platform.

The people that drive Linux standards need to consider the overhead of this "many platforms within a platform" problem and find a solution before they can ever expect the masses to come develop commercially viable software for Linux (outside vertical markets, that is). It is the fluid nature of Linux that is both appealing to casual developers and the bane of mass commerce.

And for the record, I am very much a fan of Linux and use it daily. But being a technical person who does development daily, I will say that as a business desktop programming platform Linux leaves a lot to be desired. My opinion, of course, but I've seen nothing in recent history to change it.

Reply Score: 1

sbergman27 Member since:
2005-07-24

When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors - then I'll...

...have by then hopped onto another bandwagon, and will be praising yet *another* OS while trashing MacOSX and the others?

? -> Linux -> OpenSolaris -> Apple -> ?

is how my Kaiwai radar screen looks right now. The edges are a bit blurry.

Edited 2009-10-25 03:03 UTC

Reply Score: 5

BallmerKnowsBest Member since:
2008-06-02

When Linux has Creative Suite, Microsoft Office, better hardware and software selection from big name vendors - then I'll give the slightest toss what people like you think of Apple or Microsoft.


Oh no - Maclots vs. Freetards? I can't decide which side I want to see lose more, since they're both so insufferably obnoxious in their own special ways.

Reply Score: 0

chrish Member since:
2005-07-14

Aren't most Linux systems still using ext2 for their filesystems? And he's complaining about HFS+ having legacy features? At least it's journalled. :-P

I still can't find a Linux distro with btrfs as the default (or even as an option for non-/boot filesystems). Which is a shame; I'd like to check it out and compare it to ZFS on my FreeBSD server.

OS X is the best desktop (and laptop) UNIX I've ever used.

Reply Score: 1

boldingd Member since:
2009-02-19

Aren't most Linux systems still using ext2 for their filesystems?


No, actually: most of the distros I've tried have used ext3 as the default for some time, which is journaled. Some are even using ext4 now.

I still can't find a Linux distro with btrfs as the default (or even an option for non-/boot filesystems). Which is a shame, I'd like to check it out and compare to ZFS on my FreeBSD server.


Just because the installer won't create a btrfs file system doesn't mean it's unavailable. It's at least possible that, if the kernel on the install disk has btrfs support built in, you could create a btrfs file system before you start the installation (say, by using a convenient live disk like GParted or SystemRescueCD) and just select that partition as the installation target from within your distro's installer. Slightly technical? Sure, but not impossible, and well within the capabilities of... well, anyone who has any business testing and speed-racing btrfs. ;)

Reply Score: 2

sbergman27 Member since:
2005-07-24

used the Win7 buzz to kill promising tech without too many people taking notice

I'm not sure why they would want to kill promising tech unless it just didn't make sense for them to pursue it.

ZFS is variously thought to be... manna from Heaven, the spawn of Satan, a universal panacea, a rampant layering violation, a spiteful licensing attack upon Linux, or a sure-fire way to enlarge various body parts.

Maybe ZFS was, in the end, not deemed to be a good fit for MacOSX and Apple.

I don't see anything sinister, here, that would have to be done under cover of darkness.

Edited 2009-10-25 03:39 UTC

Reply Score: 3

They are free to
by mmu_man on Fri 23rd Oct 2009 23:16 UTC
mmu_man
Member since:
2006-09-30

use OpenBFS, it's MIT-licenced.

Besides, it wouldn't be the first time they ripped off ideas from BeOS anyway... so why not take the real thing instead of some imitation? ;-)

Reply Score: 4

IMHO Oracle sounds scary to Apple execs...
by sergio on Fri 23rd Oct 2009 23:19 UTC
sergio
Member since:
2005-07-06

Oracle isn't like Sun. They're aggressive and deeply closed-source... sooner or later they could become a threat to you.

Reply Score: 4

Comment by kaiwai
by kaiwai on Fri 23rd Oct 2009 23:57 UTC
kaiwai
Member since:
2005-07-06

The fundamental problem I think people are avoiding addressing is ZFS's major memory hogging; this might be OK if you have a massive multi-core monster with a minimum of 2GB of memory to get decent performance. That isn't acceptable for a file system that is supposed to 'rule them all' and scale from an embedded iPhone/iPod Touch device all the way up to a Mac Pro with a high-end configuration.

If they are going to create a new file system, then they'll need to be able to convert existing users' file systems to the new one and then defragment afterwards, so as to avoid any performance loss from the conversion process (FAT16 -> FAT32 -> NTFS yielded massive fragmentation). I assume, with the moves they've made in 10.6, that it is now possible to swap many parts of the system out and replace them incrementally, now that there is, for example, a standard way of interacting with the file system. If I remember correctly, there was a PDF up on the Apple website which talked about replacing many ways of achieving something with a single system call, thus making a programmer's life easier.

Regarding the file system, I wonder if they're going to use an existing one. Although it might make sense to have an in-house one, there are also some great ones out there, such as HAMMER from DragonFly BSD; or maybe Apple could buy VxFS from Symantec (who bought Veritas)? Hopefully they'll deliver something that is reliable. With that being said, HFS+ isn't as bad as some claim. In the 8 years of using a Mac, I've never experienced data loss because of the file system going pear-shaped.

Reply Score: 6

RE: Comment by kaiwai
by ebasconp on Sat 24th Oct 2009 00:14 UTC in reply to "Comment by kaiwai"
ebasconp Member since:
2006-05-09

Actually, I was thinking the same about HAMMER: why doesn't Apple adopt it as its FS? It is a nice filesystem, it has a very sexy license, and they would eventually help HAMMER development and DragonFly because of it.

Reply Score: 3

RE: Comment by kaiwai
by jwwf on Sat 24th Oct 2009 01:24 UTC in reply to "Comment by kaiwai"
jwwf Member since:
2006-01-19

The fundamental problem I think people are avoiding to address is ZFS major memory hogging; this might be ok if you have a massive multi-core monster with minimum 2GB memory to get decent performance. That isn't acceptable for a file system that is supposed to 'rule them all' that can scale from an embedded iPhone/iPod Touch device all the way up to a Mac Pro with a high end configuration.


I'd be more willing to agree if it weren't for personal experience with my Core 2 MacBook hitting swap every day with 1GB and Leopard. It wasn't really usable until I upgraded (to 4GB).

Anyway, I had hoped for ZFS on the Mac, but I'm just as annoyed that they are removing UFS. HFS+ doesn't even support sparse files.

Reply Score: 4

RE[2]: Comment by kaiwai
by kaiwai on Sat 24th Oct 2009 01:37 UTC in reply to "RE: Comment by kaiwai"
kaiwai Member since:
2005-07-06

I'd be more willing to agree if it weren't for personal experience with my Core 2 MacBook hitting swap every day with 1GB and Leopard. It wasn't really usable until I upgraded (to 4GB).


And what you experienced has nothing to do with the HFS+ file system at all. We're talking about ZFS and the memory it uses to improve performance. It is a known side effect of the file system's design: ZFS was never designed to be used in an environment where memory is at a premium.

Anyway, I had hoped for ZFS on the Mac, but I'm just as annoyed that they are removing UFS. HFS+ doesn't even support sparse files.


UFS was a walking disaster area when one considers the litany of issues people had with it. Apple is eventually going to replace it with something that will scale from embedded devices to servers, so that they don't have duplication and thus unneeded extra cost. HAMMER will do what you need. Are there features missing? Of course, but HAMMER is in continuous development, with the shortcomings being addressed.

Unlike ZFS, HAMMER provides everything plus a lower memory footprint. I'd say that is a pretty good alternative to ZFS.

Reply Score: 4

RE[3]: Comment by kaiwai
by jwwf on Sat 24th Oct 2009 02:31 UTC in reply to "RE[2]: Comment by kaiwai"
jwwf Member since:
2006-01-19

"I'd be more willing to agree if it wasn't for personal experience with my core 2 macbook hitting swap every day with 1gb and leopard. It wasn't really usable until I upgraded (to 4gb).


And what you experienced has nothing to do with the HFS+ file system at all. We're talking about ZFS and the memory used to improve performance. It is a known side effect of the file system design - ZFS was never designed to be used in an environment where memory is at a premium.
"

Of course not. Why would you think that I connect OS X memory usage with HFS+? My point is that Leopard as deployed on desktop-class platforms is _already_ a memory hog in my experience. Thus it's hard for me to speculate on ZFS not making the cut because it _might_ be a memory hog on OS X.

Why should a next-gen Apple FS be required to span all platforms? This is as likely as not to lead to undesirable compromises on both ends, even if it is cheaper. I say horses for courses. (Do we buy Apple products because they are cheap or cheap to design? Does Apple pass cheap design costs on to us?)

"Anyway, I had hoped for ZFS on the Mac, but I'm just as annoyed that they are removing UFS. HFS+ doesn't even support sparse files.


UFS was a walking disaster area when one considers the litany of issues people had with it. Apple is eventually going to replace it with something that will scale from embedded to servers so that they don't have to have duplication and thus unneeded extra cost. HAMMER will do what you need - are there features missing? of course but HAMMER is in continuous development with the short comings being addressed.

Unlike ZFS, HAMMER provides everything plus a lower memory foot print - I'd say that is a pretty good alternative to ZFS.
"

Hammer might be great on OS X except for the fact that it does not currently exist on OS X. I wouldn't bet on that changing either, but would be pleased to be proven wrong.

Reply Score: 2

RE[4]: Comment by kaiwai
by dragSidious on Sat 24th Oct 2009 06:55 UTC in reply to "RE[3]: Comment by kaiwai"
dragSidious Member since:
2009-04-17

"
And what you experienced has nothing to do with the HFS+ file system at all. We're talking about ZFS and the memory used to improve performance. It is a known side effect of the file system design - ZFS was never designed to be used in an environment where memory is at a premium.


Of course not. Why would you think that I connect OSX memory usage with HFS+ ? My point is that leopard as deployed on desktop class platforms is _already_ a memory hog in my experience. Thus it's hard for me to speculate on ZFS not making the cut because it _might_ be a memory hog on OSX.
"

It probably did not make the cut because it is not only a memory hog, but also slow and missing features necessary for compatibility with OS X applications.


Why should a next gen apple FS be required to span all platforms? This is as likely as not to lead to undesirable compromises on both ends, even if it is cheaper. I say horses for courses. (do we buy apple products because they are cheap or cheap to design? does apple pass cheap design costs on to us?).


Apple is good at making slick user interfaces and is good at marketing their hardware.

The original iPhone, for example, had slow and overpriced hardware compared to its contemporaries. In all respects it was a mediocre product, with the major exception that Apple was able to make the interface attractive and easy to use, and to market it intelligently.

Apple uses HFS+ because the file system is really irrelevant to the sort of thing people buy OS X for. It does not really matter, in terms of desktop user experience, that behind the pretty face lies an OS that depends on a file system that is fragile, overly complex, and slower than what is offered by Windows or Linux. So what if your applications take 2 seconds longer to start up and you have a 15% higher chance of data loss during an improper shutdown?

Apple designed their UI so that you can't really tell the difference either way.


"Anyway, I had hoped for ZFS on the Mac, but I'm just as annoyed that they are removing UFS. HFS+ doesn't even support sparse files.


UFS was a walking disaster area when one considers the litany of issues people had with it. Apple is eventually going to replace it with something that will scale from embedded to servers so that they don't have to have duplication and thus unneeded extra cost. HAMMER will do what you need - are there features missing? of course but HAMMER is in continuous development with the short comings being addressed.
"

If people had actually paid attention to Apple's documentation, they would not have been stupid enough to try to run OS X on UFS.

UFS was provided as a POSIX-compatible file system for things like compliance testing and running database products. HFS+ is NOT POSIX-compatible; UFS was never an alternative to HFS+.

Edited 2009-10-24 06:57 UTC

Reply Score: 2

RE[3]: Comment by kaiwai
by Mark Williamson on Sat 24th Oct 2009 13:24 UTC in reply to "RE[2]: Comment by kaiwai"
Mark Williamson Member since:
2005-07-06


Unlike ZFS, HAMMER provides everything plus a lower memory foot print - I'd say that is a pretty good alternative to ZFS.


I'm pretty sure that HAMMER doesn't give you writeable snapshots, which ZFS does - that feature could be very useful for some purposes. For people who just want free or cheap read-only snapshots, HAMMER should satisfy, feature-wise.

Interestingly, Matt Dillon (the DragonFly BSD lead developer) was considering ZFS for a while, but decided it didn't solve the problems he was interested in (he wants to do single-system-image clustering, i.e. tying a cluster together into a single logical Unix system; HAMMER is designed to accommodate that goal in some way).

Reply Score: 2

RE[4]: Comment by kaiwai
by Marquis on Sat 24th Oct 2009 19:47 UTC in reply to "RE[3]: Comment by kaiwai"
Marquis Member since:
2007-01-22

I wonder if apple is looking at using WAPBL http://www.bsdcan.org/2009/schedule/events/138.en.html

While this may seem like a bad idea, think about the fact that they already have UFS support. Getting UFS2 support and WAPBL would make a good fit: they are both BSD-licensed and incredibly stable.

Reply Score: 2

RE: Comment by kaiwai
by Burana on Sat 24th Oct 2009 08:00 UTC in reply to "Comment by kaiwai"
Burana Member since:
2009-01-26

The fundamental problem I think people are avoiding to address is ZFS major memory hogging; this might be ok if you have a massive multi-core monster with minimum 2GB memory to get decent performance. That isn't acceptable for a file system that is supposed to 'rule them all' that can scale from an embedded iPhone/iPod Touch device all the way up to a Mac Pro with a high end configuration.


First, ZFS runs just fine with 1 GB on my EeePC, which is no multi-core monster. Second, have you checked recent per-gigabyte RAM prices?

Reply Score: 2

HFS+ crashes
by s_groening on Sat 24th Oct 2009 13:06 UTC in reply to "Comment by kaiwai"
s_groening Member since:
2005-12-13

... I guess the Gods let me have those instead ...

I've lost 1 TB + 1.8 TB due to file system poo-poos that were caused by HFS+ and nothing else ...

Anyways, NEVER run HFS+ on large volumes with massive amounts of data without keeping your copy of DiskWarrior in your back pocket - just in case ...

Reply Score: 3

RE: HFS+ crashes
by REM2000 on Sat 24th Oct 2009 16:48 UTC in reply to "HFS+ crashes"
REM2000 Member since:
2006-07-25

I have to agree; my MacBook suffered an HFS+ file system fault which fsck and other checks could not fix, even when booting off the disc. I had to reformat, reinstall, and then restore to get it running OK. It corrupted a lot of data for no reason: the laptop hadn't been switched off without a shutdown, it just developed a fault.

ZFS is one of the best FSs. However, I also highly rate NTFS, as I've used it for some heavy-duty file operations with both small and large files, and even after quite a few crashes NTFS keeps going. The only thing I would say about NTFS is that it sometimes likes to fragment itself quite badly.

Reply Score: 3

RE[2]: HFS+ crashes
by darknexus on Sun 25th Oct 2009 02:08 UTC in reply to "RE: HFS+ crashes"
darknexus Member since:
2008-07-15

I also highly rate NTFS, as I've used it with some heavy-duty file operations with both small files and large files, and even after quite a few crashes NTFS keeps going. The only thing I would say about NTFS is that sometimes it likes to fragment itself quite badly.


Another thing about NTFS is that it rarely goes belly-up, but when it does, it does so in a rather spectacular way. Ever had your cluster bitmap become corrupted, i.e. the section of the MFT that tells the FS which space is used and which is free? When that information gets out of sync you essentially face disappearing files, because the FS writes over files that have been marked as free space and updates the MFT accordingly. The MFT and cluster bitmap themselves can be fixed rather easily, but there's no real way to undo the damage already done except to restore from a backup. Of course, if you run a mission-critical server, no matter what FS, and don't have a working backup, then you're asking for whatever misfortune you get.

Reply Score: 2

RE: HFS+ crashes
by StephenBeDoper on Sat 24th Oct 2009 20:28 UTC in reply to "HFS+ crashes"
StephenBeDoper Member since:
2005-07-06

... I guess the Gods let me have those instead ...

I've lost 1 TB + 1.8 TB due to file system poo-poos that were caused by HFS+ and nothing else ...


I've had similar experiences - OS X and HFS+ account for a disproportionately high number of the tech support calls I get due to data loss. Most of the time, it's a thumb drive or enclosure that was unplugged/powered down without being un-mounted first - a bad idea with any OS, but I've only ever seen it result in actual data loss with OS X & HFS+.

Perversely enough, I've found that the quickest solution is usually to connect the drive to a Windows machine with the "MacDrive" software installed (commercial app that lets Windows read from HFS+ volumes), then plugging it back into the Mac. I'm not sure why, but it's worked for me 9 times out of 10.

Reply Score: 2

RE: HFS+ crashes
by Matt Giacomini on Mon 26th Oct 2009 16:39 UTC in reply to "HFS+ crashes"
Matt Giacomini Member since:
2005-07-06

Count me on this list too. I have lost two file systems due to corruption in the last 2 years.

In both cases DiskWarrior was able to recover them. Which leaves me wondering which aspect of HFS+ I should be more pissed off about.

1) That HFS+ is a corruption-monkey filesystem.

2) That Apple's utilities are so crappy that they can't even repair their own file system.

I have not had a Windows, Solaris, or Linux file system corrupt on me since 2002 - and of course, back then, the utilities provided by the OS were able to fix it.

Reply Score: 2

RE: Comment by kaiwai
by Laurence on Sun 25th Oct 2009 17:27 UTC in reply to "Comment by kaiwai"
Laurence Member since:
2007-03-26

The fundamental problem I think people are avoiding to address is ZFS major memory hogging; this might be ok if you have a massive multi-core monster with minimum 2GB memory to get decent performance.

Which all Macs have had for a good while now.

That isn't acceptable for a file system that is supposed to 'rule them all' that can scale from an embedded iPhone/iPod Touch device all the way up to a Mac Pro with a high end configuration.

So don't port it to embedded devices.
That doesn't mean said devices can't still interact with OS X, though.

Let's not forget that iPods (or at least my missus's iPod) still run on FAT32 - so it's not even as if Apple's embedded devices currently run the same FS as their desktop machines.


If they are going to create a new file system then they'll need to be able to convert the existing users file system to the new one then defragment it after as to avoid any performance loss because of the conversion process (FAT16 -> FAT32 -> NTFS yielded massive fragmentation).

No, they wouldn't.
They only have to recommend the new FS for clean installs.

Reply Score: 2

Ext4
by 3rdalbum on Sat 24th Oct 2009 10:15 UTC
3rdalbum
Member since:
2008-05-26

Apple should really implement ext4 using their own code; it should be fairly quick to do, and it would be miles better than HFS+.

I need two hands to count the number of times HFS+ has gone pear-shaped on me and I've lost data. That's not impressive at all. I shudder to think what Leopard users must go through, considering that their operating system deletes files before putting them back on disk when you're just trying to save them.

And then when Btrfs comes out, Apple can nicely re-code that too.

But honestly, for god's sake, get rid of HFS+, and get rid of it SOON.

Reply Score: 3

RE: Ext4
by darknexus on Sat 24th Oct 2009 11:33 UTC in reply to "Ext4"
darknexus Member since:
2008-07-15

Maybe I'm just the odd one out, but I've had issues with data loss on ext4 in the event of a system crash or a hibernation gone bad, and none, I repeat none, with HFS+. That being said, I agree fully that HFS+ is antiquated in design and should be replaced with a better filesystem. But please, please, not ext4!

Reply Score: 3

RE[2]: Ext4
by mmu_man on Sat 24th Oct 2009 12:21 UTC in reply to "RE: Ext4"
mmu_man Member since:
2006-09-30

Btw, AFAIK ext4 still has those ugly limits on xattr size (all of an inode's extended attributes have to fit in about 4 KB, or something like that)...

Reply Score: 2

RE[2]: Ext4
by sbergman27 on Sun 25th Oct 2009 02:21 UTC in reply to "RE: Ext4"
sbergman27 Member since:
2005-07-24

Mount with the "nodelalloc" option. Ted Ts'o is still singing "La! La! La!" with his fingers stuck in his ears, admiring his great benchmark and fragmentation-avoidance numbers, and thinking that the patches he put into 2.6.30 really and truly mitigated the important problems inherent in delayed allocation in a meaningful way.

I believed him, sorta. Until it trashed several of one of my customers' C/ISAM files after a crash. And then did it again a week later. (The crash was unrelated to the FS and will soon be fixed.)

Currently, I have "nodelalloc" set in fstab and "data=journal" set as the default in the superblock. I don't think I really need data=journal, but the customer says they see no noticeable performance penalty, so I'm leaving it.

However, I think "nodelalloc" is probably the only change I really needed to make. We ran ext3 for years at the defaults and *never* had these issues, even under adverse conditions.
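For anyone wanting to replicate that setup, the fstab side looks roughly like this (the device name and mount point are placeholders; `tune2fs -o journal_data` is the usual way to bake data=journal into the superblock as a default mount option, which I believe matches what's described above):

```shell
# /etc/fstab - ext4 with delayed allocation disabled
# (/dev/sda2 and /data are placeholders for your own device and mount point)
/dev/sda2   /data   ext4   nodelalloc   0   2

# To set data=journal as a default mount option in the superblock:
# tune2fs -o journal_data /dev/sda2
```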

Edited 2009-10-25 02:23 UTC

Reply Score: 2

RE[3]: Ext4
by boldingd on Mon 26th Oct 2009 19:59 UTC in reply to "RE[2]: Ext4"
boldingd Member since:
2009-02-19

Heh, thanks for that; I'll bear that in mind the next time I do an OS install.

Reply Score: 2

RE[3]: Ext4
by segedunum on Mon 26th Oct 2009 22:53 UTC in reply to "RE[2]: Ext4"
segedunum Member since:
2005-07-06

I've certainly seen this sort of thing happen - to some test systems, thankfully. I just don't think ext4 is necessary in any way. ext3 was the end of the line for the ext filesystem family, and I don't think stretching the codebase out any further to do things it wasn't meant to do has done anyone any good.

Reply Score: 2

RE[4]: Ext4
by sbergman27 on Mon 26th Oct 2009 22:58 UTC in reply to "RE[3]: Ext4"
sbergman27 Member since:
2005-07-24

I've certainly seen this sort of thing happen - to some test systems thankfully. I just don't think ext4 is necessary in any way. ext3 was the end of the line for the ext filesystem...

Hey! Extents are nice. And safe. 48-bit addressing is nice. And safe. Delayed allocation was reckless... and not safe. Turn it off, and continue enjoying the stability of extX. And shame Ted Ts'o when appropriate. He'll notice, eventually. He's just living in his own little File-System Superstar world right now.

Edited 2009-10-26 23:01 UTC

Reply Score: 2

RE[5]: Ext4
by segedunum on Tue 27th Oct 2009 00:36 UTC in reply to "RE[4]: Ext4"
segedunum Member since:
2005-07-06

Hey! Extents are nice. And safe. 48 bit is nice. And safe.

Yes, they are nice additions, but I would have preferred the addressing of concerns like dynamic inode allocation, which would have been a nice improvement over ext2/3. Running out of inodes is a very common problem. I certainly understand the backwards-compatibility reasons for leaving it out, but it's a very common and real problem nonetheless.

While I respect the need for backwards compatibility with filesystems and why new filesystems like ZFS often have a tough time, I can't help but feel we're at the end of the line.

Reply Score: 2

RE[6]: Ext4
by sbergman27 on Tue 27th Oct 2009 01:59 UTC in reply to "RE[5]: Ext4"
sbergman27 Member since:
2005-07-24

but I would have preferred the addressing of concerns like dynamic inode allocation which would have been a nice improvement over ext2/3. It's a very common problem, running out of inodes.

Really? I've never had such a problem. Inodes are generally over-allocated by a very wide margin with the defaults, even on the multi-user systems I administer. Static allocation has definite advantages when it comes to fsck: when you know where things *should* be, you know better what to do when they don't seem to be where they are supposed to be.
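Easy enough to check on a live system: `df -i` shows how generously inodes were allocated at mkfs time (the output layout below assumes GNU coreutils df):

```shell
# Per-filesystem inode totals, usage and free counts; the Inodes column
# is the count statically reserved when the filesystem was created.
df -i /
```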

Dynamic inode allocation might seem a good idea at the time... in the same sort of way that killing one's wife with a judo hold might. But I don't think it really pays off in the end.

Edited 2009-10-27 02:05 UTC

Reply Score: 2

RE[2]: Ext4
by segedunum on Mon 26th Oct 2009 22:51 UTC in reply to "RE: Ext4"
segedunum Member since:
2005-07-06

Maybe I'm just the odd one out, but I've had issues with data loss on ext4 in the event of a system crash or hibernation gone bad and none...

I will never use ext4. Use ext3 as a general-purpose filesystem, or XFS if you really need specific performance, but don't use ext4 - it just isn't necessary. I won't use a distro that won't let you change the default filesystem.

While I respect Ted Ts'o's work over the years, I'm afraid he's another developer who has got to the point where he is too wrapped up and proud to see the failings in his own code and that of those close to him. I don't care for his defensive tone over ALSA either.

Edited 2009-10-26 23:00 UTC

Reply Score: 2

My guess...
by thavith_osn on Sat 24th Oct 2009 23:10 UTC
thavith_osn
Member since:
2005-07-11

...is that there was a very good reason to drop ZFS. Actually announcing they were going to use it was a big step. It would be interesting to hear the full details...

I say this because writing an FS isn't an easy thing; it's crazy to write one when a good one already exists. If there was functionality they wanted, it would be easier to add it to ZFS than to start from scratch...

Apple uses Unix, OpenGL/PDF, OpenCL, OpenAL, CUPS, GCC, LLVM, WebKit, and the list goes on and on. Most of these have been improved by Apple and given back to the community. Apple also benefits greatly from the community; it's like having thousands of extra coders on the payroll without having to pay them.

From what I have seen, they don't reinvent the wheel anymore unless they see a need; there is no point. No company does (well, shouldn't). Usually the "need" may elude us or be closely attached to revenue, but it's still a "need".

So I would guess they are either looking at something else, or cooking their own. If they are cooking their own, you can bet it will be a lot better than HFS+. Better than ZFS? I seriously doubt that (but we can hope)...

Reply Score: 4

ZFS memory hog?
by Kebabbert on Sun 25th Oct 2009 12:57 UTC
Kebabbert
Member since:
2007-07-27

I don't think 1 GB of RAM is too much for an enterprise server file system; enterprise servers tend to have more than 64 MB of RAM, after all. As soon as some app needs RAM, ZFS will release it; until then, it will grab all the RAM it can get, which is a good thing: RAM should be used.

I think the rumour that ZFS needs several GB of RAM just to boot comes from the first FreeBSD port attempts. The ZFS port to FreeBSD used lots and lots of memory, but that was because of a bug. The FreeBSD developer explains:
http://queue.acm.org/detail.cfm?id=1317400

"The biggest problem, which was recently fixed, was the very high ZFS memory consumption. Users were often running out of KVA (kernel virtual address space) on 32-bit systems. The memory consumption is still high, but the problem we had was related to a vnode leak."

Reply Score: 2

RE: ZFS memory hog?
by fridder on Mon 26th Oct 2009 17:31 UTC in reply to "ZFS memory hog?"
fridder Member since:
2007-11-03

Also, keep in mind that the high memory usage is largely due to very aggressive caching on ZFS's part.

Reply Score: 1

RE[2]: ZFS memory hog?
by sbergman27 on Mon 26th Oct 2009 18:22 UTC in reply to "RE: ZFS memory hog?"
sbergman27 Member since:
2005-07-24

Also, keep in mind that the high memory usage is also due to very aggressive caching on ZFS's part

There is something about all of this that I have never understood. The linux page cache and buffer cache make aggressive use of memory for caching. To a great extent, application pages and disk data are treated the same. (Though not exactly the same. e.g. /proc/sys/vm/swappiness.) And yet when more memory is needed for applications, it is available in a flash. Thus no one ever speaks of Linux filesystems as having a memory *requirement*. What flusters me about ZFS is all this talk about it having a memory *requirement*. A filesystem should not have a memory *requirement*.

Is this an artifact of the strange way that ZFS was implemented? i.e. a result of it being a "rampant layering violation", as Andrew Morton once quipped? In Linux all that sort of caching, for all block devices, as well as applications, takes place in one unified layer. But in ZFS, I guess it allocates large amounts of memory and does its own management of it, independent of the rest of the kernel?
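For what it's worth, that matches my understanding: ZFS manages its ARC (adaptive replacement cache) itself rather than through the host's unified page cache, which is why it has an explicit, tunable footprint at all. On the ports that expose it, capping that cache looks like this (tunable names are the FreeBSD and Solaris ones; treat the exact values as placeholders):

```shell
# FreeBSD: cap the ZFS ARC at 512 MB via a loader tunable in /boot/loader.conf
vfs.zfs.arc_max="512M"

# (Open)Solaris: the equivalent setting in /etc/system, in bytes
set zfs:zfs_arc_max = 536870912
```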

Edited 2009-10-26 18:25 UTC

Reply Score: 2

RE[3]: ZFS memory hog?
by Kebabbert on Tue 27th Oct 2009 10:10 UTC in reply to "RE[2]: ZFS memory hog?"
Kebabbert Member since:
2007-07-27

ZFS releases all its memory as soon as an application needs RAM. The thing is, to achieve good performance you need at least 1 GB of RAM and a 64-bit CPU. If you have 512 MB of RAM or a 32-bit CPU, the performance will not be as good.

I used 1 GB of RAM and a 32-bit CPU and only got 20-30 MB/sec with 4 discs. With a 64-bit CPU, I get over 100 MB/sec. ZFS uses 128-bit data structures, so it likes 64-bit CPUs.

So ZFS does not _require_ much RAM or a 64-bit CPU, but your performance will not be good without them.
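A crude way to reproduce that kind of sequential-write figure is a simple dd run (point TARGET at a file on the filesystem you want to measure; conv=fsync makes dd flush before reporting, so the cache doesn't inflate the MB/sec number):

```shell
# Crude sequential-write benchmark: dd prints elapsed time and throughput on completion.
TARGET=/tmp/throughput-test   # place this on the filesystem under test
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
rm -f "$TARGET"
```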

Reply Score: 2

Comment by Oliver
by Oliver on Sun 25th Oct 2009 13:28 UTC
Oliver
Member since:
2006-07-15

http://mail.opensolaris.org/pipermail/zfs-discuss/2009-October/0331...

> Apple can currently just take the ZFS CDDL code and incorporate it
> (like they did with DTrace), but it may be that they wanted a "private
> license" from Sun (with appropriate technical support and
> indemnification), and the two entities couldn't come to mutually
> agreeable terms.

I cannot disclose details, but that is the essence of it.

Reply Score: 3

RE: Comment by Oliver
by segedunum on Mon 26th Oct 2009 12:31 UTC in reply to "Comment by Oliver"
segedunum Member since:
2005-07-06

I don't buy that at all. It's a reason given on a mailing list where people are frantically running around for an answer other than "Apple couldn't integrate ZFS into OS X properly and felt it was the wrong solution in the long run, one that might well create more work." While reading from and writing to ZFS on OS X has approached something like production quality, using it as your one true filesystem is something else. Performance issues need to be analysed and corrected (ZFS does a lot of things that have never been seen in widespread desktop filesystems), as does far deeper integration with the operating system. HFS(+) has been bludgeoned into doing that over many years.

Apple has integrated many software components under a variety of open source licenses and never had problems before. They might have wanted Sun to give them a special license or come to some kind of support agreement, but that really shouldn't have been any trouble at all for Sun. The relationship would have been extremely beneficial to both Sun and Apple considering the workload that could have been shared, especially considering Sun's takeover by Oracle and Apple's historically bare filesystem development resources.

Reply Score: 2

Linux Mag mentioned OSnews on this
by kragil on Mon 26th Oct 2009 10:34 UTC
kragil
Member since:
2006-01-04

Not big news, but nice to see that decent reporting is still alive in a few hidden places on the internet.

Reply Score: 2

segedunum Member since:
2005-07-06

That's a pretty well done blog article that covers the main probable points.

Reply Score: 2