I arrive home to find a spiffy package from ADC… Look, it’s Jaguar! I was racing with excitement to install this upgrade, but then I thought: what about my data? I wanted to partition my drive differently for Jaguar, so I did what I would do on any of my systems. I tarred my home directory, double-checking the file contents to make sure I got all my hidden files. I then uploaded the tar to my server via scp and checked the md5sum of the file. Everything looked good; I was ready to go!
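For the curious, the backup amounted to something like this (the user name and paths here are illustrative, not my exact ones):

tar -cvf /tmp/home.tar -C /Users/phil .    # archive the whole home dir, dotfiles included
tar -tf /tmp/home.tar | grep '/\.'         # eyeball the listing for the hidden files
md5 /tmp/home.tar                          # BSD md5 on the Mac; md5sum on the Linux server
scp /tmp/home.tar phil@server:backups/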
The Jaguar install false-started once, crashing in the middle. I started it over and it went on smooth as butter. Jaguar was up and running in no time, and I was impressed by the responsiveness of the OS. While it wasn’t BeOS, it was pretty darn quick, especially when you consider the amount of eye candy there is and all the alpha calculations being done. Before delving deeper, though, I wanted to get all my data back. So I snagged the tar file via scp and checked the md5sum; all was good. I untarred the file into a temporary directory and restored the files I needed right away.
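The restore was just the mirror image (again, the names are illustrative):

scp phil@server:backups/home.tar /tmp/
md5 /tmp/home.tar                          # compare against the sum noted before the install
mkdir /tmp/restore
tar -xvf /tmp/home.tar -C /tmp/restore     # unpack to a temporary directory, then pick out what's needed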
My first problems came with Chimera. I restored the ~/Library/Application Support/Chimera folder from my tar archive, and every time I started Chimera it crashed before loading… This wasn’t too troublesome, though, since Chimera is still under heavy development. I simply created a new profile, copied my bookmark file over, and I was ready to go again. The next problem I ran into was with my software archive. I like to keep copies of my software on my hard disk because I hate looking for CD-ROMs. Well, when I go to install Adobe Photoshop 7 (downloaded version), the SMI image won’t mount. This is disturbing, but I had this at home on CD too, so I didn’t think about it too much and went on to install Quicken. Again, the Quicken installer wouldn’t start up. I then discovered that pretty much my entire Mac OS software archive was fungled, though my Linux binaries were fine when I copied them over to my Linux box.
At this point I was frustrated and confused. I spent a lot of time playing with Jaguar but was still somewhat disturbed by my software archive being pretty much toast. I got home, installed Quicken 2002 off of CD-ROM, and went to open up my Quicken data file, and Quicken did not recognize it. That got me thinking about my creator IDs and OS 9 resources. I set the file type and creator code to the appropriate settings via the Developer Tools, and Quicken now recognized the file. I still had problems, though. When Quicken attempted to open the file it said the resource file had changed and it would rebuild it. Well, the rebuild failed and I was without my Quicken data. This got me really worried. I then remembered something about legacy Mac OS systems… the resource fork! The tar command didn’t archive the resource fork. So I used the Developer Tools again, extracted the resource fork from a new Quicken file, copied it into the resource fork of the restored Quicken file, and voilà! My Quicken data was back. This brings me to an interesting question: is the behavior of tar a bug or a feature? I searched through Apple’s documentation, and all I could find on the topic was this:
Note: You can use the BSD cp or mv commands on an application package (or any other bundle) without ill effect. However, if you use those commands on a single-file CFM application, the copied (or moved) application is rendered useless. For the latter purpose, Apple includes the CpMac command-line utility.
There is no mention of the other Unix utilities. Furthermore, I had both applications and single files corrupted. To me this behavior is inappropriate; the filesystem should be smart enough to provide this forked data through Unix tools somehow. The other gripe I have is that the Finder uses yet another fork, separate from the data and resource forks, to store its information! Where does this stop? As a *nix sysadmin I am very frustrated with this aspect of Mac OS X, though I must admit that it shines everywhere else. Personally I think OS X should be on UFS filesystems only and get rid of this crazy HFS garbage, but that’s just me.
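For anyone who hit the same Quicken problem, the repair looked roughly like this. The type and creator codes below are placeholders (not Quicken’s real ones), and the /rsrc suffix is the 10.2-era way of addressing a file’s resource fork on an HFS+ volume:

/Developer/Tools/SetFile -t 'XXXX' -c 'YYYY' restored.qdata   # placeholder type/creator codes
/Developer/Tools/GetFileInfo restored.qdata                   # verify they took
cat fresh.qdata/rsrc > restored.qdata/rsrc                    # graft the fork from a freshly created file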
Let me know what you think…
About the Author:
Philip Streck is an Applications Programmer for Akron General Medical Center and an independent consultant specializing in Palm OS-based applications. He is currently pursuing a BS in Computer Science at the University of Akron.
Everything went smoothly, and I haven’t had any problems with any disk images, etc.
I’m sure you know that you can have UFS for OS X. Use a different drive and install with UFS.
I attempted to use UFS, but many applications (Mozilla, Quicken 2002) have compatibility issues with it, and Apple recommends HFS+ for new installs. Alas, maybe I’ll do that next time; too bad I can’t connect another hard drive to my TiBook easily. Or maybe it’s time for an external FireWire drive.
Phil
BeOS managed to get resources to work in a way that was 100% compatible with all the tools “out there”, cp, mv, tar, ftp, and all filesystems.
Attributes were another story, but they weren’t supposed to store actual data, just hints about the content of the file.
JBQ
These primitive, metadata-less Unix file systems are holding back a number of possible usability and feature improvements for Linux desktops. Abandoning metadata on Mac OS X would be a step backwards IMHO. They should instead be embracing ideas like those the BeOS FS implemented and Windows is now adopting.
PS: It is going to be up to companies like Lycoris to take advantage of things like this, since they control the entire OS and are focused on the desktop user. The desktop projects themselves are always going to be saddled with the lowest common denominator in an effort to be portable to different unixy platforms.
PPS: Therefore, what I think is that Apple should fix all of those command-line tools to support HFS resource forks.
This is a big problem with HFS/HFS+. For example, with Sound Designer 2 audio files (.sd2, the standard on Mac for pro recording), the data portion of the file contains only raw sample data. If you lose the resource fork, there is no way to recover the sample rate, bit depth, number of channels, time stamp, etc. For this reason, broadcast wave is the cross-platform audio format of choice.
Perhaps with Dominic at Apple, the file system situation will improve in the future (not to mention the atrocious creator code nonsense, another thing BeOS got right with MIME types).
I don’t know what the problem at Apple is, but it seems like the folks on the storage/filesystem team have no respect for HFS+. (No respect as in “an active dislike”)
OS X is full of little things that make you feel that they only support HFS+ because they had to: the way file types are handled, these kind of problems supporting resource forks, etc.
This kind of stuff makes it much more difficult to back up your stuff. I mean, one method of copying won’t get your resource forks. Another will skip unix invisible dot files. Another will skip HFS+ invisible files. What the heck?
Maybe they’ll fix it in the next big cat release (“Panther”).
And maybe we won’t have to pay for it. Better still if they fix it in the 10.2.x series. This has to be something Apple’s engineers have known about for a while.
–JM
Wackily enough, if you copy your data to a UFS disk (using CpMac or the Finder), tar it, untar it to a UFS disk, and then move it back to HFS+, everything is fine.
Most of the apps are HFS+ anyway.
Re: backing up, I spent the money and got Retrospect, and it does a good job; an easy way to back up everything.
I don’t really expect Apple to fix that. Apple has screwed up quite a few good technologies it had, and it will again. Remember Newton or HyperCard? Now they’re ruining both OpenStep and MacOS at the same time, not to mention that they flushed one of the best UI designs ever (Platinum) down the toilet.
“Now they’re ruining both OpenStep and MacOS at the same time”
Could you expound on this opinion?
I am having a very good experience on 10.2.
The upgrade worked for me; I am continuing to be productive, all my apps work, and I’m meeting deadlines, impressing customers, etc., etc.
I remember reading about this. IMHO, the real problem is that Apple didn’t take as much care with its command-line tools as it did with its GUI. The Right Thing(TM) would have been to have rewritten cp, mv, tar, etc. so that they actually could work with HFS+ forked files. CpMac should never have needed to exist.
I think Apple could easily store the metadata they need in XFS attributes, and XFS is a niftier filesystem anyway. I don’t understand why I can’t have my choice of filesystem on Mac OS X.
Mac was never about choice. It is about what works better.
I agree that XFS would be cool for OS X. But I do not agree with having a whole bunch of filesystems creating a mess. One is enough.
I am sure that Apple would not want to use a GPL filesystem, though. If they can license XFS under other terms from SGI, that would indeed be cool.
Dominic Giampaolo, who is now on Apple’s kernel/filesystem team, worked at SGI before he went to Be and QNX.
You should have used CpMac. There’s also CpMac’s sister, MvMac, both of which preserve resource fork data.
“Now they’re ruining both OpenStep and MacOS at the same time”
Could you expound on this opinion?
Sure.
OpenStep did its file typing entirely via file extensions, just like DOS and Windows. MacOS used to do it with type/creator attributes. Mac OS X is trying to mix the two, but it doesn’t work well.
One of the major points of MacOS used to be consistency and simplicity. Now Apple has the same problems Microsoft had with Windows 95: some programs can handle 256 chars in a file name, some can’t. Look at your files from the command line and you see something different than when you look at them in your graphical shell.
Then the UI:
A bad effort to combine OpenStep (e.g. the Dock) with MacOS elements. Too bad no one realized that something important got lost when they threw both of them in the blender: concept. Both OpenStep and MacOS had a concept behind their UI, and they don’t mix well. OpenStep relied heavily on multiple mouse buttons, where MacOS was built for one button. MacOS used the file manager as the center of the UI, OpenStep the Dock.
Do you want examples?
MacOS insists on SDI, but look at Project Builder: MDI!
The paradigm of MacOS’ context menus is that you must never need them; the functionality you find in a context menu must be available from the regular menu bar as well. Now, where in the Finder menu is “Show Package Contents”?
The result: None of the concepts is left, and you’re getting the common denominator, not the best of both worlds.
I am probably posting on the wrong web site, as I am a user of software (a web designer, graphic artist, musician), not a programmer.
I like to learn about the underpinnings of the OS, and the experiences folks in the programming field have.
So your comments add to my base.
The ones I typically use are hfstar and/or hfspax, both of which are simply patches to the GNU versions of the tar and pax utilities that allow them to gather resource forks into the archives. Google for them and you should find them.
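Since they’re just patched GNU tools, usage is presumably whatever you already know, along the lines of:

hfstar -cvf backup.tar Documents    # same flags as GNU tar, but resource forks go into the archive
hfstar -xvf backup.tar              # restore, forks and all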
As you discovered and others have pointed out, the standard BSD commands completely ignore resource forks. I imagine this was done to maximize compatibility and sync-ability with FreeBSD… if they rewrote all the file commands to correctly handle HFS (if that’s even practical), then a simple re-sync to an updated FreeBSD would be a major undertaking, to say the least.
There are some other interesting, free options for archiving & backup which do work with resource forks. One is psync:
http://www.macosxhints.com/article.php?story=20020711091017747
Someone else pointed out that you could use CpMac to copy to a UFS volume and tar from there. This method does not require UFS; any non-HFS file system would work. When forked files are copied to file systems without forks, OS X creates dot files containing the resource forks; when they’re copied back, it recombines them.
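So the round trip should look something like this (volume names are hypothetical, and I believe CpMac takes -r for directories):

/Developer/Tools/CpMac -r Documents /Volumes/UFSDisk/Documents   # forks split into ._* AppleDouble files
tar -cvf /tmp/docs.tar -C /Volumes/UFSDisk Documents             # plain tar now sees everything
# later: untar onto the non-HFS volume, then CpMac -r back to HFS+,
# and the ._* files are recombined into resource forks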
By the way, CpMac and MvMac are deprecated tools. There is another command-line utility for copying with resource forks: “ditto”. See:
http://www.macosxhints.com/article.php?story=2002022409532098
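If memory serves, the invocation is something like the following, though the spelling of the flag (-rsrc vs. -rsrcFork) has varied between releases, so check man ditto on your box:

ditto -rsrc Documents /Volumes/Backup/Documents   # copy preserving resource forks and HFS metadata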
I still don’t understand why, when other OSes manage to store structured data in a single file *fork*, Apple needed to design this weird dual-fork file system!?
Now under MacOS X it seems to be the worst of both: resource-fork HFS dependency *and* file typing by name extension.
😎
Hey, at least Helios, with their EtherShare product, made money from this situation for years!
🙁
It’s about backwards compatibility with the older MacOS. Classic apps DO use resource forks, so that’s why the dual approach: basically, you’re damned if you do, damned if you don’t. Once Classic is fully history these things won’t be an issue and can be dropped; however, I don’t see Classic being totally DEAD for about 2-3 years, at which point they can ditch the code and support for backward compatibility. You have to look at it this way: Apple has to support their Mac customers before the Unix customers, so they did what they did in that order. Once we get Classic out of the picture totally, I think the OS will start to make even better progress. It can’t all happen overnight, though, no matter how much we complain.
When people worked purely on the Mac and Mac only (no PC file sharing and all that), resource forks were great. You could easily see the structure of a program’s data: pictures were stored in PICT resources, text in TEXT, dialogs in DLOG and DITL, etc. For the programmer it was great too: single files with all your data in a nice structure.
Nowadays resource forks are better replaced by bundles and XML, which is what Apple has done, and Apple recommends all developers move to them.
Keeping backwards compatibility is a good idea.
If you do a quick search at Apple’s support site for “backup”, you will find their recommended methods of backing up an existing system. For my part, I did a backup with Carbon Copy Cloner, which also worked perfectly. DMG archives seem to be a good way of backing up Mac files because of the lousy filesystem forks and stuff.
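For example, hdiutil can roll a folder into an image from the command line; the exact option name has varied by release (-srcfolder on newer builds, -srcdir on some older ones), so check man hdiutil:

hdiutil create -srcfolder Documents Documents-backup.dmg   # forks ride along inside the image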
If there is one thing I REALLY want on my Mac, it’s a GOOD FS. (Please Dominic, strut your stuff!!)
The UFS2 filesystem in the next version of FreeBSD will provide extended attributes. I have not investigated the implementation details, but will this work have problems similar to those of HFS’ resource forks?
I expect that Apple will be closely tracking the UFS2 work. Perhaps they will be able to adopt it as a better alternative to both HFS and the current UFS.
This kind of problem has bitten me several times, too. I had learned to like the differences of MacOS. Now everything I liked about OS 9 and its predecessors is missing from OS X or is being slowly killed off.
Why do creative and good ideas always get wiped out by old and inflexible ones?
I’m with stew 100% here. The UI conventions do not mix, and the filesystem is badly handled. How many times have you been pounded by the “Error -36” message when trying to copy files with names longer than 31 characters to HFS volumes? Does Jaguar correct this stupid error message and tell you what the problem is? It’s like the OS and the file system come from different places… oh yeah, they DO.
The fact that there are dot files and hidden files and other garbage thrown everywhere, which the user isn’t aware of but needs to be aware of, is a big fat design flaw. A FS should be transparent to the user, and the OS should handle it transparently, not bash the user with cryptic error messages when they do something that seems completely routine.
Anyone like how Mac OS X handles FAT disks now? Slow as hell. Twice as many file names as needed.
Apple has thrown away all the good it had and replaced it with everyone else’s old, painted it sexy colors and sold it as “new.”
This is even more strange when you think of Apple’s “not invented here” mentality.
If it were easy, it would have been done by now. Calm down; the world will go on. Not everything can be done in a 5-minute patch. Transitions can take a while when you are talking about two totally different OSes in OS 9 and OS X. Meshing them together was never going to be a neat and painless project. FAR from perfect, but moving forward at least.
Philip Streck, you should check out Apple’s Backup utility. It comes as part of the .Mac package and is worth the price of .Mac alone. In fact, I wasn’t going to sign up for .Mac until I tried out Backup on the free trial (it only allows backups to your iDisk until you cough up for a subscription).
At work we use Retrospect, which is heavy-duty and excels at what it does. At home, Backup is more than capable of handling my needs, and it may handle yours.
As for filesystems: metadata is good. Metadata is a very good thing. Apple is trying to push developers away from utilising the forked-file capabilities of HFS+, but it is a big mistake. I’m a graphic designer, and one of the reasons I use a Mac and not a Windows machine is that I have several programs which create EPS files (Quark, Illustrator, FreeHand, Photoshop, etc.), each of which is slightly different. On a Windows machine, the OS can’t make the distinction. But my Mac can.
Apple needs to fix the utilities, not downgrade the filesystem.
It looks like nobody here knows about hfstar; it acts like normal tar but also takes care of resource forks. An excellent little backup tool:
http://www.geocities.com/paulotex/tar/
I am probably posting on the wrong web site, as I am a user of software (a web designer, graphic artist, musician), not a programmer.
I like to learn about the underpinnings of the OS, and the experiences folks in the programming field have.
1) Which OS?
2) Also how much do you know coming in? For example as a web designer do you know raw HTML, javascript, cgi…?
3) What areas about OSes interest you? Hardware interfaces, message passing, GUI design?
Answer those 3 and I can recommend a book.
I still don’t understand why, when other OSes manage to store structured data in a single file *fork*, Apple needed to design this weird dual-fork file system!?
Actually you see this quite often on mainframe systems like MVS and VMS (though implemented differently). OS/2 used it too, in almost exactly the same way. It’s one of the main areas where Unix diverged from the mainstream. CP/M, as a stripped-down take on Unix, picked up the Unix tradition that files are simply a stream of data, and that’s stuck all the way through to today in Windows. So first off, it’s not so unusual; it’s just unusual at this particular time in this price range. Secondly, Windows, by treating files as having clear types and expecting them to be associated with specific applications, gets all the disadvantages of the resource fork system, but since it doesn’t actually have resource forks it doesn’t get the advantages.
OK, why are resource forks a good thing? Well, the advantages are similar to those of object-oriented programming over structured programming, in that they allow you to bind the methods to the data; that is, they are polymorphic.
Think about operations like open, print, and copy on a resource fork system versus a flat system.
Open: on a Unix box, where files are entirely untyped, open can’t mean anything more than “open for read” as a stream of ASCII text. In general, what makes Unix so powerful is that all files are treated as a linear stream of ASCII text, so really powerful things can be done to them. The downside is that any file that’s not a stream of ASCII text ends up being a second-class citizen, like database files.
Conversely, on a Mac, since files have a resource fork, “open” means opening and feeding this data to the appropriate application. Even further, since resource forks can call out other resources not contained in the data fork, open can mean “open, feeding this data to the appropriate application, and then collect resources from the local machine so as to make this file operable”. This is the reason Macs led in areas like desktop publishing for so long without needing to store application data in their own database formats, like the .doc format you see on PCs, for data files that need to call in additional resources.
Copy works even better. Think about email attachments in most companies: if someone sends an Excel document to 20 people and they start hitting the reply button and a dialogue starts up, you can easily have 500 copies of the same Excel spreadsheet in the mail system inside of a day or two.
On a system like Lotus Notes or Exchange, the data file isn’t really in any of the emails; rather, it’s a called-out resource, so there will be one and only one copy unless someone changes it, in which case an entire revision history (like RCS) will be constructed. That means if two different people make different edits, the original author can pull the file back through the system, see the original and both sets of edits side by side, and send out a final revised version which combines all the edits. That whole operation would probably use up 2x the disk space of a single file, rather than 500 copies with no way to know which have revisions and which don’t.
Finally, print is pretty obvious. For most file types you need the application to do the work of converting the complex data format into a print-ready file. Having the file know what application to invoke helps a great deal.
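You can actually see the type/creator binding from the OS X command line, for what it’s worth (the file name here is hypothetical):

/Developer/Tools/GetFileInfo budget.qdata   # prints the file's type and creator codes among other metadata
open budget.qdata                           # launches whichever app the creator binding resolves to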
Author makes a backup without first looking for instructions on Apple’s website. Author’s backups don’t quite work as they should. Author blames Apple.
I liked egilDOTnet’s comment.
For all you geeks and Unix buffs: under normal circumstances (meaning when you buy a Mac as a private individual), you have no support for anything you do with the command line. This I learned when I called Apple’s tech support about metadata broken by using cp. The solution is quite simple, though: Apple should just link all the modified Unix commands it created to the standard ones it ships, like cp -> CpMac, etc.
Hmm, I don’t know, but the part at the beginning where you (the author) had a false start installing and then crashed in the middle of the second try didn’t sound good, even before copying any programs and files over. I’ve reformatted and installed 10.2 on four Macs without a hitch of that kind. That doesn’t mean anything in itself of course, but I wonder if it was a good install in the end? If you’re having no problems now, I guess so and hope so. I worry about corrupted files, etc. if the installation itself does not go well or seems shaky.
You are at the University of Akron?! Not far from me at all! We should have Eugenia come to Ohio where she can see some real Greek communities in places like Akron, Youngstown and Warren. The annual Greek Festival in Warren is great!!
Mainly Mac OS X.
HTML (I use BBEdit, GoLive).
JavaScript, not really; I mainly use pre-written scripts.
I use (but don’t write) CGIs.
Learning databases (PHP, MySQL) and server procedures.
Like I said, a user!
Not enough time in the day for too much GUI exploration or hardware tweaking.
Most of my further education in Linux, etc. comes from reading sites such as this (and posting my personal Mac OS X experiences).
thanks
OK, cool. Something like Mac OS X Unleashed http://www.amazon.com/exec/obidos/tg/detail/-/0672322293/qid=103069…
will help with your understanding of how to use OS X better, and a little of how it works. You’ll have a tough time understanding much about OSes without knowing some basic programming concepts. If you are already doing PHP, that’s not a bad start.
1) Keep going with the PHP till you are pretty fluent
2) Try to move from there to a more general language using a college “introduction to programming” book. That book will teach you basic data structures and terminology.
3) Decide what your area of interest is again. If it’s still fairly general, something like Tanenbaum http://www.amazon.com/exec/obidos/tg/detail/-/0136386776/qid=103069…
would be a good next step.
The best thing about forks is that a single file with forks can be understood as several different file formats. So…
A word processing document can have plain text as a data fork, and formatting as a resource fork. Then, any app that uses text files can use that file, and the word processing app can still use it as a WYSIWYG file with all of the formatting.
Also, with image files… an image file could, say, have JPEG data as the data fork, with a resource fork containing 8-bit alpha data. Older apps could use it as a standard JPEG file, but new apps could take advantage of the extra data.
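You can play with this idea from a shell on an HFS+ volume; the /rsrc suffix below is the (10.2-era) way of addressing a file’s resource fork, so treat this as a toy illustration only:

echo "plain text everyone can read" > note.txt
echo "extra data only fork-aware apps see" > note.txt/rsrc
cat note.txt        # naive tools see just the data fork
cat note.txt/rsrc   # fork-aware tools see the second stream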
There aren’t many good examples of these scenarios because PC makers other than Apple were, well, stupid. Look at early MacOS apps (around System 6) and you will see formats like this.
He could have created a .sit file, and then the only problems he’d have had to deal with would have been long file names (StuffIt doesn’t get those quite right).
He could have… bah, screw it, the list is too long. There were many, many things he could have done not to screw up his stuff. Duh: tar doesn’t get resource forks. Caveat… backupper?
Just a tip for people who want to back up data. I run this from root’s cron job every day at 6am and it works beautifully, creates a fully bootable backup, etc.:
/usr/local/bin/psync -d / /Volumes/Monde
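For the cron part, the entry in root’s crontab is just (your volume name will differ):

0 6 * * * /usr/local/bin/psync -d / /Volumes/Monde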
psync rocks. Files, permissions, resource forks, etc. All copied. Wheee.
You OS X guys need a nice port of zip/unzip that supports file attributes, like I did for BeOS.
Buy/lend me an OS X box and I’ll do it for free. 😉
– chrish
The OS X book looks like a good start.
Thanks again.
Willem
You OS X guys need a nice port of zip/unzip that supports file attributes, like I did for BeOS.
Doesn’t Stuffit already do this?
I used hfspax. It works pretty well.