posted by Jussi Pakkanen on Tue 17th Apr 2007 18:20 UTC
Let me begin by telling you a little story. Some time ago I needed to run a script at work once a day. We had tons of machines ranging from big Unix servers to Linux desktops. For various reasons the script could only be run on a desktop machine. However, cron was disabled on the desktops. All other machines allowed cron.

I contacted the IT guys and asked them to allow cron on desktop machines. A long and arduous battle followed. Among the objections raised was that allowing cron on desktop machines would be a security issue (apparently because people would then actually use it). When I replied that cron is freely usable on all other machines, I got a bunch of other comments, including a "helpful" way to work around my problem with a complicated combination of Xnest and SSH port forwarding.

All I wanted was to get cron working. Eventually I was told that "you can code your own cron program with Python really easily, just run it under screen and you're all set".

End of line!

I had come face to face with the workaround trap. I'm sure most readers have run into some variation of this phenomenon, where an obvious problem does not or will not get fixed because you can hack around it somehow. Workarounds by themselves are not a bad thing. Problems arise when workarounds prevent the so-called real (or "correct") solution from emerging. This problem is especially prevalent on UNIX-type systems, which have traditionally had a strong do-it-yourself culture whenever the basic tools don't do exactly what is required.

Two examples might illustrate the nature of the workaround trap: the first deals with binary relocation, the second with email protocols.

Where am I, asked the little program

One of the cool things about OS X is that most programs come in self-contained bundles. Those can be placed anywhere in the file system and run. Similarly, Windows programs can be installed in any directory whatsoever. In contrast, RPMs and debs have hardcoded install locations and simply will not work anywhere else. At least most of the time.

There are several reasons for this, but one of the main ones is that there is no standardised way for a UNIX program (or a dynamic library) to ask where the file it is being run from is located. Windows and Mac OS X both have this feature.
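
For comparison, this is roughly what asking for your own location looks like on those platforms. A minimal sketch, covering only Windows and Mac OS X and ignoring most error handling; GetModuleFileName and _NSGetExecutablePath are the relevant native calls:

    #include <stdio.h>

    #ifdef _WIN32
    #include <windows.h>
    #elif defined(__APPLE__)
    #include <stdint.h>
    #include <mach-o/dyld.h>
    #endif

    int main(void)
    {
        char path[4096];

    #ifdef _WIN32
        /* NULL means "the module of the current executable". */
        if (GetModuleFileNameA(NULL, path, sizeof(path)) > 0)
            printf("I live in %s\n", path);
    #elif defined(__APPLE__)
        uint32_t size = sizeof(path);
        /* Returns 0 on success, -1 if the buffer is too small. */
        if (_NSGetExecutablePath(path, &size) == 0)
            printf("I live in %s\n", path);
    #endif
        return 0;
    }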

Suppose we have a C program that consists of two different files: a binary called foo and a user-editable configuration file called foo.conf. These files are placed somewhere in the filesystem (say /opt/myprog/foo and /opt/myprog/foo.conf) specified by the install system. The user then runs the program, which begins by reading its conf file. What filename should it pass to fopen to access its conf file?

Simply passing foo.conf would try to open the file in the directory the program was launched from (usually the user's home directory). Opening the correct file would require some way to know where the executable file currently resides.
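
To make the pitfall concrete, here is a sketch of the naive version, using the hypothetical file names from the example above:

    #include <stdio.h>

    int main(void)
    {
        /* "foo.conf" is resolved against the current working directory,
         * i.e. wherever the user happened to launch the program from,
         * not against /opt/myprog/ where the file actually lives. */
        FILE *conf = fopen("foo.conf", "r");
        if (conf == NULL) {
            perror("could not open foo.conf");
            return 1;
        }
        /* ... read the configuration ... */
        fclose(conf);
        return 0;
    }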

This is where the workaround trap springs into action.

The common solution is for the build system to #define a string constant such as INSTALL_PREFIX as the path where the program will eventually be installed. Accessing the data file is then simply a matter of string concatenation (INSTALL_PREFIX "/foo.conf"). This works and is relatively simple, but the downside is a severe static dependency: the files must be in the specified directory or else nothing works.
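
In code the workaround might look something like the sketch below. In a real project INSTALL_PREFIX would be passed in by the build system (for example with -DINSTALL_PREFIX='"/opt/myprog"' on the compiler command line) rather than hardcoded:

    #include <stdio.h>

    /* Normally supplied by the build system; hardcoded here for illustration. */
    #ifndef INSTALL_PREFIX
    #define INSTALL_PREFIX "/opt/myprog"
    #endif

    int main(void)
    {
        /* Adjacent string literals are concatenated at compile time, so the
         * install path is baked into the binary. Move the files elsewhere
         * and nothing works. */
        FILE *conf = fopen(INSTALL_PREFIX "/foo.conf", "r");
        if (conf == NULL) {
            perror("could not open " INSTALL_PREFIX "/foo.conf");
            return 1;
        }
        fclose(conf);
        return 0;
    }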

You can do all sorts of cool things with relocatable binaries, like allowing users to install non-official RPMs (such as daily snapshots of Firefox 3) to their own home directories in a safe and reliable way. But these things will never get explored as long as there is no way for a UNIX program to ask where the file it is currently being run from resides.

Note: this is one of the problems being tackled by the Autopackage team. They have a working solution but it is quite hackish and works only on Linux.
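
For the curious, the usual Linux-only trick is to read the /proc/self/exe symlink, which points at the running binary. A rough sketch of the idea (not necessarily exactly what Autopackage does):

    #include <stdio.h>
    #include <unistd.h>
    #include <libgen.h>

    int main(void)
    {
        char exe[4096];
        char conf[4096];

        /* /proc/self/exe is a symlink to the executable of this process.
         * readlink() does not null-terminate, so do it ourselves. */
        ssize_t len = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
        if (len < 0) {
            perror("readlink");
            return 1;
        }
        exe[len] = '\0';

        /* Look for the conf file next to the executable, wherever that is. */
        snprintf(conf, sizeof(conf), "%s/foo.conf", dirname(exe));
        printf("would read %s\n", conf);
        return 0;
    }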

"My great-great-great grandfather used SMTP so that's what I'll use"

An oft-repeated piece of design wisdom is that you have reached perfection not when there is nothing left to add, but when there is nothing left to remove. Email is one area where this rule has definitely not been followed.

Have you ever wondered why you need to define both an incoming mail server and an outgoing mail server? You'd think that a server that can transport mail in one direction would be smart enough to know how to do it in the other direction as well. And in fact those servers could do that, but you cannot send mail through IMAP.

And why is that, you ask?

Because mail is sent using SMTP. Mail has always been sent using SMTP. Mail will always be sent using SMTP. To not use SMTP is unthinkable.

This causes a lot of unnecessary work. Mail client developers have to code and debug two different protocols instead of one. You need two different authentication steps, one for incoming mail and one for outgoing. Basically you need two of everything when one would suffice. A committee has even developed a way for an SMTP server to communicate with an IMAP server to retrieve attached files from the sender's IMAP mailbox so that they can be attached to the current outgoing message. I kid you not! I hope they patented the hell out of that five-phase email sending process of theirs so that no-one will ever have to use it.

There is a simple solution to this mess. The IMAP server shows clients a virtual folder called, say, OUTBOX. The client composes a message and writes it to this folder. The IMAP server takes the message, possibly checks that it is valid and passes it on to an SMTP server. Mail sent. With 50 percent less cruft, even.
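
Assuming a server that exposes such a virtual folder (OUTBOX is just an example name), sending becomes nothing more exotic than a standard IMAP APPEND. A purely illustrative, abbreviated exchange might look like this (literal byte count and server responses shortened):

    C: a1 APPEND OUTBOX {215}
    S: + Ready for literal data
    C: From: alice@example.org
    C: To: bob@example.org
    C: Subject: Hello
    C:
    C: This message was "sent" simply by appending
    C: it to the OUTBOX folder.
    S: a1 OK APPEND completed

The server would then notice the new message in OUTBOX, optionally validate it, and hand it over to its own mail transport for delivery, exactly as described above.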

Sounds too simple to be true? Can't possibly work? There must be some critical thing missing?

The Courier mail server has had this feature for ages. But should you suggest that this feature be added to other programs, you'll probably get some blank stares followed by instructions on how to properly set up an SMTP server.

Conclusions

Have you ever wondered what Steve Jobs actually does at work? Looking at his public appearances, you can clearly see that his most important task is aggressively seeking out and destroying the workaround traps that lie in Apple's products.

Take the iPhone, for example. Almost everything it does has been possible for ages. Telephone exchanges have supported conference calls since the '80s. However, I don't personally know a single person who has ever used this feature, mostly because getting it to work meant entering long, magical key sequences with zero feedback. Smartphones may have this feature in their menus somewhere. On the iPhone you just push one button. The same holds true for most of its other features. Some of those features required changes to Cingular's telephone exchanges. No-one else was willing to make those changes because they kept saying "you can already do that by ...".

The free software community does not and cannot have a single person like Jobs to force people to fix needlessly complex systems and eradicate harmful habits. We must do it ourselves. Therefore I encourage all readers to think of one example where a complicated and crufty hack is being widely used today even though a more elegant solution would make everyone's life easier.

Then post it in the comments. Back your assertion up with solid technical facts.

Because the first step to recovery is admitting your problems.

About the author:
Jussi Pakkanen is a long time Linux user. He has had the dubious pleasure of beating his head against most corners the personal computing world has managed to build. The doctors are optimistic that, given time, he may yet become a productive member of society.

