Note: Despite the popularity of the term “widget”, a widget is something else, and to avoid confusion, this article (and in fact, this series) will use the proper term: desk accessory.
This time, we do not need an English dictionary to define the term in the spotlight. A desk is a desk, and an accessory is an accessory; a desk accessory is an accessory to the desk. This is still fairly vague, of course, so let me explain a bit more. Just like in part II of this series, about the icon, I will start with the history of the desk accessory. After that, I will move on to a few examples of desk accessories as we know them today, followed by a conclusion.
Multitasking rules
A long, long time ago, multitasking was something of a novelty to many computer users. You used to run one application at a time, mostly because the home computer systems of yore lacked the power and memory to perform intensive tasks, drive a full display, and run multiple applications at the same time. Consequently, many systems of those days simply did not include multitasking. For instance, the original Macintosh lacked the raw power to multitask, as Andy Hertzfeld, a key player in the original Macintosh team, explains: “One of the first architectural decisions that Bud and I made for the Macintosh system software in the spring of 1981 was that we were only going to try to run one application at a time. We barely had enough RAM or screen space to do even that, and we thought that we’d benefit from the resultant simplifications.”
In the PC world (PC as in, the IBM PC), the decision not to support multitasking with the first PC and its software was not made because of a lack of power alone, as David Both explains: “Small business would buy most PCs. Large business would stick with mainframes and dumb terminals. A few departments in large businesses would use PCs for local, non-connected work. The PC would be used for one task only. Not just one task at a time, but a single task all day long. This might be a spreadsheet, or word processing, or accounting, but no more than one task would be performed all day. Based on these assumptions, the operating system was specified to be single tasking.” That operating system we all came to know as DOS. I learnt computing with DOS (and look how I turned out).
Sooner or later, you would encounter the shortcomings of a single-tasking system. There you were, with your brand new IBM PC (USD 5000), writing a letter. In that letter, you need to make a calculation. That was kind of a problem. Sure, you could get a USD 15 calculator and use that to perform the calculations – but that is just weird considering you were sitting behind a glorified USD 5000 calculator. So, you want to use your brand new machine – all right then. Save your current work. Shut down the word processor. Shut down the machine. Swap disks (the calculator application is on another disk). Reboot. Perform the calculation. Write the answer down. Shut down the calculator. Shut down the machine. Swap disks. Launch the word processor. Type in the answer, reading from your physical notepad.
That, you do not want. Programmers soon realised they needed to solve problems like this, but coding a multitasking environment is not exactly something you do in a few days, especially not on a computer with less processing power than my espresso machine. Solutions quickly came to market – for both the PC and the Macintosh. Since the DOS solution came a little earlier than the Mac one, I will start with DOS.
DOS
MS-DOS was a purely single-tasking operating system. When you close an application using the INT 21H/4CH system call, the application cedes control over the computer's resources back to the DOS shell (COMMAND.COM), telling DOS that the memory used by the program is now free – effectively destroying the data loaded in RAM. However, there is another way to exit an application in DOS: the INT 27H or INT 21H/31H system calls, which allow a program to mark up to 64KB of memory (a limitation removed in DOS 2.0) as resident – in other words, that memory will not be overwritten. This mechanism was called “Terminate and Stay Resident”, or TSR.
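To make the TSR mechanism a little more concrete, here is a minimal sketch of what a resident program can look like in Borland's Turbo C. Note that the dos.h helpers used here (getvect, setvect, and keep, the latter of which issues INT 21H/31H), the INT 9 keyboard hook and the keystroke counter are my own illustrative choices – this is not code from any real desk accessory.

/* Minimal TSR sketch for Borland Turbo C (real-mode DOS, small memory model).
   The keyboard hook and keystroke counter are purely illustrative. */
#include <dos.h>

static void interrupt (*old_int9)(void);   /* saved BIOS keyboard handler */
static volatile unsigned long keystrokes;  /* updated by the resident code */

/* Replacement INT 9 handler: do a tiny bit of work, then chain to the
   original handler so the keyboard keeps working normally. */
static void interrupt new_int9(void)
{
    keystrokes++;
    old_int9();                            /* chain to the saved vector */
}

int main(void)
{
    old_int9 = getvect(0x09);              /* remember the current INT 9 vector */
    setvect(0x09, new_int9);               /* install our handler */

    /* keep() issues INT 21H/31H: control returns to COMMAND.COM, but DOS is
       asked to leave this many 16-byte paragraphs of the program resident.
       The size calculation is the usual small-model idiom (PSP through the
       top of the stack) and is illustrative only. */
    keep(0, (_SS + (_SP / 16) + 1) - _psp);
    return 0;                              /* never reached */
}

A resident program like this returns you to the DOS prompt, yet its handler keeps running every time a key is pressed – which is how a resident utility can watch for a hotkey and pop up over whatever you are doing.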
This allowed DOS to actually have a very crude form of multitasking (coincidentally, many DOS viruses abused TSRs too). Borland’s SideKick (released in 1984) is a very good example of this: an early personal information manager that loaded itself into memory as a TSR and could be recalled with a specified key combination. What makes this interesting for us is that SideKick included several little desk accessories: a calculator, a notepad, a phone book, and so on. Even though they were not called “desk accessories”, they effectively were.

As is common in many of these prehistoric computer programs, it relied on very clever programming tricks that would use the system to its fullest potential – something we sadly rarely see today. Programmers are sloppy people these days.
Mac OS
The Macintosh equivalent of the above is based on perhaps an even more clever trick than DOS’s TSR system call. As already mentioned, the original Macintosh lacked multitasking, and therefore, despite being graphical, faced the same problems as the IBM PC. An interesting solution was devised: the early Mac OS already had a framework for loadable drivers – why not use that to create an illusion of multitasking?

The idea of having little applications that could only do one tiny task was already expressed by Bud Tribble. He defined the desk accessory so fittingly that the same definition can still be used today: “You’d want tiny apps that were good at a specific, limited function that complements the main application. Like a little calculator, for example, that looked like a real calculator. Or maybe an alarm clock, or a notepad for jotting down text. Since the entire screen is supposed to be a metaphorical desktop, the little programs are desk ornaments, adorning the desktop with useful features.” Since the Mac OS could not multitask, but could load drivers, it was decided that desk ornaments, by then renamed to desk accessories, would get their own special class of drivers. Hertzfeld even wrote a piece of “glue code” in assembly to do the dirty work, allowing programmers to write accessories in Pascal.
This made desk accessories surprisingly capable – they supported cut and paste, for instance. In fact, the original Mac OS’s control panel was a desk accessory.
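To illustrate the idea – and only the idea; this is not the actual classic Mac OS interface, under which desk accessories were implemented as 'DRVR' resources in Pascal or assembly – here is a conceptual sketch, with hypothetical names, of a desk accessory as a tiny driver with a handful of entry points that the one running application calls from its event loop.

/* Conceptual sketch only – hypothetical names, not the real Mac OS API.
   A desk accessory boils down to a small set of entry points; the single
   running application hands it events and idle time through them. */

typedef struct {
    void (*open)(void);                /* create the accessory's window and state */
    void (*handle_event)(int event);   /* clicks and keystrokes routed to it      */
    void (*idle)(void);                /* periodic time, e.g. to tick a clock     */
    void (*close)(void);               /* tear it all down again                  */
} desk_accessory;

/* One turn of the host application's event loop: forward any event aimed at
   the accessory's window, then give it a slice of idle time. To the user it
   looks as if two programs are running at once. */
void give_time_to_accessory(desk_accessory *da, int pending_event)
{
    if (pending_event)
        da->handle_event(pending_event);
    da->idle();
}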

As is usually the case in computing, the desk accessory was not the invention of one company alone, as some tend to claim. The desk accessory was ‘invented’ at multiple locations, using different means, in roughly the same timespan – not only did DOS and the Mac OS have desk accessories; GEM had them too, for instance. There is a very simple explanation for this phenomenon: the operating systems of those days suffered from the same limitations, so it only made sense that similar solutions would spring up.
Thom, I think you’re filling in some odd blanks based on faulty memories: the example you gave of using the word processor, having to use a calculator program, and then returning to the word processor is stretching the truth. Certainly, if the user doesn’t know how to record that information into a text file, they’d need to use some intermediate storage, if only their own memory. However, the worst they’d have to do is exit the word processor, start up the calculator application, find the answer, write it down, exit the calculator, and then restart the word processor – it would not require rebooting the computer. At the very worst, using a cassette tape drive (the earliest IBM PCs had them), you’d have to swap tapes, or floppies, as the case may be. The reboot is something you added in that was not required, unless the word processor program was something that didn’t exit in a clean DOS-like manner.
Also, being the grammar/spelling nut you are, I’m surprised you posted this with as many spelling/typing errors as there are. I don’t think they’d be caught with a simple spellcheck by itself, but they’re there: I’ll leave that as an exercise for the reader and writer.
Otherwise, a good article – do more like this, but don’t invent things that weren’t there.
…in the dark ages of stupid single-tasking systems…
…there was the Framework by Ashton-Tate.
I’ve always been partial to “Replicants” – but disappointed that no one ever wrote a complementary program called “Decker” (which would be designed to kill replicants, of course).
Speaking of which…
Programmers? Yes… Users? Not so much. I think that part of the “problem” was that it’s generally more effective to use the workspace management tools in BeOS than the window management tools. At least, I’ve always found it easier to just switch to a blank workspace and open an app regularly, rather than shuffle windows out of the way to get at a desktop replicant.
Having just bought an iMac with Leopard, I’m thinking that using replicants in the same sort of usage scenario as Apple has with Dashboard would help increase their attraction to users. Creating a Dashboard-type of setup in BeOS/Haiku actually would be rather simple to do, too.
The problem I see here, JT, is that replicants are not coded in web-like languages [right?], making them the territory of people with programming experience, instead of all sorts of other people being able to code them too.
Ah DOS TSRs. There’s at least one I still use today, to take screenshots of the game Privateer (running in DOSbox) as part of an ongoing effort to clone it. Amazing the hoops we jumped through.
I’m still not sure what I think of Dashboard and the like. Is the Dashboard calculator really more convenient than hitting the calculator key on my keyboard and having a regular (tiny) windowed app appear, one that I can switch to and from with the window manager’s normal methods?
[i]I’ve always been partial to “Replicants” – but disappointed that no one ever wrote a complementary program called “Decker” (which would be designed to kill replicants, of course).[/i]
It’d have to be called Deckard, but yes, that does seem like an obvious name for such a program. [OT]I’m trying to decide whether I can make it to Denver in time for a screening of the Final Cut.[/OT]
> I’m still not sure what I think of Dashboard and the like. Is the
> Dashboard calculator really more convenient than hitting the calculator
> key on my keyboard and having a regular (tiny) windowed app appear,
> one that I can switch to and from with the window manager’s normal
> methods?
Certainly not, and the same could be said (in one way or another) about all Dashboard widgets. But then, you can re-arrange them on Dashboard as you wish, but you won’t rip out the keys on your keyboard and stick others in their place. (There was a keyboard announced which claimed to change its keys on demand, with tiny OLED displays on each key, but so far it’s vaporware.)
“A long, long time ago, multitasking was something of a novelty to many computer users. ”
What was really amazing – from the perspective of a RISC OS/Amiga user – was the hard sell that multitasking needed with some people. “Why would I [i]want[/i] to run more than one program at once?”.
I think history is littered with comments and predictions that in hindsight seem ridiculous.
I remember thinking, when I upgraded from a 4MHz Amstrad 80286 to a 16MHz 286SX, that I’d never need another computer again.
Even these days, people make comments like “why do we need CPUs with 80 cores?” (or whatever number they insert here), but in my view we’ll definitely find a way to use them, and want even more thereafter.
Because we don’t need them, not now. In the future, for sure, but not now. This is quite different from the multitasking case, which had obvious immediate advantages.
We don’t NEED 80 cores; we WANT 80 cores.
If all you want to do is edit a text document, then your 16MHz 286SX is still as capable as it ever was. The thing is, we now expect our computers to do FAR FAR more than that (watching DVDs, for example). This is what drives us to upgrade.
“What was really amazing – from the perspective of a RISC OS/Amiga user – was the hard sell that multitasking needed with some people. “Why would I want to run more than one program at once?”.”
Even today, when MICROS~1 is able to run things quasi-simultaneously, there are still users out there who never get familiar with the multitasking concept, especially when it is mapped onto GUI elements.
I’d like to illustrate this with a few typical sentences:
– I don’t use it at this moment, so I don’t want to see it.
– I’m done with it, now I must close the application.
– If I need to see another application, I will have to close this one I’m working at.
– This window annoys me, I don’t need it.
So, in order to browse the web for some information, the word processor needs to be closed. Although switching applications on screen, or even switching screens themselves (virtual desktops), would be the more comfortable way here, it seems to be too complicated.
Hard to understand, I know… 🙂
Then, there are users who want their desktop clean if the computer doesn’t do anything, and there are the ones who need everything at once (many applications opened, desktop littered with icons, as many widgets as possible).
The problem here is the interface. Today’s computer systems are needlessly complex and actually work against many human principles. In Jef Raskin’s book, The Humane Interface, Raskin shows how the computers of today are needlessly complex and so necessitate a complete overhaul of user interaction. The book also describes one vision of how a general purpose computer system should operate. The system he proposes fixes all of the interaction examples that you have described.
Are you a Raskin blow-hard or something?
Quick! Everyone pay homage to the author of all knowledge of all interfaces by intuitively sucking on this fleshy hose! After all, it looks like it was designed for such use.
If you could present an idea without making it sound like a product placement pitch for a book and its egomaniac author, it might lend you some credibility.
> What was really amazing – from the perspective of a RISC OS/Amiga
> user – was the hard sell that multitasking needed with some people.
> “Why would I [i]want[/i] to run more than one program at once?”.
I’d be similarly cautious if you asked me. Let’s see what applications I have currently running:
– A web browser (obviously). That’s what I’m typing this comment in, so it is needed. I have a background tab open which I’m currently not using. I will use it in the future, but not now.
– Other applications like Eclipse and Finder (file browser), which I don’t use at the moment.
Here we already have the first point: Saving and restoring the state of these applications would do the job as well. Multitasking is a solution, but not the only one. Going on:
– Email and ICQ client: These applications aren’t doing anything at the moment. They are waiting for incoming events. Even when an event arrives, they will quickly handle it and then go to sleep again.
Event handling is an obvious case for the “widgets as drivers” idea that the people at Apple had. Again, multitasking can do the job, but other approaches can as well.
– iTunes: Yes, here I am actually running a second program.
– Background tasks: … and some more.
Finally we have some use-cases for multitasking. But they are much less obvious than some people think.
with my ancient TRS-80 Color Computer 3 (IIRC) back in 1986. The device had 512K of memory, one floppy drive, no hard drive, and a 2MHz Motorola 6809E processor. The OS I used was named OS-9, from Microware.
OS-9 was coded in assembly and was amazingly efficient. It implemented a primitive *NIX-like shell, complete with multiple virtual terminals, which you could switch between with ctrl-alt-1 through ctrl-alt-9. You could run an application in each of these separate terminals, all concurrently.
Shortly before I quit using OS-9, a GEM-like graphical user interface came out which was a precursor of what I later experienced using twm on Linux. I routinely ran 6-7 major applications at the same time, including a word processor (Dynastar, a WordStar clone), a spreadsheet program (Dynacalc, a Lotus 1-2-3 clone), and Dynabase, a dBASE II clone, in addition to a terminal for programming and other programs.
When I think about all of these applications running so smoothly on a 2MHz CPU with 512K and only one floppy drive, I remain underwhelmed by most of the advances in OS design and technological progress in hardware.
I got my first IBM clone in 1987 and was thrown back into the stone age – only later did I find out about TSRs and GEM. DOS and Windows 3.1 were horrible back then. One of the reasons I started using Linux in 1994 was because my 386sx laptop with 16MB of memory running Windows 3.1 in 1994 was not capable of doing what my old CoCo did back in the mid-80s.
When I started using Linux I rekindled some of my fascination with computing – in 1998 I went Linux full-time and have never looked back. To this day WinXP cannot hold a candle to the multitasking I do under Linux each and every day.
Yet I still yearn for my old CoCo and OS-9…
You somehow think that Windows can’t run multiple programs as well as Linux can?
Have you even used Windows since 3.1?
Given the struggles with Linux’s scheduler, I would say that Windows excels here (and I use both daily).
While I have not had any problems with Linux and multitasking (usually running Win2k3 in a VM for Windows development while having Firefox, Thunderbird, Pidgin, XChat, and Amarok running on the host), claiming that Windows cannot multitask well is either FUD or ignorance.
When running Windows, I generally have the same apps (Media Player instead of Amarok), and have never, ever had a problem. Been using Windows since 3.0, and the NT-based Windows line multitasks very well, and always has.
There is a lot to complain about with Windows, but this isn’t one of them.
Maybe there are issues with Windows that can be associated with poor multitasking, but relate to something else. I know that the Explorer shell freezes up on me a lot (file copy operations and such), which I don’t see in Nautilus (?) or the Finder. But it is definitely an issue with Explorer since other processes chug along just fine.
Thanks to Thom for recognizing that the early DAs offered “multitasking” capabilities on older operating systems. But I would argue that modern DAs have nothing to do with that heritage or even usability.
I would argue that modern DAs serve two purposes:
Eye-candy sells modern operating systems or add-ons (like DesktopX). For whatever reason, people want something that looks good rather than something that just does the job.
While I cannot really speak for Vista or DesktopX, I think it is safe to say that Dashboard provides a development framework that is more accessible to non-programmers because it uses web-development technologies. Sure you end up doing some programming at the end of the day, but HTML, CSS, and JavaScript are things that people actually want to learn.
If it were a usability issue (i.e. people need the functionality wrapped up in DAs), then I would argue that all of these modern DAs would be pointless. A vanilla C application would do the job just as well. All you have to do is tweak the UI and avoid adding a glut of features.
That sentence makes no sense. Care to elaborate?
No, a C application would not do the job just as well, because a C application is a lot harder to write. By using web languages you allow a whole lot more people to scratch user itches, opening up a lot more possibilities. They’re the ultimate in high-level programming.
I should have said that these DA environments (like Dashboard and DesktopX) do not add much in terms of usability. I’ve used DesktopX before, and everything that it adds to the screen could be accomplished through a regular application that doesn’t have DesktopX as a dependency. Much the same can be said for Dashboard, with the (arguably) minor difference that all Dashboard widgets are on a single layer.
Didn’t I say that distinction exists from the perspective of the developer? At any rate, it is a pointless distinction from the perspective of a user.
I’ll have to disagree with you here, Thom, because it’s rare to get the maximum flexibility out of what you can do with a scripting language when you’re programming a sufficiently complex widget that’s native to an OS – that is, something more sophisticated than what will run from a webpage. The key here is not that web languages make it possible for more average users to create widgets, so much as that the barrier to creating relatively simple widgets is a bit lower than requiring them to use a fully systems-enabled language (in other words, a language that has full access to the system API of the relevant OS). The reality is that whether a systems language (such as C/C++ or Pascal variants with full API access, as just a couple of examples) or a scripting language (JavaScript, VBScript, whatever) is used has zero impact on the usability of the resulting widget; the greater limitation is what’s available from within that language to access the GUI features of the platform, and furthermore, how much the developer wishes to actually use them. For example, in OS X there’s a nice little widget that encapsulates a standard Terminal, which, by the standards of GUI usability, is about as far away from it as you can get, short of dip switches and blinking lights. I’ve not verified which language it was created in, but strongly suspect no web scripting language was involved, while in other cases, I suspect scripting languages were used for widgets I’ve got in use.
I’ve not had time yet to fully investigate the Mac OS X API and see which languages have complete (other than the system languages) access to all the features, but it behooves Apple to make as many bindings available as possible for the languages that people actually want to develop in. What’s very important to keep in context is that the language doesn’t enable the typical user to develop a decent widget so much as having proper editing tools and documentation does. The net result (pardon any possible puns in reference to a web-enabled language) is that all developers of widgets that have any meaningful functionality have to be able to program and solve problems with whichever language(s) they use. What that means is that people who can’t solve problems regardless of development language still won’t be capable of developing widgets in even the “easiest” scripting language, no matter how much they desire to do so. Adding a URL into a form and calling it a widget and pronouncing “Hey, I’ve just developed a Dashboard application!” just doesn’t count, sorry.
“I’ve not had time yet to fully investigate the Mac OS X API and see which languages have complete (other than the system languages) access to all the features, but it behooves Apple to make as many bindings available as possible for the languages that people actually want to develop in.”
From what I have seen, the Mac OS X Dashboard provides some additional HTML elements (canvas, for example) as well as some JavaScript bindings for parts of the system. In addition, there are “Widget Plugins” which can be implemented in Cocoa to provide JavaScript bindings to essentially anything. I believe this is how the iTunes widget works, for example.
So I suspect almost anything could be implemented as a Widget, though obviously implementing plugins in Obj-C is a bit beyond anyone but programmers.
Once my WebKit port to Haiku and its associated browser are in good shape, I will probably investigate making a Dashboard/Sidebar-like thing for Haiku. I’m not sure how useful it would be, but I imagine it would be fun to implement.
But at the rate things are going, I doubt I will get around to that until February/March next year.
A friend of mine, who is not registered on OSNews, said:
“I think you should consider mentioning {NeXT,OPEN}STEP and Window Maker, with the support for dockapps as a gadget framework.
See dockapps.org.”
An article about the birth of multitasking that doesn’t mention the Amiga? How rude. 🙂