Federkiel writes: “People working with Apple computers are used to a very consistent user experience. For a large part this stems from the fact that the Lisa type of GUI does not have the fight between MDI and SDI. The question simply never arises, because the Lisa type of GUI does not offer the choice to create either of both; it’s something different all along. I usually think of it as ‘MDI on steroids unified with a window manager’. It virtually includes all benefits of an SDI and the benefits of an MDI.” Read on for how I feel about this age-old discussion.

We have touched on this discussion a couple of times before on OSNews, most notably when we ran a poll on whether or not GNOME should include a patch to enable a global application menubar. The 175 comments to that story provided us with some very valuable insights concerning the matter; most importantly, it illustrated how hard it actually is to make a case for any (G)UI-related standpoint, probably because, in contrast to many other computing-related issues, you cannot benchmark usability. There is no ‘UsabilityMark2001’ you can run on your Mac and Windows machines and then compare the results in order to come up with a grand theory of usability which will predict users’ behaviour.
The author of the article writes:
“First of all, it saves a lot of screen space. Because the additional menubars are no more than optical bloat. ‘But,’ you may say, ‘screen estate is not so important anymore. Screens get bigger and bigger, with higher resolutions and stuff.’ Well, yes. But the human visual sensory equipment has limits. There is a limit of how much information you can get on an area of a certain size. And there is a limit to the area the human eye can usefully overview.”
While this makes a lot of sense, the author fails to realise that this is exactly why menubars ought to be standardised: if the order of menubar entries is the same across all applications, the amount of new information the eyes and brain must process in order to use the menubar is reduced. On top of that, the author also fails to mention that no matter how many windows of, say, Firefox you have open, the menubars in all those windows are exactly the same. In other words, the eyes and brain only have to process that menubar once, since they know it will be the same in any Firefox window.
In addition, the author’s argument does not take training into account. Because I use Firefox so often, I know its menubar structure off the top of my head. I do not need to process the menubar at all, simply because it is imprinted in my spatial memory. In other words, the brain has processes in place to minimise the amount of information it needs to actively process.
The article goes on:
“Another advantage is the document-centric approach the Lisa-type GUI takes. Documents, not applications, are the center of the desktop. No matter what kind of document you’ve opened, it never feels like you’ve ‘started’ an application. It never feels like you are using an application, rather the document itself seems to be providing the necessary tools.”
This is a very bold statement to make. Here, the author is promoting his own personal user experience as fact. I have been a Mac user for quite a while now, and I ‘still’ think in terms of applications. The document-centric approach has often been touted as easier to use than the application-centric approach, but in spite of its apparent superiority, many people do not seem to have any problems – at all – when using an application-centric approach (Windows). Interestingly, many people switching from Windows to the Mac complain about applications not closing when their last window is closed – which brings us to another interesting point: experience.
If you have only ever used an application-centric user interface, such as Windows, a document-centric approach is just plain weird. You have become accustomed to the application-centric approach, and as such, this approach will seem superior to you, no matter the documented advantages of its alternatives. I always compare this with manual and automatic gearboxes: no matter how many people explain the advantages of an automatic gearbox to me (and I acknowledge those), I simply cannot get used to driving a car with one, because I feel uncomfortable on the road while doing so.
Of course the author also brings in Fitts:
“Last but not least there is Fitts’ Law. More properly termed: Fitts’ Rule. There have been many discussions about how much Fitts’ Law really applies. But in the long years I’ve been using graphical user interfaces, the rule has proved itself many times, again and again.”
Now, I expected the author to dwell a bit more on Fitts, since Fitts really deserves more than just a few lines when it comes to this discussion. Fitts’ Law is very hard (if not impossible) to disprove, but it is actually not very hard to restrict the law’s influence on user/computer interaction: training and experience come into play once more.
When you first learn to play darts, it would really help if the triple 20 were the size of Texas. This would make it a whole lot easier to hit, and would thus greatly improve your accuracy. However, as you get better at playing darts, the triple 20 does not need to be the size of Texas. Experienced darts players do just fine with a triple 20 the size of a few square centimetres.
The same applies to user interface design. Sure, Fitts’ Law predicts that the larger a graphical user interface target is, the easier it is to accurately click. However, just as with playing darts, the better you get at clicking stuff with your mouse, the smaller a target can become. When my grandparents bought their first computer at age 76, it was extremely difficult for them to hit and click the icons on the Windows 98 desktop. However, as time progressed, even my grandmother and (late) grandfather got better at performing this task, and now, my grandmother has no trouble at all hitting smaller targets.
In other words, despite the high correlation coefficients (often around 0.95) reported for it, Fitts’ Law does not take training into account, which is a major limitation in this day and age, where computers have become ubiquitous in our western world.
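For readers unfamiliar with the formula itself, the ‘law’ the article appeals to is usually written in its Shannon formulation as MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width. A minimal sketch in Python (the coefficients a and b below are illustrative placeholders, not measured values) shows how predicted movement time grows as targets shrink or recede:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds, Shannon formulation of
    Fitts' Law: MT = a + b * log2(D/W + 1).

    a and b are device- and user-specific constants normally found
    by regression on measured data; the defaults here are made up
    purely for illustration.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# A small, distant target is predicted to take longer to acquire
# than a large, nearby one:
print(fitts_movement_time(distance=800, width=16))   # small, far away
print(fitts_movement_time(distance=200, width=128))  # large, nearby
```

Note that training shows up in the model only indirectly, as smaller fitted values of a and b for a practised user; the logarithmic relationship between target size and movement time stays the same.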
Now, it was not my intention to ‘attack’ the Macintosh interface; in fact, I prefer it over Windows’ and GNOME’s, and I make my KDE behave exactly like it. In fact, my PowerMac Cube is my main machine. What I did want to show you is that it is pointless to claim that one approach to user interface design is superior to that of another. There are simply too many factors involved (most notably, the 6 billion different people on this planet) to make generalised statements about it.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.
I am not an Ion/RatPoison user, and I don’t know if I *could* become one, because for some reason I prefer to have control over the exact size and position of my windows (this size is seldom “maximized”, except on very small screens). But it is interesting, and I think relevant, that they follow a sort of meta-Fitts’ law/rule: when using the keyboard (as we usually are), the fastest point to reach on screen still involves your keyboard.
That said, I’ve observed that conversations about Fitts’ law and the archetypal Windows/Mac interfaces concentrate on the corners and top of the screen. I think this is probably because Mac users are more aware of the design philosophy of their platform, or even just more aware of design philosophy as a whole, than Windows users. This isn’t really a criticism of either camp, and since I don’t really know a lot about either anymore, it is probably based on inaccurate stereotypes.
Still, the discussion almost always ignores, or merely mentions in passing, the fastest point to reach on the screen – the fifth point of Fitts’ law/rule: your pointer’s current position.
Windows and the applications that run on it (and most Linux applications, too) make heavy use of this position for contextual menus. In my Linux environments of choice, the context menu of the root window is a list of my programs, which further exploits this point whenever my pointer is not inside an application. Interfaces from Apple use it too, but less heavily and usually less contextually (because they always include these options elsewhere in the interface).
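The speed advantage of the pointer’s current position falls straight out of Fitts’ formula: with the distance to the target near zero, the index of difficulty collapses towards zero as well. A rough comparison, using made-up but plausible pixel distances for a context-menu item popping up at the cursor versus a menubar entry at the top of the screen:

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty in bits: ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

# Illustrative numbers only: a context-menu item appearing ~20 px
# from the cursor, versus a menubar entry ~600 px away at the top edge.
context_menu_id = index_of_difficulty(distance=20, width=20)
menubar_id = index_of_difficulty(distance=600, width=80)

print(f"context menu: {context_menu_id:.2f} bits")
print(f"menubar:      {menubar_id:.2f} bits")
```

The screen-edge argument for the Mac menubar is that the edge makes the target effectively infinitely deep, shrinking its difficulty; the context menu attacks the other variable in the same equation by shrinking the distance instead.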
I think that mouse gestures were an attempt at bypassing the limitations of “aiming”, but I feel that people with physical handicaps or people who are just plain not practiced with a mouse will probably struggle to do any but the simplest mouse gestures. Does anyone know of any studies done on this that prove or disprove my guess? Have there been any other ideas in user interface history to utilize the fifth point?