Computers are complex systems, but it’s a mistake to assume they need to be complex to use. However, usability is not as easy as it may first seem. It is a different discipline from software development, lacking strict logic or any single “right way”; there are only differing requirements and differing collections of guidelines. Making things easy is difficult.
I’ve used a number of operating systems over the years and it’s obvious that some systems get usability right more than others. It’s also obvious that you need usability people on your team if you want to get it right: I’ve never seen a programmer-driven OS (volunteer or paid) match the usability of systems built by companies with dedicated usability people.
Usability is, paradoxically, not an easy subject. It doesn’t have the mathematical underpinnings of computing, so there is no one correct or incorrect way to do it. It strikes me as something between art and science: putting the right things in the right place, not over-complicating things, simplifying where necessary and moving more complex or less-used options out of the casual user’s way.
The people developing a system often have very different requirements and preferences from those who will be using it. Developers are a different kind of people from normal users: they like technical stuff, they like to be in control and to have as many configuration options as possible to allow optimisation, and they often have very good memories for detail.
Users tend to be pretty much the complete opposite of this, but that is not to say they are any less intelligent; that “user” may be performing brain surgery on you one day. Because of these differences, software developers do not generally make good usability people: they can make software usable for themselves, but that may mean it’s completely unusable for casual users.
“Programmers do their work but once, while users are saddled with it ever thereafter.” – Jef Raskin
There is no “perfect” usability, as not everyone works the same way. Trying to force a method of working on users is likely to backfire. Witness the outcry some users made about Gnome when it switched to a spatial Nautilus: a spatial browser does not work well with a deep file system hierarchy, and unfortunately many users have exactly that, so they had problems with the system.
All users, even developers, get used to ways of operating. Even if something new is better, the fact that they are used to the old way means there will always be resistance to change; a chunk of the complaints about the recent changes in Gnome will be for this reason and this reason alone.
The advantage of creating a new platform is, of course, that there are no existing users to complain when things change. Migrating users from another platform will have difficulties, but that only matters if migration is what you want; what is the point of building a new platform that does the same as all the others?
More or fewer options
One approach to usability is to simplify everything by removing options. This strategy assumes users are idiots and, in my opinion, is more likely to annoy users through lack of options than to help usability. Just because someone doesn’t know about the inner workings of a computer does not mean they are an idiot. Usability is not about “dumbing down”.
Removing options or flexibility for the sake of it may make things easier to use, but it also makes for an inferior product. On the other hand adding too many options is only likely to confuse users.
Gnome and KDE are taking these approaches respectively, the end result being that Gnome is annoying some advanced users (to the point that the spin-off project GoneME has started) while KDE is confusing for casual users (or at least its control panel is).
OS X has plenty of options, but they are not thrown at you all at once; many are hidden behind “advanced options” buttons and the like, so it manages to be both powerful and easy to use. Different companies and projects have been copying the Mac for years simply because it looks so good, but nobody seems to have ever matched its usability.
Sometimes adding complexity actually helps usability. Try editing a sound on an early-90s synthesiser, then try the same on a 70s synth. The 90s synths used a minimalist approach, with only a few buttons to navigate a wide variety of different controls; consequently they were a complete pain to use. The 70s machines put all the controls on the front panel and allowed direct access. Initially these were frighteningly complex just to look at, but once learned they were very easy to operate. Today, modern “virtual” synthesisers have gone back to the same 70s method of operation.
Human Interface Guidelines
There have been many human interface guidelines published over the years, along with numerous basic rules (e.g. keep things consistent).
However, as I said, there is no one “right way”, and as if to back this up, even the user interface experts don’t agree with each other; you don’t need to read much before coming across the phrase “the experts are wrong”.
I briefly discuss some specific human interface ideas here, but there is much more out there, and I have included a number of links for further reading at the end. There are also links to the “User Interface Hall of Shame”, which has numerous examples of how not to do interfaces. There is also a “Hall of Fame”, but it is notably smaller.
The ROS (Ruby Operating System) [ROS] has an ultra-minimalist approach to usability guidelines; there are only a handful of them. They say a user interface should be:
- Easy to learn
- Easy to remember
- Satisfying and pleasing
Despite their brevity, it’s remarkably easy to find applications or systems which break one or more of these rules.
The Proximal Interface
One of the most advanced yet seemingly little-used concepts in usability design is that of the proximal interface [Proximal]. The idea is to build an interface which in effect becomes invisible: the user just gets on with their job, and the interface is there to facilitate that.
The idea of direct manipulation is central to this: rather than selecting an object and then picking an option from a menu, you perform some operations directly by grabbing or moving the object, and the set of operations can be extended by holding a modifier key. Drag and drop is a perfect example of direct manipulation.
The proximal interface of course includes a number of principles to use when designing user interfaces, these include:
- The principle of graded effort – the most common or most important options should be very easy and direct.
- Distinguish what object the mouse is over and where exactly it is clicked to allow different direct operations.
- The principle of the left hand – use the left hand to qualify operations (e.g. left-shift & click).
- Treat a mouse operation as a process, i.e. other operations can happen during the process.
- The principle of visual cues – we remember where we put things, the system should not reorganise them without good reason.
- The irreversibility principle – destroying data (e.g. deleting) should take more effort than creating it.
I think it’ll be a long time before we see mainstream systems based on these principles, though some applications already seem to follow similar ones (e.g. the vector graphics component of Gobe Productive on BeOS). BeOS itself used drag and drop extensively throughout the system.
Interface Arrangement – Fitts’ Law
Usability goes beyond the capabilities and arrangement of software; it also applies to the layout of controls on screen. Fitts’ law is an often-quoted law of interface layout design [Fitts], but it originally had nothing to do with computers.
Developed in 1954, Fitts’ law applies to random selections of targets in one dimension using the human arm; it defines a relationship between the size of the target, the distance to it and the speed of acquiring it.
Computer interaction is really quite different from this: it’s in two dimensions, common movements are not random, and there’s mouse acceleration. It’s a wonder anyone came to apply Fitts’ law to computers at all.
Some use Fitts’ law to justify placing menus at the top of the screen rather than directly on windows; the rationale is that even though the menu is further away, it has effectively “infinite” size since it is at the top of the screen. It may be infinitely high, but the menu item is not infinitely wide: the actual aim point is no bigger on the edge of a screen than it is on a window. Fitts’ law would suggest that the menu should really be on the window, as your mouse is likely to be closer to it (this assumes Fitts’ law applies in two dimensions).
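Fitts’ law is commonly written in its “Shannon” formulation, MT = a + b·log2(D/W + 1), where D is the distance to the target and W its width along the direction of travel. A quick calculation (a sketch only; the constants a and b are illustrative, as real values depend on the device and the user) shows the point above: moving a menu item of fixed width to the screen edge does not shrink the predicted movement time, because the width term is unchanged while the distance grows.

```python
import math

def movement_time(distance, width, a=0.1, b=0.15):
    """Shannon formulation of Fitts' law: MT = a + b * log2(D/W + 1).
    a and b are device- and user-specific constants (illustrative here)."""
    return a + b * math.log2(distance / width + 1)

# A 60-pixel-wide menu item: an "infinitely tall" edge target does not
# change its width, so the predicted time is dominated by distance.
near_window_menu = movement_time(distance=200, width=60)
far_edge_menu = movement_time(distance=600, width=60)
print(round(near_window_menu, 3), round(far_edge_menu, 3))
```

On these illustrative numbers the nearer on-window menu wins, which is exactly why the edge menu’s observed advantage must come from factors outside the raw formula.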
This also does not take account of mouse acceleration, which makes aiming at a specific point on the edge of a screen almost impossible from any significant distance (apart from the corners, where your pointer invariably ends up). Yet despite this, placing the menu at the top does seem to be better. How can this be?
I can only conclude there are other factors at play:
- Muscle memory will have an impact since the menus are in the same location for every program.
- The mouse acceleration means you can get the pointer to the top of the screen very rapidly.
- Once the mouse gets to the top, aiming is only horizontal, and controlling a mouse in one dimension is easier than controlling it in two.
Getting to the top of the screen is a flick of the wrist, but this only gets you to the rough area. Once you are in the rough area, getting to the specific menu is then very quick. It’s a two-stage process, and Fitts’ law does indeed apply to both stages, individually.
If the menu is not at the top, aiming has to be in two dimensions, and this is more difficult. Try drawing an exact circle quickly: it is very difficult. Try drawing a circle quickly with a mouse: it’s even harder, and acceleration only accentuates the errors. Aiming a mouse is difficult enough in two dimensions, but acceleration means you’re likely to overshoot, making aiming harder still.
This does not mean putting menus or controls on windows is somehow unusable (you can click on a web link, can’t you?), but if you have high mouse acceleration, getting to them can be slower than getting to points on the edge of the screen. However, it’s still not that simple: with low mouse speed or acceleration it may well be better to have the menus on the window. Try moving the menus around and changing the mouse speed; at a low mouse speed, moving around the entire screen is quite a chore.
I don’t believe Fitts’ law is the be-all and end-all some seem to describe it as, but it is a useful guideline nonetheless, though the differences between the original law and modern computer interfaces need to be taken into account, as do individual user preferences.
Usability is not just about the GUI; it should apply to everything, even the command line. English (or other natural language) commands have been around since MS-DOS and probably before. IBM have had a slightly cryptic but very easy to use shell for decades. However, the concept of usability appears to have completely passed by the common Unix shells, where bizarre acronyms abound, making them all but impenetrable to the casual user. What exactly does “grep” mean?
Command lines are very useful tools and they are in many cases better suited to tasks which would be difficult to represent well in GUIs. However, there is no reason they should be so complex as to require a reference book to use them, they should be available to all users, experience on other platforms has shown this can be the case with a well designed naming scheme.
In our new platform I’d like to see a shell with the power of bash but with commands such as “list”, “remove”, “search” etc., along with an extensive help system for using them. The system should boot with a GUI as standard, so this could be used to enhance the console, displaying options, help and so on. Unix users may scoff at the idea of a system booting with a GUI by default, but many desktop systems have been doing this for a very long time with no ill effects. Just because X can be flaky does not mean other graphics systems are the same.
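The shape of such a shell can be sketched very simply: plain-English command names dispatch to functions, and every command registers a help entry as it is defined. This is only an illustration of the idea; the command names and help strings are mine, not a real shell’s.

```python
# A minimal sketch of a shell with plain-English commands and built-in help.
# The command set here is illustrative, not a real shell's.
import os

COMMANDS = {}

def command(name, help_text):
    """Register a function as a shell command along with its help entry."""
    def register(fn):
        COMMANDS[name] = (fn, help_text)
        return fn
    return register

@command("list", "List the files in a directory.")
def list_files(path="."):
    return sorted(os.listdir(path))

@command("search", "Find file names containing some text.")
def search(text, path="."):
    return [f for f in sorted(os.listdir(path)) if text in f]

@command("help", "Show what a command does.")
def help_cmd(name):
    return COMMANDS[name][1]

def run(line):
    """Parse 'command arg arg ...' and dispatch it."""
    name, *args = line.split()
    fn, _ = COMMANDS[name]
    return fn(*args)

print(run("help search"))  # → Find file names containing some text.
```

Because help text is attached at registration, no command can exist without documentation, which is the kind of property a usable shell should enforce by construction.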
Another thing Unix geeks are not going to like is the removal of case sensitivity [Case]. It is yet another hangover from the past, serves no useful purpose and is a potential source of confusion, so why have it? Google and other search engines are case insensitive by default; they’d be a pain to use if they were not. I would, however, keep case preservation.
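Case-insensitive lookup with case preservation is a small trick: match names by a folded form, but remember the spelling the user first typed. A minimal sketch (this is an illustration of the mechanism, not a real file system API):

```python
class CasePreservingDict(dict):
    """Sketch of case-insensitive lookup that preserves the case a name
    was originally created with (illustrative, not a real FS API)."""
    def __init__(self):
        super().__init__()
        self._display = {}  # folded key -> original spelling

    def __setitem__(self, key, value):
        folded = key.casefold()
        # Preserve the first spelling used; later writes don't rename.
        self._display.setdefault(folded, key)
        super().__setitem__(folded, value)

    def __getitem__(self, key):
        return super().__getitem__(key.casefold())

    def display_name(self, key):
        return self._display[key.casefold()]

files = CasePreservingDict()
files["MyReport.txt"] = b"contents"
print(files["myreport.txt"] == b"contents")  # lookup is case-insensitive
print(files.display_name("MYREPORT.TXT"))    # → MyReport.txt
```

The user gets back “MyReport.txt” exactly as they named it, but can never create the confusing situation of “myreport.txt” and “MyReport.txt” coexisting.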
A sensible naming scheme should also apply to the file system structure, which could be organised along the lines of:
- /System – OS files
- /Applications – 3rd party applications and libraries
- /Home – All user files in here
The user and system files are separated to simplify backing up: to back up or move your files you just copy “/Home/My Name” to the desired destination (something else I’d fix is the ability to use spaces in file names).
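With user files kept under a single directory, a full backup really is one recursive copy. The paths below are illustrative (the sketch builds a throwaway tree in a temporary directory so it can run anywhere); a real system would copy to backup media instead.

```python
# Sketch: when all user files live under one directory, backing up a user
# is a single directory copy. Paths here are illustrative.
import pathlib
import shutil
import tempfile

home = pathlib.Path(tempfile.mkdtemp()) / "Home" / "My Name"
home.mkdir(parents=True)
(home / "letter.txt").write_text("hello")

backup = pathlib.Path(tempfile.mkdtemp()) / "Backup"
shutil.copytree(home, backup / "My Name")  # the whole backup, one copy

print((backup / "My Name" / "letter.txt").read_text())  # → hello
```

Note that “My Name” contains a space and everything still works; there is no technical reason a modern system cannot handle spaces in paths throughout.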
When a user downloads an application, it and its libraries should be automatically moved to the relevant applications area. This should be indicated to the user along with the application’s global settings (i.e. type of app, who can use it, etc.) so they can be changed.
Installing software should be a quick, painless operation, and the installer should not assume the user has an internet connection. If the application requires libraries which are not included in the system, they should be supplied on the install medium, be that CD, DVD, zip or binary. Under no circumstances should the system (and especially not the user) have to go looking for additional files or “dependencies”. That may be OK if you are supplying software to geeks who like to roll their own, but not if you also intend to target casual users.
Users and programs should be forbidden from changing system components. Changing components leads to problems like “DLL hell”, where a program-installed system library can cause program or even system failures. Malicious programs can also get in and change libraries, causing problems. By preventing users and programs from changing OS components, any program which is installed can expect a specific environment in which to run and can be tested against it. As indicated in the previous part of this series, even if something could change a system component, the system would automatically replace it.
Of course, DLL hell can still happen when 3rd party libraries are involved. This could be avoided by allowing multiple library files with the same name but different versions. If one program wants version X it gets version X, while at the same time a different program asking for version Y can get Y, without conflicts (Microsoft .NET can do this). Additionally, the system should search for libraries in case they are in the wrong place.
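The core of side-by-side versioning is that the library registry is keyed by name *and* version, so two programs can resolve the same name to different files. A toy sketch of the idea (the library names and versions are invented for illustration):

```python
# Sketch of side-by-side library versions: the registry is keyed by
# (name, version), so two programs can use different versions of the
# same library without conflict. Names/versions are illustrative.
libraries = {}

def install(name, version, exports):
    """Register a library version; it never replaces another version."""
    libraries[(name, version)] = exports

def load(name, version):
    """Each program asks for the exact version it was tested against."""
    return libraries[(name, version)]

install("libimage", "1.0", {"decode": lambda data: "decoded v1"})
install("libimage", "2.0", {"decode": lambda data: "decoded v2"})

# Two programs, two versions, no conflict:
print(load("libimage", "1.0")["decode"](b""))  # → decoded v1
print(load("libimage", "2.0")["decode"](b""))  # → decoded v2
```

Because installing version 2.0 never overwrites 1.0, an installer can no longer break existing programs, which is precisely the failure mode DLL hell describes.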
Usability should even apply to development. Computer programming should not be a black art only practised by those in the know.
Many of the different platforms available in the 1970s and 80s came with a version of BASIC, and many programmers started on them; unfortunately this is an example of a good idea which has been discarded. I would do the same and include a language for users, along with examples and instructions to help them get started.
This would be the “default” language which could be used for everything from just playing around, shell & application scripting as well as full applications. The default language would not preclude the use of other languages of course but having a standard language would encourage development and if it’s an easy enough language may even encourage non-programmers to try it out. I would most likely use Python for the default language as it has the specific aim of ease of programming and can be used for all the purposes I mentioned.
There is no logic in having different languages in different parts of the system unless there is a pressing need for them; there would be no separate shell scripting language supplied, for instance. Being able to program multiple parts of the system with a single, easy-to-use language will, in my opinion, be highly beneficial. Again, this is only the default; other languages could still be installed and used.
There would of course also be a default “performance” language, such as C++ or Objective-C, for normal developers, in which I expect the majority of applications would be written. I’d also like to see a good implementation of Java, with its rich class library made available to the rest of the system.
Today usability is an important part of any system, but it is far from perfect and hasn’t even reached some areas. A new platform would give us the ability to spread usability into every area, making the computer truly a tool for everyone, but not a tool which can be sabotaged easily or accidentally.
It’s easy to believe all the computer can do is what it can do today, but each new platform brings with it new possibilities and new applications. In the next part we go into how you would actually use a new system and the new flexible GUI I’d include.
This series is not meant to be a definitive description of the state of the art in any given field; I don’t know everything and don’t pretend to. If you know of appropriate technologies I’ve not mentioned, feel free to mention them in the comments.
References / Further Reading
[ROS] Ruby OS Interface Guidelines
[Proximal] The Proximal Interface
[Fitts] Fitts’ law
[Case] Case Sensitivity
[HI Resources]
Cornell University HIG
Summary of The Humane Interface
Nielsen Norman Group HIG
User Interface Hall of Shame
User Interface Hall of Fame
© Nicholas Blachford July 2004
About the Author:
Nicholas Blachford is a 33 year old British ex-pat, who lives in Paris but doesn’t speak French (yet). He is interested in various geeky subjects (Hardware, Software, Photography) and all sorts of other things especially involving advanced technologies. He is not currently working.