The IT sector today is a complete mess. End users rarely understand this, but most insiders reach a point where they realize that things should be different. The problems are numerous, but they all reduce to one basic principle: IT and consumer electronics companies care more about making money than about helping people solve their problems. Of course companies need to make a profit, and nobody denies that. They should, however, make money by helping people, not by creating more problems for them.
Driving a car versus using a computer
Forget everything about IT for a moment. Go back to your driving lessons. During your first steps, driving a car was full of small details: turning the wheel, hitting the brakes, using the clutch and so on. After a while, however, as you accumulated experience, all these things became minor. An experienced driver thinks at a much higher level: “I use the car for transport”. A driver enters a car wanting to get from point A (e.g. home) to point B (e.g. work). Using the car is an intermediate stage which serves this higher goal: a temporary low-level task which fulfills the high-level need for transportation.
Now come back to computers. Think of the user. If you are reading this, you are probably more experienced than the user I am talking about. Think of your grandmother or aunt who uses a computer for basic things (email, surfing, word processing). This is the type of user I mean. Such a user always boots a computer for a high-level task. “I will prepare my presentation for tomorrow”. “I will search the Internet for information on the Roman Empire”. “I will write an email to my cousin”. No user boots a computer in order to partition a hard disk or download new security updates. Unfortunately, every user soon realizes that things are not so simple, and that in order to reach these high-level goals there are many boring details that must be dealt with first.
Thus the user is forced to learn basic things like screen resolutions, folders and files, formatting floppies, installing programs, menus, double-clicking and a bunch of other details that (guess what) nobody cares about! Even simple actions like opening a document in an application are more complex than they seem. One has to remember the exact location of the file (its position in the file system) in order to retrieve it. Search functions and “recently opened” lists partially solve this, but the problem never disappears. Imagine a user who inserts a DVD full of digital images into the drive and wants to show a specific image to his/her friends. The search function will be slow (4.5GB holds many images), and the “recently opened” list doesn’t apply (since the DVD has never been inserted before). So the user will spend several minutes finding the requested image manually. If the user is smart enough, the descriptions will be hard-coded in the names (John-and-Mary-beach.jpg) or the folders will have some meaning (d:\holiday\beach\whatever.jpg). In most cases, however, the images will have cryptic names like DSC06458.JPG and the directories will show just the model of the camera. The user ends up manually searching all the images, previewing them in one of the zillion programs that exist for this kind of boring operation.
Users think in high levels
The situation I just described is totally unacceptable. Users don’t have to be tortured like this. It is true that graphical user interfaces (GUIs) have improved computer usability, but this is certainly not the end of the story. Additionally, all these flame wars between Linux zealots and Windows fanatics are completely useless. I hope that after reading this article you will agree with me that all operating systems are inefficient, and that includes the Apple stuff too. Just think of the amount of information a naive user must digest before finishing any work. I don’t mean the usual suspects (defragging? drivers? DivX codecs?) but more basic things: the whole concept of different programs and windows that need to be resized and moved (isn’t this the job of the window manager?). The whole interface is a mess. The WIMP interface has been criticized long before this article and will be again in the future. Ask the users themselves. Most of the time they will surprise you. The classic one is “why do I need a thing called Nero to write a CD, and can’t just drag and drop the files onto the CD as I do with the floppy?”. Yes, I know that Windows XP does this, but the process still takes two steps (1. drop files, 2. write CD). This is something completely strange to the naive user. (Other solutions which involve different CD-writing technologies are unknown to the general public.)
If you write down all the complaints from casual users, most of them follow two principles.
1) Users work with a high-level goal in mind. Every task which is not directly related to this goal is a distraction which needs to get out of the way as soon as possible.
2) Users expect computers to be intelligent and make decisions for them in the background. They get very frustrated when they realize (as they become experienced) that computers are stupid machines that need to be told what to do.
I haven’t said anything really new up to this point. Most readers who know anything about human-computer interaction or have programmed GUIs know all this. The usual answer is that computers are complex machines, blah blah, they are not a VCR, blah blah, or a washing machine, blah blah. Basically, that since computers have multiple roles, their interface cannot be too simple, and so on. Well, I disagree.
Shifting the workload to the computer
Nobody expects computers to act like a washing machine or require zero experience. After all, if you want to drive a car legally you need a driving license, which is obtained after driving lessons. The car, however, makes some decisions behind your back. For example, the onboard embedded computer automatically controls the fuel injection process. It regulates the fuel flow to the engine in order to minimize harmful emissions and maximize performance. This happens transparently to the driver. No driver cares about such low-level decisions. All drivers want to reach their destination safely and quickly.
In a similar manner, computers need to do some things automatically. We don’t have to create the ideal user experience overnight, but we can take small and important steps gradually. Why have a defragging application at all? Why doesn’t the computer defrag itself when idle? It is not a technical problem (see most UNIX filesystems). Automation is not something exotic or new. If you spend some time calculating how much time you spend in front of the computer doing actual work versus how much you spend on unneeded management, administration and maintenance, you will be surprised.
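The “defrag itself when idle” idea is just background self-maintenance, and it takes very little machinery. Here is a minimal sketch in Python: a daemon thread polls an idle predicate and runs a housekeeping task only while the user is away. Both `maintenance_task` and `is_idle` are hypothetical callbacks I made up for illustration, not a real operating system API.

```python
import threading
import time

def run_when_idle(maintenance_task, is_idle, check_interval=60.0):
    """Sketch of background self-maintenance: poll an idle predicate
    and run the housekeeping task (e.g. defragmentation) only while
    the machine is idle. The callbacks are hypothetical placeholders."""
    def worker():
        while True:
            if is_idle():
                maintenance_task()  # e.g. defragment, clean caches
            time.sleep(check_interval)

    # A daemon thread dies with the program, so it never blocks shutdown.
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

A real implementation would of course hook into the OS idle timer and make the task interruptible the moment the user returns, but the point stands: the user never sees any of this.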
Why do we still have a save function in Word 2003? The same function existed 9 years ago in Word 6. Thousands of users have lost their documents during blackouts, and thousands more will lose their work in the future. Saving can be automatic. I am not talking about partial solutions (like the recovery files of Vi and Emacs) which protect the user from losing work. I am talking about the whole idea of saving. Why torture the user with a save function at all? The application should save the document at all times, keeping different versions and revisions. The whole .doc file should contain all user actions on the document (think CVS in a single file). Opening the file would then be a simple question: open the latest version, or edit the version of a specific date/time. Word should not have a save menu or button anywhere on its interface. The user doesn’t care about this. (OK, OK, maybe a “save as…” which just relocates the document file, but you get the idea.)
Making money by adding complexity
Now we reach the key point of this article. The situation is horrible simply because the companies behind the scenes are greedy. I am not talking about Microsoft (only). I mean the mafia of software and hardware companies which act as they see fit. Money, money and more money. Helping the user is an afterthought. What really pushed me to write this article is the new digital photography era.
It is no secret that with the boom of digital cameras a lot of people bought computers in order to edit and store their photos. The simplest approach (from the user’s point of view) is to have some kind of “disk” where photos are stored. Digital cameras write this “disk” and then computers read this “disk” in order to edit the photos. The “disk” is universally accepted: Joe User can take his “disk” to Jenny User and insert it into her computer. No fuss, no problem. Now look at the real situation. We have CompactFlash, SD, SmartMedia, Memory Stick and so on. Each format is backed by different companies. Why, oh why? Why make the life of the user a living hell? Why make money from all the adapters that have flooded the market? Why do I have to buy a “4 in 1” card reader? Why?
This is a classic situation which shows that things are organized around companies and not around people. Coming back to computer interfaces, the situation is similar. Each operating system is just a platform. Each company creates different applications with different purposes. There are thousands of applications and thousands of file formats. The user needs to find the correct application for his high-level task; sometimes two or three applications are needed for a single one. The whole IT sector is centered around companies and not around end users. Things work so that companies make money, while in reality users get little work done.
Google mail: one small step …
The “high-level computing” dream is not hard to achieve. We can reach it with simple steps which make the user suffer less and give more work to the computer. The latest example is Google’s email service (Gmail).
Gmail offers 1GB of storage. Everyone is impressed by this number, and some people have already created utilities for accessing this space remotely (a 1GB Internet drive). This, however, is not the important news. The side effects are more critical for users. By giving away 1GB and encouraging users not to delete emails but to archive them instead, Gmail removes one more constraint from the end user. Think Aunt Tillie. No more “you have blah blah space left in your mail account”. No more “I have to free some space in my email account”. Mail management with Gmail moves one level higher: “I send and I receive emails”. There is no “I delete emails” in the picture, nor “I monitor my account space”. One less problem for Aunt Tillie. This is the “high-level computing” I am talking about!
The same approach can be applied to user interfaces, consumer electronics, compression algorithms, image formats (do we need all of them?) and most other areas of IT and computing in general. Think of the user first!
I could write a load more about autonomic computing, the amount of money companies make from technical support, or several other interesting ideas that have recently appeared (ratpoison, Ion and friends), but this is just a simple article and nothing more. Food for your brain…
About the author
Kapelonis Kostis is a computer science graduate. He believes that all operating systems suck and envisions a day when computers work like those in the movies: fast, simple and easy to use. They are user-centric and help users get on with their lives rather than wasting time on trivial details and low-level decisions.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.