It’s conventional wisdom that computers need to be “easier to use.” But do they? More reliable, yes. Easier to troubleshoot, yes. But now that so many people use computers so much, I think there’s something to be said for making them less easy-to-use and less intuitive.
Whether complaining about a widely used operating system like Microsoft Windows or criticizing an up-and-comer like Linux, people have claimed for ages that computers need to be easier to use. As a blanket statement, this is certainly true. However, what does this simple statement actually mean in real-world application? When people complain about ease of use, they often rehash some stupid scenario involving an abject computer novice. Certainly, this type of person, the same one who’d be likely to use the CD-ROM tray for a coffee cup holder or put White-out on the screen, does exist in this world, and the ease of use problems in today’s computers certainly flummox them, but will the computing world truly be served by catering to their needs at the expense of everyday computer users?
You can certainly sort today’s computer users into several broad categories, ranging from the incompetent and lost to the absolute masters. But however you slice it, there is a significant portion of the user population that has a high degree of familiarity with their machines and installed software. For these people, unless there’s some sort of malfunction (unfortunately, all too common), everyday computing poses no difficulty. Their computers shouldn’t be easier to use. In fact, many of the features designed to make their computers more accessible to a neophyte are actually holding them back from higher productivity, and are sometimes quite annoying in the process.
If you believe that power users want simplicity, look at the average TV remote control. In the early days, remote controls had just a few buttons; the earliest only changed channels. As people have become more acquainted with their TVs and the attached VCRs, DVD players, and home theater systems, they’ve demanded more power and convenience. Today’s remotes have as many as 60 buttons, and some now have reconfigurable LCD panels. Sure, some of these remotes are poorly designed, and the worst of them are truly mind-boggling. But Joe Sixpack, a proficient TV user, can usually become so acquainted with his complicated remote that he can perform even the most complex series of tasks by touch alone. Would Mr. Sixpack want to see an industry-wide resurgence of the two-button TV remote control and go back to getting out of his chair to change surround sound settings? No. But modern remotes are too complicated for people who have never used a TV before! We must cater to them! Doesn’t the argument sound stupid in that context?
The real problem is not that today’s computers are too hard for novice computer users, but that they all have inherent problems that make them burdensome for even the most experienced users. Just because a power user might be able to eventually narrow down a persistent stability problem to defective RAM (with few clues) does not mean that power users want to have to spend three days playing Sherlock Holmes just to get their computer working properly. Just because I am capable of making regular backups to removable media or to an off-site server, and am capable of doing a recovery in the case of a catastrophic data loss does not mean that I want to, or think I should have to.
People know that operating a computer requires some skill. In fact, they expect it. Modern operating systems are intuitive enough that once a novice gets the hang of making the mouse work, they can usually get started on the basics within the hour.
However, that accessibility comes at a price. The reliance on mouse-driven, self-explanatory menus has probably robbed the economy of millions of dollars’ worth of productive activity, because the average user now has little incentive to learn time-saving and convenient keyboard shortcuts. Here, I think you can argue that computers have become too easy to use. If they were a little harder, we’d be more efficient. My recommendation: the mouse-based menus should still be there, and should still tell you that ctrl-S will save your document, but they shouldn’t let you save with the mouse, forcing the user to learn ctrl-S instead. In the long run, everyone would be happier: the employer saves money, and the employee saves more frequently and gets less carpal tunnel syndrome.
On Mac OS X, I use a UI hack called Quicksilver. It allows me to open applications, go to URLs, and even initiate a blank email to a contact, all with a few keystrokes. It’s a revelation. It took me a couple of days to re-train myself to use it by default, and it’s been an incredible convenience ever since. It’s just a small application that starts up at boot and is invoked when I type a specific key combo. A little menu pops up, and as I start to type the name of the application I want to launch, the menu displays my ever-narrowing choices. When I type “SA,” Quicksilver knows that I mean Safari, the OS X web browser. When I type “OSN,” it brings up a small menu of OSNews-related web bookmarks, which I can select with the arrow keys. I think that this is the direction that user interfaces should move in. It’s totally non-intuitive at the start (because you have to know the key combo that launches it), but once you know that, it’s both intuitive and efficient.
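To give a feel for how that kind of narrowing works, here’s a minimal sketch in Python (this is not Quicksilver’s actual matching algorithm, just an illustrative subsequence match, and the application list is made up):

```python
# Toy launcher matching, loosely in the spirit of Quicksilver.
# The application list below is invented for illustration.

def matches(query, name):
    """True if the letters of `query` appear in `name` in order."""
    q, n = query.lower(), name.lower()
    pos = 0
    for ch in q:
        pos = n.find(ch, pos)
        if pos == -1:
            return False
        pos += 1
    return True

def narrow(query, candidates):
    """Return the candidates that still match the (partial) query."""
    return [c for c in candidates if matches(query, c)]

apps = ["Safari", "Mail", "iTunes", "Terminal", "System Preferences"]
print(narrow("sa", apps))  # ['Safari']
print(narrow("it", apps))  # ['iTunes']
```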
The standard Mac OS X interface, with its screen-hogging dock, is a perfect example of an OS being too easy to use for a power user. That’s why most experienced Mac users I know have to customize their experience with Quicksilver, TinkerTool, and other hacks. I guess it’s an okay workaround, but I’d like to see OS developers taking a little more interest in their best customers’ needs.
Another area where operating system user interface design could use a little more innovation is information accessibility and organization. We now typically have tens if not hundreds of gigabytes of data on our hard drives. A typical family computer now plays the role of TV, video game console, photo album, family file cabinet, record collection, calendar, and document archive. Look at the desktop of a typical novice computer user and it’s littered with a hodgepodge of vitally important personal documents, interesting but trivial files and photos, and absolutely worthless detritus accidentally downloaded from the internet. Power users tend to have years, even decades, of accumulated documents that need to be archived, not to mention gigabytes of photos, MP3s, and videos. All the OS makers have taken some steps toward helping the user organize and archive all of this data, with tidbits of true inspiration in each approach. However, we’re far from a truly workable solution.
Obviously, the folders, subfolders, and files metaphor works pretty well. It corresponds with the “analog” method of organizing, so it’s an easy concept to grasp. Mix in some tools to navigate the structure, plus full-text search, and you have a workable solution for some types of documents. However, once you’re working with music files, photos, video, and other kinds of data, you suddenly need new interfaces to navigate them, and you might want to organize them in new ways. For example, with an MP3, you may have the same song in an album, in a best-of compilation, and in a movie soundtrack with other artists. Do you want to keep three copies of the same song, or would you like to cross-reference them back to one original source? When you have a song that’s a duet between two well-known artists, do you categorize it under a separate artist listing? With photos, how can you easily and quickly categorize them so you don’t have to scan through a thousand thumbnails to find that one photo of you and your kids at the park? These are truly confounding problems, and some individual applications have made great strides in tackling them. For example, iTunes and WinAMP have both developed good tools to categorize and catalog MP3s, but they’re not perfect, and their methods often exist outside the OS’s handling of the file, since they rely on the ID3 tag embedded in the file. These apps’ ability to reconcile the ID3 tag information with the location and naming of the file can, if used carelessly, result in an unintended organizational disaster that usually can’t be undone.
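As a rough illustration of the “one song, many collections” idea, here’s a minimal sketch in Python (the data, the keying scheme, and the structure are all invented; a real implementation would need an ID3 tag library and far more care):

```python
# Sketch: keep one canonical record per track and let albums,
# compilations, and soundtracks reference it, instead of storing
# three copies of the same file. All data is invented for illustration.

tracks = {}       # (artist, title) -> path of the canonical file
collections = {}  # collection name -> list of (artist, title) keys

def add_track(artist, title, path):
    key = (artist.lower(), title.lower())
    tracks.setdefault(key, path)   # first copy wins; later copies are duplicates
    return key

def add_to_collection(collection, key):
    collections.setdefault(collection, []).append(key)

k = add_track("Some Artist", "Some Song", "/music/albums/some_song.mp3")
add_to_collection("Original Album", k)
add_to_collection("Greatest Hits", k)
add_to_collection("Movie Soundtrack", k)

# All three collections now point at the same file:
print(tracks[k], [c for c, keys in collections.items() if k in keys])
```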
I recommend that OS vendors take into account the huge organizational needs that have arisen from the massive adoption of digital music and photography, make it a priority to provide tools that aid in organizing those materials, and make sure that specialized application vendors have access to the programming interfaces to those tools so their solutions can stay synchronized. Apple and Microsoft have both developed their own music and photo management software. Good or bad, these apps are currently a level of abstraction away from the management of the actual media files themselves. That’s neither intuitive nor convenient for anyone. In the long run, the more file navigation becomes attuned to the unique attributes of particular types of files, the better things will be.
Ever since multitasking came on the scene, computer users have struggled with switching between multiple processes and working documents. The Macintosh Finder, the Windows taskbar, Unix virtual workspaces, and more recently Apple’s Exposé have gradually made that task a bit easier, but it’s still a vital issue. Novices struggle with the basic concepts at first, but power users tend to suffer the most, as they tend to have more things going on at once. Mouse-based interfaces can lead to a lot of hunting and clicking when you’ve got a zillion windows open. There just hasn’t been much innovation on this front in many years.
Now that I’ve tried to make the case for computers being less intuitive, let’s talk about how they desperately need to be easier to use. The personal computer is by far the most unreliable piece of equipment in the modern household or workplace. Other machines, such as automobiles, may be more complicated and difficult to service, but we have become accustomed to a certain danger of catastrophic failure from our computers that we do not face from any other machine. Sure, the circular saw may cut your hand off, but you can pretty much guarantee that it was your own damn fault and you’ll understand exactly what went wrong as soon as you get the blood flow stopped. Only the family lawnmower can usually come close to the finickiness of the average family computer.
The problem is the nature of the personal computer. It was designed from the beginning to be an anarchic hodgepodge of bits and pieces of software, hardware gizmos, and device drivers from scores of manufacturers. And unlike other complicated machines, computers aren’t delivered to us fully functional, tested, and static, like an automobile; they’re delivered stripped-down and barely usable. Once we get them loaded with all our software and peripherals, each one is different, and not all of the add-ons are reliable. This isn’t a new problem, and today’s operating systems are a lot better at dealing with the anarchy than they used to be, but it’s still far from ideal. And many of the steps that OS vendors have taken lately aren’t really fixing the problem, merely hiding it.
It’s a great, bold step forward that both Windows XP and Mac OS X now take steps to hide the jumble of files, libraries, and registries that make the OS tick. Truly, for most users, even the most advanced, what’s in there is of absolutely no consequence. As long as everything is working, the system files might just as well be invisible and inaccessible, just as the average driver need not know where the brake master cylinder is located. However, computers tend to have problems, and unlike with a car, users will routinely install software that inserts its own files willy-nilly among system resources. That’s like having your dashboard hula dancer require a connection into your car’s cooling system in order to wiggle properly. A bare, out-of-the-box installation of your OS will usually run flawlessly, and any hardware issues likely to cause problems will usually manifest themselves pretty quickly. The problem is, our computers are constantly being changed around, both their hardware and their software.
This is another case where, by making the computer friendlier for newbies, we deprive the experienced user of important information. If the installation of these zillions of little files were made more explicit, two things would happen. First, it would build more awareness of where problematic files are located, which would help in troubleshooting. Second, it would create a groundswell of revolt against lazy and irresponsible software and hardware vendors who, for example, demand root access to your machine when they really don’t need it, or replace libraries with older versions, or do any of the other nasty things that crap software can do to your computer.
What we get instead is a layer of annoying pop-up messages warning us about all of the security threats we’re making ourselves vulnerable to. They’re like the boy who cried wolf: novice users don’t understand them anyway, and all users end up reflexively clicking OK so often that the warnings become meaningless.
The problem, in so many of these cases, is that the personal computer needs to be all things to all people. No matter who you are, your computer is pre-loaded with all sorts of capabilities and utilities that are useless to you. They add complexity but don’t serve you in any way. Operating systems have also been slowly evolving over decades, and every major OS retains a lot of legacy cruft that may or may not be necessary for your software to work. OS makers have done an admirable job of taking these aging battleships and bringing them into the 21st century, but I think that a lot of the efforts at increasing ease of use have been misplaced. We don’t need Clippy, we don’t need Microsoft Bob, we don’t need the dumbed-down OS X dock or more wizards. We need a focus on easily learnable, non-intuitive user interface features and tools to deal with the mass of stuff we store on a modern computer.
Okay, maybe you do an about-face by the end of your article — I admit I haven’t finished it. But let’s be clear: the opening is about the worst piece of “RTFM” illogic I have ever read.
Until computer interaction is as easy as having a conversation with a fellow (and reasonably rational) human being, they are not easy enough. PERIOD.
Any familiarity with the way computers work now is but a crutch on the road to the eventual personification of our automata.
It sounds like what you’re looking for is a person, not a computer. Why should a computer have to act like a person? Do we expect any other piece of equipment to be anything other than what it is? Any complex piece of technology requires some knowledge and training to use. This is how technology is. I don’t /want/ to talk to my computer. I don’t /want/ to say some semantically correct, even colloquial English sentence to get it to copy files. I want a reliable, consistent method that I know will work without the computer having to interpret my voice, words, and accent.
I think you’re being unreasonable and illogical. Sorry. Perhaps you could explain why you think computers are not “easy” enough if you can’t talk to them like a normal person. I can think of some improvements I would make to my computer if I could, but a verbal interface, or any system which avoids me needing to be consistent, is not among them.
Until computer interaction is as easy as having a conversation with a fellow (and reasonably rational) human being, they are not easy enough. PERIOD.
I dunno, I’m sure the ‘stereotypical geek’ finds it easier to use a computer than to hold a conversation. 😉 It’s not like the goals are the same. Computers are more than just communication.
The personal computer is by far the most unreliable piece of equipment in the modern household or workplace.
Tell that to my kettle that blew up! In all seriousness, software reliability is a difficult problem, but it will not be realised by the current IT industry or ‘mainstream’ open source. To quote Hoare: for reliability, simplicity is an absolute prerequisite. And simplicity is HARD.
I understand that computers may currently suffer flaws resulting from making things too user-friendly. Think about it: the higher-level your code, the more layers of libraries you tend to use, and more room for error creeps in. But I disagree that we need computers that are not user-friendly. The more people we have using computers, the more information they can gather and use to get things done more efficiently, and then they can move on to spend time on more important things like family, or perhaps even furthering technological innovation.
I agree on some of the points, but not all.
I guess the Ruby language’s “Principle of Least Surprise” (things work the way you would expect them to) applies here. Well, it would be nice if they did anyway…
Another example is KDE. It does what you would expect it to, but you CAN do it another way (often more elegant) if you choose to. It still allows you to do it the easy way, or the “alternate” way.
I think OO programming makes it easier to design GUIs that are complex yet easy to use, IF you choose to. But I still think you need to make GUIs sort of intuitive without dumbing them down.
Also, I like how apps (especially in KDE, but GNOME as well) have a habit of doing things the same way, if you see what I mean. It sets a standard, and common shortcuts are easy to remember.
Windows apps have a tendency to NOT work the same way, which is too bad.
Ssssssssh! How about the lives of those linux bashing fellows that always scream on forums “TEH LINSUX INTERFACE IS NOT CONSISTENT”? Don’t give them a pain!
However, I think the common desktop computer should be made either (1) more difficult to alter, or (2) far easier to recover to a known stable/secure state.
Put the core OS and various utilities in ROM, and let the user add applications on disk all they want. OS updates are sold on ROMs that you swap in for your existing ones.
Alternatively, build disk imaging software into the BIOS (or some other ROM) in the machine, and allow for one-touch backups and system restorals.
Right now, a computer is a meta-machine which is almost completely vulnerable to the whims of the uneducated public, and in an interconnected computer world that is a fairly dangerous thing…
sort of like Commodores in the 80s? actually, I kind of like that idea: imagine what a system of today with its OS burned into ROM would be like from a stability, security and convenience standpoint. instant on and instant off, and no way for intruders to hack the core system.
hmm, i recall making similar posts on usability topics here on osnews before and getting virtual stones thrown at me
to me it appears as if the home computer would be better off if it was based on a collection of parts that could talk together, but could do their individual tasks separately.
the computer itself would only be a kind of connections box that allows the different devices to talk together. plug in a scanner and you can scan images, plug in a printer and you can turn on both and have a copier. plug in a keyboard and you can write books and maybe do spreadsheets. plug in a modem and you can do web and mail. slap a normal numeric keypad on top of that modem and you can even do faxes. just hit scan on the scanner, then hit the number and “call”…
the gui would be generated by the connections box, but the individual devices would handle the “computing”. and many basic functions would be available as buttons on the different devices themselves.
ok, so the power user would scream murder. but this device is not for the power user. the already existing PC is for the power user
with xml based filetypes, these two systems could share files without problems.
want a games-console with this setup? a dvd-player+a console computing device, complete with controller ports. so when a new storage format comes out, you can add the player for it into the mix, and the game makers can use it for the console from day one
the trick is a rich communications protocol between the devices, instead of drivers. kinda like how usb today gets more and more “profiles” built in: pictbridge, usb storage, usb hid. it’s all there. now throw in an xfree-like gui language (xml based).
say that when a new device comes online, the “computer” asks for a device icon and has it transferred. this is then displayed on screen. select the icon and the computer asks the device for its “gui”: basically an xml-based layout of buttons, menus and whatnot. from there you can access all the functions normally associated with said device.
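just to make that concrete, here is a rough sketch of what a device-supplied gui description and the connections box’s handling of it could look like (the xml vocabulary and the python code are entirely made up for illustration):

```python
# Sketch: a device broadcasts an XML description of its controls,
# and the "connections box" turns that into an on-screen menu.
# The XML vocabulary below is invented for illustration.
import xml.etree.ElementTree as ET

device_gui = """
<device name="Scanner" icon="scanner.png">
  <button id="scan"    label="Scan page"/>
  <button id="preview" label="Preview"/>
  <menu label="Resolution">
    <option value="150"/>
    <option value="300"/>
    <option value="600"/>
  </menu>
</device>
"""

def render(xml_text):
    root = ET.fromstring(xml_text)
    print(f"[{root.get('name')}]")
    for child in root:
        if child.tag == "button":
            print("  button:", child.get("label"))
        elif child.tag == "menu":
            opts = [o.get("value") for o in child]
            print(f"  menu: {child.get('label')} -> {opts}")

render(device_gui)
```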
to take my scanner+modem=fax example above: the scanner scans, then stores the data in a predefined format in its own ram. it also broadcasts this to every other device. now it tells the modem to dial a fax number. when the modem reaches the fax machine, it starts to send the data from the scanner, and the scanner purges the data from its ram when the task is completed.
if you instead want to fax a stored file: dig the file out of whatever storage device it’s on, put it up for grabs, then dial the fax. print it? put it up for grabs and get the printer going
in many ways, the problem of the computer today is too much reimplementation. each device or hardware addon has its own drivers rather than relying on common standards. even worse is when companies take a common standard and extend it without releasing said extensions for potential review and implementation into a revised standard.
if so, then one could have each device transmit what version of the different standards it supports, and the others could know from that which functions are not supported.
The more people we have using computers the more information they can gather and use to get things done more efficiently and then they can move on to spend time on more important things like family…
I’m not sure that anyone has ever in the history of computers been able to go home early because they finished their work quicker.
On the contrary, computers make you easily accessible and therefore decrease your free time.
I think there are some examples. Particularly for students.
Writing a paper that must be typed. Typewriter, or computer? I do think, particularly with huge papers, computers have sped things up.
If one knows what they’re doing (I’ll concede most don’t) the internet has made research and fact checking tremendously easier.
I don’t think the flaw is computers themselves, only their implementation. Most people simply don’t know enough to get the most out of working with them. So many people spend so much time figuring out how to change the font or switch to landscape that they could’ve just handwritten the stupid thing in the first place.
Moral of the story is: people should make it a point to know what’s going on. They never will. Therefore, we either shut a huge population out (companies will never do this), or try to make things simultaneously easy, featureful, and powerful.
The author makes a good point that the full capability of a computer is not realized and, as a result, productivity suffers.
Ultimately, I believe this comes down to a personal desire to achieve higher levels of proficiency and productivity. Perhaps this requires a shift in mindset. Many computer user activities are highly repetitive. If users could recognize this behavior, they might start to look for solutions (shortcuts, macros, regular expressions, scripting, etc.), though I don’t see this happening. Without a solid knowledge of how an operating system functions, acquiring these more advanced skill sets is difficult or even impossible.
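For example, here’s a minimal sketch of the kind of scripting meant above (the directory, filename pattern, and date prefix are invented; it just shows how a repetitive hand-renaming session can collapse into a few lines):

```python
# Sketch: rename "IMG_1234.jpg" style files to "2006-05-24_1234.jpg"
# in one pass, instead of doing it one by one in a file manager.
# The directory, pattern, and date prefix are placeholders.
import os
import re

pattern = re.compile(r"IMG_(\d+)\.jpg$", re.IGNORECASE)

def bulk_rename(directory, date_prefix):
    for name in os.listdir(directory):
        m = pattern.match(name)
        if m:
            new_name = f"{date_prefix}_{m.group(1)}.jpg"
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, new_name))
            print(name, "->", new_name)

# bulk_rename("/home/user/photos", "2006-05-24")
```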
As far as reliability of the PC goes, computer hardware and software can be quite reliable. Server-grade hardware with built-in redundancy running a solid OS can provide many years of reliable service. I know of systems in this category that have been running 24 hours a day for 7+ years and are still operational with minimal maintenance issues.
The problem is very few people want to pay for this level of reliability. Low cost and familiarity are higher priorities. I believe priorities are significantly different when buying a computer vs. a car. I could go on about how, *generally*, people will not attempt to fix their cars, nor have the local kid in the neighborhood attempt to fix them. Essentially they get what they pay for (in terms of both monetary investment and time investment).
“will the computing world truly be served by catering to their needs at the expense of everyday computer users?”
This implies that power and simplicity are mutually exclusive. Here’s a counterexample: TiVo. It’s much more powerful than a VCR, and yet is much easier to use to record your favorite programs.
It takes creativity to make something simpler without sacrificing power, but it can be done.
Well said. For years I’ve been advocating this stance, until I broke down and started writing my own applications. It’s unfortunate most developers often feel that powerful systems need to be hard to use, complex and have intimidating interfaces. I often need to point to Google to annihilate this distasteful myth.
Precisely. The author doesn’t take into account the nature of intelligent automation. Simple is good. One button that forces you to do a dozen things when six things is all you want done is bad. We want our lives more automated in order to save us time, but we strive to have that automation be as intelligent as we are.
Your example of VCR versus DVR is perfect. I know my ReplayTV has saved me hundreds of hours over the course of its use (a couple of years) because it’s able to make some decisions for me. The key to the future of computing in general is designing computers not only to mirror the decisions humans make, but also to learn how humans make those decisions.
bayesian filters and similar?
didn’t microsoft work on that? and wasn’t one of the results from that clippy?
Clippy is a poor example of intelligent design. Its biggest disadvantage is that Microsoft created it. There are two kinds of examples that exist on either end of the intuitive/intelligent scale.
On the more intuitive end, extending the desktop/folders/files metaphor to include tabbed file/web browsing. This has saved an untold amount of time for a lot of people. You couldn’t pull off anything like the efficiency of tabbed browsing in a CLI.
On the more intelligent end, Stanley, Stanford’s entry (and winner) into the 2005 DARPA Grand Challenge, is a great example of a computer making decisions that a human would normally have to do.
http://www.stanfordracing.org/
Hopefully, those in the computer industry will see the same vision for augmenting everything from lifestyles to mundane tasks. If done right, general computing will be more like using Star Trek’s LCARS, and everything else will be specialized in much the same way lawyers, doctors, carpenters, and accountants are. Personally I’d like to see robot lawyers. Justice might be better served, and if it isn’t, we can recycle them into something more useful.
i think the real problem with clippy was that it seems they didn’t try to detect what the user’s problem was and give a simple list of suggestions. they only tried to detect if a person had a problem and then ask if he wanted help. so yes, it’s a poor example.
however, i’m not sure that “stanley” is a good example either, as that was a purpose-built device for a very specific kind of problem. an intelligent computer interface must be able to handle an increasing list of jobs, and understand what each user wants to do…
it’s the classic “do what i want you to do, not what i tell you to do”.
Of course Stanley was purpose built. That was part of my point. General computing won’t need to exist for any purpose other than to provide us with information. That’s why I gave LCARS as an example. Everything else will be purpose built, just like us humans are purpose driven. We don’t need computers to actually be us, just do the things we don’t want to do. Keep in mind that humans that are “jack-of-all-trades” are rarely experts at anything. I think people have pushed general computing too far in that direction. Computing does not need to become any more general.
While I applaud the author for exploring such a contrarian subject, I think that his portrayal/understanding of operating systems is a bit oversimplistic. An OS isn’t the shell that the user uses. At its base, an OS is really nothing more than a boot loader, a kernel, and device drivers. It doesn’t get more rudimentary or difficult to use than that. And since nearly every OS offers a tiered model (ie. driver <- kernel <- user app), by definition users can make things as complicated (or simple) as they like. They simply choose the tier that they feel comfortable working with.
Nonetheless, he’s right about the need for better organizational paradigms for the massive collections of music, images, and other data that users are accumulating. The desktop concept is getting pretty old. Yes, it’s still useful. And, yes, it still gets the job done. And, no, I’m not advocating that we drop things in favor of a 3D interface. But we should consider alternate ways of organizing data; for example, timeline-based indexing, network dependency graphs, etc. The tough thing is that people process information in very different ways, so what works for one person may be completely incomprehensible to another. Trying to shoehorn everyone into the same paradigm has reduced us to using the least common denominator and has basically stifled the evolution of alternate indexing schemes. Organizations such as Microsoft, Gnome, and others need to become more brave in developing alternate shells.
so, the best would be a kind of database where each “file” is stored with a host of predefined data, but which also allows the user to add more data?
and then have a search system that allows you to basically filter these files based on all this “metadata”?
sounds somehow familiar…
timeline:
give me every file created or modified between date x and date y, sorted by increasing time. ignore all file-types except text files. (a rough sketch of this follows below.)
network dependency graphs?
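taking the timeline query above literally, a rough sketch (the root path and the “text file” test are placeholders, not a real desktop search engine):

```python
# Sketch of the timeline query: every text file created or modified
# between two dates, sorted by increasing modification time.
# The root path and extension filter are placeholders.
import os
from datetime import datetime

def files_between(root, start, end, exts=(".txt",)):
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.lower().endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            mtime = datetime.fromtimestamp(os.path.getmtime(path))
            if start <= mtime <= end:
                hits.append((mtime, path))
    return [p for _, p in sorted(hits)]

# for path in files_between("/home/user", datetime(2006, 5, 23), datetime(2006, 5, 25)):
#     print(path)
```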
I can mildly see where the author is coming from, but there is never an instance in an OS where being less intuitive would help.
It’s a lot easier to explain and operate something when, once you’re told how it works, you think ‘that makes sense’. Admittedly I couldn’t finish the article, but he’s going about things the wrong way.
Add features, but don’t make things harder to use, for goodness’ sake, or by any means ‘less intuitive’. A person should, like with my favorite browser, be able to choose which features he wants to use (extensions) and have his own OS behavior, but not a harder, less intuitive one.
He’s making some worthwhile points.
1) Graphical wizards when endlessly nested aren’t necessarily easier than text based tools.
2) The desktop/folder metaphor is very attractive to new users but it may be, compared to a proper file manager, a hindrance to understanding.
3) What sells new users in the shop display may drive experienced users crazy if they are compelled to use it.
4) Maybe the industry obsession with ease of use for new users is getting it wrong. His example of video remote controls is very interesting. I would include texting on mobiles as a similar example – as far from conventional human interface design and ease of use as you can get, but it took off like a bomb.
5) The most interesting thing is that he is a Mac user. Now it’s not the first time I have heard of Mac users throwing out the dock, clearing the desktop, and going to an empty desktop and a file manager when they actually need to manage files. Is it possible that Mac users, having been the first to embrace the desktop metaphor, are also the first to come to the end of it?
What I found when introducing naive users to Windowmaker is that it was surprisingly well accepted. No desktop icons except program icons, multiple desktops, use a file manager to find your files. All totally contrary to the Human Interface guidelines which were inspired by Xerox and Apple 20 years ago. And yet the universal reaction (including from old ladies of 70 with computer phobia) was ‘of course I can use this’.
Makes one think.
Another example I’d like to give is LaTeX. Oh, the pain I went through the first few times I tried to write with it. But it gets easier every time. And faster.
Now I’m at the point where programs that make it easier for newcomers, like LyX or Scientific Workplace, are actually a hindrance to me, and I’m glad I don’t have to use them (unfortunately I didn’t have those programs when I was new at it).
The point I’m trying to make is (well, the same as the article and everyone else who got the article): what is easy for the inexperienced can be a hindrance to the experienced. In the case of LaTeX, it’s the fortunate circumstance that there’s something for both. But for operating systems, if it weren’t for alternate OSs and window managers and third-party hacks, experienced users would be forced to put up with the four-button remote, to use the author’s analogy.
Though the article seems to imply that you can’t have easy for the beginner and efficient and functional for the experienced at the same time. I think you can, just not with the same interface. Which is why having different desktop environments on the same OS is so great.
And think about this: what if we all were forced to use Windows or OSX with absolutely no tweaks? Would so many people on a tech site such as this be clamoring for easy to use systems that their grandmother who’s never seen a computer could use, or would they too be asking for more advanced systems that let them work efficiently without wizards, balloons, clippys and other eye-candy?
Anyway, yeah people are just flaming the article as if he just wants computers to be hard. He doesn’t want them to be hard, he just doesn’t want to lose more functionality.
http://www.acm.org/pubs/cacm/AUG96/antimac.htm
Written in 1996 – I recommend the paragraph “Expert Users”.
Nice link, great article. Very thought provoking, and very early to see it all.
The author is not, despite the title, arguing for making computers “more difficult”. He is, rather, confusing ease of learning with ease of use, as many people before him have.
It’s what Apple missed when they reimplemented the Xerox interfaces on the Mac. Originally, they missed it because they didn’t have room for it, but once left out, it got lost, and is only slowly being restored.
The features of any system that are easiest to use change as you practice with the system. The casual user needs features that are low on the learning curve, so that they don’t have to invest much effort in learning them to do the small amount of work they intend to do. The serious user needs features that streamline their workflow, but the law of requisite complexity guarantees that these features will be difficult to learn.
The trick is to apply both. Modern GUIs come a long way toward that by having keyboard shortcuts and intelligent context-based menus, but they fall short by not having reasonable scripting.
“It appears our database has momentarily gone down. Please refresh the page or check back soon.”
Still can’t get to page 4.
The author really makes a moot point. A computer can be as “difficult” or as “easy” as the user wants to make it. Power Users can use the shortcuts they want and disable things as they see fit for themselves. Novices can continue to have their hands held.
It’s the beauty of option. Not everyone is the same.
DHofmann also makes a very good point in the comment above:
“[The article] implies that power and simplicity are mutually exclusive. Here’s a counterexample: TiVo. It’s much more powerful than a VCR, and yet is much easier to use to record your favorite programs.
It takes creativity to make something simpler without sacrificing power, but it can be done.”
That was the most well-said idea on the subject and a perfect example of something done quite well.
I refuted the article. http://osnot.infogami.com/blog/
Just playing devil’s advocate here, but read the following paragraph.
“A more reasonable example: Is it stupid to expect the computer to save the work automatically? Imagine Bob, a person new to computers, who types for half an hour or so, assuming that the program automatically preserves his work, then when closing the program misinterprets the ‘do you want to save?’ dialog as “do you want to save the last few minutes of work?” and answers no. Interfaces need automatic save, which would make it easier for both acclimated and unacclimated users.”
If Bob misinterprets the dialog as “do you want to save the last few minutes of work?” and answers no, then automatic saving would be doing EXACTLY what he doesn’t want by saving his work automatically. How is this helpful?
I sort of agree with the original posters point of view. If people are unwilling to read important dialogs and end up losing their work then screw them. It isn’t a flaw in the design of the OS or application.
When the user doesn’t want the last changes, the save dialog is completely unnecessary, because undo works much better.
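A minimal sketch of that model, with autosave on every change plus an undo stack instead of a quit-time dialog (the Document class and its methods are invented for illustration):

```python
# Sketch: autosave every change, and rely on undo rather than a
# "do you want to save?" dialog. The Document class is invented.

class Document:
    def __init__(self, path):
        self.path = path
        self.text = ""
        self.history = []          # undo stack of previous states

    def edit(self, new_text):
        self.history.append(self.text)
        self.text = new_text
        self.autosave()            # no explicit "save" step for the user

    def undo(self):
        if self.history:
            self.text = self.history.pop()
            self.autosave()

    def autosave(self):
        with open(self.path, "w") as f:
            f.write(self.text)

doc = Document("/tmp/letter.txt")
doc.edit("This summer we went to the park.")
doc.edit("This summer we went to the beach.")
doc.undo()                         # back to the park version, still saved
```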
I don’t think the default GUIs of Windows, Mac OS X, or Linux should get less intuitive. But they should give the power user more choices for customizing the GUI.
In my point of view KDE is a good positive example for that. You can just use it like it is, or customize it until it has nearly nothing in common with the default look and feel.
I personally use the ion3 wm with kde (that is, the kde apps), which allows you to use the keyboard for everything.
All in all, I think the vendors should distinguish more between power users and beginners; for my mom, Windows is still far too unintuitive.
Oggy
Personally I’ve never had a problem with UIs being dumbed down, as long as they’re still customisable. Software with a simple default UI for new users can still provide access to more advanced features. They may be a little more hidden, but surely that’s not really a problem for more experienced users who are happy to look through menu options and preferences? It seems like a small price to pay for an interface that’s much more approachable for novices.
The web browser Opera is a good example. Look at all the criticism of its UI a few years ago; many people switching from simpler browsers hated how “cluttered” the UI was, and reviews regularly complained about the complexity. With more recent versions the UI has been heavily simplified, with most of the features hidden by default. Despite that, no features have actually been removed; they’re still easily discoverable for people who care to look, and as the UI is quick and easy to customise, experienced Opera users can bring back the features they use in seconds.
Another example is the remote control for my TV/DVD recorder. On its surface it just has a small number of large buttons to provide quick access to the most common options. You flip it open to access more advanced recording and tuning features that generally aren’t in regular use. To me that kind of design is a perfectly good compromise when creating an interface for a device that has a large number of functions.
Of course in an ideal world products would be intuitive enough not to need that kind of compromise, but when dealing with something that’s complex and feature rich, that’s a very difficult challenge.
The author puts a lot of effort and thought into this article to make the point – I think – that access to functionality is limited by dumbing down the interface rather than having the interface match the level of power the machine inherently provides someone with the intelligence to use it. But therein lies the flaw in this logic. Intelligence and knowledge of computers are different things. We all learn based on abstraction of applied experience. Faced with a new situation, we immediately try to make sense of it based on prior experience. Even the most gifted person, when faced with a computer for the first time, will look at it and say, “What’s this?” Then you will need to train them how to use it. In time, their knowledge will grow and they can apply their intelligence to solve problems, and create great works with it. But the access to this power starts at a fundamental level and the machine must afford an interface to its power that is accessible on all levels.
This is a good example of another OSNews expert forgetting that he/she is not the average user. And while the requisite reference to Joe Sixpack is ironically included, Joe wouldn’t want to have a beer with the author of this article. Maybe Linus or Bill G would though.
Human dialogue, especially in English, is incredibly imprecise, slow, and complex to master. It would be a terrible way to interface with a computer under important and serious contexts.
Sure, it’d be nice to dictate your paper (which you can already do with Dragon NaturallySpeaking, etc.). And it’d be nice to talk to bring up your e-mail; but I dare you to write an SQL query this way… And if you think that’s hard (because SQL has remarkably spoken-language-like semantics), try to write a C program by speaking.
Or better yet, tell your computer where to find a document. “Yea, that’s in slash a-q-f-g capital X slash boborama that’s b-o-b-o-r-a-m-a no capitals and it’s named jill dot text t-x-t.” Of course, the computer can algorithmically narrow your choices and allow you to not be terribly precise and this is fine; right up until precision is exactly what you need and no amount of guessing is going to get it.
The interesting part of comparing computer interaction to conversing with your fellow human comes into play when you consider that conversation is horribly inaccurate and usually misunderstood… Take the differences between men and women (typically), for one! How cliché is it for men to complain that women read into what they say, because the woman is reading his speech pattern and body language too and the man is pretending that’s unimportant. Shall computers read body language too? Will they try to calm you down when they see you’re angry and keep you from deleting all your files because they “know you wouldn’t ever want to do that”?
This comes down to a basic argument of is computing apt as a tool or as a fellow worker. Obviously I see it as a tool and you see it as a worker. It’s working as a tool right now; the worker part is researched a lot (although maybe not as much by proportion as it was 30 years ago) in AI labs.
The last thing I’d ever want to see is being required to talk to my computer. Now, having it read my mind, that’s much more intriguing.
In ages past when spells were cast
in a time of men and steel
we were taught no special things
for it was all done by feel.
For basic file management, we’re good. Explorer, Finder, Konqueror, and even Thunar are quite good.
I can do just about anything well in KDE, XFCE, E17, and Explorer (as of WinXP). The trouble comes in not telling me useful info (it takes no more screen real estate to give me MB/s, ETA per file, etc., like Konqueror does when moving/copying files), or adding too many steps (the safely-remove-device wizard–need I say more?), or using file management that only works within a small subset of applications (network shares in OS X, for instance).
All of those types of things can add minutes to simple tasks. That’s without even invoking Fitts’ Law!
You need to learn how to operate the applications on your computer. Some people are just stupid (as opposed to merely ignorant–this includes non-techies who buy a crappy Dell because it’s half the price of a Mac, when they need a Mac), and there’s no help for them. The rest of us can learn.
Streamlining does not need to be at the expense of options. An advanced tab, or little arrow things that Gnome and OS X use, are good ways to handle it. We just need people to tell the geeks making it work, “this is stupid, it should work like this.” Then get that working well before adding too many more features.
Unfortunately, it’s the pretty stuff that sells at the store.
…ION3 can do most of the things the author asks for. It’s in the repositories of Ubuntu, just do ‘sudo apt-get install ion3’, log out and choose Ion3 as your environment.
Make sure to read the manual when it starts though. And if you are like me, you’ll want to do ‘apt-get install gtk-theme-switch’ too, or else GTK-programs will use the default GTK-theme it seems. At least here.
I’m glad to see that some people have come to the defence of the author, not least because most of the detractors seem to be accusing him of wanting harder-to-use computers. I’d say it reads more that he’d like easier-to-use computers _once you know what you’re doing_, and the LaTeX analogy above embodies this perfectly. Indeed, one of the author’s points was that the user should remember that meta-s invokes the save command, and what is more intuitive and simpler than ‘s for save’? (OK, I’m a pine user, so I might not be the best judge of that anymore.)
To focus on a slightly different aspect, rather than thinking about power users: why is it that the notional computer buyer is not expected to read any manuals for their spanking new top of the range technology, technology that is incredibly powerful and could see the naive user phished, accessing dubious material, deleting all their important data, keeping their credit card numbers in a non-secure fashion and who knows what else, yet the same person on buying a toaster will get more detailed instructional material? Is burnt bread that much more threatening?
So make the interface as ‘(non)-inuitive’ as you want, make it mouse driven if you must, but I’d like to see people moving away from the mindset that you should be able to take it home plug it in and be up and running in an hour if you’ve never used a computer before. We don’t apply that logic to any other product, so why something as expensive, powerful, and potentially damaging as a computer?
Has anyone tried to teach someone how to use a computer? I have tried to teach my aunt, and then I realized how difficult computers are to use.
What would be a nice computer experience? It would be one like this:
the user presses an easily accessible button to switch on the computer; monitor etc are switched on.
a nice graphic greets you while the O/S loads.
The opening screen presents a list of functions in a vertical menu that occupies the screen: write a document, find a document, play a game, etc (each program installs its own category here).
The user selects ‘find’. The computer responds with another screen: what to find? the user writes “find all the documents between yesterday and today”. The computer responds with a list of documents.
While the above has happened, the previous screen has been minimized with an animation to a little transparent icon at one of the screen sides.
Then the user wants to checkout emails. He clicks the initial screen icon, then selects to view emails. The find screen is again minimized somewhere. The screen contains email information. The user clicks an email. The email list is minimized while the screen is occupied with the email and some nice translucent animated options using the 3d card (ala Spore). The user clicks the ‘forward’ button and then a list of contacts comes up. The user selects the contacts and presses ok.
Then the user wants to see the job tasks. He goes onto the first screen, selects ‘tasks’ and the new tasks come up. Then he proceeds to do the tasks etc.
Now let’s see reality:
The user tries to find the power button. Where is it? He pushes the monitor button, but the only thing he sees is the green power LED on the monitor. He presses the button again. Then he realizes the computer is hidden under the desk. He presses the button.
A nice black & white screen with some strange messages comes up. He then waits while more strange messages come up. Then he sees a message about ‘windows’.
Then WinXP finally boots.
The user wants to find the documents written from yesterday. He clicks ‘start’, but nothing happens. WinXp has not actually finished booting!
Then he clicks ‘start’ again after a few seconds. The start menu opens. The user sits there gazing at the marvellous invention called ‘winXp’ menu. He then clicks ‘search’.
The search menu talks about finding files, folders, printers and outlook. It does not say you can find documents. Therefore the user clicks away to try another way.
The user searches the start menu for ‘find documents’, but he finds nothing. Then he realizes that a document is a file, so he goes back to ‘search’. Then a little dog comes up and asks him if he wants to find ‘documents’.
The user is happy to have found how to search for documents. The options say ‘within last week’, so he clicks that. He then enters in the box that says ‘document name’: “my vacations”. He then clicks ‘search’.
A window comes up empty.
But the user is sure to have given the name ‘my vacations’ to the document. What happened? ‘my vacations’ was the word document’s title, not the file name. The filename was ‘this summer’, because the user’s text starts with ‘this summer’ and msword proposed it as a filename when the file was saved.
Then the user wants to checkout emails. He goes to the start menu and selects ’email’. Then he wants to find emails. There isn’t a search option anywhere, so he goes back to the start menu. He then sees that he can not search emails, because the start menu says he can search for ‘files, folders, people, printer and outlook’.
Then the user wants to check his job tasks. There is no such thing as tasks, but he remembers he has to open ‘internet explorer’. He opens IE, then he has to type some weird things.
To cut a long story short, computers suck because operating systems suck. Computer usage is not human-centric, but machine-centric.
hmm, symphony anyone?
still, i fully agree with the troublesome interaction above.
hmm, i recall a program called haystack that some MIT people were working on.
the main problem is, like i think the article pointed out, that the computer isn’t built by one company or person, but by many.
still: with the new desktop search stuff that apple and others have introduced lately, your example about searching may be a bit flawed…
I got the exact ideal experience you described with my SPARCstation 5 and Solaris 2.6, and again came very close with an early Mac II, yet neither is the world market share leader. The problem is not the computer.
I’ve always felt UNIX and C should only be taught on the street corner, like sex.