Over the past few decades, the software that enables us to be productive with our computers has become increasingly sophisticated and complex. Today’s UI designers are faced with the challenge of devising graphical user interfaces that are easy to grasp and use, yet still provide access to a wide range of features. Here are some ideas about the nature of GUI complexity, followed by a couple of thoughts on simplicity that might just surprise you.
On the nature of complexity
Everybody wants simple software. As a programmer, I often read about “bloat” and “featuritis” and how they are problematic both for software development and usage. So what compels software developers worldwide to keep creating ever more complex software?
In the interest of clarity, let’s start with some definitions. The meaning of “complexity” that I’ll be referring to in this article derives pretty much directly from Don Norman’s most recent book, Living with Complexity (amazon), and can roughly be summarized like this: complexity is a measure of the structural sophistication of a system (be it a software application, an airplane cockpit or a toaster). That is, the more parts and properties, the more nooks and crannies something has, the more complex it is. I prefer this definition to other common ones because it decouples complexity from its widespread negative connotation. It’s purely descriptive and non-judgmental. In contrast, a system may also be complicated, which I (in accordance with Don Norman) use as a measure of the extent to which it is confusing and unpredictable to a human user. This acknowledges that something being “complicated” is, at its core, a subjective feeling or, if you prefer, a psychological phenomenon. It has to be considered within the context of an individual human’s experience to be meaningful.
Don’t believe me? Take a look at the following user interface.
(“Hawker Hunter Cockpit” by Greg Weir on flickr, used under CC-BY)
To any non-pilot, this is an interface nightmare. The sheer number of buttons and levers is both complex and complicated, unless you know how to fly a plane. In that case, the complexity remains obvious, but instead of a paralyzing, complicated mess of buttons you see an overview of all possible actions, nicely ordered in meaningful groups, with everything important right in your reach. The difference lies in the education about the tools, and experience with the problems they enable you to solve.
So “complex” and “complicated” are not the same thing, but they’re not completely independent either. Add complexity without care, and you might soon end up with a very complicated system. Likewise, reducing complexity is a viable way to make something less complicated. But as we will see, it is by no means the only way (or even necessarily a good one).
Complexity on the rise
How do you actually measure how complex or complicated something is? I don’t have a definitive answer for either of them, but there are approximations we can make. I’ll use this opportunity to corroborate the claim that software is becoming more complex with some quantitative data courtesy of Jensen Harris, who wrote a four-part article series about the history of the Microsoft Office UI (recommended reading for his musings on complexity). Among other insightful information, he also presents the number of toolbars across major versions of MS Office up until Office 2007, which is when they were abolished in favor of the Ribbon. These numbers are reproduced in the following table:
Version of MS Word | 1.0 | 2.0 | 6.0 | 95 | 97 | 2000 | 2002 | 2003 | 2007 |
---|---|---|---|---|---|---|---|---|---|
Number of toolbars | 2 | 2 | 8 | 9 | 18 | 23 | 30 | 31 | n/a |
Of course the number of toolbars is not a perfect measure of the available functionality, but it works for seeing the trend, doesn’t it? If you wish to delve deeper, Harris’s blog offers a lot of detailed information about the interface complexity of Microsoft Office.
“But,” you say, “not all software is getting more complex! What about my simple and elegant Web 2.0 apps?” Good point! Let’s take a look at Google Docs, late 2007 and Google Docs today. It started out as a lightweight online alternative to traditional word processing software, but soon had to face the reality that, for people to actually switch over on a large scale, it had to provide the features that users need and want (which apparently even includes those awful rulers). Just skim the Google Docs Blog and check for yourself how many of those blog entries deal solely with the introduction of new features.
In summary: The increase of complexity in modern software is a very real thing. But why? The answer is surprisingly simple: expansion. A lot of software is created with monetary return in mind, which obviously depends on the number of people who are willing to fork over the necessary amount. Even free software is often fighting for both mind- and market share. So, how do you get your software to appeal to more people? Let’s see what Joel Spolsky has to say about feature bloat:
I think it is a misattribution to say, for example, that the iPod is successful because it lacks features. If you start to believe that, you’ll believe, among other things, that you should take out features to increase your product’s success. With six years of experience running my own software company I can tell you that nothing we have ever done at Fog Creek has increased our revenue more than releasing a new version with more features. Nothing. The flow to our bottom line from new versions with new features is absolutely undeniable. It’s like gravity. When we tried Google ads, when we implemented various affiliate schemes, or when an article about FogBugz appears in the press, we could barely see the effect on the bottom line. When a new version comes out with new features, we see a sudden, undeniable, substantial, and permanent increase in revenue.
I rest my case.
Capable, yet usable
The other side of the coin is increased usability, another thing that everybody wants.
Software today is consumed quicker than ever. Remember when you bought software in a store after careful consideration, the (printed and included!) manual was several hundred pages long and the installation could consume whole evenings? Today you get software on an ad hoc basis through the internet. With software repositories and app stores becoming more widespread, download, installation and system integration are reduced to a single click. The software world is experiencing a paradigm shift from “software as a long-term investment” to “software as a commodity”, and to gain traction, software has to be quickly learnable and discoverable. Steve Krug, author of the iconically titled Don’t Make Me Think (amazon), has the mentality figured out:
One of the things that becomes obvious as soon as you do any usability testing — whether you’re testing Web sites, software, or household appliances — is the extent to which people use things all the time without understanding how they work, or with completely wrong-headed ideas about how they work.
Faced with any sort of technology, very few people take the time to read instructions. Instead, we forge ahead and muddle through, making up our own vaguely plausible stories about what we’re doing and why it works.
Besides broad functionality, good usability is a second cornerstone for the potential to expand the user base. With all this background information in mind, we can rephrase the opening question: How do we deliver good usability without compromising feature breadth? Should we be striving for simplicity, as it is so often demanded?
What to strive for
When I ask myself how I should create a good user experience, the trusty ol’ ISO 9241-110 always works as a reminder of what my conceptual goals should be. Simplicity is conspicuously absent. Instead, I read about properties like suitability for the task, suitability for learning, or self-descriptiveness. These are the things that are actually helpful to my users, rather than being a matter of taste or preference.
If we interpret simplicity as the reduction of the number of available actions at a given point, or (perhaps less academically) taking out features, then it is certainly a way to reduce the dreaded “aura of complicated”.
But, and if you take away anything from this article I hope it is this one thought, simplicity is a means, not an end. There are many other ways to create interfaces with great usability without compromising complexity or limiting features.
In fact, if anyone’s interested, I will be writing a follow-up article describing and showcasing several such techniques. I’d be glad to hear your thoughts on the topic.
About the author:
Julian Fietkau is a student of computer science and human-computer interaction at the University of Hamburg. He is hoping for a career in interaction design and usability engineering.
> I’d be glad to hear your thoughts on the topic.
> How do we deliver good usability […]?
Mmm… what brings more money? 🙂
http://wiki.idux.com/uploads/Main/Dilbert_marginal.gif
Perfectly explains the problem.
How would you sell a new version of the product when the only “feature” compared to the old version is fewer features?
And how would you offer training courses and certifications if a six-year-old kid could use the product?
I wish we stopped invoking this six-year-old kid and his grandma when discussing usability.
It tends to give a vision of usability which results in overly guided interfaces, which themselves become a usability problem as soon as people start to master them and want to use them efficiently. Like those modern games that always begin with a lengthy, overly guided, and unskippable tutorial: it’s fun the first time, but it makes you want to kill the development team the tenth time.
Should professional software used to design PCBs display on startup a description of what a conductor and an insulator are, along with a simple description of electrical currents? No. Yet this would be a requirement if we wanted said software to be usable “even by children”.
Here are some real-world examples now. Tons of professionals love Photoshop as a tool for daily work, yet it has one of the most obscure and undiscoverable UIs I’ve ever seen. I loved to play with SolidWorks back when I used it in school, yet I wouldn’t expect a six-year-old to understand how it works. Software can be usable *and* have a steep learning curve, which permits the manufacturer to make money from it by selling training courses.
Well that’s mostly the marketing buzz talking.
Why? I have nothing against interfaces that can be manipulated by young children, if young children are a relevant part of the target audience. I’m just against the idea that every piece of software, no matter its target audience, must be usable by young children in order to be labeled usable as a whole.
“Fisher-Price My First Computer” as the paragon of usability… hmmm… no, thanks.
I like what I’m seeing so far and am eagerly awaiting the follow-up(s). Hopefully I will be able to use them as citations when talking about why GNOME 2.x’s approach to usability has sucked (a trend that sadly continues).
Complexity often comes in the form of more features. Now, the problem itself isn’t having too many features, but focusing on “feature count” instead of the quality of each feature.
You can’t have both very high quality and very high quantity, at least not within a short period of time. You have to choose.
Great article by the way. I’d love more articles about design and UI! They are by far my favorite here on OSnews.
I see where you’re coming from and I think you’re fundamentally quite right, but I don’t really agree with the implication that there’s a dichotomy between more features and better features.
Sure, in high-level manager-speak, developing more features and improving the usability both consume time and money, but that line of thought IMO neglects that both of them also have long-term consequences.
That said, if the two extremes are “many mediocre features” and “few high-quality features”, how do you find a sweet spot in between? How do you prioritize? That’s assuming you’re active in this line of work. Sorry if you’re not, then I’ve misread your comment. But I’m always curious how other people deal with these questions.
I am a web designer. So I’m not really a proper UI designer, although one has to think about UI when designing simple content oriented sites.
It is obviously about balance, and there certainly are applications that are complex, yet have high quality in many of their features.
While I am not so extreme that I think that complexity is never appropriate I still tend to push for simplicity. There are some reasons for this:
• It is very easy to add features later on, but very hard to remove them once you’ve added them.
• Often one thinks that the needs are more complex than they are in reality. If you try it out you quickly find out whether you need the complexity.
• Often the complexity you need might be in another form than the one you thought. Again, when you actually try it out you know this better.
• There are often many people involved in a project, and often many people striving (indirectly and unknowingly, through wanting features) for complexity. Something becoming too simple is rarely a problem.
• More features usually makes it much more difficult to create an easy to use UI.
The most important thing is not just to avoid complexity, but to remember that features and complexity come at a price. Is a feature worth the complexity it brings?
You’ve presented the information in a very accessible manner. It is funny how many articles about improving UI are so difficult to use themselves.
Nice article. Waiting for the followups.
I like the article. My current software project has a clear goal of creating a really awesome interface, and a lot of the focus is on usability and on making the interface “get out of your way” so you can focus on the actual content and actions.
When you create interfaces you have to deal with stuff like spatial memory and “recognize vs. remember”. At the same time you need features because people like to do stuff in different ways.
So the pure aim is to create all necessary features and present them in the absolutely best way possible.
Not to toot my own horn, but I think that my software is on the right track. I try to mimic interfaces that people are used to, and I have tools to let people use the same keyboard shortcuts as the “competitors”, aiming at reducing the stuff they have to relearn.
In the project’s blog I actually wrote a similar article on what I believe is the best approach to designing a good interface:
http://blog.stoffiplayer.com/2010/07/designing-application-interact…
I look forward to your follow up.
I wrote a post on the blog about your article.
http://blog.stoffiplayer.com/2011/03/dealing-with-complexity-in-ui-…
I am quite flattered and also very thankful for your input. Both articles you’ve linked are very interesting to read. What especially stood out to me were your ideas on easing migrating users into your application; you have some intriguing ideas there, and, as far as I can tell, you do seem to be on the right track with your software.
Thank you very much. I think that as computers have become more and more common, there’s a stronger focus on the actual interface between the technology and the human. This is an area that really needs a lot of attention, and it is very interesting to see that a lot of people actually agree on many points.
However, there’s still software produced today that I see which have absolutely horrific interfaces. There’s always an excuse, for example “it’s only internal software, we don’t need to care that much”. But I believe that if you keep your focus on usability from the start it does not have to mean extra work for the developer, but it will mean extra productivity for the user in the end.
May I ask which areas or concepts you were planning to talk about in the next article? I am very intrigued and can’t wait for it.
Sure, just see my reply at the bottom.
Hey, I am like all the rest: really looking forward to the next part. Thanks for making this so interesting.
Please don’t hesitate to let me know what you’d like to read about in a followup article. Off the top of my head, here are a few aspects about which I could write something at least vaguely competent:
– What methods are there to help software developers create usable interfaces? (That is, before user tests and all that, how do you come up with a good interface in the first place?)
– What, if not its simplicity, helped Apple’s iPod to become the big success that it is?
– How did Microsoft come up with the Ribbon and what can other UX people learn from it?
To clarify, this isn’t a vote for the most popular topic, I’m just trying to gauge if there’s anything particular that’s really interesting to you.
I think that error messages and text in all is an interesting part. Communication!
Error messages are of course lovely, since there are tons of examples of error messages that do not help the user at all. A good application should be able to handle problems in an elegant way.
Too much text is also a big problem, since users won’t read it. So here’s the thing: what is the best way to communicate with the user? The balance between images and text, as well as using the user’s memory (familiarity).
Another interesting but somewhat subtle aspect of communication is its tone. Do you want to come off as mechanical and corporate, or do you want to sound like a friend standing beside the user, explaining the application?
The message “Your request contained an illegal action” is bad because it puts blame on the user (the word illegal is very loaded).
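To make that concrete, here is a minimal sketch contrasting the two styles. The function names and the `DiskFullError` scenario are invented for illustration; this is not any real application’s API:

```python
# Hypothetical sketch: the same failure reported two ways.
# All names here are invented for illustration purposes.

def error_message_bad():
    # Blames the user, uses loaded jargon, and explains nothing.
    return "Error 0x80070070: Your request contained an illegal action."

def error_message_good(filename, needed_mb, free_mb):
    # States what happened, why it happened, and what to do next,
    # without assigning blame to the user.
    return (
        f"Couldn't save \"{filename}\" because the disk is full "
        f"({needed_mb} MB needed, {free_mb} MB free). "
        "Free up some space or choose a different location, then try again."
    )

print(error_message_good("report.odt", 12, 3))
```

The second message follows the pattern of problem, cause, and recovery path in plain language, which is roughly what the elegant handling described above amounts to.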
Anyway, there’s a lot of interesting stuff to talk about when it comes to usability design. The Firefox journey is also interesting, since it spans the original Mozilla browser, the whole Mozilla Suite, the creation of Firefox, which became Firefox 3, and now Firefox 4 presents a new design yet again. It has been up and down several times, which could make for an interesting analysis.
I’ve added error messages to my imaginary list. I suppose I could write at least half an article on that topic alone. Have you read Jef Raskin’s The Humane Interface, by chance? I find some of his opinions a little eccentric, but he makes several very good points about error messages that I’ll probably cite if/when I write an article about the topic.
And even though I personally use Firefox, I probably don’t have enough experience with the history to write about it, or it would involve a ton of research beforehand. So don’t count on me writing about that.
That was a good article. Can’t wait to read the followup.
I think accessibility is an interesting subject in UI design. Sometimes a UI design is simple for 90% of the people but extremely complicated for the rest. How to design a good UI that is simple and accessible for more people? Good color choices, separating content from presentation, using accessibility standards and methods, etc…
Unfortunately I don’t have very much experience in that area, so apart from the obvious things regarding color and all that you’ve already mentioned, I don’t think I’d be qualified.
If we’re not strictly talking about software, then I encourage you to take a look at Design Meets Disability by Graham Pullin. From what I’ve heard and read about it, it’s apparently an excellent book for understanding product design with people with disabilities in mind.
I’d like you to comment on why there’s a perceived dichotomy between the power and conceptual depth of a UI and its usability and coolness, and what could be done about it. I mean: why do geeks love ugly UIs like the command line and keyboard shortcuts? Why are most GUIs so difficult to script and combine? Why is the mouse associated with self-explanatory UIs while the keyboard evokes dauntingly obscure incantations? What are the key issues: interactivity vs. batch processing, verbal representation vs. graphical representation? And what is so special about verbal representations in programming? Is it a good idea to make communication more graphical and graphics more “verbal”? Should our documents all look like bland defaults (boring), or should they be as unique as physical documents (confusing)? Why are defaults so bland, and what if you want a specific bland look for your document that happens to be the default look? (What’s the best way to deal with abstraction and escaping?)
I have some vague ideas on what a dream UI would be like (maybe something between Croquet and Tangible Functional Programming), but I’m sure your views are much more interesting.
Some of those are pretty fundamental questions and I don’t know if I would be able to answer them with a featured article, but I’ll just try it right here:
With Don Norman’s definition of complexity in mind, I’d say that the mouse has a rather low intrinsic complexity compared to the keyboard. It allows us to move something (a cursor, most of the time) around on the 2D plane, and to “click” to induce an action, and that’s pretty much it. In contrast, the keyboard offers dozens of discrete possibilities for input, hundreds if you count every possible key combination using Ctrl, Shift or Alt.
Assuming that geeks are people who use their computers often, it seems obvious that they’d be looking for ways to speed up repetitive tasks. The keyboard is a great way to speed things up, because you have the aforementioned hundreds of ways to instantly tell the computer to do something. Every geek realizes at some point in their life that it is much faster to press Ctrl+X than to navigate the mouse to “Edit”, click, navigate down to “Cut” and click again.
If it’s faster, why doesn’t everyone do it this way? Because keyboard combinations, just like the commands in any CLI, need to be memorized to be used, and that’s not a priority for non-geeks. They use the mouse because that way the feature is discoverable. The mouse has, by virtue of its limited possibilities for interaction, broken down the complex task of hitting exactly the right two keys on your keyboard into a sequence of simpler steps: moving the mouse, clicking, and then moving and clicking some more. What’s noteworthy about this is that precisely because it consists of several steps and takes longer to do, the user can get feedback along the way (visual or otherwise) and thus can check if the procedure is going as planned. For people who don’t use their computers as often or don’t feel as confident using them, the speed loss is an acceptable tradeoff.
Does that answer some of your questions? I have a feeling you’ve had quite a few thoughts of your own on this, and I also think you’re overestimating me.
Thanks for your quick reply. It’s a great starting point. I just wanted to point out some paradoxes in those common observations, and how I think they might be solved, and how it could help to get some perspective in UI design.
Regarding the mouse vs. the keyboard, the first surprising thing is that the mouse is actually a much more powerful piece of IO hardware in terms of bits per second. Its true power shows in graphical applications, such as those for drawing and modelling. Notice that, in principle, you could use the mouse to hit keys on a virtual keyboard, so the difference is not in the mouse per se; it’s in the hierarchical menu interface. That’s why Ncurses UIs are about as newbie-friendly as GUIs once you learn the basics. They are also about as slow. So: hierarchical menus are discoverable and easy on your memory, but slower. Many graphical applications therefore add keyboard shortcuts, and they show the shortcuts in their menu entries, which is great. But why should keyboard shortcuts require so much memorization? They should give you the available options as you go, just like menus do. I’ve seen some of this in Emacs and ION3, but maybe there’s still some room for improvement. Emacs lets you cancel the whole command (C-g), but I haven’t seen a prominent option to withdraw one keystroke and pick another in a long key chord, as you would go back and forth in a menu. I guess you can do it with recursive editing of the minibuffer. Also, not just short descriptions but longer explanations should be available as you type.
My take is that the main advantage of the keyboard over the mouse is the multitouch interaction. The human nervous system is better adapted to position ten points with low precision than one point with extreme precision, and thus the superior bit rate of the mouse is not helpful in most cases. That’s also one reason why multitouch screens are so interesting, because they combine the best features of keyboard and mouse.
By the way, the CLI becomes much less daunting once you learn some little tricks such as the up arrow, tab completion, man, --help, apropos and the like. It’s also great that when you ask in a forum how to do something, you can just copy-paste the answer. You can’t do that with a GUI! Why not? I think you should be able to. Just like your past CLI commands are available to copy and paste, so should be your past chosen menu options. Also, GUI scripting should be as easy as CLI scripting.
Thanks for your appreciation. Yes, I have a few ideas, but they sometimes contradict each other and the overall picture is still a bit messy.
“How do we deliver good usability without compromising feature breadth? Should we be striving for simplicity, as it is so often demanded?”
When Larry Wall designed the Perl language, his guiding principle was: make simple things simple, difficult ones, possible. This can be applied to any UI. The point is that commonly-used features should be easy to invoke and the not so common, possible, though it may take some effort.
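As a sketch of that principle applied to a programming interface rather than a GUI: a function with sensible defaults keeps the common case trivial while optional parameters keep the uncommon case possible. The `export_csv` function below is invented for illustration and not from any real library; it just wraps Python’s standard `csv` module:

```python
# Hypothetical illustration of "simple things simple, difficult things possible".
import csv
import io

def export_csv(rows, delimiter=",", header=None, quoting=csv.QUOTE_MINIMAL):
    """Render rows as CSV text. The common case needs only `rows`;
    everything else is an optional knob for the uncommon case."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=delimiter, quoting=quoting)
    if header:
        writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

# Simple thing, simple call:
print(export_csv([[1, 2], [3, 4]]))

# Difficult thing, still possible, at the cost of spelling out the options:
print(export_csv([[1, 2]], delimiter=";", header=["a", "b"],
                 quoting=csv.QUOTE_ALL))
```

The analogy to UI design is direct: the defaults are the prominent, discoverable path, and the extra parameters are the advanced features tucked away until someone needs them.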
As for error messages, too many programmers use the word illegal when they mean invalid. Illegal means it’s against the law; invalid means it doesn’t make sense.
That really is my mantra “easy things easy, difficult things possible”. However, there is always an argument as to what “easy” means.
“Easy” for Larry Wall meant easy for someone who has mastered the language and will be using it frequently.
That is why Perl has a humorous reputation as being a write-only language.
IMHO, that has to be balanced with the other sense of “Easy”: easy for a novice to quickly understand.
The cockpit of a helicopter can be Perl-easy, as it takes a lot of training for a pilot to operate it safely. No one outside of action movies is going to have to jump into the cockpit and learn how to use it in 10 seconds.
A casual website needs to be Easy for new users to get, if you want users.
The bit missing from that principle (and conspicuously also absent from Perl) is that doing things which are inadvisable should be in the second category, not the first – possible, but not by accident.
Very nice article about UI design. Concepts like complexity, complicated, simple are very clearly explained. Thanks a lot
I’ve been saying for years now, selling phones, that I’m a firm believer in the difference between complex and complicated. Good to see I’m not the only one who sees this. As an example, I’d call any recent Nokia attempt at a smartphone complicated (it’s so hard to do common things, people think they’re easy because, perversely, it’s too much effort to do more than call or text). I would, however, consider Android to be complex. iPhones, to be sure, are simpler to look at than Android devices, but far more complicated to use. I’ve made many an iOS-to-Android conversion in my job, and those customers never look back.
Just a couple notes in general on the article:
Google Docs seems to be simpler to use, not more complex, when looking at the newer interface. They did not follow the MS Office treadmill of adding more and more toolbars. They’ve added complex functionality while hiding it from users that don’t need it and making the simple things easier to use.
Joel on Software. *shakes head* Why do people quote him as if he’s the final answer on software design and development? It’s not like you’re quoting Einstein on his theory of relativity: a remarkable work that has withstood rigorous testing. He’s just a guy who once worked on Excel and later started his own software company that makes software very few people use. Oh yeah, and he started a blog and wrote a book on software design. He has some good ideas, some bad ideas. Just because an idea came from him doesn’t make it an awesome “end of discussion” case-resting point.
Ah, here we go, thank you. I’d already started to feel light-headed from all the praise…
Re: Google Docs, this is probably a good point in time to admit that I’ve never used it very seriously and haven’t created any real documents with it. So if you feel that I’m wrong, that’s likely the reason, and in that case feel free to dismiss my opinion. Still, I’d like to clarify that I don’t think Google Docs is necessarily more complicated or difficult to use today than it was three or four years ago, just that it is more complex because it has many more features than back then; today you can do much more with it. Hiding advanced features well enough is a viable option for coping with increasing functional complexity.
As for Joel Spolsky, I didn’t intend to frame him as some sort of ultimate authority. He simply wrote something that fit very well into the article, and (I am taking him at face value here) he has several years of experience doing what he does at least semi-successfully. The quote in the article was not intended as religious dogma, but as an illustration of my point. I didn’t make my intentions clear enough in that regard, sorry for that.
Thanks for the clarification. I think I was a bit off in the previous post as well.
There really is a difference between complexity and complication. A good UI should be complex but not complicated. Microsoft (prior to the Ribbon interface) would have defended the vast number of toolbars as necessary given the complexity of the features available. The genius of the Ribbon was not that it made the software less complex, but that it made it less complicated. That’s what Google did to Docs as well.
At one point in my life I was teaching underprivileged kids basic computer skills on Win 98 machines, using MS Office 2000. The kids were really interested in clicking everything in sight without much thought, as they were very curious. The result was that the interface for Word often became indescribably messed up.
Frequent scenarios:
1) Every tool bar imaginable would be on the screen leaving an inch for actual editing.
2) Default Template changed to a document they had written
3) No tool bar visible, combined with the inability (for me, anyways) to figure out how to put any of them back.
4) Prompt on start up to insert a disk to complete installing some half installed feature.
The point is the UI allowed people to do stupid things that no one would ever really want to do, while making it difficult to undo any of it. That was stupid. Don’t do that in your UIs. The Ribbon interface, I’d imagine, would have prevented some of those problems.
I know some people love Joel. I don’t know why. I understand why some people have an inordinate amount of love and respect for Jobs, Ive, Raskin, the developers of Amiga, BeOS, Linux, BSD and Perl. They made really good software and designed it pretty darn well; millions use their products every day, because they *want* to. Joel was part of a team that ripped off Lotus 1-2-3 and VisiCalc’s interface.
Your article is insightful and well written. Good job.
I think when people refer to “simplicity” they are really talking about how easy an object is to use and figure out.
The “too-many-features” aspect is really a software engineering problem and not necessarily a usability problem.
For example, an application can have lots of features where each feature is well designed, easy to use and figure out. Usability wouldn’t be a problem in such an application. The problem most likely becomes the engineering cost of those features or of adding more features.
Every feature costs something. A new feature doesn’t come for free. In particular, the time it takes to research and design a feature “properly” so that it’s easy to use and figure out is significant. The cost of implementing the feature is not trivial either.
That’s why there’s a growing trend toward simple and minimal software applications (e.g. apps on your mobile devices). It’s just cheaper and more practical from a software engineering perspective. The bonus may, or may not, be that the application has high usability.
Just from my experience, applications that are focused on their responsibilities are more usable than those that have too many auxiliary responsibilities. The trend to keep things simple is wise and practical from both a software engineering and an interface design perspective.
Congrats on the introductory article; I greatly applaud your attitude toward educating developers in usability issues. That’s really something every dev should study and apply in their daily work.
I also want to recommend some great articles for beginners on UX:
Great introductory article on Interaction Design:
http://www.uxbooth.com/blog/complete-beginners-guide-to-interaction…
Illustrated article talking about this complexity subject. This one is for UI designers, but developers can also learn a thing or two with it:
http://www.smashingmagazine.com/2009/10/07/minimizing-complexity-in…
My initial thought was to wonder why you didn’t wait and post it all as one article. This article defines its terms relatively well, but that’s all it does.
I’d also be curious to see where you place consistency vs. novelty in the list of qualities and priorities. It would certainly be pertinent to the ‘muddle through and post hoc rationalize’ tunnel-vision of most users, which you explicitly brought up.
Good point. I actually didn’t want to make it any longer than it already is, because I hoped to entice a few readers who wouldn’t otherwise read essays about usability. People have mentioned above that they find the article rather accessible and approachable. I suspect being concise and limiting its scope was a big part of that.
Also, I wasn’t sure if OSNews readers cared about this topic much at all, and I wanted to test the waters, so to speak.
I like the number 7, the logarithm, and spheres.
The number 7, because it is the maximum number of items I can stare at without panicking.
The logarithm, because N menu items with N sub-items each, nested M levels deep, deliver N^M options at just M clicks. E.g. 49 options at just 2 clicks.
Spheres, because they balance effort equally across their entire surface, just as UIs should balance the user’s effort across time: frequent actions should get the best interaction resources (items on primary screens, shorter keyboard shortcuts, …); infrequent actions should not.
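That exponential pay-off of menu depth is easy to check with a quick sketch (the numbers are just the commenter’s example, not from any real application):

```python
# Options reachable in a menu hierarchy with a fixed branching factor:
# each click descends one level, so M clicks can reach any of N**M leaves.

def reachable_options(items_per_level: int, depth: int) -> int:
    """Number of distinct leaf options reachable with `depth` clicks."""
    return items_per_level ** depth

# The commenter's example: 7 items per menu, 2 levels -> 49 options.
print(reachable_options(7, 2))  # 49
# One more level already dwarfs what a flat toolbar could expose:
print(reachable_options(7, 3))  # 343
```

This is the same logic in reverse as the logarithm the commenter mentions: reaching one of K options needs only about log_N(K) clicks.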
It would be nice if you could analyze a few popular programs, with screenshots, and point out what is good and bad about them.
The keyboard should also be consigned to history. It is an anachronistic carry-over from the mechanical era.
For phones and notebooks a far better solution is a simple chorded keypad.
http://en.wikipedia.org/wiki/Chorded_keyboard
Typing should be done with an EEG, just by wearing a headset and mentally spelling the words.
We have been using chorded input systems very successfully for thousands of years in musical instruments. There is an initially steep learning curve but the end result is far greater efficiency.
Conventional keyboards only exist because typewriters were purely mechanical devices that directly connected the key to the typebar. Keyboards (particularly QWERTY types) are incredibly bulky, cumbersome and inefficient.
A full QWERTY keyboard on a phone or tablet is absolutely farcical when only 5 standardised buttons are needed to represent all functions on a conventional keyboard. A far better solution is to place the buttons on the perimeter of the phone.
5 buttons only give 2^5 = 32 possible combinations (31 if you exclude pressing nothing), if I’m not mistaken (unless you want to bring back the multiple-tap hell). So you wouldn’t even get 26 letters and 10 digits, let alone other very important things like the space bar, the shift key, punctuation, parentheses and other brackets, diacritical marks for languages that have them, and other signs used rather frequently like +-*/=$@ (and, for programmers, &|^#).
I think that like on musical instruments, we’d need to have a bit more buttons, and to move the fingers between them. AFAIK, that’s how existing one-handed chorded keyboard implementations work.
Plus, you cannot use five fingers on a one-handed input device. You can only use four.
Something needs to provide a force for your four fingers to press against and that something is the thumb. That means the thumb can’t be used for input.
So that leaves you with only 2^4 = 16 combinations, or 15 usable chords once you exclude the empty press.
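The chord counts being debated here are plain combinatorics. A minimal sketch, assuming one chord means one simultaneous subset of pressed buttons (no multi-tap, no timing tricks):

```python
from itertools import combinations

def chords(n_buttons: int):
    """Yield every non-empty subset of n buttons pressed at once."""
    buttons = range(n_buttons)
    for size in range(1, n_buttons + 1):
        for combo in combinations(buttons, size):
            yield combo

# 5 buttons: 2**5 - 1 = 31 usable chords (pressing nothing isn't a chord).
print(sum(1 for _ in chords(5)))  # 31
# Excluding the thumb leaves 4 buttons: 2**4 - 1 = 15 chords.
print(sum(1 for _ in chords(4)))  # 15
```

Either count falls short of 26 letters plus 10 digits, which supports the point that a practical chorded keypad needs more buttons, or modifier chords that extend the alphabet.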
The palm of your hand provides something to counteract the forces of your fingers, and the forces involved are tiny if you use capacitive switches.
You can also use multi-function buttons, which have been on digital watches since the 1970s.
You would simply use a 4-position thumb slider or dual pressure switches.
An electronic keyboard functions like a piano for purely historical reasons. There is no real reason why an electronic keyboard couldn’t have single-key chords or the ability of a single key to play several different notes using either a rocker or pressure switch.
EEG control would be about the least efficient mechanism imaginable. Even very simple actions such as turning a switch on or off via EEG require a considerable amount of concentration.
Currently, yes, but who knows what we’ll be able to do in the future, when we have more precise hardware and better analysis algorithms at a lower price.
I’ve also seen some attempts to combine detection of conscious thoughts with emotion detection (which is apparently simpler) and facial expression analysis. Such “hybrid” mechanisms based on a wider data set could also be a promising path.
Muscular movements are not controlled directly by the motor cortex of the brain. They are mediated through the central nervous system, which provides constant feedback via multiple sensory systems, including vision, hearing, heat/cold, pressure and proprioception (spatial perception).
If you try to control a mechanism directly via EEG, you don’t have this constant autonomic feedback. This means a vastly greater level of conscious control is necessary – imagine having to deliberately think about how to breathe or swallow. In effect, you permanently remain at the “baby steps” stage.
Indeed, this makes a lot of sense to me. The lack of feedback is the reason why I dislike touchscreens so much (although it sounds like I’ll have to get used to them anyway).