Linked by Thom Holwerda on Mon 19th Dec 2011 20:11 UTC
Google Once upon a time, in a land, far, far away, there were two mobile operating systems. One of them was designed for mobile from the ground up; the other was trying really hard to copy its older, desktop brother. One was limited in functionality, inflexible and lacked multitasking, but was very efficient, fast, and easy to use. The other had everything and the kitchen sink, was very flexible and could multitask, but had a steep learning curve, was inconsistent, and not particularly pretty.
Permalink for comment 501076
RE[3]: Comment by frderi
by Neolander on Fri 23rd Dec 2011 13:41 UTC in reply to "RE[2]: Comment by frderi"

Most of [software usability books] were written for the traditional WIMP paradigm. While WIMP has served us well, it doesn't really take into account the unique features of these new devices.

Well, it seems to me that the aforementioned principles are very general and could apply just as well to non-software UIs, such as those of coffee machines or dishwashers.

Your perception might differ, but the current surge of smartphones in the marketplace doesn't really make them a product failure, now does it?

Being commercially successful is not strongly related to usability or technical merit. For two examples of commercially successful yet technically terrible products, consider Microsoft Windows and the QWERTY computer keyboard layout.

For smartphones, mostly screen real estate and handling. There's simply no room for a conventional menu paradigm on a smartphone.

So, how did keypad-based cellphones running S40 and friends manage to use this very paradigm for years without confusing anyone?

But a skeuomorphic design does not need to imply that things are not labeled.

That depends on whether the real-world object you mimic has labels or not.

A big problem which I have with this design trend is that it seems to assume that past designs were perfect and that shoehorning them onto a computer is automatically the best solution. But as it turns out, modern desktop computers have gradually dropped the 90s desktop metaphor for very good reasons...

You could design a skeuomorphic virtual amplifier where the knobs are labeled (Treble, Reverb, Volume, ...), for example. But manipulating a knob with a WIMP design is awkward; with touch it becomes a breeze.

Disagree. Virtual knobs are still quite awkward on a touchscreen, because like everything else on a touchscreen they are desperately flat and slippery. When you turn a virtual knob on a touchscreen, you need to constantly focus part of your attention on keeping your finger on the virtual knob, which is a non-issue with physical knobs, which mechanically keep your hand in place.

Coherence is meant to facilitate predictability. The need for predictability by convention implies the paradigm itself is too complex to be self-explanatory.

Well, that is a given. Only a few very simple devices, such as knives, can have a self-explanatory design. As soon as you get into a workflow that is even slightly complex, you need to cut it into smaller steps, preferably steps that are easy to learn.

When processor power increased, so did the feature set of applications, and new applications overshot the design limitations of the initial WIMP devices by a great deal, leading to these giant monolithic applications where most users don't even know about, let alone use, 95% of the application.

What you are talking about is feature bloat, which is not an intrinsic problem of WIMP. As an example, modern cars are bloated with features no one knows or cares about. The reason why they remain usable in spite of this feature overflow is that visual information is organized in such a way that users don't have to care.

Information hierarchization is something which WIMP can do, and which any "post-WIMP" paradigm would have to integrate for powerful applications to be produced. Zooming user interfaces are an interesting example of how this can be done on touchscreens, by the way.
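To make the point concrete, here is a minimal sketch of what information hierarchization means in practice: a large command set grouped into a handful of categories, so the user initially faces only the category names, exactly as a WIMP menu bar does. All names here are invented for illustration.

```python
# Hypothetical sketch: hierarchizing a command set so users only face
# a few top-level categories at a time, the way WIMP menus do.

COMMANDS = {
    "File": {"Open": "open_doc", "Print": "print_doc", "Export": "export_pdf"},
    "Edit": {"Copy": "copy_sel", "Paste": "paste_sel"},
    "View": {"Zoom In": "zoom_in", "Zoom Out": "zoom_out"},
}

def top_level(menu):
    """Initially, the user sees only the category names, not every command."""
    return sorted(menu)

def commands_in(menu, category):
    """Drilling into one category reveals just its handful of commands."""
    return sorted(menu[category])

print(top_level(COMMANDS))            # 3 categories instead of 7 commands
print(commands_in(COMMANDS, "File"))
```

A zooming UI does the same thing spatially: the "categories" are regions of a zoomed-out canvas, and drilling down is a zoom gesture instead of a menu click.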

[Puzzle Bobble example] Things like controlling velocity or force have always been awkward with buttons.

Tell that to video game consoles, which have had pressure-sensitive buttons for ages ;) The reason why desktop computers did not get those is that they were designed for work rather than for fun.

Now, I agree that skeuomorphic interfaces can be quite nice for games, especially when coupled with other technologies such as accelerometers. My problem is their apparent lack of generality: it is not obvious what the "post-WIMP" answer to common computing problems, such as office work or programming, would be. Does it fail at being a general-purpose interface design like WIMP is?

Let's take another example: a PDF reader. You could design its UI with traditional UI elements: menus, resizable windows, scrollbars, ... or you could design it as just a fullscreen page that you can flick with your finger. Which of the two is more intuitive and better adapted to a smartphone's screen real estate?

This kind of paradigm works for simple tasks, but breaks down as soon as you want to do something even slightly complex. How about printing that PDF, as an example? Or jumping between chapters and reading a summary when you deal with technical documentation that's hundreds of pages long? Or finding a specific paragraph in such a long PDF? Or selectively copying and pasting pictures or text?

It is not impossible to do on a touchscreen interface, and many cellphone PDF readers offer those kinds of features. They simply use menus for that, because menus offer clearly labeled features in a high-density display. And what is the issue with that?

That's why you have things like tap to focus and pinch to zoom, to complement information density.

And in one application, a tap will zoom; in another, it will activate an undiscoverable on-screen control; a third application will require a double tap; whereas in a fourth application, said double tap will open a context menu...

Beyond a few very simple tasks, such as activating buttons, scrolling, and zooming, gestures are a new form of command line, with more error-prone detection as an extra "feature". They are not a magical way to increase the control density of an application up to infinity without adding a bit of discoverable chrome to this end.
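The "invisible command line" problem above can be sketched in a few lines: the same gesture bound to different actions in different applications, with nothing on screen to tell the user which mapping is in force. All app and action names below are invented for illustration.

```python
# Hypothetical sketch: identical gestures dispatched to different actions
# per application, which is what makes undocumented gestures behave like
# an invisible command line.

GESTURE_BINDINGS = {
    "photo_viewer": {"tap": "zoom", "double_tap": "reset_zoom"},
    "map_app":      {"tap": "show_controls", "double_tap": "zoom"},
    "text_reader":  {"tap": "turn_page", "double_tap": "open_menu"},
}

def dispatch(app, gesture):
    """Without visible chrome, nothing lets the user predict this mapping."""
    return GESTURE_BINDINGS[app].get(gesture, "ignored")

# The very same input produces three different results:
for app in sorted(GESTURE_BINDINGS):
    print(app, "tap ->", dispatch(app, "tap"))
```

A labeled button, by contrast, carries its own documentation on screen; the dispatch table above lives entirely in the user's memory.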

- Its interfaces as we know them are highly abstract; there is little correlation between what we see on screen and what humans know outside the world of the computer screen,

Oh, come on! This was a valid criticism when microcomputers were all new, but nowadays most of what we do involves a computer screen in some way. Pretty much everyone out there knows how to operate icons, menus, and various computer input peripherals. The problem is with the way applications which use these interface components are designed, not with the components themselves!

- The set of objects in traditional WIMP interfaces is quite limited. This was less of an issue when computers weren't all that powerful and thus couldn't do that much, but the system has since grown way beyond its initial boundaries, making featureful applications overly complex.

Basing a human-machine interface on a small number of basic controls is necessary for a number of reasons, including but not limited to ease of learning, reduction of technical complexity, and API code reusability.
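The reuse argument can be illustrated with a toy sketch: one control type, implemented once, serves unrelated applications with identical behavior, so users and toolkit authors each only pay the learning or implementation cost a single time. Everything here is invented for illustration.

```python
# Hypothetical sketch: a single shared control reused by unrelated
# "applications", standing in for a toolkit's small widget vocabulary.

class Button:
    """One implementation of click behavior, shared by every application."""
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click

    def click(self):
        # Consistent activation behavior, whatever the button does.
        return self.on_click()

# Two different applications reuse the exact same control:
save_button = Button("Save", lambda: "document saved")
quit_button = Button("Quit", lambda: "application closed")

print(save_button.click())
print(quit_button.click())
```

Every nonstandard widget an application invents instead is one more control whose behavior has to be implemented, debugged, and learned from scratch.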

Adding millions of nonstandard widgets to increase an application's vocabulary is possible in a WIMP design, good programmers only avoid it because they know how much of a usability disaster that turns out to be.

You could make more direct error messages instead of having to rely on the primitive WIMP feedback mechanisms like dialogs.

Such as ?

Consider an application that allows you to control the speed and the pitch of audio in real time. Implement it on a WIMP-driven desktop or laptop first, using the normal HI conventions, then implement it in a skeuomorphic way on a touchscreen. Which will be faster to use?

A common argument that has never been proven to hold in the real world. When I'm in front of an analog audio mixing console, I generally only manipulate one control at a time, except when turning it off, because otherwise I can't separate the effects of the two controls in the sensory feedback that reaches my ear. More generally, it has been scientifically proven many times that human beings suck at doing multiple tasks at the same time.
