Linked by Thom Holwerda on Mon 19th Dec 2011 20:11 UTC
Google Once upon a time, in a land, far, far away, there were two mobile operating systems. One of them was designed for mobile from the ground up; the other was trying really hard to copy its older, desktop brother. One was limited in functionality, inflexible and lacked multitasking, but was very efficient, fast, and easy to use. The other had everything and the kitchen sink, was very flexible and could multitask, but had a steep learning curve, was inconsistent, and not particularly pretty.
Thread beginning with comment 501057
RE[2]: Comment by frderi
by frderi on Fri 23rd Dec 2011 09:50 UTC in reply to "RE: Comment by frderi"

I happen to have at hand a nice book about usability in software UIs which I find very well-written.


I happen to have read a couple of them as well. Most of them were written for the traditional WIMP paradigm. While WIMP has served us well, these books don't really take into account the unique features of these new devices.

For your information, skeuomorphic UIs on cellphone-sized touchscreens fail at


Your perception might differ, but the current surge of smartphones in the marketplace doesn't really make them a product failure, now does it.


1) Why use clearly labeled and visible controls when you can use obscure gestures instead?


For smartphones, mostly screen real estate and handling. There's simply no room for a conventional menu paradigm on a smartphone. But a skeuomorphic design does not need to imply that things are not labeled. You could design a skeuomorphic virtual amplifier where the knobs are labeled (Treble, Reverb, Volume, ...), for example. Manipulating a knob with a WIMP design is awkward, though. With touch it becomes a breeze.

2) Coherence (Need to explain?)


Coherence is meant to facilitate predictability. The need for predictability by convention implies the paradigm itself is too complex to be self-explanatory. The first commercial WIMP devices were conceived to be self-explanatory in the first place. The menu bar was invented so that people would not have to remember commands; it was an essential part of the design that made the Mac as self-teaching as possible. The Xerox Alto and Star, devoid of application menu bars, still required users to remember all the commands by heart, just like the more primitive programs on CP/M and DOS. The goal of the first commercial WIMP computers was that you would not need a manual to operate the computer (I said goal; whether they succeeded is another matter). The point of menus is that you can look up commands quickly, at will, and execute them directly if need be. When processor power increased, so did the feature set of applications, and new applications overshot the design limitations of the initial WIMP devices by a great deal, leading to giant monolithic applications where most users don't even know or use 95% of the entire application.


-Conventions (Because in the end, your touchscreen remains a flat surface that does not behave like any other real-world object, except maybe sheets of paper. As a developer, attempting to mimic real-world objects on a touchscreen is simply cutting yourself off from the well-established PC usage conventions and forcing users to learn new UI conventions *once again*, except this time it's one new UI convention per application)


I think you're failing to see the ingenuity of post-WIMP interfaces here. I'll give a simple example: the game of Puzzle Bobble. On a traditional WIMP system like a desktop computer, it's played with the keyboard. Because of that, the input methods are highly abstracted from the gameplay itself, and the gap between what the user sees and what needs to be done to control the slingshot is quite big. So there's an initial barrier to overcome before these movements are stored in motor memory and the control becomes natural. Compared to keyboards, the mouse pointer already lowered this barrier a great deal, albeit not completely. One could design Puzzle Bobble to be played with a mouse pointer, which would lower the bar, but it would still have quite a few limitations in terms of precision, and muscular strain when playing for extended periods of time. On a more general note, people who have never used a mouse before initially struggle with it as well. On a post-WIMP smartphone, the barrier is much lower than with the keyboard or even the mouse. In Puzzle Bobble, the player can manipulate the slingshot directly. He can play the entire game with one finger, instead of using a multitude of buttons. Because he is able to manipulate the object so directly, things like controlling the velocity of the ball become possible. Controlling velocity or force has always been awkward with buttons.
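The velocity control described above falls naturally out of the raw touch data. A minimal sketch of the idea (the sample format and scaling are illustrative assumptions, not any real toolkit's API):

```python
# Sketch: deriving launch angle and speed from a single touch drag.
# Touch samples are (x, y, t) tuples as a hypothetical toolkit might deliver them.
import math

def launch_from_drag(samples):
    """Compute aim angle (radians) and speed from the last two touch samples."""
    (x0, y0, t0), (x1, y1, t1) = samples[-2], samples[-1]
    dx, dy, dt = x1 - x0, y1 - y0, t1 - t0
    angle = math.atan2(dy, dx)          # direction of the finger's motion
    speed = math.hypot(dx, dy) / dt     # a faster flick launches a faster ball
    return angle, speed

# A 50 px flick over 50 ms: both direction and force come from one gesture.
angle, speed = launch_from_drag([(0, 0, 0.00), (30, 40, 0.05)])
```

The same gesture thus carries two continuous parameters at once, which is exactly what discrete buttons cannot express.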

Let's take another example: a PDF reader. You could design its UI with traditional UI elements (menus, resizable windows, scrollbars, ...), or you could design it fullscreen, where you flick a page with your finger. Which of the two is more intuitive and better adapted to a smartphone's screen real estate?


-Information (Modern cellphones are already bad at this due to the limitations of touchscreen hardware, combined with a tendency to manufacture them in a very small form factor. Attempting to mimic large objects on such a small screen is only a way to further reduce the allowed information density)


That's why you have things like tap to focus and pinch to zoom, to compensate for the lower information density.
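Pinch-to-zoom itself reduces to a ratio of finger distances, which is why it feels so direct. A minimal sketch, with hypothetical pixel coordinates:

```python
# Sketch: the zoom factor of a pinch gesture is simply the ratio of the
# current finger spread to the initial finger spread.
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the content scale factor implied by two moving touch points."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers move from 100 px apart to 200 px apart: the content doubles in size.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```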


-Comprehension (Mostly a limitation of touchscreens rather than skeuomorphic design, but since touchscreens offer no form of "hover" feedback and mobile phone screens are way too small, developers often resort to obscure icons in order to shoehorn their UIs into small form factors)


I'd argue that comprehension is the biggest drawback of a traditional WIMP design because:

- Its interfaces as we know them are highly abstract; there is little correlation between what we see on screen and what humans know outside the world of the computer screen.

- The set of objects in traditional WIMP interfaces is quite limited. This was less of an issue when computers weren't all that powerful and thus couldn't do that much, but the system has since grown way beyond its initial boundaries, making featureful applications overly complex.


-Error management (When you try to mimic real-world objects, you have to ditch most of the WIMP error feedback mechanisms, without being able to use those of real-world objects, because they are strongly related to their three-dimensional shape)


You could make more direct error messages instead of having to rely on the primitive WIMP feedback mechanisms like dialogs.


-Speed (Software UIs can offer physically impossible workflows that are much faster than anything real-world objects can do. If you want to mimic the physical world, you have to lose this asset, without losing the intrinsically slow interaction of human beings with touchscreens)


I don't agree with you here. Traditional WIMP UIs can be inherently slower as well, depending on the use case. Consider an application that allows you to control the speed and the pitch of audio in real time. Implement it on a WIMP-driven desktop or laptop first using the normal HI conventions, then implement it in a skeuomorphic way on a touch screen. Which will be faster to use? On a WIMP device, you only have one pointer, so you're never able to manipulate both pitch and speed at the same time; you have to jump from one to the other with your pointer all the time. On a post-PC device with multitouch, this problem does not exist.
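The multitouch case can be sketched concretely: each touch point is routed to its own control by touch id, so one finger can hold speed steady while another moves pitch. The control layout and value mapping below are illustrative assumptions, not any real API:

```python
# Sketch: routing concurrent touches to independent controls by touch id,
# the mechanism that lets two parameters be manipulated simultaneously.

class Slider:
    def __init__(self, name, x_min, x_max):
        self.name, self.x_min, self.x_max = name, x_min, x_max
        self.value = 0.5                       # normalized 0..1

    def contains(self, x):
        return self.x_min <= x < self.x_max

    def update(self, y, height=480):
        self.value = 1.0 - y / height          # top of screen = 1.0

class MultiTouchMixer:
    def __init__(self):
        self.sliders = [Slider("pitch", 0, 160), Slider("speed", 160, 320)]
        self.grabs = {}                        # touch id -> grabbed slider

    def touch_down(self, tid, x, y):
        for s in self.sliders:
            if s.contains(x):
                self.grabs[tid] = s
                s.update(y)

    def touch_move(self, tid, x, y):
        if tid in self.grabs:                  # each finger keeps its own slider
            self.grabs[tid].update(y)

mixer = MultiTouchMixer()
mixer.touch_down(1, 80, 240)    # finger 1 grabs the pitch slider
mixer.touch_down(2, 240, 240)   # finger 2 grabs the speed slider at the same time
mixer.touch_move(1, 80, 120)    # raise pitch while speed stays under finger 2
```

A single mouse pointer, by contrast, can only ever hold one entry in that `grabs` table at a time.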

Another example: let's make a software synthesizer. Done in a WIMP fashion, it will most probably consist of an array of sliders, buttons, and labeled input fields. A skeuomorphic one will be composed of virtual knobs and a virtual keyboard. While the first one might be more precise, the latter will be a lot more intuitive and a lot more inviting to tinkering and experimenting, triggering creativity a lot more. And it will be a lot more fun to use!


-Accessibility (Give a touchscreen to your old grandpa who has Parkinson's, and see how well he fares with these small interfaces without any haptic feedback. Not a problem with computer mice, which are relative pointers whose sensitivity can be reduced at will)


Traditional WIMP interfaces fail blind users. Your point being? And I bet my old grandpa (if he were still alive) would have a much easier time searching for whatever he'd forgotten with Siri, rather than typing things into a Google-like interface on your WIMP device.


I believe that most of this still holds for tablets, although some problems related to the

Reply Parent Score: 1

RE[3]: Comment by frderi
by Neolander on Fri 23rd Dec 2011 13:41 in reply to "RE[2]: Comment by frderi"

Most of [software usability books] were written for the traditional WIMP paradigm. While WIMP has served us well, they don't really take into account the unique features of these new devices.

Well, it seems to me the aforementioned principles are very general and could apply to non-software UIs such as that of coffee machines or dish washers.

Your perception might differ, but the current surge of smartphones in the marketplace doesn't really make them a product failure, now does it.

Being commercially successful is not strongly related to usability or technical merits. For two examples of commercially successful yet technically terrible products, consider Microsoft Windows and QWERTY computer keyboards.

For smartphones, mostly screen real estate and handling. There's simply no room for a conventional menu paradigm on a smartphone.

So, how did keypad-based cellphones running S40 and friends manage to use this very paradigm for years without confusing anyone?

But a skeuomorphic design does not need to imply that things are not labeled.

That depends on whether the real-world object you mimic has labels or not.

A big problem which I have with this design trend is that it seems to assume that past designs were perfect and that shoehorning them onto a computer is automatically the best solution. But as it turns out, modern desktop computers have gradually dropped the 90s desktop metaphor for very good reasons...

You could design a skeuomorphic virtual amplifier where the knobs are labeled (Treble, Reverb, Volume, ...), for example. Manipulating a knob with a WIMP design is awkward, though. With touch it becomes a breeze.

Disagree. Virtual knobs are still quite awkward on a touchscreen, because like everything else on a touchscreen they are desperately flat and slippery. When you turn a virtual knob on a touchscreen, you need to constantly focus part of your attention on keeping your finger on the virtual knob, a non-issue with physical knobs, which mechanically keep your hand in place.
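The mechanics of the complaint are easy to sketch: a virtual knob maps the touch position around its center to an angle, but because the screen offers no mechanical detent, a finger that drifts outside the hit radius silently stops turning it. The center, radius, and hit-test logic below are illustrative assumptions:

```python
# Sketch: a virtual knob on a flat touchscreen. Turning works by angle
# tracking, but nothing physically holds the finger on the control.
import math

KNOB_CENTER = (100, 100)
KNOB_RADIUS = 40          # hypothetical hit radius in pixels

def knob_angle(touch):
    """Angle of the touch point around the knob center, in degrees."""
    dx, dy = touch[0] - KNOB_CENTER[0], touch[1] - KNOB_CENTER[1]
    return math.degrees(math.atan2(dy, dx))

def on_knob(touch):
    """Drift outside the hit radius and the knob stops responding,
    which is exactly the attention cost a physical knob doesn't have."""
    dx, dy = touch[0] - KNOB_CENTER[0], touch[1] - KNOB_CENTER[1]
    return math.hypot(dx, dy) <= KNOB_RADIUS
```

A physical knob removes the `on_knob` check from the user's mind entirely; the touchscreen makes the user perform it continuously.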

Coherence is meant to facilitate predictability. The need for predictability by convention implies the paradigm itself is too complex to be self-explanatory.

Well, that is a given. Only a few very simple devices, such as knives, can have a self-explanatory design. As soon as you get into a workflow that is even a tiny bit complex, you need to cut it into smaller steps, preferably steps that are easy to learn.

When processor power increased, so did the feature set of applications, and new applications overshot the design limitations of the initial WIMP devices by a great deal, leading to giant monolithic applications where most users don't even know or use 95% of the entire application.

What you are talking about is feature bloat, which is not an intrinsic problem of WIMP. As an example, modern cars are bloated with features no one knows or cares about. The reason why they remain usable in spite of this feature overflow is that visual information is organized in such a way that the user does not have to care.

Information hierarchization is something which WIMP can do, and which any "post-WIMP" paradigm would have to integrate for powerful applications to be produced. Zooming user interfaces are an interesting example of how this can be done on touchscreens, by the way.

[Puzzle Bobble example] Things like controlling velocity or force have always been awkward with buttons.

Tell that to video game consoles, which have had pressure-sensitive buttons for ages ;) The reason why desktop computers did not get those is that they were designed for work rather than for fun.

Now, I agree that skeuomorphic interfaces can be quite nice for games, especially when coupled with other technologies such as accelerometers. My problem is their apparent lack of generality: it is not obvious what the "post-WIMP" answer to common computer problems, such as office work or programming, would be. Does it fail at being a general-purpose interface design like WIMP is?

Let's take another example: a PDF reader. You could design its UI with traditional UI elements (menus, resizable windows, scrollbars, ...), or you could design it fullscreen, where you flick a page with your finger. Which of the two is more intuitive and better adapted to a smartphone's screen real estate?

This kind of paradigm works for simple tasks, but breaks down as soon as you want to do something even a tiny bit complex. How about printing that PDF, as an example? Or jumping between chapters and reading a summary when you deal with technical documentation that's hundreds of pages long? Or finding a specific paragraph in such a long PDF? Or selectively copying and pasting pictures or text?

It is not impossible to do on a touchscreen interface, and many cellphone PDF readers offer those kinds of features. They simply use menus for that, because menus offer clearly labeled features in a high-density display. And what is the issue with that?

That's why you have things like tap to focus and pinch to zoom, to compensate for the lower information density.

And in one application, a tap will zoom; in another, it will activate an undiscoverable on-screen control; a third will require a double-tap; whereas in a fourth, said double tap will open a context menu...

Beyond a few very simple tasks, such as activating buttons, scrolling, and zooming, gestures are a new form of command line, with more error-prone detection as an extra "feature". They are not a magical way to increase the control density of an application up to infinity without adding a bit of discoverable chrome to this end.

- Its interfaces as we know them are highly abstract; there is little correlation between what we see on screen and what humans know outside the world of the computer screen.

Oh, come on! This was a valid criticism when microcomputers were all new, but nowadays most of what we do involves a computer screen in some way. Pretty much everyone out there knows how to operate icons, menus, and various computer input peripherals. The problem is with the way applications which use these interface components are designed, not with the components themselves!

- The set of objects in traditional WIMP interfaces is quite limited. This was less of an issue when computers weren't all that powerful and thus couldn't do that much, but the system has since grown way beyond its initial boundaries, making featureful applications overly complex.

Basing a human-machine interface on a small number of basic controls is necessary for a wide range of reasons, including but not limited to ease of learning, reduction of technical complexity, and API code reusability.

Adding millions of nonstandard widgets to increase an application's vocabulary is possible in a WIMP design; good programmers only avoid it because they know how much of a usability disaster it turns out to be.

You could make more direct error messages instead of having to rely on the primitive WIMP feedback mechanisms like dialogs.

Such as?

Consider an application that allows you to control the speed and the pitch of audio in real time. Implement it on a WIMP-driven desktop or laptop first using the normal HI conventions, then implement it in a skeuomorphic way on a touch screen. Which will be faster to use?

A common argument that has never been proven to hold in the real world. When I'm in front of an analog audio mixing console, I generally only manipulate one control at a time, except when turning it off, because otherwise I can't separate the effects of the two controls in the sensory feedback that reaches my ear. More generally, it has been scientifically proven many times that human beings suck at doing multiple tasks at the same time.


RE[3]: Comment by frderi
by Neolander on Fri 23rd Dec 2011 13:42 in reply to "RE[2]: Comment by frderi"

Another example: let's make a software synthesizer. Done in a WIMP fashion, it will most probably consist of an array of sliders, buttons, and labeled input fields. A skeuomorphic one will be composed of virtual knobs and a virtual keyboard.

Introducing ZynAddSubFX : http://zynaddsubfx.sourceforge.net/images/screenshot02.png

A very nice piece of open-source software, truly, although it takes some time to get used to. It has both a range of preset patches for quick fun and very extensive synthesis control capabilities for the most perfectionist of us. And it does its thing using only a small number of nonstandard GUI widgets, which have for once been well thought out.

While you mention on-screen keyboards on touchscreens, ZynAddSubFX is cleverer: it uses the keyboard that comes with every computer in a creative way instead. The QSDF key row is used for the white piano keys, whereas the AZERTY row is used for the black ones. Of course, you only get a limited number of notes this way, just like on a tablet-sized touchscreen, but any serious musician will use a more comfortable and powerful external MIDI keyboard anyway.
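The row mapping is straightforward to sketch; the exact key rows and note numbers below are illustrative, not copied from ZynAddSubFX:

```python
# Sketch of ZynAddSubFX-style typing-keyboard note entry on an AZERTY layout.
WHITE_ROW = "qsdfghjklm"        # home row -> white keys, starting at middle C
BLACK_ROW = "azertyuiop"        # row above -> the black keys in between

# Semitone offsets of consecutive white keys in a C-major layout.
WHITE_SEMITONES = [0, 2, 4, 5, 7, 9, 11, 12, 14, 16]

def key_to_midi(key, base=60):  # MIDI note 60 = middle C
    """Map a typing-keyboard key to a MIDI note number, or None."""
    key = key.lower()
    if key in WHITE_ROW:
        return base + WHITE_SEMITONES[WHITE_ROW.index(key)]
    return None                 # black-key handling omitted for brevity
```

So an ordinary typing keyboard already gives a playable octave and a half without any touchscreen at all.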

Why would one need a touchscreen for software synthesis that works like a real-world synthesizer?

While the first one might be more precise, the latter will be a lot more intuitive and a lot more inviting to tinkering and experimenting, triggering creativity a lot more. And it will be a lot more fun to use!

No, it won't be any more intuitive than a well-done regular synthesizer GUI. Actually, if it aims at mimicking a real-world synthesizer, it may turn out to be as overwhelmingly loaded with controls as a real-world synthesizer, which is arguably ZynAddSubFX's biggest problem.

A well-designed WIMP program, on the contrary, could use information hierarchy to hide "advanced controls" away from direct user sight, in a fashion that makes them accessible to experienced users without harming newcomers' user experience. This allows software interfaces to have a softer learning curve than analog appliances, arguably voiding the core point of skeuomorphism advocates.

Traditional WIMP interfaces fail blind users. Your point being?

That touchscreens are just as much of a mess as mice for blind people (even more so, because they cannot provide spoken hover feedback, as current touchscreens are not capable of hover detection), but actually cater to a much smaller range of users.

And I bet my old grandpa (if he were still alive) would have a much easier time searching for whatever he'd forgotten with Siri, rather than typing things into a Google-like interface on your WIMP device.

Siri is a command-based voice interface designed for very specific use cases that are hard-coded at the OS level. I thought we were talking about general-purpose touchscreen GUIs so far?


RE[4]: Comment by frderi
by frderi on Sat 24th Dec 2011 11:43 in reply to "RE[3]: Comment by frderi"

Well, it seems to me the aforementioned principles are very general and could apply to non-software UIs such as that of coffee machines or dish washers.


Do you really want dishwashers and coffee machines with resizable windows, double-clickable icons and a pointer device?


Being commercially successful is not strongly related to usability or technical merits.


True, but in the case of an emerging, customer-driven smartphone market, I tend to disagree.



So, how did keypad-based cellphones running s40 and friends manage to use this very paradigm for years without confusing anyone ?


Certainly not by using a WIMP paradigm.



A big problem which I have with this design trend is that it seems to believe that past designs were perfect and that shoehorning them on a computer is automatically the best solution.


Oh, I'm not saying skeuomorphic designs are the answer to everything. I'm saying that for quite a few applications on a post-PC device, they make a lot more sense than a traditional WIMP paradigm would.


When you turn a virtual knob on a touchscreen, you need to constantly focus a part of your attention on keeping your hand on the virtual knob


How long do you keep your hand on a knob anyway?



As soon as you get into a workflow that is a tiny bit complex, you need to cut it in smaller steps, preferably steps that are easy to learn.


All physical devices do, as do WIMP and post-WIMP devices, so no difference there.


What you are talking about is feature bloat, which is not an intrinsic problem of WIMP.


Well, it kind of is, by design. It wasn't anticipated when they first came up with it, so it has become a problem for the paradigm. Solvable by conventions, yes, but conventions are more a band-aid than a real fix, aren't they.


The reason why desktop computers did not get those is that they were designed for work rather than for fun.


Then what were all those PCs doing in our homes in the nineties?


it is not obvious what the answer of "post-WIMP" to common computer problems would be


Of course it's not obvious. Do you think coming up with a working WIMP paradigm was all that obvious to begin with? Just look at the multitude of WIMP-based GUI solutions that were out there in the eighties. Nowadays pretty much everyone is emulating the Mac. It's still early days for post-PC.


This kind of paradigm works for simple tasks, but breaks down as soon as you want to do stuff that is a tiny bit complex.


Then aren't we the luckiest guys on earth that smartphones are a perfect fit for simple tasks?



How about printing that PDF, as an example? Or jumping between chapters and reading a summary when you deal with technical documentation that's hundreds of pages long? Or finding a specific paragraph in such a long PDF? Or selectively copying and pasting pictures or text?


These are all possible on present-day devices, so I fail to see your point here.



And on one application, a tap will zoom, on another application, it will activate an undiscoverable on-screen control, on a third application will require a double-tap, whereas on a fourth application said double tap will open a context menu...


I have yet to run into this issue with my iOS device, maybe because in iOS it's an API feature that's consistent across all applications.


They are not a magical way to increase the control density of an application up to infinity without adding a bit of discoverable chrome to this end.


Nothing is perfect. I disagree, though, that it's a much better idea to use a classical WIMP design instead. Customers seem to agree, and since they are voting with their wallets and make developers like you come to work every day, I think that's what matters in the end.


This was a valid criticism when microcomputers were all new, but nowadays most of what we do involves a computer screen in some way.


You'd be surprised. In the comfortable confines of the world you live in, that might be the case, but my experience tells me something completely different in that regard, and I'm not even really "out there" like some others are.


Adding millions of nonstandard widgets to increase an application's vocabulary is possible in a WIMP design, good programmers only avoid it because they know how much of a usability disaster that turns out to be.


That might be the case on a desktop or laptop, but it is not so much the case on a post-PC device, for the reasons already mentioned.


Such as?


One could use color, sound, vibrations, ...


A common argument that has never been proven to hold in the real world.


Oh, that's strange. I hear DJs and sound engineers do it all the time.


Introducing ZynAddSubFX


The application you mention uses a mixture of WIMP controls and skeuomorphic elements; it's hardly a synthesizer designed in the traditional WIMP paradigm. If it were, it wouldn't have the knobs or presets; it would have save files and sliders instead. Have you ever worked with real audio equipment? You'll notice that it also uses knobs and comes with presets.


The QSDF key row is used for white piano keys, whereas the AZERTY row is used for black piano keys.


My Commodore back in the day already did that. What an awkward way to play notes! Hardly usable at all. One of the worst ideas ever.



Why would one need a touchscreen for software synthesis that works like a real-world synthesizer?


For the same reason why we needed general-purpose computers in the first place: versatility of function. No need to carry around heavy boxes of devices which only do one thing. Instead we can make do with only a few devices which can each do a multitude of things.


No, it won't be any more intuitive than a well-done regular synthesizer GUI.


For a lot of musicians, it will.


That touchscreens are just as much of a mess as mice for blind people (even more so, because they cannot provide spoken hover feedback, as current touchscreens are not capable of hover detection), but actually cater to a much smaller range of users.


Autism patients seem to disagree with you; touch-based tablets are transforming their lives and have allowed them to communicate with the world around them, something traditional computers never did. Also, using VoiceOver to control a GUI is a very broken concept; it's just a band-aid, it doesn't fix what's broken in the first place in these use cases.

Siri is a command-based voice interface designed for very specific use cases that are hard-coded at the OS level, I thought we were talking about general-purpose touchscreen GUIs so far ?


Why would we limit the user interface of a device to the visual aspect? Just because traditional WIMP interfaces were only visual due to the technology constraints of the time doesn't mean we should keep this limitation in the devices of the future.
