Linked by Thom Holwerda on Mon 19th Dec 2011 20:11 UTC
Once upon a time, in a land, far, far away, there were two mobile operating systems. One of them was designed for mobile from the ground up; the other was trying really hard to copy its older, desktop brother. One was limited in functionality, inflexible and lacked multitasking, but was very efficient, fast, and easy to use. The other had everything and the kitchen sink, was very flexible and could multitask, but had a steep learning curve, was inconsistent, and not particularly pretty.
Thread beginning with comment 500764
Comment by frderi
by frderi on Wed 21st Dec 2011 07:36 UTC

Nice writeup, albeit quite thin at times and, well, just plain wrong at others.

Firstly, I wouldn't call the present-day smartphone UI paradigm WIMP, since these interfaces (and iOS to a large extent) are mostly devoid of any user-manipulable windows or menus. One is better off defining this post-WIMP paradigm as FICT: Fullscreen Icon Column Touch. One could argue that these are mere details, and that the one is just an alternate form of the other. One would be very wrong to make that statement.

Why would one be? Because the move from a WIMP to a post-WIMP environment allows for a whole other UI paradigm altogether in terms of user interaction, directly leading to the abolishment of traditional HI-derived interfaces and the rise of the skeuomorphic UI design language. A lot of people who have their heads and hearts in the past don't seem to like this, citing lack of UI consistency and plain dumbness of the device as its hurdles. What they fail to see is that it's the paradigm of the whole device itself that is shifting: Post-PC devices are no longer "UIs in a box" like their predecessors were, and by the definition of their interaction characteristics and computational capabilities, they no longer require this traditional paradigm in order to function properly. In fact, as the history of tablet computers testifies greatly, merely treating them as such has only made them fail in the marketplace, since they just end up doing a worse job than traditional personal computers. Thus, any Post-PC device aspiring to be truly successful must throw these conventions out of the window in favor of a more direct way of communicating with the user, one that better facilitates the user exerting control over the device.

Over the decades, the "box" in the "UI in a box" has been reduced to the point where it's been relegated to a quasi non-existent state: from room size, to fridge size, to shoebox size, to book size, to frame size, its evolution has been quite staggering. In terms of handling, the box itself has fallen into the league of traditional portable single-task devices like calculators, portable music players, and so on. With the addition of touch at the UI level and better graphical capabilities, skeuomorphic designs have gained a clear edge: in a traditional WIMP paradigm, they often proved infuriating and frustrating to work with, but with touch-based hardware to drive them, they are a much more natural fit. Skeuomorphic designs really shine on Post-PC devices, and they are certainly one of the reasons why certain Post-PC devices have become so popular: they peel away a layer of abstract convention between the user and the device, making the interaction more natural and direct. There is very little UI convention to learn on a Post-PC device simply because there is so little UI in the first place. What is left is a simple grid, of which each item represents a virtual device. The perks of the traditional WIMP device are, frankly, just a casualty along the way of taking user interaction to the next level. On a Post-PC device, WIMP is just a dead end; the carcasses of the ill-fated pre-iPad tablets are all ugly witnesses to this. On a larger scale, WIMP is silently on its way to becoming an episode in the history of computing, just like the command-line interfaces before it. Will WIMP disappear completely? If history is destined to repeat itself, that's highly unlikely, although WIMP will be relegated to an ever-shrinking group of users rather than remaining the mainstream.
With both hardware and operating software seemingly reduced to their barest essentials, and increasingly becoming one and the same thing, what will remain for the user in the future will, in the scope of things, be merely its function.

Attempts to build a post-WIMP paradigm have been long in the making. One of the earliest examples in Apple's products is found not on a portable device but on the Macintosh platform, as At Ease. It did not do away with the WIMP convention as such, but it certainly did away with some of its earliest and less user-friendly derived conventions, most notably the file system and desktop metaphors. Instead, it introduced a fixed grid of single-clickable buttons, each button launching an application. While seasoned PC users would raise more than one eyebrow at having such a crude and dumbed-down tool to work with on a desktop computer, it certainly lowered the bar for a lot of users in an emerging desktop computer world.

I think you'll be hard pressed to find people who will state that iOS is a direct descendant of the Newton. One is much better off not drawing direct lines between iterations of mobile communication concepts, as their history owes much more to biological evolution than to linear algebra. The Newton is one dead branch on the tree of mobile device evolution; PalmOS is another. In that tree, however, Android sits awfully close to, and on top of, iOS, and anyone who bothers to check the facts surrounding these two knows that iOS inspired Android to such an extent that its development took quite a U-turn in terms of user interface. Just as there's no denying that Palm took quite a few cues from the devices that came before it and improved vastly on existing conventions, the iPhone did the same with its predecessors and upped the bar on previous generations of smartphones significantly. Downplaying the importance of this is like saying dinosaurs weren't a significant step forward in the evolution of life on earth simply because they look a lot like reptiles: the changes heralded in dinosaurs allowed them to become the dominant species, ending up dramatically more successful than their cold-blooded ancestors and altering the face of life on earth. Just like Android and its spiritual father iOS have augmented modern smartphone handset use cases significantly and consequently changed the face of the mobile computing landscape.

You might also want to look up the definition of crapware. Crapware and bundled software are not the same thing. While crapware is a form of bundled software, crapware is specifically third-party software which ships on a device, for which the device manufacturer was paid, and which is of low quality or little value to its user. On the Windows side, MSN Messenger, Minesweeper, and the Terminal Client are not crapware, and neither are Photo Booth, iBooks, or YouTube.

On customizability: after years of tweaking and tinkering with UIs, window managers, and icons, hacking ICNS and other resources (anyone remember Kaleidoscope?), I must say I've come to a zen-like conclusion similar to the one the anarcho-syndicalist Hakim Bey reached about technology as a whole: they offer great toys, but are terrible distractions. The purpose of the UI is to facilitate user interaction, not initiate it. As time progresses, all UI paradigms and conventions will eventually fade anyway, and resizing windows, flicking through screens, or tapping icons will look as old hat as Olivetti typewriters or mechanical calculators.


RE: Comment by frderi
by Neolander on Wed 21st Dec 2011 11:09 in reply to "Comment by frderi"

I happen to have at hand a nice book about usability in software UIs which I find very well written. In its argumentation, it exposes 12 pillars of software usability:


1/Architecture (Content is logically hierarchized in a way that makes it easy to find)

2/Visual organization (Every piece of UI is designed in a way that makes it easy to understand, notably by avoiding information overflow without hiding stuff in obscure corners. Information hierarchy plays a big role here)

3/Coherence (The UI behaves in a consistent way)

4/Conventions (The UI is consistent with other UIs that the users are familiar with)

5/Information (The UI informs the user about what's going on, at the right moment, and gives feedback to user action)

6/Comprehension (Words and symbols have a clear meaning, in particular icons are not used to replace words except when their meaning is perfectly unambiguous to all users)

7/Assistance (The UI helps the user with the task at hand and guides him/her, notably by using affordant elements in the right places)

8/Error management (The UI allows the user to make mistakes, actively tries to prevent them, and helps correcting them)

9/Speed (Tasks are performed as fast as possible, especially when they are common, with minimal redundancy)

10/Freedom (The user must, under any circumstance, stay in control)

11/Accessibility (The UI can be used by all target users, including those with bad sight, Parkinson's disease, or whatever)

12/Satisfaction (In the end, users are happy and feel that it was a pleasant experience)


For your information, skeuomorphic UIs on cellphone-sized touchscreens fail at:


-Visual organization (Why use clearly labeled and visible controls when you can use obscure gestures instead?)

-Coherence (Need to explain?)

-Conventions (Because in the end, your touchscreen remains a flat surface that does not behave like any other real-world object, except maybe sheets of paper. As a developer, attempting to mimic real-world objects on a touchscreen simply cuts you off from the well-established PC usage conventions and forces users to learn new UI conventions *once again*, except this time it's one new UI convention per application)

-Information (Modern cellphones are already bad at this due to the limitations of touchscreen hardware, combined with a tendency to manufacture them in a very small form factor. Attempting to mimic large objects on such a small screen only further reduces the achievable information density)

-Comprehension (Mostly a limitation of touchscreens rather than of skeuomorphic design, but since touchscreens offer no form of "hover" feedback and mobile phone screens are way too small, developers often resort to obscure icons in order to shoehorn their UIs into small form factors)

-Error management (When you try to mimic real-world objects, you have to ditch most of the WIMP error feedback mechanisms, without being able to use the real-world objects' own mechanisms, because those are strongly tied to their three-dimensional shape)

-Speed (Software UIs can offer physically impossible workflows that are much faster than anything real-world objects can do. If you want to mimic the physical world, you have to lose this asset, without losing the intrinsically slow interaction of human beings with touchscreens)

-Accessibility (Give a touchscreen to your old grandpa who has Parkinson's, and see how well he fares with these small interfaces without any haptic feedback. Not a problem with computer mice, which are relative pointers whose sensitivity can be reduced at will)


I believe that most of this still holds for tablets, although some problems related to the small screen sizes of cellphones are lifted.



RE[2]: Comment by frderi
by frderi on Fri 23rd Dec 2011 09:50 in reply to "RE: Comment by frderi"

I happen to have at hand a nice book about usability in software UIs which I find very well-written.


I happen to have read a couple of them as well. Most of them were written for the traditional WIMP paradigm. While WIMP has served us well, those books don't really take into account the unique features of these new devices.

For your information, skeuomorphic UIs on cellphone-sized touchscreens fail at:


Your perception might differ, but the current surge of smartphones in the marketplace doesn't really make them a product failure now, does it?


1) Why use clearly labeled and visible controls when you can use obscure gestures instead?


For smartphones, mostly screen real estate and handling. There's simply no room for a conventional menu paradigm on a smartphone. But a skeuomorphic design does not need to imply that things are not labeled. You could design a skeuomorphic virtual amplifier where the knobs are labeled (Treble, Reverb, Volume, ...), for example. Manipulating a knob with a WIMP design is awkward; with touch it becomes a breeze.
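To make the knob point concrete, here is a minimal sketch in Python of how a touch position could drive such a virtual knob. The coordinates and the 270-degree sweep range are assumptions for illustration, not any platform's actual API: the finger's angle around the knob's center becomes the setting directly.

```python
import math

def knob_value(touch_x, touch_y, center_x, center_y,
               min_angle=-135.0, max_angle=135.0):
    """Map a touch position to a 0.0..1.0 knob value.

    The knob sweeps from min_angle to max_angle in degrees
    (0 = straight up, clockwise positive), like a typical
    volume knob with a dead zone at the bottom.
    """
    dx = touch_x - center_x
    dy = touch_y - center_y
    # atan2(dx, -dy) makes 0 degrees point up (screen y grows downward)
    angle = math.degrees(math.atan2(dx, -dy))
    # Clamp to the knob's sweep, then normalize to 0..1
    angle = max(min_angle, min(max_angle, angle))
    return (angle - min_angle) / (max_angle - min_angle)
```

The whole interaction is one continuous drag around the center, where a WIMP version would need click-and-drag on a tiny slider handle or repeated clicks on spinner arrows.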

2) Coherence (Need to explain?)


Coherence is meant to facilitate predictability. The need for predictability by convention implies the paradigm itself is too complex to be self-explanatory. The first commercial WIMP devices were conceived to be self-explanatory in the first place. The menu bar was invented so that people would not have to remember commands; it was an essential part of the design that made the Mac as self-teaching as possible. The Xerox Alto and Star, devoid of application menu bars, still required users to remember all the commands by heart, just like the more primitive programs on CP/M and DOS. The goal of the first commercial WIMP computers was that you would not need a manual to operate the computer (I said goal; whether they succeeded is another matter). The point of menus is that you can look up commands quickly, at will, and execute them directly if need be. When processor power increased, so did the feature set of applications, and new applications overshot the design limitations of the initial WIMP devices by a great deal, leading to giant monolithic applications in which most users don't even know, let alone use, 95% of the features.


-Conventions (Because in the end, your touchscreen remains a flat surface that does not behave like any other real-world object, except maybe sheets of paper. As a developer, attempting to mimic real-world objects on a touchscreen simply cuts you off from the well-established PC usage conventions and forces users to learn new UI conventions *once again*, except this time it's one new UI convention per application)


I think you're failing to see the ingenuity of post-WIMP interfaces here. I'll give a simple example: the game of Puzzle Bobble. On a traditional WIMP-devised system like a desktop computer, it's played with the keyboard. Because of this, the input method is highly abstracted from the gameplay itself, and the gap between what the user sees and what needs to be done to control the slingshot is quite big. So there's an initial barrier to overcome before these movements are stored in motor memory and the control becomes natural. Compared to keyboards, the mouse pointer already lowered this barrier a great deal, albeit not completely. One could design Puzzle Bobble to be played with a mouse pointer, which would lower the bar, but it would still have quite a few limitations in terms of precision, and cause muscular strain when playing for extended periods. On a more general note, people who have never used a mouse before initially struggle with it as well. On a post-WIMP smartphone, the barrier is much lower than with the keyboard or even the mouse. In Puzzle Bobble, the player can manipulate the slingshot directly; he can play the entire game with one finger instead of using a multitude of buttons. Because he is able to manipulate the object so directly, things like controlling the velocity of the ball become possible. Controlling velocity or force has always been awkward with buttons.
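The velocity point can be sketched in a few lines of Python. The tuning constants here are made up for illustration: a single finger drag yields both the aim direction and the shot speed in one gesture, something a handful of discrete buttons cannot express.

```python
import math

def launch_from_drag(start, end, speed_per_pixel=4.0, max_speed=800.0):
    """Turn one finger drag into a launch velocity vector.

    start, end: (x, y) points of the drag.
    Direction comes from the drag direction, speed from its length.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return (0.0, 0.0)  # no drag, no shot
    # Longer drag means a faster shot, capped at max_speed
    speed = min(dist * speed_per_pixel, max_speed)
    return (dx / dist * speed, dy / dist * speed)
```

With a keyboard, angle and power would each need their own keys and an extra charge-up step; here both fall out of the same continuous motion.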

Let's take another example: a PDF reader. You could design its UI with traditional UI elements (menus, resizable windows, scrollbars, ...), or you could design it fullscreen, so that you flick a page with your finger. Which of the two is more intuitive and better adapted to a smartphone's screen real estate?
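The flick itself reduces to very little logic. A sketch in Python, with made-up thresholds, of deciding whether a horizontal drag counts as a page turn:

```python
def flick_direction(dx, dt, min_distance=40.0, min_velocity=300.0):
    """Classify a horizontal drag as a page flick.

    dx: horizontal travel in pixels (negative = leftward),
    dt: gesture duration in seconds.
    Returns +1 (next page), -1 (previous page), or 0 (not a flick).
    """
    if dt <= 0.0:
        return 0
    if abs(dx) < min_distance or abs(dx) / dt < min_velocity:
        return 0  # too short or too slow: treat as an ordinary drag
    return 1 if dx < 0 else -1  # leftward flick advances, rightward goes back
```

Everything a scrollbar plus next/previous buttons would occupy on screen collapses into a gesture over the page itself.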


-Information (Modern cellphones are already bad at this due to the limitations of touchscreen hardware, combined with a tendency to manufacture them in a very small form factor. Attempting to mimic large objects on such a small screen only further reduces the achievable information density)


That's why you have things like tap-to-focus and pinch-to-zoom, to compensate for the reduced information density.
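Pinch-to-zoom is itself a good example of how little convention these gestures need. A sketch in Python of the underlying arithmetic: the zoom factor is just the ratio of the current distance between the two fingers to their distance when the gesture began.

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Scale factor implied by a two-finger pinch gesture.

    Each argument is an (x, y) touch point; *_start are the positions
    when both fingers first landed, *_now the current positions.
    """
    d_start = math.hypot(p2_start[0] - p1_start[0], p2_start[1] - p1_start[1])
    d_now = math.hypot(p2_now[0] - p1_now[0], p2_now[1] - p1_now[1])
    if d_start == 0.0:
        return 1.0  # degenerate start: both fingers on the same point
    return d_now / d_start  # > 1 zooms in, < 1 zooms out
```

The same operation on a WIMP desktop needs a zoom menu, a percentage field, or a modifier-plus-scroll convention the user must be taught.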


-Comprehension (Mostly a limitation of touchscreens rather than of skeuomorphic design, but since touchscreens offer no form of "hover" feedback and mobile phone screens are way too small, developers often resort to obscure icons in order to shoehorn their UIs into small form factors)


I'd argue that comprehension is the biggest drawback of a traditional WIMP design, because:

- Its interfaces as we know them are highly abstract; there is little correlation between what we see on screen and what humans know outside the world of the computer screen;

- The set of objects in traditional WIMP interfaces is quite limited. This was less of an issue when computers weren't all that powerful and thus couldn't do that much, but the system has since grown way beyond its initial boundaries, making featureful applications overly complex.


-Error management (When you try to mimic real-world objects, you have to ditch most of the WIMP error feedback mechanisms, without being able to use the real-world objects' own mechanisms, because those are strongly tied to their three-dimensional shape)


You could present more direct error feedback instead of having to rely on primitive WIMP mechanisms like dialog boxes.


-Speed (Software UIs can offer physically impossible workflows that are much faster than anything real-world objects can do. If you want to mimic the physical world, you have to lose this asset, without losing the intrinsically slow interaction of human beings with touchscreens)


I don't agree with you here. Traditional WIMP UIs can be inherently slower as well, depending on the use case. Consider an application that allows you to control the speed and the pitch of audio in real time. Implement it on a WIMP-driven desktop or laptop first using the normal HI conventions, then implement it in a skeuomorphic way on a touchscreen. Which will be faster to use? On a WIMP device, you only have one pointer, so you're never able to manipulate both pitch and speed at the same time; you have to jump from one control to the other with your pointer. On a Post-PC device with multitouch, this problem does not exist.
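The two-parameters-at-once argument can be sketched in a few lines of Python. The slider layout and the "pitch"/"speed" names are hypothetical: one pass over the active touch points updates every control a finger is resting on, whereas a single mouse pointer can only ever sit in one region at a time.

```python
def update_controls(touches, sliders):
    """Map simultaneous touch points onto vertical sliders.

    touches: list of (x, y) points currently on the screen.
    sliders: dict of name -> (x0, x1, y0, y1) rectangle.
    Returns name -> value in 0..1 (bottom of the slider = 0).
    """
    values = {}
    for name, (x0, x1, y0, y1) in sliders.items():
        for tx, ty in touches:
            if x0 <= tx <= x1 and y0 <= ty <= y1:
                # Screen y grows downward, so invert: top of slider = 1.0
                values[name] = (y1 - ty) / (y1 - y0)
    return values
```

With two fingers down, both sliders move in the same frame; a mouse-driven version of the same loop would only ever receive one pointer position per frame.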

Another example: let's make a software synthesizer. Done in a WIMP fashion, it will most probably consist of an array of sliders, buttons, and labeled input fields. A skeuomorphic one will be composed of virtual knobs and a virtual keyboard. While the first one might be more precise, the latter will be a lot more intuitive, be a lot more inviting to tinkering and experimenting, and trigger creativity a lot more. And it will be a lot more fun to use!


-Accessibility (Give a touchscreen to your old grandpa who has Parkinson's, and see how well he fares with these small interfaces without any haptic feedback. Not a problem with computer mice, which are relative pointers whose sensitivity can be reduced at will)


Traditional WIMP interfaces fail blind users entirely. Your point being? And I bet my old grandpa (if he were still alive) would have a much easier time searching for whatever he'd forgotten today with Siri, rather than typing things into a Google-like interface on your WIMP device.


I believe that most of this still holds for tablets, although some problems related to the small screen sizes of cellphones are lifted.
