Linked by Hadrien Grasland on Tue 11th Jan 2011 13:40 UTC
Graphics, User Interfaces Nowadays smartphones, tablets and desktop/laptop computers are all siblings. They use the same UI paradigms and follow the same idea of a programmable and flexible machine that's available to everyone. Only their hardware feature set and form factor differentiate them from each other. In this context, does it still make sense to consider them as separate devices as far as software development is concerned? Wouldn't it be a much better idea to consider them as multiple variations of the same concept, and release a unified software platform which spreads across all of them? This article aims at describing what has been done in this area already, and what's left to do.
Permalink for comment 456985
RE[2]: Two words:
by TheGZeus on Tue 11th Jan 2011 15:59 UTC in reply to "RE: Two words:"

Gripe one's solution: sub-menus. From 8 main choices you can get 7 sub-options each. You can also get a different menu from each of the 3 buttons on the mouse, and the modifier keys on the keyboard can more than triple that.
With a multitouch screen on any handset you can support up to 3 individual menus: 1-finger, 2-finger, and 3-finger, each further expandable by distinguishing a tap from a tap-and-hold.
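The arithmetic above can be sketched out in a few lines. The figures (8 main choices, 7 sub-options, 3 mouse buttons, 3 finger counts) come from the comment itself; the exact counting scheme, including which modifiers are assumed, is my own reading of "more than triple", not anything the commenter specified:

```python
# Rough menu-capacity arithmetic for the desktop vs. multitouch comparison.
# The counting scheme is one plausible reading of the numbers in the text.

# Desktop: 8 top-level menu entries, each opening a sub-menu of 7 options.
desktop_items = 8 * 7  # 56 reachable options

# Three mouse buttons can each pop up a distinct menu, and modifier keys
# (assumed here: Ctrl, Alt, Shift) can more than triple that.
desktop_menus = 3 * (1 + 3)  # 3 buttons x (plain + 3 modifiers) = 12 menus

# Handset: menus selected by finger count (1, 2 or 3 fingers),
# doubled by distinguishing tap from tap-and-hold.
touch_menus = 3 * 2  # 6 distinct menus

print(desktop_items, desktop_menus, touch_menus)  # 56 12 6
```

Even under this generous reading, the handset tops out at a handful of distinct menus, which is the gap the comment is pointing at.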

Many of the options in that example don't even need to be menu items. A key command to bring up a dialogue, or simply keeping a dialogue for some of them visible, makes much more sense to me.
Not everything can be explained with icons, but icons can be explained, and hopefully remembered.

I think too much effort is put into discoverability in UI, and not enough into actual usability.

I often find that user interfaces meant to be easy to navigate on first use become cumbersome as you familiarise yourself with them.

I've put a lot of thought into this (drawn out diagrams, thought about how they could be configured) and done research (books, papers, old articles on the subject), and I really think user interfaces have gone down the toilet because programmers patronise and look down on 'users'.
It's hurt accessibility, and widened the gap between programmers and so-called 'users'.
"No one wants to learn a programming language to set up a program!" No one wants to use a program to accomplish a task, either. No one wants to do anything to get that task done; they want the end result. Effort is rewarded, and making things 'simpler' could be redefined as 'minimising the capabilities of a system'.
People don't want to read manuals before they start using something, but they end up reading ten websites, re-reading these menu options, and digging through Help pages _after_ they start...
It all seems... sideways to me. Like opening the hood of your car and tightening things until you have to read the manual and/or call a mechanic anyway.
Can you tell I live in Emacs?

Edited 2011-01-11 16:04 UTC

Score: 5