Linked by Thom Holwerda on Mon 19th Dec 2011 20:11 UTC
Google Once upon a time, in a land, far, far away, there were two mobile operating systems. One of them was designed for mobile from the ground up; the other was trying really hard to copy its older, desktop brother. One was limited in functionality, inflexible and lacked multitasking, but was very efficient, fast, and easy to use. The other had everything and the kitchen sink, was very flexible and could multitask, but had a steep learning curve, was inconsistent, and not particularly pretty.
Thread beginning with comment 501166
RE[5]: Comment by frderi
by Neolander on Sat 24th Dec 2011 13:48 UTC in reply to "RE[4]: Comment by frderi"
Neolander
Member since:
2010-03-08

The application you mention uses a mixture of WIMP controls and skeuomorphic elements; it's hardly a synthesizer designed in the traditional WIMP paradigm. If it were, it wouldn't have the knobs or presets.

I think there is a misunderstanding between us as to what WIMP means. For me, WIMP means Windows, Icons, Menus, and Pointer, and that's all it is about. The rest is left to each individual toolkit implementation, whose widget set typically includes a full equivalent of the de facto standard electronics UI (push and bistable buttons, faders, knobs), plus hierarchical and scrollable controls and displays that are not possible on regular electronics (such as list views and comboboxes).

WIMP does not imply the use of a mouse, and does not limit the set of usable widgets. It just happens that clever developers try not to overwhelm their users with a ton of new widgets to get familiar with at once, and using the standard widgets of your toolkit is the perfect way to do that.

WIMP does not imply that data must necessarily be accessed through explicit loading and saving either. As an example, Mac OS X, which, as you said, is the stereotypical WIMP GUI of today, recently introduced a file manipulation scheme in which you do not have to manually save files anymore, which I like quite a lot.

If it were, it would have save files and sliders instead. Have you ever worked with real audio equipment? You'll notice that they also use knobs and come with presets.

So for the sake of respecting conventions, it is a good thing to replicate those hardware UI mechanisms within a WIMP computer interface, right?

If you have a close look at virtual audio hardware, you'll notice that its knobs are generally not manipulated by a circular gesture, but by a linear one. This would be physically impossible in the real world, but it allows for more precise value setting (the knob behaves like a fader) and is much more comfortable in the context of a pointer-based interface.
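As an aside, that drag-to-value mapping fits in a few lines. This is purely my own illustration (the function name and the 200-pixel sensitivity are invented), not code from any actual synthesizer:

```python
# Toy sketch: mapping a *linear* drag gesture to a rotary knob's value,
# the way virtual audio gear typically does. Dragging up by
# `pixels_for_full_range` pixels sweeps the whole range, so the knob
# effectively behaves like a fader.

def drag_to_value(start_value, drag_pixels, pixels_for_full_range=200):
    """Translate a vertical drag (in pixels) into a knob value in [0.0, 1.0]."""
    delta = drag_pixels / pixels_for_full_range
    return min(1.0, max(0.0, start_value + delta))

print(drag_to_value(0.5, 100))   # half-range drag up -> 1.0
print(drag_to_value(0.5, -50))   # quarter-range drag down -> 0.25
```

Raising `pixels_for_full_range` makes the knob less sensitive, which is exactly the kind of precision tweak a physical knob cannot offer.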

"The QSDF key row is used for white piano keys, whereas the AZERTY row is used for black piano keys."

My Commodore back in the day already did that. What an awkward way to play notes! Hardly usable at all. One of the worst ideas ever.

Well, this is pretty much what I think of touchscreen-based keyboards too, but as I said before, real musicians use a MIDI keyboard, and nontechnical people who just want to have fun enjoy it anyway.

*fond memories of his childhood playing silly tunes with a DOS software called "pianoman" or something like that*
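For illustration only (the key-to-note mapping below is my own guess at such a layout, not the actual Commodore or "pianoman" one), the quoted scheme amounts to a simple lookup table from keyboard rows to MIDI notes:

```python
# Toy sketch: an AZERTY keyboard's home row as white piano keys and the
# row above it as black keys. MIDI note 60 is middle C.

WHITE_ROW = "qsdfghjk"   # QSDF row -> C D E F G A B C
BLACK_ROW = "zetyuo"     # AZERTY row -> the sharps in between

WHITE_NOTES = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale
BLACK_NOTES = [61, 63, 66, 68, 70, 73]          # C# D# F# G# A# C#

def key_to_midi(key):
    """Return the MIDI note for a key press, or None if the key is unmapped."""
    if key in WHITE_ROW:
        return WHITE_NOTES[WHITE_ROW.index(key)]
    if key in BLACK_ROW:
        return BLACK_NOTES[BLACK_ROW.index(key)]
    return None
```

The awkwardness is plain from the table itself: the black keys don't sit physically between the white keys they belong to, which is why playing anything non-trivial this way is so hard.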

"Why would one need a touchscreen for software synthesis that works like on a real-world synthesizer ?"

For the same reason as why we needed general purpose computers in the first place. Versatility of function. No need to carry around heavy boxes of devices which only do one thing. Instead we can cope with only a few devices which can each do a multitude of things.

That's not my question. Why do you need a touchscreen for audio synthesis? What is the problem with other input peripherals such as mice and styluses, as long as the UI is slightly tweaked to adapt itself to these devices just like touchscreen UIs are?

Autism patients seem to disagree with you: touch-based tablets are transforming their lives and have allowed them to communicate with the world around them, something traditional computers never did.

And I read a newspaper article the other day about someone who didn't have enough hand-eye coordination to manipulate a pen, and was able to succeed at primary school due to the use of a computer keyboard coupled with a command-line interface.

Autism is not about physical issues with manipulating user interfaces, to the best of my knowledge. Are you sure that it isn't the tablet software that made the difference? Much as I hate touchscreens, one thing I have to concede is that their low precision and limited capabilities make developers realize that UI design is just as important as functionality.

Also, using VoiceOver to control a GUI is a very broken concept; it's just a band-aid, it doesn't fix what's broken in the first place in these use cases.

What's broken in the first place is that most people can see and a few people cannot. We don't want to give up on graphical user interfaces, because they are very nice for people who can see, but we have to accept that they will always be quite bad for people who cannot, compared to command-line and voice-based interfaces.

To the best of my knowledge, this core problem would be very difficult to solve. Good input peripherals with hover feedback, such as mice or Wacom styluses, offer an interesting workaround. Touchscreens, on the other hand, offer nothing more than a flat surface.

"Siri is a command-based voice interface designed for very specific use cases that are hard-coded at the OS level. I thought we were talking about general-purpose touchscreen GUIs so far?"

Why would we limit the user interface of a device to the visual aspect? Just because traditional WIMP interfaces were only visual, owing to the technology constraints of the time, doesn't mean we should keep this limitation in the future devices that get built.

I have discussed this before on this website; I wonder if that was not with you. If we want to build user interfaces that cater to the needs of all human beings, not only those with good sight and precise pointing skills, then we need to design user interfaces at a very fundamental level of human-computer interaction.

We must work with concepts as abstract as text I/O, information hierarchy, emphasis, and so on. This is, for once, an area where web standards are miles ahead of any form of local GUI, as long as they are used properly.
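To give a rough idea of what I mean (a toy sketch of my own, nowhere near the real research problem), the same abstract description of a UI, built from roles, hierarchy, and emphasis, can be flattened into something a speech synthesizer could read:

```python
# Toy sketch: a UI described in abstract terms (hierarchy, roles,
# emphasis), then rendered for a screen reader as linear text.
from dataclasses import dataclass, field

@dataclass
class Element:
    role: str                      # "heading", "text", "action", ...
    text: str
    emphasis: bool = False
    children: list = field(default_factory=list)

def render_spoken(elem, depth=0):
    """Flatten the hierarchy into lines a speech synthesizer could read."""
    prefix = f"{elem.role}{' (important)' if elem.emphasis else ''}: "
    lines = ["  " * depth + prefix + elem.text]
    for child in elem.children:
        lines.extend(render_spoken(child, depth + 1))
    return lines

ui = Element("heading", "Synthesizer", children=[
    Element("action", "Play", emphasis=True),
    Element("text", "Volume at 50 percent"),
])
for line in render_spoken(ui):
    print(line)
```

A visual renderer would consume the very same tree and draw widgets instead, which is the whole point: the description comes first, the modality second, much like well-written HTML.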

I have tried to work on such a UI scheme, but it ended up being too complicated for me and I gave up. You need whole research teams full of specialists, not one-man projects, to tackle that kind of monster problem. Until this is done, everything will be hacks like Siri, where you have to explicitly support voice commands and design a voice-specific interface for your software to be "compatible" with blind people.

Reply Parent Score: 1

RE[6]: Comment by frderi
by frderi on Sat 24th Dec 2011 17:59 in reply to "RE[5]: Comment by frderi"
frderi Member since:
2011-06-17

I'm talking about the aforementioned principles of architecture, visual organization, coherence, etc.


We were talking about WIMP versus post-WIMP GUI.


if I take a typical keypad-based OS like Nokia s40, you get…


I advise you to look up what WIMP actually stands for. Smartphones with pull-down menus, resizable windows, and pointing devices such as a stylus fell out of fashion a long time ago, and this paradigm certainly wasn't used in the Nokia S40.


No, it isn't. Modern cars are feature-bloated, home heating controllers are feature-bloated, dedicated stopwatches used to be feature-bloated before cellphones started to eat up that functionality, and I could go on and on.


Except that the engineer who makes the car has the chance to change the control paradigm to make it simple and straightforward. When adhering to WIMP's principles, you are bound by a fixed paradigm, so convention becomes important.


I'm not sure I understand what you mean, sorry.


Personal computers were initially conceived for consumers, not for work.


Well, so far you have defined post-WIMP UIs as some sort of postmodern user interface that breaks free from all conventions and does whatever is suitable for the task at hand. That does not sound like a good start for creating a paradigm.


Well, it's still a hundred times better to try out new things and innovate rather than stagnating with a paradigm that needs fixing left, right, and center because it has been superseded by a world that's changing around it.


Sure, but aren't we talking about tablets and other post-PC devices which aim at being more than a portable video game console with a touchscreen or a cellphone with toys installed on it?


Tablets sit somewhere in the middle between computers and phones. They allow for more functionality than a phone at the expense of some portability, and offer less functionality than a computer with the benefit of more portability. The thing to understand is that the success of these new devices is not about replicating all the use cases of the older devices; it is about augmenting them with some of the tasks they do, but in a more fluid way that is somehow closer to the user. For some people, these basic use cases will be enough, so they won't need the full-blown computer experience anymore.


My point was that menu-based WIMP workflows are reintroduced for that kind of task, which shows that skeuomorphic designs are not a universal UI paradigm, as you have acknowledged earlier in this post.


They aren't. Check out AirPrint: it doesn't need a menu-driven application to be able to print. Copy-pasting on post-PC devices doesn't need the traditional menu paradigm either.


So you are telling me that if you use a pinching gesture or a double tap in any (and I really mean any) iOS application, it will have a consistent behaviour? I doubt it.


If they use the standard methods present in UIKit, they will. No need to reinvent the wheel when the functionality is already present. Which is kinda the point of using Apple's Cocoa APIs in the first place.


Sorry, I do not develop software for a living; it's more like a hobby. This way, I can avoid overhyped technology, underpaid hack jobs, and boring programming environments, and focus on what I like to do.


-sarcasm- Oh no! A hobbyist developer! The worst of the bunch! -/sarcasm-

So you think that WIMP interfaces are unable to use feedback based on color and sounds ? I think you should spend more time using them.


I didn't say they are unable to do so. However, when they do, they are not using the WIMP paradigm.


I think there is a misunderstanding between us as to what WIMP means.


I think you don't really understand what WIMP stands for and interpret its concept far too broadly.


Mac OS X, which, as you said, is the stereotypical WIMP GUI of today


I didn't say the Mac of today is a stereotypical WIMP GUI. When referring to WIMP on the Mac, I meant the original Macintosh as launched in 1984. Since the introduction of Mac OS X, Apple has been quietly moving away from traditional WIMP design in small steps, of which Lion, with its introduction of full-screen applications and Launchpad, is the furthest step yet.


So for the sake of respecting conventions, it is a good thing to replicate those hardware UI mechanisms within a WIMP computer interface, right?


It depends on the use case. My argument is that on laptops and desktops, they tend to make things more awkward. On post-PC devices, they tend to work better than the traditional WIMP paradigm.


Well, this is pretty much what I think of touchscreen-based keyboards too, but as I said before, real musicians use a MIDI keyboard, and nontechnical people who just want to have fun enjoy it anyway.


Again, it depends on the application. A lot of musicians seem to like GarageBand on the iPad, because it happens to allow quite a bit of prototyping and rough work on the fly while on the road, when you don't have access to a MIDI keyboard.


What is the problem with other input peripherals such as mice and styluses, as long as the UI is slightly tweaked to adapt itself to these devices just like touchscreen UIs are?


Styluses get lost easily. They are also notorious for scratching the surface of a tablet device, leaving ugly marks. You also don't need them for text input, since writing is way slower than typing. Using a mouse isn't practical for a mobile device either, and kind of defeats the advantage of the device's mobility. Using a mouse is a bit like using wired networking on a laptop.


Are you sure that it isn't the tablet software that made the difference?


Of course it was more than the device alone, and they used specially designed tablet software. But the tablet form factor and the touch interface were an essential part of the project's success. They couldn't have done it on a desktop or laptop.


We don't want to give up on graphical user interfaces, because they are very nice for people who can see, but we have to accept that they will always be quite bad for people who cannot, compared to command-line and voice-based interfaces.


That doesn't mean you can't come up with a non-WIMP paradigm for people who can't see.


Why would we limit the user interface of a device to the visual aspect? Just because traditional WIMP interfaces were only visual, owing to the technology constraints of the time, doesn't mean we should keep this limitation in the future devices that get built.


Because by doing so, the conversation would no longer be about the merits of WIMP vs post-WIMP.

Reply Parent Score: 1