Linked by Thom Holwerda on Mon 19th Dec 2011 20:11 UTC
Once upon a time, in a land, far, far away, there were two mobile operating systems. One of them was designed for mobile from the ground up; the other was trying really hard to copy its older, desktop brother. One was limited in functionality, inflexible and lacked multitasking, but was very efficient, fast, and easy to use. The other had everything and the kitchen sink, was very flexible and could multitask, but had a steep learning curve, was inconsistent, and not particularly pretty.
Permalink for comment 501166
RE[5]: Comment by frderi
by Neolander on Sat 24th Dec 2011 13:48 UTC in reply to "RE[4]: Comment by frderi"

"The application you mention uses a mixture of WIMP controls and skeuomorphic elements; it's hardly a synthesizer designed in the traditional WIMP paradigm. If it were, it wouldn't have the knobs or presets."

I think there is a misunderstanding between us about what WIMP means. For me, WIMP means Windows, Icons, Menus, and Pointer, and that's all it is about. The rest is left to each individual implementation, whose widget toolkit typically includes a full equivalent of the de facto standard electronics UI (push and bistable buttons, faders, knobs), plus hierarchical and scrollable controls and displays that are not possible on regular electronics (such as list views and comboboxes).

WIMP does not imply the use of a mouse, and it does not limit the set of usable widgets. It just happens that clever developers try not to overwhelm their users with a ton of new widgets to get familiar with at once, and using the standard widgets of your toolkit is the perfect way to do that.

WIMP does not imply that data must be accessed through explicit loading and saving either. As an example, Mac OS X, which as you said is the stereotypical WIMP GUI of today, recently introduced a file manipulation scheme in which you do not have to manually save files anymore, which I like quite a lot.

"If it were, it would have save files and sliders instead. Have you ever worked with real audio equipment? You'll notice that they also use knobs and come with presets."

So for the sake of respecting conventions, it is a good thing to replicate those hardware UI mechanisms within a WIMP computer interface, right?

If you have a close look at virtual audio hardware, you'll notice that its knobs are generally not manipulated by a circular gesture, but by a linear one. This would be physically impossible in the real world, but it allows for more precise value setting (your knob becomes like a fader) and is much more comfortable in the context of a pointer-based interface.
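The linear-drag behaviour described above can be sketched in a few lines. This is my own illustration of the technique, not code from any particular audio application; the widget class and its parameters are hypothetical.

```python
class Knob:
    """A rotary control whose value is set by a vertical drag,
    like a fader, rather than by tracing a circle around it."""

    def __init__(self, value=0.0, lo=0.0, hi=1.0, sensitivity=0.005):
        self.value = value
        self.lo, self.hi = lo, hi
        self.sensitivity = sensitivity  # value change per pixel dragged

    def drag(self, dy_pixels):
        """Dragging up (negative dy in screen coordinates) raises the
        value; the result is clamped to the knob's range."""
        self.value = min(self.hi,
                         max(self.lo, self.value - dy_pixels * self.sensitivity))
        return self.value

k = Knob(value=0.5)
k.drag(-100)  # a 100 px upward drag raises the value, clamped at the maximum
```

Because precision depends only on `sensitivity`, a fine-adjust mode (e.g. while a modifier key is held) is just a smaller sensitivity, which no physical knob can offer.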

"The QSDF key row is used for white piano keys, whereas the AZERTY row is used for black piano keys."

"My Commodore back in the day already did that. What an awkward way to play notes! Hardly usable at all. One of the worst ideas ever."

Well, this is pretty much what I think of touchscreen-based keyboards too, but as I said before, real musicians use a MIDI keyboard, and nontechnical people who just want to have fun enjoy it anyway.

*fond memories of his childhood playing silly tunes with a DOS program called "pianoman" or something like that*
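The row-based mapping quoted above is simple to express in code. Here is a toy sketch, assuming an AZERTY layout (home row QSDF… for white keys, top row AZERTY… for black keys) and the usual MIDI convention of middle C = note 60; the key assignments are my own illustration.

```python
# White keys on the QSDF home row, one octave from middle C (MIDI 60).
WHITE = {"q": 60, "s": 62, "d": 64, "f": 65, "g": 67, "h": 69, "j": 71, "k": 72}
# Black keys on the AZERTY row above; note the gaps where a real piano
# has no black key (between E and F, and between B and C).
BLACK = {"z": 61, "e": 63, "t": 66, "y": 68, "u": 70}

def key_to_midi(key):
    """Return the MIDI note number for a pressed key, or None if unmapped."""
    return WHITE.get(key) or BLACK.get(key)
```

The awkwardness frderi complains about shows up directly in the data: the black-key gaps mean some keys in the row do nothing, which is exactly what confuses people who expect every key to sound.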

"Why would one need a touchscreen for software synthesis that works like on a real-world synthesizer?"

"For the same reason why we needed general-purpose computers in the first place: versatility of function. No need to carry around heavy boxes of devices which only do one thing. Instead we can cope with only a few devices which can each do a multitude of things."

That's not my question. Why do you need a touchscreen for audio synthesis? What is the problem with other input peripherals, such as mice and styluses, as long as the UI is slightly tweaked to adapt to these devices, just like touchscreen UIs are?

"Autism patients seem to disagree with you: touch-based tablets are transforming their lives and have allowed them to communicate with the world around them, something traditional computers never did."

And I read a newspaper article the other day about someone who didn't have enough hand-eye coordination to manipulate a pen, and was able to succeed in primary school thanks to the use of a computer keyboard coupled with a command-line interface.

Autism is not about physical issues with manipulating user interfaces, to the best of my knowledge. Are you sure that it isn't the tablet software that made the difference? Much as I hate touchscreens, one thing I have to concede is that their low precision and limited capabilities make developers realize that UI design is just as important as functionality.

"Also, using VoiceOver to control a GUI is a very broken concept; it's just a band-aid, it doesn't fix what's broken in the first place in these use cases."

What's broken in the first place is that most people can see and a few people cannot. We don't want to give up on graphical user interfaces, because they are very nice for people who can see, but we have to accept that they will always be quite bad for people who cannot, compared with command-line and voice-based interfaces.

To the best of my knowledge, this core problem would be very difficult to solve. Good input peripherals with hover feedback, such as mice or Wacom styluses, offer an interesting workaround. Touchscreens, on the other hand, offer nothing more than a flat surface.

"Siri is a command-based voice interface designed for very specific use cases that are hard-coded at the OS level. I thought we were talking about general-purpose touchscreen GUIs so far?"

"Why would we limit the user interface of a device to the visual aspect? It's not because traditional WIMP interfaces were only visual, due to technology constraints at the time, that we should keep this limitation in the devices that get built in the future."

I have discussed this before on this website; I wonder if that was not with you. If we want to build user interfaces that cater to the needs of all human beings, not only those with good sight and precise pointing skills, then we need to design user interfaces at a very fundamental level of human-computer interaction.

We must work with concepts as abstract as text I/O, information hierarchy, emphasis, and so on. This is, for once, an area where web standards are miles ahead of any form of local GUI, as long as they are used properly.
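To make the idea concrete, here is a minimal sketch of what "designing at that fundamental level" could look like: the interface is described as an abstract tree of headings, emphasis, and text, and each output modality decides for itself how to convey the structure. This is purely my own illustration, not an existing framework.

```python
# Abstract interface description: (kind, text) pairs, no visual assumptions.
doc = [
    ("heading", "Synthesizer"),
    ("text", "Set the cutoff frequency."),
    ("emphasis", "Do not exceed 20 kHz."),
]

def render_visual(nodes):
    """A sighted rendering: structure is conveyed typographically."""
    out = []
    for kind, text in nodes:
        if kind == "heading":
            out.append(text.upper())       # stands in for a large/bold title
        elif kind == "emphasis":
            out.append("*" + text + "*")   # stands in for italics
        else:
            out.append(text)
    return "\n".join(out)

def render_spoken(nodes):
    """A screen-reader rendering: structure is announced in words."""
    out = []
    for kind, text in nodes:
        if kind == "heading":
            out.append("Section: " + text)
        elif kind == "emphasis":
            out.append("Important: " + text)
        else:
            out.append(text)
    return " ".join(out)
```

The point is that neither renderer is a retrofit of the other: both consume the same abstract description, which is essentially what well-written semantic HTML already gives a screen reader for free.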

I have tried to work on such a UI scheme, but it ended up being too complicated for me and I gave up. You need whole research teams full of specialists, not one-man projects, to tackle that kind of monster problem. Until this is done, everything will be hacks like Siri, where you have to explicitly support voice commands and design a voice-specific interface for your software to be "compatible" with blind people.
