Over at the Dot, they interview Peter Grasch, lead coder of simon. simon provides a unique way of interacting with your computer using voice recognition (not dictation, yet) that integrates well with KDE.
Voice interaction is great for people with physical impairments, allowing them to work with their PC. It's also pretty good for some computer novices, as voice control is just a lot more natural than mouse/keyboard (probably not to you or me, we're used to it by now; I'm talking real novices, computer phobics).
However, for everyone else, voice isn't such a great feature: in an office it's just damned annoying having someone sit there talking to their PC all the time:
“Mouse down, new tab, next window”, etc.
Voice interaction is also painfully slow: having to explain to the computer which button to press, which tab to select, which window to move... it's much, much quicker to just do it yourself with the mouse/keyboard, if you can.
Keep developing voice commands; they are needed, just not for the masses (at least not yet).
In the current context of user interfaces, it's less useful, I agree. But consider an application built around voice APIs such as simon's: instead of “next tab, next tab, copy, next tab, paste”, you write the application to accept a more natural command like “copy first name and last name to memo field”.
The creative developer has limitless possibilities once a decent speech recognition engine exists.
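To make that concrete, here's a rough sketch in Python of the kind of command layer I mean. Everything in it is hypothetical (the ContactForm class, handle_phrase, the phrase pattern); it isn't simon's actual API, and it only assumes the speech engine hands the application the recognized phrase as plain text.

```python
# Hypothetical sketch only -- not simon's real API. The idea: map a
# natural-language phrase straight onto an application action instead of
# simulating low-level key presses and tab switches.
import re

class ContactForm:
    """Stand-in for an application with a few named fields."""
    def __init__(self):
        self.fields = {"first name": "Ada", "last name": "Lovelace", "memo": ""}

    def copy_to(self, sources, target):
        # Concatenate the source fields into the target field.
        self.fields[target] = " ".join(self.fields[s] for s in sources)

def handle_phrase(phrase, form):
    """Map one recognized phrase onto an application action."""
    m = re.match(r"copy (.+) to (\w+) field", phrase)
    if m:
        sources = [s.strip() for s in m.group(1).split(" and ")]
        form.copy_to(sources, m.group(2))
        return True
    return False  # phrase not understood

form = ContactForm()
handle_phrase("copy first name and last name to memo field", form)
print(form.fields["memo"])  # -> Ada Lovelace
```

The point is that the vocabulary lives in the application itself, so the user says what they want done rather than how to drive the widgets.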