“Below we present some of the outstanding recent developments in the field of user experience design. Most of these techniques may seem very futuristic, but they are already reality. And in fact, they are extremely impressive. Keep in mind: they could become ubiquitous over the next few years.” More here.
I thought Thom was running a series on this topic. Wouldn’t this article fit in better there?
That question was pretty much rhetorical. 😉
Not really. It’s a news report on the article.
All these new interfaces look pretty cool and all, but as somebody with nerve damage from repetitive computer use, I don’t know how practical any of them are in the long term.
A great interface would just know what I was thinking so I wouldn’t have to move my arms anymore.
Arms are what humans were made for. We climbed trees and hit each other with bones. UIs aren’t currently designed with these sorts of actions in mind. It’s repetitive, subtle actions like typing, or clicking a mouse button, that do the damage (AFAIK; I’m no doctor or anything…).
I want to be able to browse the web just by dragging my fist across the desk, and occasionally thumping or kicking it. When I need to type, I should be able to talk instead. I should be able to stand up and wander round, still using the computer by talking and waving my arms about.
There’s still a vast amount of work to be done within the UI field. You’re right – a computer should know what you’re thinking. The computer should study my behavior and start making educated guesses about what I want to do next. That’s the sort of thing that doesn’t need fancy new input or output devices. It’s the kind of work that should be happening right now, somewhere in the alternative OS world.
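Just to make the idea concrete, here’s a toy sketch (Python, with invented names) of what “studying my behavior and guessing what I want next” could look like at its simplest: nothing fancier than bigram counts over the user’s action history.

```python
# Toy sketch of a behavior-learning predictor. All action names
# and the class itself are hypothetical, purely for illustration.
from collections import Counter, defaultdict

class ActionPredictor:
    def __init__(self):
        # For each action, count which actions tend to follow it.
        self.follows = defaultdict(Counter)
        self.last = None

    def observe(self, action):
        """Record an action the user just performed."""
        if self.last is not None:
            self.follows[self.last][action] += 1
        self.last = action

    def guess_next(self):
        """Return the most likely next action, or None if unknown."""
        counts = self.follows.get(self.last)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

predictor = ActionPredictor()
for a in ["open_mail", "open_browser", "open_mail", "open_browser"]:
    predictor.observe(a)
print(predictor.guess_next())  # history suggests "open_mail" comes next
```

A real system would need smarter modeling than bigram counts, but the point stands: it’s plain software, not exotic hardware.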
The thing about these new interfaces is, they won’t gradually replace mice and keyboards. Instead, they’ll be research toys for ages until suddenly, they’re ready for use and obviously much better than mice/keyboards, at which point they’ll take over as fast as TFT monitors took over from CRTs.
Just don’t hold your breath waiting…
So you want a secretary that can use a computer?
“Get me this webpage!”
“Here it is, would you like a cup of coffee?”
People have been talking about voice input/output since the 1980s, but it just doesn’t catch on. Why? Because so far, no one has come up with an *easier* interface than Ye Olde Keyboarde and Mouse.
I could browse my desktop, navigate the web, open applications, all via voice command in 1996 (OS/2 Warp 4.0), but I could do all of those same things much, much faster with a keyboard and/or mouse. I also didn’t have to worry about my boss shouting “Format c:!” (didn’t work, but was funny).
Voice dictation is nice if you have a corner office, but if you’re in a cube farm, or writing a document while someone in the room is reading, voice dictation sucks. I also tend to type faster than I talk, since I don’t get held up by punctuation when typing (Paging Victor Borge!).
As for touch-screen and surface interfaces, they look cool, and for some applications (iPhone/iPod), they’re very useful. For a general-purpose computer interface, you wind up in a situation where you now need the keyboard and gestures instead of a keyboard and mouse; you’re still breaking the rhythm between data input and UI control.
One option might be something like Surface to replace a keyboard+mouse combo, but now you’ve gone from good tactile feedback (mechanical keyboards) to lousy feedback (membrane keyboards) to *no* tactile feedback.
If, however, you’re still using a physical keyboard, then the touch-screen is probably much farther away from the keyboard than the mouse currently is (see complaint about leaving “data entry” mode to go into “user interface” mode, and double it).
This presentation is quite impressive:
http://www.ted.com/index.php/talks/view/id/129
Wow… Who needs cheap oil when hundreds of millions of people with digital cameras take photos that cover most of the world’s interesting places, upload them to the web for convenience, and then Photosynth comes along and stitches them all together into a navigable 3D model of the world. Suddenly Google Earth seems kind of lame…
That’s the way forward. It’s the only way up-to-date, unsanitised images are going to be obtainable. You can imagine some kind of holodeck in the future, stitched together from 2D pictures to re-create old monuments, historical scenes captured on camera, and images from your past. Absolutely awesome.
Now that Microsoft have acquired it, it’s probably about the only killer thing they might have with Windows Live.
This, to me, is the most practical of the bunch. I can see not only things like booths in airports having this, but also its use as a tabletop jukebox for pubs and clubs, etc.
Very interesting indeed. It’s all prototypes except for the so-called “3D Operating System”, which is a video of Sun’s Looking Glass. I downloaded a Slackware-based LiveCD including Looking Glass last summer, and it actually worked (to some extent, given the lack of supported apps). Anyway, Compiz Fusion has already beaten Looking Glass by far, IMHO.
Maybe I’m just getting too old for this stuff 🙂
See, my first computer booted into a BASIC interpreter, so “the cube” is already sci-fi to me (sort of)…
The Reactable, vertical multi-touch, and Microsoft Surface are more or less the same tech.
Hell, the Reactable and Surface could be called clones; I’m sure you could turn a Surface into a Reactable with the right software installed.
BTW, that Reactable is damn fun to watch in action.
And I could see the future computer being two LCD screens wired together, so that one sits at a nice angle to act as a keyboard or other input when needed, while the other is normally vertical but can also be angled for input if that’s needed.
That makes those insane multi-screen layouts Hollywood is so in love with seem much less insane.
Hell, aren’t there patches out for Xorg to support multi-touch?
Hmm, maybe I should play around with some setup…
No, there’s no patch for X to support multi-touch, but there’s something more interesting: MPX, the multi-pointer X server. You can see it at: http://wearables.unisa.edu.au/mpx/
OK, I was under the impression that it was scheduled to be folded into Xorg at some point.
Nintendo DS FTW!
(Except that the top screen of the DS isn’t a touchscreen.)
Ah yes, the DS.
Now take that, make both screens touch-sensitive, expand them to fill the size of the device, and let the user hold it portrait-style to read ebooks and the like.
That’s what I would call a sweet device.
The only thing missing then is being able to turn one screen around so you can use the device without opening it.
Hear that, Nokia? That’s the design for your next N8xx!
Maybe I’m getting old or something, but except in limited niche applications, general computer use seems to me far less efficient when you’re waving your arms around, talking to your computer, or using any of the other supposedly “futuristic” methods of interaction we tend to see in sci-fi movies and articles like this one. With the exception of a direct neural link, of course.
Who wants to wave their hands around like an idiot all day at work when you could be sitting in a nice chair? Plus, the standard kb/mouse interface is universal across applications, something most of these interfaces would have to be in order to be useful. Even the popularity of the Wii confounds me, as it seems a far less precise way to interact with a device. But, like I said, maybe I’m getting old.
A lot of these interface visions remind me of Sun’s Starfire vision, circa 1992…
http://www.asktog.com/starfire/
Notepad on the new Microsoft Touch OS only requires 16 TB of RAM. If you have a recent 64-core machine and at least a multi-pipeline holobyte graphics engine, it can actually keep up with the speed of your typing! Amazing, really.
Another science-fiction article. We’ve known such articles for decades: nice to read, massively lacking in reality. But alas, I like SF =)
Such research toys are one thing; shipping a product everybody can buy is another. The iPhone is the first product that implements a GUI controlled via touch in a useful way.
Some of those look terrible… BumpTop? Ugh. I’ve watched videos of it in the past. I don’t WANT my computer to look like my messy desk.
You’ll have to pry my keyboard and basic GUI away from my cold, dead fingers.
I like all of these interfaces (except for BumpTop, which seems more like eye candy than anything). The problem is they all seem like too much of a leap from the keyboard/mouse.
The main problem I have with current user interfaces is that they are virtual rather than physical. I think this is why a lot of people prefer the command line to the GUI: it’s more direct. I’ve spent some time thinking about how the keyboard could be improved, so here’s some food for thought:
Get rid of the mouse, replace the arrow keys on the keyboard with a laptop-style touchpad, and put another touchpad to the left of the keyboard, somewhere near the Caps Lock key.
Now, we can use the right pad to move the pointer (which would select which UI widget has keyboard focus) and the left pad to adjust values. For example, if the pointer is hovering over some numeric value (say, a cell in a spreadsheet), vertical movement on the left pad would increase or decrease the value, and horizontal movement could be used to adjust the precision with which the value is altered (100, 10, 1, 0.1, 0.01, etc.).
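To make that concrete, here’s a rough sketch (Python, with all names and the pad-event interface invented for illustration) of how left-pad movement could map onto value and precision adjustment:

```python
# Hypothetical handler for the left touchpad described above.
# Horizontal motion steps the precision; vertical motion nudges the value.

PRECISIONS = [100, 10, 1, 0.1, 0.01]

class ValueWidget:
    def __init__(self, value=0.0):
        self.value = value
        self.precision_index = 2  # start with a step size of 1

    def on_left_pad(self, dx, dy):
        """dx/dy: horizontal/vertical pad movement, in whole 'ticks'."""
        # Horizontal movement walks through the step sizes (100, 10, 1, ...).
        self.precision_index = min(len(PRECISIONS) - 1,
                                   max(0, self.precision_index + dx))
        # Vertical movement adjusts the value by the current step size.
        self.value += dy * PRECISIONS[self.precision_index]

cell = ValueWidget(5.0)
cell.on_left_pad(dx=0, dy=3)   # three ticks up at step 1: 5.0 -> 8.0
cell.on_left_pad(dx=-2, dy=1)  # coarser step (100): 8.0 -> 108.0
print(cell.value)
```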
Having two pads would also be beneficial for navigating 3D environments: the right pad could be used to look around, while the left pad could be used for movement (basically the same as the mouse + WASD keys used by a lot of FPS games, except you get analog-style control over the movement that you don’t get with the keys).
Taking the idea a little further, we could throw some knobs on the keyboard, which would be especially useful for audio applications, spline editing, and other things that are often difficult to do with a mouse and/or keyboard. Knobs could also be useful for menu navigation. Say we have a knob that can also act as a button: you could press the knob to bring up a menu (application menu, context menu, etc.), turn the knob to highlight the item you want, and finally press the knob again to select it.
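That press-turn-press interaction is simple enough to sketch as a tiny state machine (again, purely illustrative names):

```python
# Hypothetical press-turn-press knob menu. One knob event stream
# drives the whole interaction: press opens, turn highlights, press selects.

class KnobMenu:
    def __init__(self, items):
        self.items = items
        self.open = False
        self.index = 0

    def press(self):
        """First press opens the menu; second press selects the item."""
        if not self.open:
            self.open = True
            self.index = 0
            return None
        self.open = False
        return self.items[self.index]

    def turn(self, steps):
        """Turning the knob moves the highlight, wrapping at the ends."""
        if self.open:
            self.index = (self.index + steps) % len(self.items)

menu = KnobMenu(["Open", "Save", "Quit"])
menu.press()         # menu pops up, "Open" highlighted
menu.turn(2)         # highlight moves to "Quit"
print(menu.press())  # prints "Quit"
```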
None of this is too much of a leap from how we currently interact with our computers, and it would not require too many changes to hardware or software to implement, either.