Wave Your Hands Like Tom Cruise
Remember the ‘Minority Report’ scenes in which Tom Cruise and others used their hands to manipulate data on giant computer screens? One man is on a mission to make that gestural interface technology commonplace on every desktop.
About The Author
Ex-programmer, ex-editor in chief at OSNews.com, now a visual artist/filmmaker.
Follow me on Twitter @EugeniaLoli
2007-05-05 10:23 pm WorknMan
That’s indeed nice, but wouldn’t voice recognition be more useful?
Perhaps, but not when you’re in a cubicle sharing an office with 200 other people. That could get noisy real fast.
2007-05-06 5:39 am butters
Screw gesturing and voice recognition. How much harder would it be to have the computer tune into the user’s brain waves (or whatever we actually broadcast to the government without realizing it)? Did anyone else watch Minority Report and get drawn in more by the computer interface that the precogs used than the one that Tom Cruise used?
Personally, I question the degree to which people want to be like Tom Cruise anyway. We lost the Tom Cruise from Risky Business and Cocktail, and what we have now is a stranger, more socially deviant version. Like Mel Gibson, he crossed a line with his views on society and religion that compromised his ability to appeal to just about anybody. As trivial as it sounds, this company will have to contend with the subconscious negative reactions that will result from the association with Tom Cruise.
But back to the BrainFrame. Our digital computing technology is great for stream processing, but it doesn’t even come close to approaching a reasonable comparison with the human brain’s capacity for considering possibilities (i.e. branch prediction and speculative processing). If we’re going to advance this idea of synergistic computing–offloading stream-based computations to what we’ve been calling a GPU–then we need to consider how to deal with extremely branch-heavy code. Unless we are content with processors that approach the branching capacity of a bee (or should I say, the late bee, may the species rest in peace), we can’t take the idea of a human/computer hybrid processing model off the table.
No, I’m not confident that this is achievable in my lifetime…
Edited 2007-05-06 05:41
2007-05-06 6:46 pm Coxy
I don’t think they’re trying to do this so that people can be like Tom Cruise! 😉
Sci-fi has always influenced technology… just like Star Trek did with its communicators, or James Bond did with his pagers and in-car navigation systems.
Wave my hands like Tom Cruise? Oooooh, my dear, just too wildly exciting. Besides, no cover: if you’re not waving they know you’re not working. I’ll have to pass.
2007-05-07 12:07 am HappyGod
LOL. Good point moleskine, didn’t think of that!
The other problem of course is that with a mouse, you can lean on your right (or left) arm and move the cursor with very little actual hand/arm movement.
This is convenient when you are working for 9+ hours a day. Imagine having to wave your arms around for 9 hours; you’d be exhausted!
Perhaps a sofa could be converted into a giant track pad that “clicks” when the user jumps up and down on it.
2007-05-07 2:22 pm Kelly Rush
Look, tupp, tupp…tupp, the thing is…I know about psychiatry, tupp. And you don’t…
I hate Tom Cruise.
It looked stupid in the movie, and it’ll look stupid in real life. Can you imagine a whole roomful of business computer users all flailing around like mimes?
The technology will probably find a place in the gaming industry. But it’ll never catch on in the business world or among casual computer users – not as long as our species maintains a minimal level of dignity and self-consciousness anyway.
I’m not going to jump up and down on my couch either…
2007-05-05 9:36 pm samad
“The technology will probably find a place in the gaming industry.”
It already has, and it’s called Nintendo Wii.
2007-05-05 9:52 pm isaba
Yeah! And I can find some other examples:
Apple’s iPhone uses gestures for scrolling and zooming, and it does look excellent.
And on another scale, the Nintendo DS does the same.
I bet it will be common in the near future and we all will see it.
Edit: take a look at this:
Edited 2007-05-05 21:57
2007-05-05 10:29 pm alexandru_lz
I’ve seen the demo and while I agree that it works very well for the Nintendo Wii, there are a couple of reasons I really do like my mouse right now, and why this is highly unlikely to change any time soon:
The demo shows the system doing a truly wonderful job at manipulating objects in 3D. I do enjoy the concept of a 3D desktop, but my windows are still 2D. As a consequence, I am quite eager to see how the system applies to real technology on “every desktop”. For instance, how will it perform when using a… gesture to select text or move the cursor in a word processor? The natural solution in this case would be to touch the screen — and then how is this usefully different from a touchscreen?
Another problem I have with this system getting on “every desktop out there”, as the article says, is ergonomics. Sure, the demo looks very cool on a screen that is the size of a human, but how exactly will it work on a 17″ screen?
I am asking all these questions because, from what I’ve seen in the demo, this system cannot do anything more than moving windows around. While this is not just interesting, but also meaningful and useful in true 3D desktops, like the Looking Glass project, for example, I am having some trouble seeing how this could be helpful for desktop computing.
In addition to this, I am having a slight problem with seeing this system getting to the *precision* of a mouse. I do understand that things like minimizing or maximizing windows can be done through appropriate gestures, but a hand with no exact representation on screen is hardly as precise as a mouse pointer. Even touchscreens have some problems with this — and you are not holding your hand several inches from it.
Nevertheless, it would be worth pointing out that such a system could be quite useful for relieving RSI, as far as I can guess. I can also imagine it being useful itself, but only in addition to a concurrent input system, like voice recognition (which is itself impractical for tasks like window manipulation).
2007-05-06 6:54 pm Coxy
I think the problem you have trying to see how this technology may be used is that you’re trying to imagine it on your desktop of today. By the time this is being used, we’ll probably have a whole wall-sized screen in everyone’s house (well, rich people anyway); the whole concept of what a computer is and how it’s used will be completely different when this technology is mature.
When I received my brother’s 48K Spectrum from him when he purchased an Amiga, I couldn’t see the need for this new thing that came with it called a mouse… or that other new thing: a disk drive.
Computer interfaces of the future will require you to jump, wave, squint, babble, and just generally stand on your head just to move a window around. Ahh… interactive interfaces… And it’s about time somebody thought to save humanity from the convenience of computers.
Some of these ‘interactive’ interfaces would be nice for some tasks… but NOT for general computing…
2007-05-07 12:33 am Michael
What do you mean by “general computing”? For typing of course, the keyboard is king, although voice/handwriting recognition could replace those in many cases.
In the film, the interface was NOT a general computer interface. It was used for examining many different types of data, mostly images and video. With a very natural-looking flick of the hand, he could dismiss a file (effectively minimising it). Just as easily he could scroll through text/print media or navigate timed media (sound and video). It was a great way to organise and view files.
Compare these kinds of activities to the way computers are used today. This is exactly what people want to do with their home computers – browse the web, their photo albums, movie and music collections. Browse and organise. This probably isn’t the best way of doing that but it is a valid subject for research and experimentation.
Should we be exactly mimicking Tom Cruise? Certainly not. It worked well in the film, but so do all Hollywood GUIs. But we should be considering the value of reading naturalistic, physical gestures in consumer interfaces.
2007-05-08 3:00 am helf
No, what he was doing was pretty cool. But a lot of people want to use this kind of stuff for everything!
Also, I can type MUCH faster than I can write. I hate handwriting recognition on anything but a PDA. Even on a thumbpad I can type faster than I can write.
“‘…the Filmmakers–the science fiction writers–imagine stuff before the engineers do, and there is a feedback loop between fiction and science that seems to be influencing each other,’ said Underkoffler.”
Right. Like how there are cardboard box computers walking around everywhere, as imagined by many filmmakers in the 60’s?
2007-05-06 12:58 am anevilyak
Just curious, can you name an example? I’m half tempted to see this
2007-05-06 1:45 am Almafeta
Danger, Will Robinson.
2007-05-06 1:52 am samad
I can’t think of anything off the top of my head, but I remember watching some sci-fi program on PBS* where there was some kind of war between humans and “robots.” These robots were tall and looked like cardboard boxes. I think the program was the Twilight Zone, but I’m not sure. I also remember that in Steve Wozniak’s biography, he noted that when he started developing the personal computer, an impediment to the widespread adoption of computers was the popular misconception that they were blinking, beeping boxes with their own nefarious consciousnesses.
*PBS: For those not from the US, it is a publicly funded broadcasting organization. It is notable for producing a very popular children’s show called Sesame Street.
“You don’t know OS design. I do.”
Can you imagine what would happen when someone tried to visit a porn site?
Or phones. You’d end up releasing the thing and breaking it on the floor.
If anything, that movie did nothing good for gesture-based systems. Same problem with the Wii commercials, with people almost dodging out the window.
You do not need those kinds of gestures if the system is built right, and by using a glove with markers, simple, small hand and finger movements can be enough.
I can see this replacing, or maybe working alongside, the mouse. But it will not replace the keyboard, at least not until the gloves work so that you don’t need a camera to track the movements.
Gesture-based controls will come into their own when we have HMD-based, truly personal and mobile computers.
This is a long-term research project, IMO…
Nah, it looks very tiresome. I don’t want to do acrobatics to use my computer.
I’d rather stick to keyboards. My motto is:
“I won’t touch any machine that has less than 100 buttons.”
I would not like to do anything like Tom Cruise….
Actually, imagine this:
Place your hand on your desk, as if it were cupping a traditional mouse, except without the mouse. Now imagine the device can watch your hand and identify gestures to perform actions.
Index finger tap = left-click
Ring finger tap = right-click
Shift hand to the left = leftward movement
Shift hand to the right = rightward movement
Draw an X on the desk with your index finger = delete/close
Press down on the desk with your palm = minimize all
I could definitely see this happening. The whole Tom Cruise hand motions thing though, not very practical.
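The desk-gesture scheme above is essentially a lookup from recognized gestures to desktop actions. A minimal sketch of that dispatch layer (the gesture names and actions are hypothetical; a real system would receive events from a camera- or glove-based hand tracker):

```python
# Hypothetical mapping from recognized desk gestures to desktop actions,
# following the scheme described in the comment above.
GESTURE_ACTIONS = {
    "index_tap":   "left_click",
    "ring_tap":    "right_click",
    "shift_left":  "move_cursor_left",
    "shift_right": "move_cursor_right",
    "draw_x":      "close_window",
    "palm_press":  "minimize_all",
}

def dispatch(gesture: str) -> str:
    """Translate a recognized gesture into a desktop action.

    Unrecognized gestures are ignored rather than guessed at, which
    matters for an input device that watches continuous hand motion.
    """
    return GESTURE_ACTIONS.get(gesture, "ignore")

print(dispatch("index_tap"))  # left_click
print(dispatch("wave"))       # ignore
```

The interesting engineering is of course in the tracker that emits `"index_tap"` in the first place; the dispatch itself stays trivially simple.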
The biggest problem facing any new interface technology is users unwilling to learn how to use it. Many new interface devices try to overcome this problem by emulating other common devices, such as guitars and golf clubs.
This is why the most widespread UI technologies are all still point and click. Even the keyboard we use today is a point and click interface. However, it is also a device that you can be trained to use more effectively. And the most widespread interface technology in the world, language, cannot be used without training at all.
The only reason voice is not more widespread is that it is extremely difficult to do reliably, and it makes for a noisy office. Useful gesture recognition is perhaps even more difficult to implement, but is potentially an even richer communications medium. Beyond pointing, sign language, drawing glyphs in the air, and pantomime are incredibly effective at quickly communicating complex concepts. Such an interface also has the beneficial side effects of avoiding the conditions that cause CTS, and allowing people to get some exercise while working (which based on what I’ve seen is direly needed).
Another related issue is the strange fact that people would rather reuse existing words to refer to new concepts instead of create new words, but that’s a discussion for another time.
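The “drawing glyphs in the air” idea above can be prototyped with simple template matching: resample the drawn stroke to a fixed number of points and pick the stored template it lies closest to. This is a crude sketch in that spirit (no rotation or scale normalization, which a real recognizer would need; all names are made up for illustration):

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def resample(points, n=16):
    """Resample a stroke to n roughly evenly spaced points along its path."""
    pts = list(points)
    total = sum(_dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step = total / (n - 1)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = _dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d  # interpolate a new point on this segment
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(q)
            pts.insert(i, q)  # continue walking from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # pad if rounding left us short
        out.append(pts[-1])
    return out[:n]

def score(stroke, template):
    """Mean point-to-point distance between two resampled strokes (lower = closer)."""
    a, b = resample(stroke), resample(template)
    return sum(_dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, templates):
    """Return the name of the template closest to the drawn stroke."""
    return min(templates, key=lambda name: score(stroke, templates[name]))

templates = {"line": [(0, 0), (1, 0)], "diag": [(0, 0), (1, 1)]}
print(recognize([(0, 0), (0.5, 0.05), (1, 0)], templates))  # line
```

Even this toy version shows why glyph input is plausible: a handful of distinct templates separate cleanly, and the per-stroke cost is tiny.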
Owning an EyeToy (I actually bought the PS2 for that thing), I can say that it works for crude games but not for detailed point-and-click work. But if they can find a way that works, that will be really cool.
Finding a balance between sitting for 8 hours and waving your arms would work for a lot of people; I know it would for me.
That’s indeed nice, but wouldn’t voice recognition be more useful? E.g. for disabled people…? Of course, seeing the many kinds of disability, both should be standard in computers: control by voice and by hand (like the above method).