Computers of the future could be controlled by eye movements, rather than a mouse or keyboard. Scientists at Imperial College, London, are working on eye-tracking technology that analyses the way we look at things. The team are trying to gain an insight into visual knowledge – the way we see objects and translate that information into actions. Read the report at BBC News.
Huh. I remember seeing something a long time ago (the 1980s) on one of the (US) public science shows saying that the US military was working on this. The idea was that pilots could look at a switch or gauge to activate it.
It may have been a show dealing with pilots being overstressed in modern combat aircraft.
Pilots already do use it with visual targeting systems (Apache helicopters, etc.).
So what happens when you have astigmatism like I do? Heck, the mouse pointer would jump around almost as much as it did back when I used a wheel mouse instead of an optical one.
Or look out the window? <g>
So, instead of carpal tunnel (or however you spell it) in the hand, we’ll have serious eye damage…
Lol. Just a few weeks ago I brought up eye tracking in an informal discussion of the PC’s future. My ideas, however, were much more simplistic in implementation than what these scientists are aiming for.
I really like the idea of using eye tracking to replace the mouse for the pointing part of point and click. The speed and accuracy of my eyes are so much greater than manipulating a mouse to move a pointer. I also wouldn’t waste time trying to find the mouse pointer when I lose it (yeah, yeah, I’m a loser).
Trying to determine an action I want to take by the way I look at something just doesn’t seem practical to me. I would much rather have “action keys” on my keyboard for that purpose.
Beyond point and click, though, I don’t know how much use eye tracking would be. It’s not really suited for spatial input, like drawing for example. Your eyes need to be referencing other parts of your work as you draw; they can’t be tied to the point of input. I would still want to use a hand-controlled input device for such activities. My preference would be a tablet rather than a mouse for that.
The two combined, eye tracking plus a mouse or tablet, would create the most productive environment where spatial input is needed, IMHO. The best part of popups is the reduced distance the pointer has to travel to invoke actions and then return to the point of input. Moving menu/toolbar selections completely off the pointer and onto eye tracking lets the pointer remain solely the focal point of whatever it is you’re working on.
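To make that split concrete, here’s a rough Python sketch of how an “eyes pick the command, hand keeps the pointer” rule might be wired up. The toolbar region, the gaze/pointer points, and the two callbacks are all invented for illustration; this isn’t any real toolkit’s API.

    # Toy sketch: the eyes choose commands, the hand keeps drawing.
    # TOOLBAR_REGION, the gaze/pointer points and the callbacks are placeholders.

    TOOLBAR_REGION = (0, 0, 1280, 40)  # x, y, width, height of a hypothetical toolbar strip

    def in_region(point, region):
        x, y = point
        rx, ry, rw, rh = region
        return rx <= x < rx + rw and ry <= y < ry + rh

    def handle_activate(gaze_point, pointer_point, invoke_toolbar, click_canvas):
        """If the eyes are on the toolbar, fire the item under the gaze;
        otherwise treat it as an ordinary click at the pointer, which never
        had to leave the spot you were working on."""
        if in_region(gaze_point, TOOLBAR_REGION):
            invoke_toolbar(gaze_point)
        else:
            click_canvas(pointer_point)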
This wouldn’t get anywhere in the public services office 🙂
lol – just by coincidence, I am about to start experimenting on people looking at web pages using eye tracking. Had a go with the hardware & software yesterday and it’s quite funky.
sounds cool… what about how they control their computers in the movie Minority Report… that looks cool
track hand movement instead, when it comes within a certain reach of the ‘tracking device’
what happens when you go away from your chair with eye tracking? will it try to follow you? dave… get back in your chair, dave!!
websites with those animated graphics that are specifically designed to grab your attention, all href-linked to some advertising agency’s client…
that will screw up a web surfer’s experience…
no thanks, i will stick to point & click for surfing…
verbal commands would be cool!!!
computer, open mozilla to google, mozilla, go to OSNews…
I can’t watch porn while doing other stuff anymore?! Seriously, though, I have a TV card and my eyes go back and forth between the TV window and the application I’m using all the time. It would make my life a lot more difficult if the TV window got “selected” just by looking at it.
Actually, this eye-tracking thing sounds pretty good. After all, your eyes are always looking at what you specifically want to see, so if the mouse or cursor follows your eye movements, great! No more moving the mouse or trackball to keep up. If your eyes move off-screen, so what? The pointer just waits until the eyes are focusing on the screen again.
But I agree that using eye movement for actual selection of items would be more difficult. However, there might be other, limited uses of eye-tracking.
Imagine a free-style drawing program that let you draw by tracking your eye-movements, for example.
Wonder how these thingies would affect my DoD/CS stats?
RSEye
Now that the computer can know which part of the screen you are focusing on, it can spend more time drawing just that portion of the 3D graphics in hi-res, while the other areas of the screen can be a lower resolution approximation (since you aren’t looking at them except peripherally anyway). This means you’ll (a) get more bang for your video-card-buck, and (b) anyone watching over your shoulder will see a very interesting special effect that you won’t see… an area of hi-res graphics flitting around amongst the lo-res background.
Okay, there are reasons why it probably wouldn’t work very well, but it would be fun to try it out :^)
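For what it’s worth, the level-of-detail part could be sketched roughly like this in Python; the tile grid, distance thresholds, and render_tile() callback are all made up for illustration, and a real engine would do this inside the GPU pipeline rather than in a loop like this.

    # Rough sketch of gaze-dependent level of detail ("foveated rendering").
    # Tile size, thresholds and render_tile() are invented, purely illustrative.
    import math

    TILE = 64  # tile size in pixels

    def detail_for_tile(tx, ty, gaze_x, gaze_y):
        """Full detail near the gaze point, progressively coarser further out."""
        cx = tx * TILE + TILE / 2
        cy = ty * TILE + TILE / 2
        dist = math.hypot(cx - gaze_x, cy - gaze_y)
        if dist < 200:
            return 1.0    # full resolution where the eye is focused
        elif dist < 500:
            return 0.5    # half resolution in the near periphery
        else:
            return 0.25   # quarter resolution for everything else

    def render_frame(width, height, gaze, render_tile):
        gx, gy = gaze
        for ty in range(height // TILE):
            for tx in range(width // TILE):
                render_tile(tx, ty, detail_for_tile(tx, ty, gx, gy))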
A lot of people have proposed something like this as a stepping stone to the Right Thing for window focus rules.
The traditional Windows focus rule, “click to focus”, is hopelessly unwieldy; it is comparable to a double-declutch gearbox.
Right now most power users choose “sloppy focus”, which in Windows is closely approximated by the so-called “xmouse” setting. In a sloppy focus system, the focus is given to the last window that you moved the mouse over.
One step further would be to observe eye movements and use “focus follows eyes”, but as others have observed in this thread, what you’re doing and what you’re looking at are not necessarily one and the same thing.
The Right Thing is called “focus follows brain”, in which the rule is that focus is given to the window you’re thinking about. This is hard to implement because we don’t yet know how to interface the version of Unix which presumably runs on the human brain to the X server running on the version of Unix on the computer.
With “focus follows brain” some old features of X which feel rather clunky today could be re-activated. You could apply focus not just to “top level” windows, but also to individual elements. This is much faster than using some keyboard shortcuts to move around a web page or form.
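Just to make the comparison concrete, here is a toy Python sketch of those focus rules as interchangeable policies. The Event fields and the target names are simplified stand-ins, not any real window manager’s objects (and “focus follows brain” is left as an exercise for the brain–X interface people).

    # Toy comparison of focus policies as interchangeable rules.
    from dataclasses import dataclass

    @dataclass
    class Event:
        kind: str    # "click", "pointer_enter" or "gaze_enter"
        target: str  # name of the window (or individual element) the event landed on

    def click_to_focus(focused, event):
        return event.target if event.kind == "click" else focused

    def sloppy_focus(focused, event):
        # focus goes to the last window the pointer entered
        return event.target if event.kind == "pointer_enter" else focused

    def focus_follows_eyes(focused, event):
        # the same rule driven by gaze instead of the pointer;
        # with element-level targets this could focus individual form fields
        return event.target if event.kind == "gaze_enter" else focused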
If this group of scientists does manage to accurately identify the way we look at what we are looking at, we could get very close to your “focus follows brain” method.
Watch yourself (if that’s possible) sometime when you are working with multiple windows, one (or more) of which you are sending input to. While you will not always be looking at the point of entry, do you first glance at the expected point of entry before you start typing? This would be similar to the way you glance down at a piece of paper when you put your pen to it to start writing. You won’t necessarily watch yourself as you’re writing, but you will first visually reference the point at which you are about to start.
We probably do the same thing before we start typing: glance at the location we expect our input to go to. If the eye tracking can accurately catch when we make this type of glance, it could be used to set the point of input. It would even be able to select window parts, as “The Right Thing” should.
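A rough sketch of that glance rule, assuming the tracker reports fixations as (start, duration, x, y) tuples; the thresholds and the widget_at() lookup are invented for illustration, not taken from any real eye-tracking API.

    # Sketch of "a glance just before typing sets the point of input".
    # FIXATION_MIN_MS, GLANCE_WINDOW_MS and widget_at() are assumptions, not a real API.

    FIXATION_MIN_MS = 150    # gaze must dwell at least this long to count as a deliberate glance
    GLANCE_WINDOW_MS = 800   # how far back before the first keystroke we look

    def target_for_typing(fixations, keypress_time, widget_at):
        """fixations: list of (start_ms, duration_ms, x, y), most recent last."""
        for start, duration, x, y in reversed(fixations):
            recent_enough = keypress_time - (start + duration) <= GLANCE_WINDOW_MS
            if recent_enough and duration >= FIXATION_MIN_MS:
                return widget_at(x, y)  # give keyboard focus to whatever was glanced at
        return None                     # no qualifying glance: keep the current focus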
Thank you, I’m starting to grasp a potential benefit of their research that I would be interested in using.