Within the last few days we read the news about Apple’s Siri AI personal assistant, and about a brain implant that lets monkeys control virtual limbs and feel virtual objects. I believe that if someone were to combine these with a few more technologies (e.g. high-resolution eyewear, appropriate operating system changes), we would be looking at the next user interface revolution, after the inventions of the computer mouse and touch interfaces.
I’m a big fan of futuristic-everything. I worked with user interfaces when I lived in the UK, and since then I’ve always tried to think of ways to improve how we interact with machines. Kind of a hobby of mine.
Let’s start by saying that the future is mobile; there’s little dispute about this. Desktop machines will only be used for very heavy, specialized purposes, the same way trucks are used today. Most people will just own fast-enough mobile, portable devices rather than desktops. Even tablets will hit the ceiling of what people are willing to carry with them; basically, anything bigger than a 5″ smartphone will be too much to carry around. In time, carrying a 10″ tablet will be seen no differently than the way we now look at the businessmen of 1981 lugging around the Osborne 1 “portable” computer.
The biggest problem such small devices face is their minuscule screen, at a time when the web increasingly assumes SXGA and higher resolutions as a minimum. Sure, they all use very high resolutions these days (1080p is around the corner, 720p is already here), but the small physical screen poses problems for applications that require a lot of widgets or a large working area of screen real estate (e.g. most modern web sites, serious video/still editors, 3D editors, etc.).
There are two ways to deal with the problem. One is holographic projected displays. Unfortunately, we’re technologically far away from something like that, and no one really wants to see your huge display in the middle of the street while you’re trying to get directions. Even if we had the technology for it, it would be too visually messy for everyone else. The second way, which is both a cleaner solution and closer to our technological reach, is a projected display via glasses, brainwaves, and an AI-oriented OS.
The idea is this: a classic-looking smartphone in the pocket, connected wirelessly (via Bluetooth or WiFi) to special eyewear (which doubles as prescription glasses). When the glasses are activated, a transparent, high-resolution data screen is shown in front of the user at a certain perceived distance, similar to the screen-glass technology fighter jet helmets have. Voice recognition will only be an alternative way to use the system. Why use voice recognition when we are so close to controlling the UI with brainwaves alone? The video below showcases an early version of such technology, one that requires no surgical implants:
Our AI assistant doesn’t always need to talk back to us; it can silently execute the actions our brain has ordered. The operating system supporting such feedback of course needs to be tweaked accordingly, and some applications might need to be redesigned too. No need to click endless buttons in Blender 3D, for example: you simply think of what you want to accomplish and the app does it (if it has the ability).
Let’s not get ahead of ourselves though. A technology that does all that in a nice, neat, commercial-ready package is at least 10 years off. Right now, the various technologies involved are both in their infancy and fragmented across various companies and research institutes. Obviously, the first visionary tech company able to secure the patents or rights for all the needed parts will have a good head start.
For the first time, such a new user-input design is not mostly a hardware problem, as it was for older UI inventions. In fact, most of the needed hardware technology already exists in one form or another. But software needs to be rethought quite drastically. Maybe new programming languages need to be invented too, to assist in the development of such thought-oriented software. Security would need to be extra tight as well!
The impact of such a UI design would of course be tremendous. Work could happen much faster (especially CGI), virtual reality would actually feel like a reality we can touch, and augmented reality would change how we function. In essence, we would be connecting with each other telepathically!
As systems-on-a-chip become smaller, the smartphone in the pocket will eventually be replaced by a wristwatch-sized gadget, and eventually by the eyewear itself. There is no reason to carry two devices when we at last have the technology to fit everything into one.
And even further down the road we won’t need eyewear at all. A small device attached somewhere on our head will be able not only to receive brainwaves but also to send them to us, making us “see” the data in our head (using the same shortcut by which hallucinating people see things that don’t really exist). It remains to be seen how mentally safe such an approach really is. But one thing is for sure: such a design would use our body as an antenna for the closest “tower”, and possibly as a power source too.
The era of real cyborgs will begin. And it will feel like nothing more than natural technological evolution.
Note: This is an updated version of an article I published last year on my blog.
Things aren’t going to play out exactly the way you picture them. Nobody will let their brain interface with suspect hardware, and tablets, desktops and laptops will be with us for at least a century or so.
Maybe at some point humans will interact with hardware in some very creative ways, but not in the foreseeable future.
Trust is a marketing point, not necessarily a technical one (as Apple has shown many times). People in the past wouldn’t entrust their money to banks, for example, and while there are good reasons not to trust them, most people keep their money in banks anyway.
When personal computers started becoming popular, people were afraid to trust them too. Your post sounded like the Orthodox Church and its fear of computers in the ’80s, with claims that we would all lose our individuality and have a chip in our forehead as a personal ID that controls us (I watched various such “documentaries” on Greek TV as a teen). It’s all FUD. Eventually things like that get ironed out; security matures too, not just software features.
Remember the iPhone and how everyone “hated it” at first (in the two months before it was released) because it had no stylus or hardware buttons. There was a lot of whining about that back in early 2007. But its touch UI made sense, so people followed it anyway when the device actually came out. And they loved it.
The UI I suggest is even more intuitive and natural. There’s nothing stopping progress. If something is useful, it gets adopted, despite the risks.
I hadn’t heard that bit about the Orthodox church, but I’d rather there be a well-defined interface based on open standards between the brain and the computer, to ensure that it was only able to glean so much information.
Typically I use my brain as a buffer for things I might say/do/type before I actually do one of those things; then they are run through a couple of situation-dependent filters before they are possibly acted upon. I would like to retain that ability, even with brain interfaces.
Do yourself a favor and watch an anime OVA/movie called Macross Plus and you’ll realize just how utterly stupid what you are suggesting really is.
To make things simple: a test pilot hooked up to an interface similar to what you are suggesting started daydreaming about causing his rival test pilot’s aircraft to crash and burn. Guess what happened? Yep, the interface took that idle thought and acted upon it, and there was nothing the pilot who thought the command could do to stop it.
Some future to look forward to, isn’t it?
Apparently, the future is now: http://www.post-gazette.com/pg/11283/1181062-53.stm
I’ve been using my brain to control computers for years. The most practical brain-computer interface I’ve tried so far is digital, i.e. using my brain to control the digits on my hands to strike keys on a keyboard. I especially like the feedback mechanisms, which are both visual and tactile, so that I can read the output immediately for easy calibration of the input mechanism. Often, I will intuitively ‘feel’ when the input is wrong, even without watching the output.
Yeah, it’s amazing.
+1. Some people have zero sense of humor.
Well, it is nice if you have two hands. But some people are not that lucky.
An excellent point. Brainwave control has its place, but it’s a prosthesis. Physically healthy humans are just too well adapted for direct interaction with a spatial world for it to make sense, IMO.
Fully agree. Something similar can be said about people with non-neurotypical brainwaves – or those who just don’t have the required set of two 100%-working eyes to participate in the 3D hype. 🙂
And even when you have the eyes… I have two 100% working eyes, but that was not the case when I was younger, so my brain worked things out in an unusual fashion and my perception of depth is not based on binocular vision, but rather on other cues like shadows, perspective and parallax effects.
For me, the main difference between 3D cinema and 2D cinema is that the former is more expensive and requires you to wear stupid glasses. So in my perception of things, this 3D hype is truly based on thin air.
Cheap scifi film effects have unfortunately totally hijacked the meaning of the word “holographic” over the decades, so the public imagines… roughly this “huge display in the middle of the street” or, generally, what really seems more like a volumetric display (usually projecting somebody in “scifi videoconferencing”).
But that is NOT holography; the word has a very specific meaning. And, sadly, people don’t even realize how impressive real, already existing holograms feel (one rapidly found example: http://www.youtube.com/watch?v=Xp7BP00LuA4 – another nice one: http://www.youtube.com/watch?v=6AVAzGQMxEg ), and they have been around for a few decades already. People seem unaware of them to the point of doubting what they see in such videos, thinking it’s just a trick (no, it’s not… they feel awesome when held, the light really behaving as if the plate has some “inside”).
Yes, they are static so far, and with a poor colour gamut and such. A good holographic video display will require an effective pixel size comparable to the wavelength of light, plus processing and memory we’re nowhere near yet.
But once we’re there… a display will feel kind of like a window or mirror (bonus: at that point, we probably won’t have to endure any more of the efforts, made every two decades or so despite numerous failures, to push on us the cheap and inherently flawed trick of stereography).
All this is essentially an example of the unfortunate effects of “scifi cargo cultism” that I sometimes point out – fluffy fantasies displacing, masking, and causing people to miss the wild possibilities of the actually existing universe.
I want one of those. Holy crap that was amazing.
See, this is what I was saying – even technology enthusiasts, and such, don’t know about this fairly old[1] stuff (easily older than you, Thom)… so what hope is there for the general public, whose “imagination” is shaped by cheap popular scifi productions?
1. Old in terms of recent technological progress; the implementation I randomly found on YT isn’t really as ground-breaking as they claim – it mostly “just” seems to revolve around being a decently marketable product. Any self-respecting physics department probably has some holograms, and there are a few ~commercial labs around the world that make them (contact one, Thom? Maybe they’ll even have a defectively developed one – which could look weird in part of the “image” or under certain viewing angles – but probably still impressive, clearly with depth[2]).
2. Yes, in holograms that is essentially real depth; they are about reproducing how the light would really behave if the plate had some insides.
PS. BTW, you can easily do a basic, but still fun imitation of sorts (one which relies more on psycho-visual effects in how our brains are wired to see ~faces than on actual scattering of light in a “proper” way), if you have a printer!
Hollow-face illusion http://en.wikipedia.org/wiki/Hollow-Face_illusion & three dragons to print out http://www.grand-illusions.com/opticalillusions/three_dragons/
Actually, inspired by how one preschool-theatre costume (of a… cat; with proper ears) supposedly induced a panic attack in my buddy’s kitten, I once reworked the dragon to be more “danger! Possible unknown big cat!”-like. Yup, my cat became… nervous.
(quick Google search for the above Wiki page even revealed one with a cat design http://allgraphical.blogspot.com/2010/10/paper-craft-illusion.html
…I can’t vouch for how convincing it is, though)
Bonus points: place a large version in a street-facing window
Then there are also some software toys like this DSi game http://www.youtube.com/watch?v=h5QSclrIdlE which, essentially, partly simulate – to one viewer at a time – roughly how a hologram would feel.
A display like this does not change much in the grand scheme of things. It’s still a display that happens to display in a 3D manner. It’s an evolution of current 2D UIs, but not a revolution (especially if someone has to carry lights). Such displays still have rather large physical dimensions. They’re good for displaying stuff to a large crowd, but not for personal computing. The point is to eliminate the need for big displays. Most of us already wear eyeglasses. Not using them for anything else is just a waste of resources and a lack of ingenuity.
My main point was – don’t call it holographic, don’t reinforce such usage, don’t waste the “holography” term on some pop-cultural contraptions of, probably, ever-dubious practicality. The term is much more specific, and at the same time might very well give us something much (figuratively) bigger, nicer and broader than the limited visions of directors or visual-effects artists would suggest.
But also don’t dismiss it so readily. A few points:
Remember, a good, proper holographic display, if it also tracks viewers (comparatively trivial relative to its other advances), could easily display a completely different thing to each and every pair of eyes looking at it – that is just an inherent property of it.
(those constraints of “imposed”, ultimately limited imagination – vs. imagination actually “applied” on the basis of how the world, science, etc. really are – that I mentioned)
If some promising paths prove fruitful (possibly, say, applications of graphene and their implications), decent holographic displays could well end up covering almost everything, at least in places with any notable (“valuable”? ;/ ) human concentrations and where people would be likely to look. And/or they could take the physical form of, essentially, wallpaper.*
Yes, to you (or to me, for that matter) that would seem “crazy” and insanely wasteful – but consider how people living just a few short centuries ago would have thought the very same thing about covering whole facades with something as valuable as glass (especially glass so incredibly translucent and smooth!), or aluminium and such.
Heck, “glass houses” were used less than a mere century ago in one locally notable novel ( http://en.wikipedia.org/wiki/The_Spring_to_Come ) as an idea, a symbol representing unrealistic dreams of a perfect place. Now look around you… no perfect place in sight, glass houses everywhere.
And that’s only when touching on physical in-setting screens.
Because, see, what you don’t realize is that there would most likely be a major technological overlap between such good holographic displays and… good eye-displays. They are not as separate as they appear, and would be able to use fundamentally similar technology in their quest to be any good.
What present eye-screens seem mostly “good” for – if they don’t have optical systems that make their size, weight & price non-trivial – is giving people headaches.
However: the “substrate” required for good holographic screens would basically also be a perfect optical system (for the wavelengths larger than its “pixels”, at least). You could possibly, perhaps, even hack a “wallpaper display” and reconfigure it (via ~firmware!) into something acting as, and easily rivalling (being optically perfect), the tediously laboured lenses or mirrors in the best telescopes.
That’s also something which would be very helpful in a good eye-screen (the other approach which seems promising is direct laser projection onto the retina, though possibly with more “schematic” graphics – but that’s OK, since it will most likely be practical much sooner).
Generally, unspecific fantasies (how would it work, roughly, while doing it well?) are not ingenuity. Too often they are closer to wishy-washy visions which miss both the issues with what they propose and many upcoming opportunities in somewhat different approaches.
We don’t “waste” glasses for anything (good), not at this point.
(but BTW, I do have a related small & silly pet project on hold / waiting for some basically already “here” technology… but that’s all I’m saying now! ;> )
*Best of all, if covering the inside walls, imagine: it could easily seem like every room has a great view, even when it doesn’t actually have any windows! (say, in some monolithic mega-unit built to find housing space for massive overpopulation). More, each of the occupants could have the view they prefer (as long as all the scenes have comparable lighting, I imagine; otherwise it would probably often lead to a weird mood mix in the room), even a “back to nature” one – for example, looking like an open tent inside a forest (yeah, probably more depressing if anything; but perhaps it points to the implications of another hypothetical major advancement way further down the line – if we were to “augment” our bodies with some form of technology allowing us to not care about cold, rain, and the elements overall, what difference would the walls make?)
“especially if someone has to carry lights”?
Humans already have a spectacularly efficient brain/machine interface. It is called the hand. The hand provides spatial (proprioception) and tactile feedback, and it is semi-autonomous with training.
Aviation has avoided using voice control because it is extremely inefficient, imprecise and slow compared with using the hand.
Music players, phones and tablets are already causing significantly higher rates of vehicular accidents and pedestrian deaths. This is because the human brain has virtually no ability to multi-task.
First of all, voice was only mentioned as an alternative to brainwaves, so I don’t see why you are focusing on just that. Secondly, many things that require lots of clicks to do by hand could be done instantly with a thought, because the thought doesn’t ask the computer to “click an icon”, as the finger would do – it actually carries out full actions. So if I want to incorporate a bicycle into my CGI scene or 2D pic, I simply think it. I transmit the mental picture to the app and the app figures out how to create and display it. Actually designing such a thing in 3D by hand can take about three days if it’s to be done properly.
But how are you going to program the computer to recognise those brainwaves?
It takes humans months of crawling before we learn how to walk. That’s months of learning the size and shape of our limbs and how to move them. But that’s not all we learn. Everything is “recorded”: our thoughts and emotions during that process, smells, sights, sounds and even tastes. All of that on top of the thought processes needed just to move our limbs the correct amount and in the correct order. All of it wired and rewired on the fly, and in a pattern totally unique compared to our brothers and sisters, our cousins, our parents and our friends. All of us wired differently.
So how on Earth will you program a computer to understand which neurons firing will relate to “paste bicycle image over CGI scene” and which relate to “oh I like that song that’s playing on the radio as it reminds me of yoghurt“?
Rudimentary stuff like copy/paste from the clipboard can be programmed in. But then you end up in a situation where you have to chain these rudimentary thoughts in sequence like you would clicks of a mouse. So you’re no better off. In fact you’re worse off, as you’re now having to program your thoughts into a computer before using it (which is currently a very lengthy process), and you’re having to learn how to control your own thoughts so you don’t have the mental equivalent of a muscle spasm every time a Tampax advert comes on the TV, resulting in your computer shutting down and you losing your work. And all you gain from this is the reaction time that would have been spent between thinking about moving your left finger and your left finger clicking the left mouse button.
As I said in my other epic post, the technology isn’t the only hurdle we face with this ideology of yours – it’s human development.
The same way the technology currently exists where people can control UI using such an interface. Follow the links, watch the video.
I’ve seen those videos and countless more like them. They just highlight my point that these kits need to be pre-populated with a map of the user’s pathways, which is something that can only be done on a command-by-command basis. This is very time consuming and significantly reduces the power of these devices to far below your expectations.
So your reply doesn’t address my criticisms in the slightest.
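For what it’s worth, this is roughly what that command-by-command calibration amounts to in practice: record a batch of labelled EEG epochs for each mental command, reduce each epoch to a few band-power features, and train a classifier per user before the device can do anything useful. A minimal sketch in Python under those assumptions (record_epoch is a hypothetical stand-in for whatever the headset vendor’s SDK provides, and the band-power features are just one common choice, not what any particular kit actually does):

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 128                              # sampling rate in Hz (typical for consumer EEG headsets)
COMMANDS = ["push", "pull", "rest"]   # the mental commands being calibrated
TRIALS_PER_COMMAND = 40

def record_epoch(command):
    """Hypothetical stand-in for the headset SDK: prompt the user to imagine
    `command` for ~2 seconds and return raw EEG as a (channels x samples) array."""
    raise NotImplementedError("replace with the vendor SDK call")

def band_power(channel, lo, hi):
    """Average spectral power of one channel in the [lo, hi) Hz band."""
    spectrum = np.abs(np.fft.rfft(channel)) ** 2
    freqs = np.fft.rfftfreq(len(channel), d=1.0 / FS)
    return spectrum[(freqs >= lo) & (freqs < hi)].mean()

def features(epoch):
    """Reduce an epoch to alpha (8-12 Hz) and beta (13-30 Hz) power per channel."""
    return np.array([[band_power(ch, 8, 12), band_power(ch, 13, 30)]
                     for ch in epoch]).ravel()

# The "map of the user's pathways": many labelled trials for every command.
X, y = [], []
for label, command in enumerate(COMMANDS):
    for _ in range(TRIALS_PER_COMMAND):
        X.append(features(record_epoch(command)))
        y.append(label)

# A per-user classifier; cross-validated accuracy tells you whether the
# calibration session is good enough to be usable yet.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, np.array(X), np.array(y), cv=5)
print("estimated per-user accuracy:", scores.mean())

Every extra command means another block of trials like this, which is exactly why the training sessions are so time consuming.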
Maybe it’s like flying cars. It sounds awesome at first, but then you realize something: do you really want those schmucks who drive recklessly in two dimensions having a third dimension to worry about and fsck up? Never mind that at a couple of hundred feet even minor accidents become catastrophes.
This is a common and completely incorrect assumption about how the human motor control (movement) system functions. The body’s motions are controlled by continuous feedback loops provided by muscles, nerves and the motor cortex, not by conscious thought. Movement is only consciously controlled when we initially learn a new task. Conscious thought is only used to initiate a movement, e.g. the desire to go and get a cup of coffee. The actual movements are essentially an automated process.
In fact mind control is extremely tiring because there are no real feedback loops. It is equivalent to being perpetually stuck at the ability level of your first driving lesson.
The only realistic use for mind control or voice control is to allow disabled people to perform simple tasks.
I wasn’t aware of much of that either. Thank you.
One thing I will add is that even in the case of disabled people where the subject is an amputee, it’s more likely that any ‘thought control’ would be handled via the nervous system, using the impulses for the limbs they no longer have.
Try to move your hand exactly 1mm by conscious thought – it is virtually impossible. However you can effortlessly do far more precise movements such as placing a mouse on a specific screen pixel when you have continuous positive and negative feedback from stretch receptors in muscles, pressure sensors in your fingertips and visual input.
I think this bike example wouldn’t work, because thoughts are not detailed enough for the computer to know exactly what you want if you think “I need a bike there”.
Have you ever faced a situation where you have something in mind, you think you know exactly what it is, but as soon as you want to explain it or create it (if it is a physical object), you hesitate and must think some more? I believe this reflects the way our mind works. We have a blurry image, and we work out details as needed. Like with vision: we only see a huge load of blur, with a tiny sharp region in the middle, but our brain and eyes silently fetch and parse details on demand, so fast we don’t notice.
Again, maybe someone who knows more about the subject than me can confirm. But if it’s true, adding a bike to a CGI scene would need as much attention to detail with a mind-control interface as with a mouse. It would remain very lengthy, because although the brain-computer link could be made a bit faster, brain speed would be the limiting factor in the end.
This is a lengthy post, but it’s a subject that has interested me for a while and so I’ve read a few papers on this over the years.
There’s a fundamental flaw in people’s argument for using brainwaves to control hardware, and that’s the fact that no two people’s brainwaves are the same. We’re all wired differently – which means different parts of our brain “light up” when different people think the same thought. This means that not only does the user have to teach themselves how to use these new computers, but the computer also has to teach itself to read the user.
Things are then further complicated when you factor in how difficult it is for humans to focus on one thought at a time. Humans (like most animals) have scattered thoughts – it’s a survival mechanism. So, for example, playing a racing game and focusing on just thinking “left” or “right” to steer the car is incredibly hard, particularly as most games (even basic driving simulators) have more to concentrate on than simply left or right turns. Thus training your brain to isolate those thoughts is a /huge/ learning curve which would otherwise have been unnecessary with traditional human interfaces (and all of that comes after you’ve already spent half your day just training the computer to recognise your “left” thoughts from your “right” thoughts).
Furthermore, the reason tactile and verbal interfaces work so well is that they’re natural to the way we interact with the world. We don’t “will” an object to move; we touch it and it moves. Or we shout at our family to move it for us. Either way, it’s a physical or verbal process. And while the process is being performed, more often than not we’d be thinking about something else (e.g. I’d move a vase, but that process would have me thinking about what I’ll buy my partner for Christmas). So how do you get the computer to separate actual instructions from natural tangents? (Again, going back to the driving sim, how would the computer know the difference between you thinking “brake” and thinking “I need to brake at the next turn”?)
The only research in this field that I’ve thought was even close to being practical was the research with amputees. However, the only reason that was successful was that those subjects already had pre-existing pathways that they’d practised using for 20+ years of their lives and which, sadly, are now unused. So they could send the message to move their left finger and that would trigger whatever device was configured to react to that pathway. However, even that research is impractical for those of us lucky enough to have a full complement of limbs: if I send a message to move my left finger, my left finger moves. So if I’m moving my limbs anyway, I might as well just use a tactile interface.
We also have the problem that current technology simply cannot give us fine enough detail on our firing neurons, which makes reading our thought patterns infinitely more difficult. However, I will concede that this point should / will disappear as technology advances and we find more accurate ways to monitor the electrical activity in our brains.
So don’t get me wrong, I’m not saying that we’ll never have thought-controlled hardware. I’m sure that with time we’ll gradually be exposed to more and more novelty toys based around this tech, and as future generations grow up surrounded by these things, humans will eventually have mastered a technique of “thinking” with these devices, and the technology will be in a better position to accurately read our thoughts. By that time, we’ll also have learned from failed experiments and developed ways for computers to learn our individual brain patterns far faster than the lengthy process it is at the moment. When all of that falls into place, this technology may well be viable for practical applications. But I very much doubt this will happen in our lifetime.
Wow, my post is almost as long as Eugenia’s article hehe
We’re all wired differently
So we just need to produce an uber-human and clone it.
Then fill the brains of clones with the same data.
Problem solved
Maybe we can then make them our slaves.
Ahh hang on, that would make them voice activated and thus we wouldn’t be using thought control. Damnit.
Solution is not ideal, but they can use mind control to do things for us 🙂
We learn to control movement almost entirely by trial and error. This creates a hard-wired algorithm in the neuromuscular system based on continuous feedback. The more we practice the stronger and more refined the algorithm becomes.
It is extremely unlikely that anyone could ever do a highly complex task by mind control alone. Because there is virtually no feedback the brain finds it extremely difficult to learn. The neuromuscular system takes only seconds to master simple tasks, such as flicking a light switch. The same task takes hours to learn to control by the mind.
I think people have a real problem with imagination if they just love spending their day typing in a chair; maybe they love their Alt+Tabbing between multiple windows/applications.
The thing is, I believe that brainwaves as a user interface will be the future, but not with implants and not with complex AIs messing around. It would be better if we could just use a display in glasses and/or contact lenses, and if we discretized the elements of the user interface in such a way that no complex AI would need to be programmed and trained just to turn the device into something useful.
I think this would be a better solution than messing with implants, which would need a whole new class of regulations (probably international regulations).
Every night when I spend some time browsing the internet on my iPod before I sleep, I think it would be really nice just to read, zoom, and “type” by merely thinking “press A, then R, then S” instead of actually typing ARS or pinching the screen. It would be marvelous as well if I could read without holding the device in front of me.
I believe this will be the future; it is better and more accessible than what we have today, even for someone who cannot control his hands or does not have hands (such people exist), and I think it will be the new touch when it finally arrives. I just hope to be alive when this happens.
That’s what the article says too. No implants, just eyewear displays in the beginning, and small brainwave attachments in the far future.
http://innovega-inc.com
microdisplay+lenses
You can see virtual images that fill your entire field of view.
I had some similar thoughts about what could be the next breakthrough in computer technologies: http://technokloty.blogspot.com/2011/08/what-are-next-steps-for-aug…
What do you think about having two cameras integrated into glasses, so you can use gestures to control your “computer”? Of course it looks silly when people start pointing at nowhere, but speaking into nowhere with a Bluetooth headset looked funny as well, and people got used to it. I agree that voice recognition is very slow and imprecise.
As a main, prolonged way of control, this can probably work well only in films. Or on the ISS, perhaps.
Some starting points…
http://en.wikipedia.org/wiki/Touchscreen#Gorilla_arm
http://en.wikipedia.org/wiki/Gesture_recognition#.22Gorilla_arm.22
BTW, I don’t think we really got used to people who loudly “talk to themselves” – we might accept it more readily, but it’s still uncanny. It probably (still) triggers, and will continue to trigger, some strong pathways in our brains which cause us to (unnecessarily) divert our attention to that speaker (to whom nobody replies – hence “wait, does that ape want something from me?!”).
What no one has mentioned, as far as I could see skimming the comments, is the great potential for hacking and the immense dangers that would ensue – with glasses for augmented reality (blinding someone, for example) and even more so with brain implants (the article already mentioned hallucinations; what if an evil hacker sends you those “hallucinations”?). And, to take it into the realm of “trust no one”: what if the government were to abuse your implants? I shudder to think of the possibilities the less benign governments would have (did I say less benign? German government spyware, anyone?).
The thought of a hallucination version of “Congratulations, You are the 1.000.000th visitor.” makes me shudder.
“…Think of a pink elephant to allow downloading your personal details…[1]”
@Eugenia, if you want to research a little more deeply into this subject, (specifically human-machine integration), look into trans-humanism. It’s really a fascinating topic.
One of my favorite online magazines about the h+ community and the future of humanity is hplusmagazine.com. I don’t agree with everything there, but it’s very thought provoking.
It’s also good to see you on the front page again.
A few questions for people who know more about the field than me.
1/ Would it be possible to efficiently use brainwaves not as a full mind-control interface, but as a keyboard + mouse replacement? Like moving an on-screen pointer, typing words letter by letter, or executing keyboard shortcuts using thoughts. The number of signals the computer must recognize would become much smaller and well defined, so it seems to be a simpler technical problem (see the sketch after these questions).
2/ If people were trained to use such a system from a young age, would they be as good at using it as they are at using physical pointing and typing devices, treating the computer as an extra limb, so to speak?
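On question 1, the attraction is exactly that the recognition problem shrinks to a handful of well-separated classes, and everything downstream is ordinary input injection. A rough sketch of what that glue layer could look like, assuming a per-user classifier that emits one of a few discrete labels with a confidence score (classify_window, send_key and move_pointer are all hypothetical stand-ins, not any real API):

import time

def classify_window():
    """Hypothetical: classify the last second of EEG, returning (label, confidence)."""
    raise NotImplementedError

def send_key(name):
    """Hypothetical: inject a key press into the OS input queue."""
    raise NotImplementedError

def move_pointer(dx, dy):
    """Hypothetical: move the on-screen pointer by (dx, dy) pixels."""
    raise NotImplementedError

# Small, well-defined vocabulary: a few shortcuts plus four pointer directions.
KEY_BINDINGS = {
    "select": lambda: send_key("enter"),
    "back":   lambda: send_key("escape"),
    "next":   lambda: send_key("tab"),
}
POINTER_BINDINGS = {
    "left": (-10, 0), "right": (10, 0),
    "up":   (0, -10), "down":  (0, 10),
}
CONFIDENCE_FLOOR = 0.8   # ignore anything the classifier isn't sure about

while True:
    label, confidence = classify_window()
    if confidence >= CONFIDENCE_FLOOR:
        if label in POINTER_BINDINGS:
            move_pointer(*POINTER_BINDINGS[label])   # nudge the cursor
        elif label in KEY_BINDINGS:
            KEY_BINDINGS[label]()                    # fire the shortcut
    time.sleep(0.1)                                  # roughly ten decisions per second

Whether a trained user could ever drive such a loop as fluently as a mouse (question 2) is the open part; the software glue is the easy bit.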
At my local hospital there is a “game” where one has to think a rocket up and down. I got to play with it, and after a few minutes it is very easy.
This was just a y-axis, but I guess it would be no problem to extend it with an x-axis.
Also note that this was about 10 years ago.
I’d be interested in some more details about this, if you still remember it. Could you modulate the speed at which the rocket moved? Did you need to spend some time setting up the computer so that it recognized your brainwaves? What kind of stuff did you have to think about in order to make the rocket move?
I’ll try to describe it as best I can. It is difficult to explain because it is very subjective, I think.
You know how, when you want to hear something better, you ‘shift your focus’ to your ears, and how you can do this with other parts of your body? (There’s a well-known meditation technique where you do this with your whole body, starting from your feet.)
I focused on a part of my brain like this. At first the rocket did nothing, but when I repeated this the rocket started to react. It did this very quickly, well under a minute. It did not react smoothly at first, however; after a while (maybe a minute) it reacted very well.
I remember testing whether focusing on other parts of my brain had any effect, which it did not.
EDIT: By thinking/focusing ‘louder’ I could make the rocket increase its speed.
The fact that I did this by focusing on a part of the brain was just my way of doing things. I just thought that I needed to create some electrical charge at a place in my brain where there were many electrodes. The operator told me to ‘just move it up’.
My interpretation is that it doesn’t move because you are thinking “please move up”, but that you are just pressing buttons with your head. The computer is just looking for peaks in the brainwave signal; maybe it can also read frustration or accomplishment, so it knows whether it did the right thing.
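That matches how most of these single-axis demos seem to work: the system isn’t decoding “move up” as a concept, it’s tracking the power of the signal at the chosen electrodes and mapping anything above a resting baseline onto upward velocity, so “focusing louder” really does mean more signal and more speed. A minimal sketch of such a loop, with read_band_power as a hypothetical stand-in for whatever the amplifier actually provides:

import time

def read_band_power():
    """Hypothetical: current signal power at the chosen electrode site(s)."""
    raise NotImplementedError

def measure_baseline(seconds=10, rate=10):
    """Average the signal while the user relaxes, to establish a resting level."""
    samples = [read_band_power() for _ in range(seconds * rate)]
    return sum(samples) / len(samples)

THRESHOLD = 1.2     # how far above baseline counts as deliberate "focusing"
GAIN = 5.0          # pixels of climb per update per unit of excess power

baseline = measure_baseline()
rocket_y = 240.0    # vertical position on screen, in pixels

while True:
    excess = read_band_power() - baseline
    if excess > THRESHOLD:
        rocket_y -= GAIN * excess   # "thinking louder" -> larger excess -> faster climb
    else:
        rocket_y += 1.0             # otherwise the rocket slowly sinks back down
    time.sleep(0.05)                # update ~20 times per second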
Sorry, this is all I can remember though… Recently they were testing me for a rare form of epilepsy. I had many electrodes glued to my head, going to an amplifier and then via an LPT connector to a recording box.
Next time I have to wear this at home, and then I will hook myself up to my computer. I’ll do some experiments and post the results.
Perhaps this game was based on the detection of meditative brain states through beta waves, like this toy? http://www.youtube.com/watch?v=IHA4j66MCa0
Doug Engelbart gave his famous “mother of all demos” back in 1968. It was not until Apple launched the Macintosh in 1984 that many of the concepts he demonstrated became mainstream. Some of the things he demonstrated have only recently become an everyday reality.
Speech recognition, and speech-driven user interfaces, have been touted since the early ’80s. We’ve had to wait until 2011 to get Siri, and whilst Siri does appear to be fairly smart, I strongly suspect it may take another decade for it to truly mature.
Back in the ’80s we all believed that we’d be using speech as a primary interaction method by the mid ’90s. We saw plenty of speech recognition demos back then, and it looked like the big problems had been solved. We vastly underestimated the complexity of the task.
There is no doubt that progress has been made in “brainwave interfaces”, with fairly impressive demonstrations of people moving mouse pointers with their minds, and monkeys moving robot arms. Direct, or indirect, neural interfacing is however several orders of magnitude more complex than speech interaction. I have no doubt that it’s a problem that will eventually get solved but I’m in my late 30s now and, given the historical rate of progress we’ve seen on other forms of human computer interaction, I doubt very much that it is something that is going to get solved within my lifetime.
This pretty much matches up exactly with my vision of the future of HCI. I didn’t really expect the technology to develop this fast, though…