Within the last few days we read the news about Apple’s Siri AI personal assistant, and about a brain implant that lets monkeys control virtual limbs and feel virtual objects. I believe that if someone combines a few more technologies (e.g. high-resolution eyewear, appropriate operating-system changes), we will be looking at the next user-interface revolution, following the inventions of the computer mouse and touch interfaces.
I’m a big fan of futuristic-everything. I’ve worked with user interfaces when I used to live in the UK, and since then I’ve always tried to think of possible ways to better the ways we interact with machines. Kind of a hobby of mine.
Let’s start by saying that the future is mobile; there’s little dispute about this. Desktop machines will only be used for very heavy, specialized purposes, the same way trucks are used today. Most people will simply own fast-enough mobile, portable devices rather than desktops. Even tablets will hit the ceiling of what people are willing to carry. Basically, anything bigger than a 5″ smartphone will be too much to carry with us. In time, carrying a 10″ tablet will be seen no differently than we now view the businessmen of 1981 lugging around the Osborne 1 “portable” computer.
The biggest problem such small devices face is their minuscule screen, while the web treats SXGA and higher resolutions as a minimum. Sure, they all use very high resolutions these days (1080p is around the corner, 720p is already here), but the small physical screen poses problems for applications that actually require a lot of widgets, or a large working area with plenty of screen real estate (e.g. most modern web sites, serious video/still editors, 3D editors, etc.).
There are two ways to deal with the problem. One is holographic projected displays. Unfortunately, we’re technologically far away from anything like that, and no one really wants to see your huge display in the middle of the street while you’re trying to get directions. Even if we had the technology, it would be too visually messy for everyone else. The second way, which is both a cleaner solution and closer to our technological reach, is a display projected via glasses, driven by brainwaves and an AI-oriented OS.
The idea is this: a classic-looking smartphone in the pocket, connected wirelessly (via Bluetooth or WiFi) to special eyewear (which doubles as prescription glasses). When the glasses are activated, a transparent, high-resolution data screen appears in front of the user at a certain perceived distance, similar to the heads-up display technology in fighter-jet helmets. Voice recognition will only be an alternative way to use the system. Why use voice recognition when we are so close to controlling the UI with brainwaves alone? The video below showcases an early version of such technology that requires no surgical implants:
Our AI assistant doesn’t always need to talk back to us; it can silently execute the actions our brain has ordered. The operating system supporting such feedback of course needs to be tweaked accordingly, and some applications might need to be redesigned too. No need to click endless buttons in Blender, for example: you simply think of what you want to accomplish and the app does it (if it has the ability).
Let’s not get ahead of ourselves though. A technology that does all that in a nice, neat, commercial-ready package is at least 10 years off. Right now, the various technologies involved are both in their infancy and fragmented across various companies and research institutes. Obviously, the visionary tech company that first secures the patents or rights for all the needed parts will have a good head start.
For the first time, such a new user-input design is not mostly hardware, as older UI inventions were. In fact, most of the needed hardware already exists in one form or another, but the software needs to be rethought quite drastically. Maybe new programming languages will need to be invented to assist in the development of such thought-oriented software. Security would need to be extra tight too!
The impact of such a UI design is of course tremendous. Work can happen much faster (especially CGI), virtual reality will actually feel like a reality we can touch, and augmented reality will change how we function. In essence, we would be connecting with each other telepathically!
As systems-on-a-chip become smaller, the smartphone in the pocket will eventually be replaced by a wristwatch-sized gadget, and eventually by the eyewear itself. There is no reason to carry two devices when we at last have the technology to fit everything into one.
And even further down the road we won’t need eyewear at all. A small device attached somewhere on our head will be able not only to receive but also to send brainwaves, making us “see” the data in our head (using the same shortcut through which hallucinating people see things that don’t really exist). It remains to be seen how mentally safe such an approach really is. But one thing is for sure: such a design would use our body as an antenna for the closest “tower”, and possibly as a power source too.
The era of real cyborgs will begin. And it will feel like nothing more than natural technological evolution.
Note: This is an updated article from the one I published last year on my blog.