When computers – versatile machines that can be reprogrammed at will – became widely available, they generated a well-deserved buzz. Some (scientists, banks, insurance companies) felt like they were dreaming; others (like some SF writers) rather saw a nightmare. Today, like it or not, they're everywhere. However, a growing share of consumer-oriented computers are drifting away from the original programmable-machine model. They look more like ordinary tools with a fixed purpose, whose customizable internals are only accessible to the people who engineered them. Is there no market anymore for general-purpose personal programmable machines, able to do just about anything? I'll try to answer this question, taking into account two major trends in the personal computing market: touchscreen-powered devices and cloud computing.
The first thing that may kill personal computing as we know it today, from Windows to Photoshop, is the hype around touchscreens. Most cellphone and laptop/desktop computer manufacturers seem to see them as the interface of tomorrow. It's fun to use, it's rather intuitive (just point with your finger, as your mother always forbade you to do), and it makes zooming and rotating images much, much nicer, but I think that general-purpose computing cannot exist on a touchscreen device, for the following reasons:
Touchscreens are operated with fingers. Aside from being greasy and leaving ugly traces on the screen, fingers are quite big: just put your finger next to your mouse pointer to see what I mean. Moreover, they're not a precise way of pointing at something: if you try to quickly put your finger on some on-screen target, you'll actually hit somewhere within a circle roughly one inch in diameter around the spot you were aiming at. This is partly because your finger as you see it (a big cylinder with a rounded end) is not at all the same as your finger as the screen sees it (an ugly blob corresponding to the part of your finger's surface that touches the screen), and short of completely reinventing the way touchscreens work, there's no way to overcome this issue.
All this means that no one can seriously imagine using a regular computer interface with a touchscreen. Windows 7 perfectly illustrates the failure of this way of thinking. Controls have to be made a lot bigger, say 3–4 times bigger, which means dividing the information density on screen accordingly. Unless computers become gigantic, there's no way one could use Adobe Photoshop / GIMP, Adobe Illustrator / Inkscape, a text editor, or any other kind of high-end interactive software on a touchscreen. A phone-sized touchscreen can only display some toys, a tablet-sized touchscreen can reach feature parity with a button-powered phone, a laptop-sized touchscreen can reach feature parity with a regular netbook, and a touchscreen device with feature parity with modern desktop software would require a Surface-sized screen, which would both cost an arm and a leg and be highly impractical to use.
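To put rough numbers on the density claim above, here is a back-of-the-envelope sketch. The screen dimensions and the target sizes are assumptions picked for illustration, not measurements; the finger target is sized to stay comfortably inside the ~1 inch accuracy circle mentioned earlier:

```python
# Back-of-the-envelope estimate of how many controls fit on screen
# for pointer-sized vs. finger-sized targets.
# Assumptions (hypothetical numbers): a laptop display of roughly
# 11" x 6.5" of usable area, mouse-friendly targets about 0.25" per
# side, and finger-friendly targets about 0.75" per side.

SCREEN_W_IN = 11.0
SCREEN_H_IN = 6.5

def max_controls(target_side_in: float) -> int:
    """How many square targets of the given side length tile the screen."""
    cols = int(SCREEN_W_IN // target_side_in)
    rows = int(SCREEN_H_IN // target_side_in)
    return cols * rows

mouse_targets = max_controls(0.25)   # pointer-precision controls
finger_targets = max_controls(0.75)  # finger-precision controls

print(mouse_targets, finger_targets, mouse_targets / finger_targets)
```

Note that because targets grow in two dimensions, tripling the side length divides the number of controls by roughly nine, so the loss of information density is even steeper than the linear "3–4 times" figure suggests.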
- Interaction capabilities:
Let's just take any modern 3-button mouse, without even considering what one can do with a keyboard or extra mouse buttons. Using that mouse, the available interactions are hovering, clicking, double-clicking, right-clicking, scrolling with the wheel, and clicking the scroll wheel. On a touchscreen, you get two general-purpose interactions: tapping and scrolling. Zooming and rotating may be added, but they tend to produce false positives in gesture detection, and more importantly have a very narrow range of applications. Double-tapping is rarely used because it's far less practical than having a button under your finger, and the currently proposed right-click replacements (two-finger tapping and tap-and-hold) are so unintuitive and impractical that they will probably never make it into everyday use.
This means that touchscreens not only reduce information density, but also make context-sensitive menus and actions less practical. The lack of hovering reduces the ability to provide context-sensitive information too: you have to put labels everywhere to explain in detail what each control does, instead of relying on tooltips and other cursor-changing tricks. This makes complex touchscreen-oriented interfaces even more complex than their regular equivalents, because of the greater amount of information your brain has to decode when it encounters them.
Some people like on-screen keyboards, some hate them, but almost everyone agrees that they're no good for typing large amounts of text (say, several pages), whether because of arm strain, finger strain, or simply slow typing speed. The latter is partly caused by the need to rely on slow eye-hand coordination instead of fast tactile feedback, and partly by the fact that you have to scroll more often to check what you're typing, since on-screen keyboards take up most of the screen and only let you see the last few lines you've typed.
There's a lot to say about the ergonomic implications of controls that keep popping up all over the screen, with nothing staying in place, but I'll restrict myself to noting that if you can't type a lot of text, you can't work on your device, and more importantly you can't program anything on it. So touchscreen devices completely differ from the regular computers they're supposed to replace in that they're not self-sufficient: to write code for one, you either need to plug a keyboard into it (in which case it's no longer a pure touchscreen device), or to use another computer into which a keyboard can be plugged.
As one can see, if touchscreen devices become the norm in personal computing (which remains to be seen), general-purpose computing almost certainly dies in that area. I'm not sure whether the touchscreen thing will make its way into the office; I don't think so, because of the aforementioned keyboard issue. But even if it doesn't, general-purpose computing looks doomed in the office too, due to managers' growing obsession with controlling employees' activities, which will almost certainly lead to a full lock-down of office computers for anything that is not work-oriented. That is, if cloud computing does not catch up first…
In the stone age of computing, computers were so expensive that only huge companies could afford one. These companies then rented out computing power for insanely high fees to the people who really needed it, exercising dictatorial control over their data and actions. Then, as minicomputers and microcomputers appeared and computer prices dropped, individuals and organizations managed to free themselves from that horrible system, which more or less disappeared from the surface of the Earth; only traces of it remain, in the form of supercomputer rental for a few highly compute-intensive tasks. However, many companies have fond memories of those times and would like to bring them back in the form of what's called “cloud computing”, the second weapon of mass destruction targeting general-purpose personal computers.
Cloud computing is about having people rely on online services (such as Google's or Apple's) for tasks where, today, they could use their own computer and remain independent of those services and their owners. Cloud advocates make their case by invoking reliability and simplicity (those companies do backups, don't open executable files from spam while running as an admin user, don't try to crack software, and generally have skilled computer engineers taking care of the system and its data, so they do get much more reliable and snappy systems than the average Joe's computer), along with the ability to access one's data from everywhere (relying on the assumption that the user won't realize he's not the only one getting ubiquitous access to his data). Interestingly enough, the cloud concept sometimes merges with the idea of getting everything from a single vendor for the sake of better consistency (giving said vendor insane power, amounts of money, and knowledge of your life).
As I said before, the cloud idea is inherently opposed to that of general-purpose, “smart” personal computers. Cloud companies, on the contrary, want their users to get dumb terminals: computers that are almost solely able to browse the web (and, more specifically, optimized for access to the vendor's online services). By almost, I mean that some “transition” OSes exist at the moment: since some tasks that can be done offline can't yet be done online, those OSes allow users to do such things locally while pushing every other task “onto the cloud”. The clearest example of a cloud-oriented OS is Google's Chrome OS, with Apple's iPhone OS being another take on the cloud concept that leads to the same result, but with more power and less reliability than a purely web-based OS because it uses native code in places.
If the cloud idea catches on and proves technically realistic (whether games and video editing can be put on the cloud remains to be seen), general-purpose personal computers as we know them will gradually disappear, replaced by dumb devices that can be carried everywhere and are almost solely designed for using online services. As network technology improves, more and more things will be done on the web and fewer locally, making the desktop and laptop market shrink dramatically.
In the end, Google and Apple will get into some big fight for world domination and the Earth will be destroyed through nuclear weapons… Just kidding. I think.
And you, what do you think about this? Do you think that general-purpose personal computing is going to become marginal, and that low-end computers which are nothing more than tools are the way to go for individuals? Or that this market is here to stay, that touchscreen devices will manage to break into general-purpose computing, that we'll reach some kind of equilibrium where both kinds of devices have their respective place, and that the whole cloud computing thing will be brought to a halt by enlightened people suddenly realizing how wrong the idea of giving so much power to big companies is?
About the author:
Neolander is a physics student and a computer enthusiast who spends too much of his spare time looking at computer-oriented news sites, writing an OS, and reading computer science books. He's the author of http://theosperiment.wordpress.com, a blog about his attempts at designing and building a personal-computing-oriented OS, currently in the process of moving to WordPress for un-googling purposes.
I believe that all these devices and concepts have their uses. The only thing that differs is the number of users, but even the dumbest gadgets are used by someone. That's why general-purpose computing (GPC) will probably never completely die out.
Now, the GPC has proven its worth over a long period of time. That, and the very limitations of the specialized devices presented in the article, in my opinion ensure that the GPC is here to stay for the foreseeable future.
New ideas emerge (or old ones get reevaluated) and naturally, to make room for them, the current ones have to suffer somewhat. However, I don't think the GPC is in any real danger, as the new technologies are not valid replacements (yet).
The insurance for a technology's survival is demand, and I think a lot of people will demand the GPC for a long time to come. Me included.
It’s true that more and more specialized computing devices have entered and are entering our lives. TV-recorders, game consoles, phones, navigation systems, mobile media players, restaurant order PDAs, cash registers, ATMs and countless other categories.
But on the other hand, general-purpose computers are still used, and in great numbers. Their main strength remains being able to do everything in one device (games, media playback, calculation). It's the only machine that can gain additional functionality at any time. The downside, of course, is reduced usability and reliability.
Specialized devices are exactly what the name implies – specialized. They allow more people access to technology to improve their lives. E.g. the iPad makes the Web accessible to people who never know which mouse button to click or what a file is supposed to be. This is a good thing.
Not everybody needs a general-purpose computer of their own. But lots of people do, and always will.
All this complaining à la Cory Doctorow about the end of computing as we know it reminds me of another time: the time we finally got easy-to-use GUIs for our PCs. All the old guard complained about how this would dumb everything down too much – soon none of these WIMPs would know how to properly use a computer anymore. They were wrong then and they are wrong now.
Every one of us will use a lot more computers in the future than we do today. More people who don't use one today will use one in the future, too. And of those, only a fraction will be PCs as we know them today. But they won't cease to exist.
We must take into consideration that most (I said MOST) of today's average computer users don't care about that kind of thing. They just want to turn on their machines and look at some social networking website. If it's “cool”, if “everybody is using it”, why would they care that all their personal data is stored remotely? So what if the page has more ads than content?
Sometimes I feel that people like us will become some sort of “resistance” against some giant company that will rule the world. Sounds stupid now, but maybe in a few years?
Suddenly I remembered the movie WALL-E. All humans had a screen right in their faces. They didn’t even need to touch it. And they liked it…
Yep, general purpose computing will die. Enterprises will replace thousands of PCs with iPads. Workers will type emails on fiddly touch screens. Sensitive proprietary data will be stored with Google. CAD drawings will be produced on iPhones.
This seems like a general get-off-my-lawn argument. You are comparing older applications to a newer interface. Yes, the current version of Photoshop won't work well with a touch screen, but that doesn't mean it's the end: Photoshop will need to be redesigned for the touch screen. There are a lot of things you can do with a touch screen that you cannot do with a mouse; pinch zoom is one of them. Sure, you can zoom in with a mouse gesture, but a pinch zoom is often more useful.
Secondly, the argument is really based on pixel-based design, which is slowly going away as the pixel isn't as accurate as it used to be. With anti-aliasing and image compression, if you are off by a few pixels, so what?
The argument smells a lot like when we moved off the command line to point-and-click WIMP interfaces: how the mouse just isn't as accurate or as functional as the keyboard, and how it really makes things that much worse.