The first thing that may kill personal computing as we know it today, from Windows to Photoshop, is the hype around touchscreens. Most manufacturers of cellphones, laptops, and desktops seem to see them as the interface of tomorrow. They're fun to use, rather intuitive (just point with your finger, as your mother always forbade you to do), and they make zooming and rotating images much, much nicer. But I think that general-purpose computing cannot exist on a touchscreen device, for the following reasons:
- Pointing precision and information density:
Touchscreens are operated with fingers. Aside from being greasy and leaving ugly traces on the screen, fingers are quite big: just put your finger next to your mouse pointer to see what I mean. Moreover, they're not a precise pointing device: if you try to quickly put your finger on an on-screen target, you'll actually hit somewhere within a circle roughly one inch in diameter around the spot you were aiming at. This is partly because your finger as you see it (a big cylinder with a rounded end) is not at all the same as your finger as the screen sees it (an ugly blob corresponding to the part of your finger's surface that actually touches the screen), and short of completely reinventing the way touchscreens work, there's no way to overcome this issue.
All this means that no one can seriously imagine using a regular computer interface with a touchscreen; Windows 7 perfectly illustrates the failure of that way of thinking. Controls have to be made much bigger, say 3–4 times bigger, which means dividing the on-screen information density by 3 or 4. Unless computers become gigantic, there's no way to use Adobe Photoshop / GIMP, Adobe Illustrator / Inkscape, a text editor, or any other kind of high-end interactive software with a touchscreen. A phone-sized touchscreen can only display toys, a tablet-sized touchscreen can reach feature parity with a button-driven phone, a laptop-sized touchscreen can reach feature parity with a regular netbook, and a touchscreen device with feature parity with modern desktop software would require a Surface-sized screen, which would both cost an arm and a leg and be highly impractical to use.
- Interaction capabilities:
Let's just take any modern three-button mouse, without even considering what one can do with a keyboard or extra mouse buttons. With that mouse, the available interactions are hovering, clicking, double-clicking, right-clicking, scrolling with the wheel, and clicking the scroll wheel. On a touchscreen, you get two general-purpose interactions: tapping and scrolling. Zooming and rotating gestures may be added, but they tend to produce false positives in gesture detection and, more importantly, have a very narrow range of applications. Double-tapping is rarely used because it's far less practical than having a button under your finger, and the currently proposed right-click replacements (two-finger tapping and tap-and-hold) are so unintuitive and impractical that they will probably never make it into everyday use.
This means that touchscreens not only reduce information density, they also make context-sensitive menus and actions less practical. The lack of hovering also reduces the ability to present context-sensitive information: you have to put labels everywhere to explain in detail what each control does, instead of relying on tooltips and other cursor-changing tricks for that purpose. This makes complex touchscreen-oriented interfaces even more complex than their regular equivalents, because of the larger amount of information your brain has to decode when it encounters them.
- Text input:
Some people like on-screen keyboards and some hate them, but almost everyone agrees that they're not good for typing large amounts of text (think several pages), be it because of arm strain, finger strain, or simply slow typing speed. The latter is partly caused by the need to rely on slow eye-hand coordination instead of fast tactile feedback, and partly because you have to scroll more often to check what you're typing: on-screen keyboards take up most of the screen and only let you see the last few lines you've typed.
There's a lot to say about the ergonomic implications of controls that keep appearing all over the screen, with nothing staying in place, but I'll restrict myself to this: if you can't type a lot of text, you can't work on your device, and more importantly you can't program anything on it. Touchscreen devices thus differ completely from the regular computers they're supposed to replace in that they're not self-sufficient: to write code for one, you need to plug a keyboard into it (in which case it's no longer a pure touchscreen device), or to use another computer into which a keyboard can be plugged.
As one can see, if touchscreen devices become the norm in personal computing (which remains to be seen), general-purpose computing almost certainly dies in that area. I'm not sure whether touchscreens will make their way into the office; I don't think so, because of the aforementioned keyboard issue. But even if they don't, general-purpose computing looks doomed there too, due to managers' new obsession with controlling employees' activities, which will almost certainly lead to a full lock-down of office computers for anything that is not work-related. That is, if cloud computing doesn't get there first...
In the stone age of computing, computers were so expensive that only huge companies could afford one. These companies then rented out computing power for insanely high fees to people who really needed it, exercising dictatorial control over their data and actions. Then, as minicomputers and microcomputers appeared and computer prices dropped, individuals and organizations managed to break free from that horrible system, which more or less disappeared from the surface of the Earth; only traces of it remain, in the form of supercomputer rental for a few highly compute-intensive tasks. However, many companies have fond memories of those times and would like to bring them back in the form of what's called "cloud computing", the second weapon of mass destruction aimed at general-purpose personal computers.
Cloud computing is about having people rely on online services (such as Google's or Apple's) for tasks where they could today use their own computer and remain independent of those services and their owners. Cloud advocates invoke reliability and simplicity: those companies do backups, don't open executable attachments from spam while running as an admin user, don't try to crack software, and generally have skilled engineers taking care of the system and its data, so they do indeed get much more reliable and snappy systems than the average Joe's computer. They also invoke the ability to access one's data from everywhere, relying on the assumption that the user is naive enough not to realize that he won't be the only one getting ubiquitous access to his data. Interestingly enough, the cloud concept sometimes merges with the idea of getting everything from a single vendor for the sake of better consistency, giving said vendor insane power, amounts of money, and knowledge of your life.
As I said before, the cloud idea is inherently opposed to that of general-purpose, "smart" personal computers. Cloud companies, on the contrary, want their users to have dumb terminals: computers that are almost solely able to browse the web (and, more specifically, optimized for accessing the vendor's online services). By "almost", I mean that some "transition" OSes exist at the moment: since some tasks that can be done offline can't yet be done online, those OSes allow users to do such things locally while putting every other task "on the cloud". The clearest example of a cloud-oriented OS is Google's Chrome OS, with Apple's iPhone OS being another take on the cloud concept that leads to the same result, but with more power and less reliability than a purely web-based OS because it uses native code in places.
If the cloud idea catches on and proves technically realistic (whether games and video editing can be moved to the cloud remains to be seen), general-purpose personal computers as we know them will gradually disappear, replaced by dumb devices that can be carried everywhere and are almost solely designed for using online services. As network technology improves, more and more things will be done on the web and fewer locally, making the desktop and laptop market shrink dramatically.
In the end, Google and Apple will get into some big fight for world domination and the Earth will be destroyed through nuclear weapons... Just kidding. I think.
And you, what do you think about this? Do you think that general-purpose personal computing will become marginal, and that low-end computers which are nothing more than appliances are the way to go for individuals? Or that this market is here to stay, that touchscreen devices will manage to get into general-purpose computing, that we'll reach some kind of equilibrium where both kinds of device have their place, and that the whole cloud computing thing will be stopped by enlightened people suddenly realizing how wrong the idea of giving so much power to big companies is?
About the author:
Neolander is a physics student and computer enthusiast who spends too much of his spare time reading computer-oriented news sites, writing an OS, and reading computer science books. He's the author of http://theosperiment.wordpress.com, a blog about his attempts at designing and building a personal-computing-oriented OS, currently in the process of moving to WordPress for un-googling purposes.