Is General-Purpose Personal Computing Doomed?

When computers (versatile machines that can be reprogrammed at will) became widely available, they generated a well-deserved buzz. Some (scientists, banks, insurance companies) felt like they were dreaming; others (like some SF writers) saw them rather as a nightmare. Today, like it or not, they're everywhere. However, a growing share of individual-oriented computers are drifting away from the original programmable-machine model. They look more like ordinary tools, with a fixed use. Their customizable internals are only accessible to the people who engineered them. Is there no market anymore for general-purpose personal programmable machines, able to do just about anything? I'll try to answer this question, taking into account two major trends in the personal computing market: touchscreen-powered devices and cloud computing.

The first thing that may kill personal computing as we know it today, from Windows to Photoshop, is the hype around touchscreens. Most manufacturers of cellphones and laptop/desktop computers seem to see them as the interface of tomorrow. They're fun to use, rather intuitive (just point with your finger, as your mother always forbade you to do), and they make zooming and rotating images much, much nicer. But I think that general-purpose computing cannot exist on a touchscreen device, for the following reasons:

Precision:

Touchscreens are operated with fingers. Aside from being greasy and leaving ugly traces on the screen, fingers are quite big. Just put your finger next to your mouse pointer to see what I mean. Moreover, they're not a precise way of pointing at something: if you try to quickly put your finger on an on-screen target, you'll actually hit somewhere within a circle roughly one inch in diameter around the spot you were aiming at. This is partly because your finger as you see it (a big cylinder with a rounded end) is nothing like your finger as the screen sees it (an ugly blob corresponding to the part of your finger's surface that touches the screen), and short of completely reinventing the way touchscreens work, there's no way to overcome this issue.

All this means that no one can seriously imagine using a regular computer interface with a touchscreen. Windows 7 perfectly illustrates the failure of this way of thinking. Controls have to be made a lot bigger, say 3–4 times bigger, which means dividing the information density on screen by 3 or 4. Unless computers become gigantic, there's no way to use Adobe Photoshop / GIMP, Adobe Illustrator / Inkscape, a text editor, or any other kind of high-end interactive software on a touchscreen. A phone-sized touchscreen can only display some toys, a tablet-sized touchscreen can reach feature parity with a button-operated phone, a laptop-sized touchscreen can reach feature parity with a regular netbook, and a touchscreen device offering feature parity with modern desktop software would require a Surface-sized screen, which would both cost an arm and a leg and be highly impractical to use.
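To make that density argument concrete, here is a back-of-the-envelope sketch in TypeScript. Both target sizes are my own illustrative assumptions (a small toolbar icon for the mouse, a common fingertip-sized guideline for touch), not measured values:

```typescript
// Rough estimate of how much screen real estate touch controls cost.
// The two target sizes below are assumptions for illustration only.

const mouseTargetPx = 16; // assumed comfortable mouse target: a small toolbar icon
const touchTargetPx = 48; // assumed reliable fingertip target: ~0.5 in at 96 dpi

const linearRatio = touchTargetPx / mouseTargetPx; // 3x bigger per side
const areaRatio = linearRatio * linearRatio;       // 9x more screen area per control

console.log(`Each control grows ${linearRatio}x per side (${areaRatio}x in area),`);
console.log(`so a touch UI fits roughly ${linearRatio}x fewer controls per row.`);
```

Under these assumptions, the linear factor alone already matches the 3–4× figure above; counted in screen area, the cost is even steeper.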

Interaction capabilities:

Let's just take any modern 3-button mouse, without even thinking about what one can do with a keyboard or extra mouse buttons. Using that mouse, the available regular interactions are hovering, clicking, double-clicking, right-clicking, scrolling with the wheel, and clicking on the scroll wheel. On a touchscreen, you get two general-purpose interactions: tapping and scrolling. Zooming and rotating may be added, but they tend to produce false positives in gesture detection and, more importantly, have a very narrow range of applications. Double-tapping is rarely used because it's far less practical than having a button under your finger, and the currently proposed right-click replacements (two-finger tapping and tap-and-hold) are so unintuitive and impractical that they will probably never make it into everyday use.

This means that touchscreens not only reduce information density, but also make context-sensitive menus and actions less practical. The lack of hovering reduces the ability to provide context-sensitive information too: you have to put labels everywhere to explain in detail what each control does, instead of relying on tooltips and other cursor-changing tricks (as the sketch below illustrates). This makes complex touchscreen-oriented interfaces even more complex than their regular equivalents, because of the extra information your brain has to decode when it encounters them.
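Here is a minimal sketch of the hover problem, assuming a browser DOM; the element ids are hypothetical:

```typescript
// Mouse: context help rides on the cursor and costs no permanent screen space.
const tool = document.querySelector<HTMLElement>("#crop-tool")!;
const tip = document.querySelector<HTMLElement>("#crop-tooltip")!;

tool.addEventListener("mouseenter", () => { tip.hidden = false; });
tool.addEventListener("mouseleave", () => { tip.hidden = true; });

// Touch: there is no cursor to hover with. Browsers may synthesize mouse
// events on tap, but they arrive together with the tap itself, too late to
// explain a control before it is activated. The practical fallback is a
// permanent on-screen label, which is exactly the density cost described above.
const label = document.createElement("span");
label.textContent = "Crop selection";
tool.after(label);
```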

Keyboard:

Some people like on-screen keyboards and some hate them, but almost everyone agrees that they're not suited to typing large amounts of text (think several pages), whether because of arm strain, finger strain, or simply slow typing. The slowness is partly caused by having to rely on slow hand-eye coordination instead of fast tactile feedback. It's also partly caused by the need to scroll more often to check what you're typing, since on-screen keyboards take up most of the screen and only let you see the last few lines you've typed.

There's also a lot to say about the ergonomic implications of controls that keep appearing all over the screen, with nothing staying in place, but I'll limit myself to this: if you can't type a lot of text, you can't work on your device, and more importantly you can't program anything on it. So touchscreen devices completely differ from the regular computers they're supposed to replace in that they're not self-sufficient: to write code for one, you need to plug a keyboard into it (in which case it's not a pure touchscreen device anymore), or to use another computer into which a keyboard can be plugged.

As one can see, if touchscreen devices become the norm in personal computing (which remains to be seen), general-purpose computing almost certainly dies in that area. I'm not sure whether touchscreens will make their way into the office; I don't think so, because of the aforementioned keyboard issue. But even if they don't, general-purpose computing looks doomed there too, due to managers' new obsession with controlling employees' activities, which will almost certainly lead to office computers being fully locked down for anything that is not work-oriented. That is, if cloud computing does not catch up first…

In the stone age of computing, computers were so expensive that only huge companies could afford one. These companies then rented out computing power for insanely high fees to the people who really needed it, exercising dictatorial control over their data and actions. Then, as minicomputers and microcomputers appeared and computer prices dropped, individuals and organizations managed to become independent from that horrible system, which more or less disappeared from the surface of the Earth; only traces of it remain, in the form of supercomputer rental for a few highly compute-intensive tasks. However, many companies have very fond memories of those times and would like to bring them back in the form of what's called “cloud computing”, the second weapon of mass destruction targeting general-purpose personal computers.

Cloud computing is about having people rely on online services (such as Google's or Apple's) for tasks where they could today use their own computer and remain independent from those services and their owners. Cloud advocates make their case by invoking reliability and simplicity: those companies do backups, don't open executable attachments from spam while running as an admin user, don't try to crack software, and generally have skilled computer engineers taking care of the system and its data, so they do end up with much more reliable and snappy systems than the average Joe's computer. They also invoke the ability to access one's data from anywhere, glossing over the fact that the user won't be the only one enjoying that ubiquitous access. Interestingly enough, the cloud concept sometimes merges with the idea of getting everything from a single vendor for the sake of better consistency, giving said vendor insane power, amounts of money, and knowledge of your life.

As I said before, the cloud idea is inherently opposed to that of general-purpose, “smart” personal computers. Cloud companies, on the contrary, want their users to get dumb terminals: computers that are almost solely able to browse the web (and, more specifically, optimized for accessing the vendor's online services). By almost, I mean that some “transition” OSes exist at the moment: since some tasks that can be done offline can't yet be done online, those OSes allow users to do such things, while putting every other task “on the cloud”. The clearest example of a cloud-oriented OS is Google's Chrome OS, with Apple's iPhone OS being another take on the cloud concept which leads to the same result, but has more power and less reliability than a purely web-based OS because it uses native code in places.

If the cloud idea catches on and proves technically realistic (whether games and video editing can be put on the cloud remains to be seen), general-purpose personal computers as we know them will gradually disappear, replaced by dumb devices that can be carried everywhere and are almost solely designed for using online services. As network technology improves, more and more things will be done on the web and fewer things will be done locally, making the desktop and laptop market shrink dramatically.

In the end, Google and Apple will get into some big fight for world domination and the Earth will be destroyed by nuclear weapons… Just kidding. I think.

And you, what do you think about this? Do you think that general-purpose personal computing is going to become marginal, and that low-end computers which are nothing more than fixed-purpose tools are the way to go for individuals? Or that this market is here to stay, that touchscreen devices will manage to support general-purpose computing, that we'll reach some kind of equilibrium where both kinds of devices have their respective place, and that the whole cloud computing thing will be brought to a halt by enlightened people suddenly realizing how wrong it is to give so much power to big companies?

About the author:
Neolander is a physics student and a computer enthusiast who spends too much of his spare time looking at computer-oriented news sites, writing an OS, and reading computer science books. He's the author of http://theosperiment.wordpress.com, a blog about his attempts at designing and building a personal computing-oriented OS, currently in the process of moving to WordPress for un-googling purposes.
