When computers, flexible machines that can be reprogrammed at will, became widely available, they generated a well-deserved buzz. Some (scientists, banks, insurance companies) felt like they were dreaming; others (like some SF writers) saw them rather as a nightmare. Today, like it or not, they're everywhere. However, a growing share of individual-oriented computers is drifting away from the original programmable-machine model. They look more like ordinary tools with a fixed purpose, whose customizable internals are only accessible to the people who engineered them. Is there no market left for general-purpose personal programmable machines, able to do just about anything? I'll try to answer this question, taking into account two major trends in the personal computing market: touchscreen-powered devices and cloud computing.
The first thing that may kill personal computing as we know it today, from Windows to Photoshop, is the hype around touchscreens. Most cellphone and laptop/desktop computer manufacturers seem to see them as the interface of tomorrow. They're fun to use, rather intuitive (just point with your finger, as your mother always forbade you to do), and they make zooming and rotating images much, much nicer, but I think that general-purpose computing cannot exist on a touchscreen device, for the following reasons:
- Precision:
Touchscreens are manipulated using fingers. Aside from being greasy and leaving ugly traces on the screen, fingers are quite big. Just put your finger next to your mouse pointer to see what I mean. Moreover, they're not a precise way of pointing at something: if you try to quickly put your finger on some on-screen target, you'll actually hit somewhere in a circle roughly one inch in diameter around the place you were looking at. This is partly due to the fact that your finger as you see it (a big cylinder with a rounded end) is not at all the same as your finger as the screen sees it (an ugly shape corresponding to the part of your finger's surface that touches the screen), and short of completely reinventing the way touchscreens work, there's no way to overcome this issue.
All this means that no one can seriously imagine using a regular computer interface with a touchscreen. Windows 7 perfectly illustrates the failure of this way of thinking. Controls have to be made a lot bigger, say 3-4 times bigger, which means dividing the information density on screen by 3 or 4. Unless computers become gigantic, there's no way one could use Adobe Photoshop / GIMP, Adobe Illustrator / Inkscape, a text editor, or any other kind of high-end interactive software with a touchscreen. A phone-sized touchscreen can only display some toys, a tablet-sized touchscreen can get feature parity with a button-powered phone, a laptop-sized touchscreen can get feature parity with a regular netbook, and a touchscreen-powered device that allows feature parity with modern desktop software would require a Surface-sized touchscreen, which would both cost an arm and a leg and be highly impractical to use.
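To put rough numbers on that density claim, here is a minimal back-of-the-envelope sketch in Python. The screen dimensions and the 6 mm / 12 mm target sizes are assumptions chosen for illustration, not measurements:

```python
# Back-of-the-envelope illustration of the information-density argument.
# The screen dimensions and target sizes below are rough assumptions,
# not measured values.

def controls_per_screen(screen_w_mm, screen_h_mm, target_mm):
    """How many square targets of a given side length fit on the screen."""
    return (screen_w_mm // target_mm) * (screen_h_mm // target_mm)

# A 13.3" 16:9 laptop panel is roughly 294 mm x 166 mm (assumed).
screen_w, screen_h = 294, 166

mouse_target_mm = 6    # assumed comfortable size for a mouse-driven control
finger_target_mm = 12  # assumed comfortable size for a finger-driven control

mouse_controls = controls_per_screen(screen_w, screen_h, mouse_target_mm)
touch_controls = controls_per_screen(screen_w, screen_h, finger_target_mm)

print(f"Mouse-sized controls per screen:  {mouse_controls}")
print(f"Finger-sized controls per screen: {touch_controls}")
print(f"Density ratio: {mouse_controls / touch_controls:.1f}x")
```

With these assumed sizes, the count of finger-sized controls comes out at roughly a quarter of the count of mouse-sized ones, in line with the 3-4x figure above.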
- Interaction capabilities:
Let's just take any modern 3-button mouse, and not even think about what one can do using a keyboard and more buttons on the mouse. Using that mouse, the available regular interactions are hovering, clicking, double-clicking, right-clicking, scrolling with the wheel, and clicking the scroll wheel. On a touchscreen, you get two general-purpose interactions: tapping and scrolling. Zooming and rotating may be added, but they tend to result in false positives in gesture detection and, more importantly, have a very specific range of applications. Double-tapping is rarely used because it's far less practical than having a button under your finger, and the currently proposed right-click replacements (two-finger tapping and tap-and-hold) are highly unintuitive and impractical, so they will probably never make it into everyday use.
This means that touchscreens not only reduce information density, but also make context-sensitive menus and actions less practical. The lack of hovering reduces the ability to provide context-sensitive information too: you have to put labels everywhere in order to explain in detail what each control does, instead of using tooltips and other cursor-changing tricks for that purpose. This makes complex touchscreen-oriented interfaces even more complex than their regular equivalents, because of the greater amount of information that your brain has to decode when it encounters them.
- Keyboard:
Some people like on-screen keyboards, some hate them, but almost everyone agrees that they're not good for typing large amounts of text (say, several pages), be it because of arm strain, finger strain, or simply slow typing speed. The latter is partly caused by the need to rely on slow eye-hand coordination instead of fast tactile feedback, and partly by the fact that you more often have to scroll in order to check what you're typing, since on-screen keyboards take up most of the screen and only let you see the last few lines you've been typing.
There's a lot to say about the ergonomic implications of controls that keep appearing all over the screen, with a lack of things that stay in place, too, but I'll limit myself to pointing out that if you can't type a lot of text, you can't work on your device, and more importantly you can't program anything on it. So touchscreen devices completely differ from the regular computers they're supposed to replace in that they're not independent: to write code for one, you need to plug a keyboard into it (in which case it's not a pure touchscreen device anymore), or to use another computer into which a keyboard can be plugged.
As one can see, if touchscreen devices become the norm in personal computing (which remains to be seen), general-purpose computing almost certainly dies in that area. I'm not sure whether the touchscreen thing will make its way into the office or not; I don't think so, because of the aforementioned keyboard issue. Even if it doesn't, it looks like general-purpose computing is also doomed there due to managers' growing obsession with controlling employees' activities, which will almost certainly lead, eventually, to a full lock-down of office computers for anything that is not work-oriented. That is, if cloud computing does not catch up first…
In the stone age of computing, computers were so expensive that only huge companies could afford one. These companies then rented out computing power for insanely high fees to the people who really needed some, exercising dictatorial control over their data and actions. Then, as minicomputers and microcomputers appeared and computer prices dropped, individuals and organizations managed to become independent from that horrible system, which more or less disappeared from the surface of the Earth. Only traces of it remain, in the form of supercomputer rental for a few highly computation-intensive tasks. However, many companies have fond memories of those times and would like to bring them back in the form of what's called "cloud computing", which is the second weapon of mass destruction targeting general-purpose personal computers.
Cloud computing is about having people rely on online services (such as Google's or Apple's) for tasks where they could today use their own computer and remain independent from those services and their owners. Cloud advocates defend the idea by invoking reliability and simplicity: those companies do backups, don't open executable e-mail attachments while running as an admin user, don't try to crack software, and generally have skilled computer engineers taking care of the system and its data, so they indeed get much more reliable and snappy systems than the average Joe's computer. They also invoke the ability to access one's data from anywhere (relying on the assumption that the user is stupid and won't understand that he won't be the only one getting ubiquitous access to his data). Interestingly enough, the cloud concept sometimes merges with the idea of getting everything from a single vendor for the sake of better consistency (giving said vendor insane power, amounts of money, and knowledge of your life).
As I said before, the cloud idea is inherently opposed to that of general-purpose, "smart" personal computers. Cloud companies, on the contrary, want their users to get dumb terminals: computers that are almost solely able to browse the web (and, more specifically, optimized for accessing the vendor's online services). By "almost", I mean that some "transition" OSes exist at the moment. Since some tasks that can be done offline can't be done online yet, those OSes allow users to do such things locally, while putting every other task "on the cloud". The clearest example of a cloud-oriented OS is Google's Chrome OS, with Apple's iPhone OS being another vision of the cloud concept that leads to the same result, but with more power and less reliability than a purely web-based OS because it uses native code in places.
If the cloud idea catches on and proves to be technically realistic (whether games and video editing can be put on the cloud remains to be seen), general-purpose personal computers as we know them will gradually disappear, replaced by dumb devices that can be carried everywhere and are almost solely designed for using online services. As network technology improves, more and more things will be done on the web and fewer things will be done locally, making the desktop and laptop market shrink dramatically.
In the end, Google and Apple will get into some big fight for world domination and the Earth will be destroyed by nuclear weapons… Just kidding. I think.
And you, what do you think about the subject? Do you think that general-purpose personal computing is going to become marginal, and that low-end computers which are nothing more than tools are the way to go for individuals? Or that this market is here to stay, that touchscreen devices will be able to get into general-purpose computing, that we'll reach some kind of equilibrium where both kinds of devices have their respective place, and that the whole cloud computing thing will be brought to a halt by enlightened people suddenly realizing how wrong the idea of giving too much power to big companies is?
About the author:
Neolander is a physics student and a computer enthusiast who spends too much of his spare time looking at computer-oriented news sites, writing an OS, and reading computer science books. He's the author of http://theosperiment.wordpress.com, a blog about his attempts at designing and making a personal-computing-oriented OS, currently in the process of moving to WordPress for un-googling purposes.
I believe that all these devices and concepts have their uses. The only thing that differs is the number of users, but even the dumbest gadgets are used by someone. That's why general-purpose computing (GPC) will probably never completely die out.
Now, the GPC has proven its worth over a long period of time. That, and the very limitations of the specialized devices presented in the article, in my opinion ensures that the GPC is here to stay for the foreseeable future.
New ideas emerge (or old ones get reevaluated) and naturally, to make room for them, the current ones have to suffer somewhat. However, I don't think the GPC is in any real danger, as the new technologies are not valid replacements (yet).
The insurance for a technology's survival is demand, and I think a lot of people will demand the GPC for a long time to come. Me included.
It’s true that more and more specialized computing devices have entered and are entering our lives. TV-recorders, game consoles, phones, navigation systems, mobile media players, restaurant order PDAs, cash registers, ATMs and countless other categories.
But on the other hand, general-purpose computers are still used, and in great numbers. Their main strength remains being able to do everything in one device (games, media playback, calculation). It's the only kind of machine that can gain additional functionality at any time. The downside is, of course, reduced usability and reliability.
Specialized devices are exactly what the name implies: specialized. They allow more people access to technology that improves their lives. E.g. the iPad makes the Web accessible to people who never know which mouse button to click or what a file is supposed to be. This is a good thing.
Not everybody needs a general-purpose computer of their own. But lots of people do, and always will.
All this complaining à la Cory Doctorow about the end of computing as we know it reminds me of another time: the time we finally got easy-to-use GUIs for our PCs. All the old guard complained about how this would dumb everything down too much; soon none of these WIMPs would know how to properly use a computer anymore. They were wrong then and they are wrong now.
Every one of us will use a lot more computers in the future than we do today. More people who don't use one today will use one in the future, too. And of those computers, only a fraction will be PCs as we know them today. But PCs won't cease to exist.
We must take into consideration that most (I said MOST) of today's average computer users don't care about that kind of thing. They just want to turn on their machines and look at some social networking website. If it's "cool", if "everybody is using it", why would they care that all their personal data is stored remotely? So what if the page has more ads than content?
Sometimes I feel that people like us will become some sort of "resistance" against some giant company that will rule the world. Sounds stupid now, but maybe in a few years?
Suddenly I remembered the movie WALL-E. All humans had a screen right in their faces. They didn’t even need to touch it. And they liked it…
Yep, general purpose computing will die. Enterprises will replace thousands of PCs with iPads. Workers will type emails on fiddly touch screens. Sensitive proprietary data will be stored with Google. CAD drawings will be produced on iPhones.
Completely ridiculous.
This seems like a general "get off my lawn" argument. You are comparing older applications to a newer interface. Yes, the current versions of Photoshop won't work well with a touchscreen… It doesn't mean it is the end; Photoshop will need to be redesigned for the touchscreen. There are a lot of advantages to the touchscreen that you cannot get with a mouse. Pinch zoom is one of them. Sure, you can zoom in with a mouse gesture, but a pinch zoom is often more useful.
Secondly, the argument is really based on pixel-based design, which is slowly going away, as the pixel isn't as significant as it used to be. With anti-aliasing and image compression, if you are off by a few pixels, so what?
The argument smells a lot like when we started to move off the command line and went to point-and-click WIMP interfaces: how the mouse just isn't as accurate or as functional as the keyboard, and how it really makes things that much worse.
However, just like the CLI, the mouse will continue to be available for those who need or prefer it.
People keep talking about X killing Y, but the reality is that as long as people are varied and given freedom of choice, technology will remain equally varied.
So it’s not an either/or situation. It’s just a new method of input available for those who want or need it.
"There are a lot of advantages to the touchscreen that you cannot get with a mouse. Pinch zoom is one of them. Sure, you can zoom in with a mouse gesture, but a pinch zoom is often more useful."
You must be kidding us. It is obvious [to some] that pinch zooming is there because those devices are crippled by tiny little screens, so the content doesn't fit properly on them. Bam! Here comes zoom as a 'semi-resolution'. I don't know how this can be called practical, sorry.
The command line is no argument. Why? Because it's still here: it's the 21st century and we still use it. You want to add some trick to your Windows box? You have to use the command line or the registry, which is all about typing commands, not clicking the crap out of your box.
Servers are nothing without the CLI, my friend. Ask HP about HP-UX, Sun about Solaris, and the other giants that produce SERVER operating systems, not GUI-ish stuff that hides everything from you and uses your OS resources just to draw some incredibly huge button right in front of your eyes.
Some of us prefer a good log message to a silly dialog box with a "you are fcuked: error 0000000x32" message.
Well, not every device is better served without a touchscreen. Two instances, imho:
1. Small mobile devices like smartphones. These aren’t full computers with lots of keyboard shortcuts and the like, and having to arrow through menu after menu (Windows Mobile, I’m looking at you) just to find the item you want is more than bothersome. It’s much easier on such devices to scroll and touch. Not to mention web browsing of course. It’s much easier to touch a link or button than it is to arrow around to it, and it’s not as though you’d want a mouse on these devices when the mouse would be as big as the phone to begin with.
2. Public kiosk systems. Love ’em or hate ’em, touch screens are much easier to keep clean than a keypad and are less prone to failure when roughly handled.
However, most other interfaces don't really benefit much from a touchscreen in the end, imho. On larger screens, the gestures become too exaggerated, and that's just on flat devices. I wouldn't want a touchscreen desktop or laptop; my arms would get tired in minutes if touch were our only method of input on them.
Implementing touchscreens in mobile devices? Yes.
Implementing them in kiosk systems? Surely.
Implementing the full browsing experience on mobile devices? No.
That is just not possible with the size of the typical smartphone screen.
You would either have to cripple the device or simply not provide such 'innovations'. IMHO it's better to buy something bigger just for web browsing instead of using a smartphone.
That's BS; if anything, the CLI hides stuff by not making your options apparent. For command line arguments you have to put in a bunch of ugly flags, and a typo can screw up the whole thing. The UNIX-Haters Handbook has some funny examples of this.
The GUI resources argument is also BS. It isn't 2000 anymore; the GUI overhead in Windows Server is negligible unless you are running on an old machine. Take a look at the CPU at idle sometime: it doesn't take much work for Windows to update a 2D frame that is drawn by the video card.
The CLI can of course be very useful and I use it all the time, but command line elitism is really silly. GUIs can be very useful for servers, especially for virtualization and data monitoring.
"The GUI resources argument is also BS. It isn't 2000 anymore; the GUI overhead in Windows Server is negligible unless you are running on an old machine. Take a look at the CPU at idle sometime: it doesn't take much work for Windows to update a 2D frame that is drawn by the video card."
Might I remind you that we are talking about SERVERS here. You don't even need a GUI on a server, and that's a fact. It can be a browser interface with minimal overhead, but a full GUI? It's not an argument to say 'look, we have such powerful machines nowadays'. Of course we do, but that doesn't mean the developers of GUI OSes should bloat their operating system to an incredible extent just because they have the memory or CPU power.
P.S. Servers don't usually need 3D *unless* you are using them to virtualize Windows instances…
"That's BS; if anything, the CLI hides stuff by not making your options apparent. For command line arguments you have to put in a bunch of ugly flags, and a typo can screw up the whole thing. The UNIX-Haters Handbook has some funny examples of this."
I'd rather say the UNIX-Haters Handbook is no argument: a few frustrated people who are just bored with the CLI [although they know it PERFECTLY, from an early stage of its development]. It's all about productivity and simplicity, not some 'silly elitism', so we agree on that part.
So what are you saying here? You’re against GUI interfaces for servers but don’t care if there is a browser open? That’s a minimalist GUI and having an interface for something like Apache open as well will hardly make a difference.
Ever noticed how much CPU Windows uses at idle? Using a couple of percentage points of CPU share is not bloat. As I said, this was more of an issue around 2000, when servers had 1 GHz P3s and 256 MB of RAM. With a quad-core server with 8 GB of RAM, these concerns are no longer valid.
There is definitely a legitimate concern for stability when it comes to running X, which is why I would only run FreeBSD from the command line. The Windows GUI, however, is something you can ignore.
More useful than zooming with the mouse wheel?
Anyway, the mouse is a somewhat unexciting device. The keyboard, OTOH… people who do useful stuff will always need one.
You can also use the scroll wheel to zoom. Yes, I would rather do that than pinch the screen like a retard. With the mouse I can also pinpoint the exact location where I want to zoom.
This is really funny.
So we are going to move to a sketch-based web? Where people fingerpaint images and call it good enough? Sorry, boss, but I can't add any more detail with my finger. I can smudge an area of about 5 pixels, but that is it. Do you want me to blur the image for you?
Photoshop touchscreen edition. Yeah, keep waiting on that one.
Every photo editor I have ever used, since I moved from GeoWorks to Windows as a primary GUI, has, by default, used mouse wheel up to zoom in at the cursor’s position, and zoom out from that position with mouse wheel down. Pinch zoom and mouse gestures? Put down the drugs, man.
No. We've had good working GUI environments, like X+Qt, that are very capable of changing DPI (I typically override mine to be 20-30% higher) and having the whole interface follow that change. This has been one reason I preferred X before Win7 (while still limited, the defaults are good enough in 7, and still a bit awkward in Vista), and Apple has basically added the feature to OS X, too.
Which, it does. Until you get used to having several terminals open at the same time, without any of them having to be smaller than 80 columns, and bask in the best of both worlds: GUIs where they make sense to use, CLI where you know what you want to do and can just get it done there. Then you get an interpolating 800+ DPI mouse, and do a little Shin-chan-esque happy dance.
Just as with cars and many other modern devices, we've moved to a society where computers are no longer toys of the rich or the geeky but consumer devices used by everyone, so they are increasingly designed to be operable by the lowest common denominator, serviced by specialists, and disposable.
However, there are always enthusiasts who just want to play with their toys, so I don’t believe the market will die completely.
The idea that everything will move to the cloud is a little extreme. Native code is still needed for most popular PC applications (video players, image editors, video editors, sound editors, music editors, 3D modelling editors, CAD, web browsers, video games and so many other uses). Only the "low end" of applications (the kind that has been around the longest) and whatever is not as computationally intensive is moving to the web, such as mail apps, word processors, etc.
I'm pretty sure that, to solve this, the cloud will be able to deliver native code or almost-native code using approaches such as Google's NaCl or PNaCl.
I know a very popular site which houses an online video player, if only I could remember its name.
Oh yeah, I know a very popular site where you can download plenty of movies and episodes too, that you can't watch on that other popular site…
Ah, but the video is still being decoded locally even though it's coming from the cloud. This is something a "dumb terminal" could not do if it were really nothing more than an interface to the cloud. In that scenario, you would be fed the decoded video stream and your cloud client would just output it. I wouldn't call online video players cloud-based: even though the video is streamed over the web, it is still being decoded natively by your machine, therefore native code is involved in most of the process.
The video content is online but the player itself is installed locally.
Most websites like YouTube need Flash, Silverlight, QuickTime or any number of other plug-ins installed, and even the HTML5 sites with <video> tags need to run in web browsers that are installed locally (and let's not forget that many of these browsers also depend upon locally installed codecs too).
First, sorry for my poor English 😉
I don't think the facts that you pointed out really matter as long as portable machines can also run the players smoothly. Portability is a nice advantage, and desktops should be able to compensate for our giving up portability in order to stay competitive. Do they? I'm not so sure.
I believe that the share of desktops will shrink as technology advances and handheld machines become powerful enough. But still, I can hardly believe that this means the death of desktop computing. There are always some areas in which performance is much more important than portability, so the performance of handheld machines can never be "enough".
Some people don't need portability, though, so they're not giving anything up.
However, if power and portability really are an issue, then there is this crazy new invention that I predict will become popular one day: it's called the "laptop".
Hell, even some netbooks are as powerful as 4-year-old laptops these days, and let's not forget UMPCs either.
That’s just cloud storage, not cloud computing.
I won't be adding anything to the touchscreen trend, 'cause that would be a complete waste of time. We all know it's an impractical gadget.
The whole 'devices aimed at a particular purpose' thing is just MARKETING. Some ppl want to sell something to the world, so they produce a new, silly gadget for dummies.
They base it on the ridiculous (but quite true for most ppl) assumption that most users are conformists, so they will likely take the bait and use the device 'just because it's a NEW THING'.
That said, reason [consciousness] is the worst enemy of the marketers. Independent thinking doesn't leave any room for their filthy games.
So I wouldn't demonize the whole thing. It's about purpose, conscious choices and our own will.
What if speech recognition were added for text input and commands alongside a touchscreen? Why are we limiting our analysis to today's touch input devices versus desktop computers? The question should be: "in 2010, given the devices on the market right now, would you use a specialized device with limited input for general computing?" Of course not. But if you had a device with a large, lightweight touchscreen, with speech input for text and commands, a camera for gesture recognition (which would solve part of the hovering problem), and a stylus (yes, I used the S word) for precise interaction, then I don't think most people would still use desktop computers as we know them now. But your article illustrates well the limitations of current touchscreen devices. As for cloud computing, that's something else.
Nice idea, but speech recognition is still a bit too iffy. And what about noisy environments?
Speech input is already here, and at least on Android, it is lovely. That being said, I don’t think I’d want to do that in an office of 200 people separated by cubicles.
That would sort of defeat the purpose of a portable machine, wouldn't it? If the screen is too large to practically carry around (and thus stationary), it's still technically a desktop. Plus, whether I would use one or not depends on the ergonomics of the thing.
Passé? Absolutely.
Bring on the appliances!
You have very concisely summarized my own thoughts.
Cloud computing taking over personal local computing in the near future raises several issues, e.g. privacy and location. I think of security when it comes to my own "intellectual property" (yes, I can use that term, as can everybody on the planet who produces a creative work, whether it's a letter, a script, etc.). What's to prevent the government, instead of raiding your house, from just sending an email to Google or Apple and basically downloading your entire life from the cloud? Or instead of the government, what about a hacking group performing corporate espionage that, instead of attacking your company, exploits Google or Apple and compromises your corporation's IP or sensitive data?
The benefits include centralized computing power and stateless data.
But do the benefits outweigh the drawbacks? I personally don't think so.
First, I'd like to thank everyone for participating with such good comments. Maybe it's because I'm getting a bit tired of the H.264-versus-the-rest-of-the-world war, but to me it looks like even if OSnews generally has high-quality comments, a lot of these are of even higher quality.
Question is: is it possible? I've tried GIMP on an Eee PC 701 (yay, the first one), and while I could resize various toolboxes, it still hid most of the screen real estate. Photoshop is a bit better as far as screen real estate is concerned, but that's because it makes heavy use of drop-down menus, which are unsuitable on a touchscreen.
And if you always have to switch back and forth between a "tool" tab and an "image" tab, it's rather cumbersome. Then there's the precision issue: unless touchscreen devices are equipped with styluses that have Wacom-like precision, it's hard to do serious work at the pixel scale on such a device. The stylus effectively solves several touch problems; that's one of the ways I think touchscreens could win. Stylus-oriented applications are hard to make, though: does anyone remember Windows Mobile 6, or has anyone tried to use a drawing tablet for everyday web browsing?
You're right, there's pinch and rotate; it's written in my article. But then what? My point was that touch offers far fewer general-purpose actions.
You've horribly misunderstood what I wrote if you think that. I talk about screen real estate in terms of inches and centimeters, not in terms of pixels, the way non-OS software should have been talking about it for ages. The fact that buttons get smaller as screen resolution gets higher is actually a bug that should never have appeared in OSes and should have been fixed by now. Fitts' law is not about pixels.
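To make the physical-units point concrete, here is a minimal sketch in Python of resolution-independent sizing; the 9 mm button height and the DPI values are assumptions chosen for illustration, not figures from the article:

```python
# Minimal sketch of resolution-independent control sizing: a control is
# specified in millimeters and converted to pixels from the display's DPI,
# so its physical size stays constant across screens.
# The 9 mm button height and the DPI list are assumed values for illustration.

MM_PER_INCH = 25.4

def mm_to_px(size_mm: float, dpi: float) -> int:
    """Convert a physical size in millimeters to pixels at a given DPI."""
    return round(size_mm * dpi / MM_PER_INCH)

button_height_mm = 9  # assumed target physical height, independent of the screen

for dpi in (96, 133, 220, 326):
    print(f"{dpi:>3} DPI -> {mm_to_px(button_height_mm, dpi)} px tall")
```

The button grows from a few dozen pixels on a low-DPI monitor to over a hundred on a high-DPI phone screen, but stays the same physical size under your finger, which is the point being made about Fitts' law.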
I just didn’t understand that.
You're perfectly right, it's a bit of a nostalgic argument, except for one difference: those who prefer CLI over GUI for a certain task and GUI over CLI for another can use both if they like. However, lately, it sounds like touchscreens and mice are mutually exclusive, except for some unsuccessful products that try to combine both without providing serious touchscreen support (from HP, Archos…).
See above. I think it’s not like GUI vs CLI, because a single machine can do both GUI and CLI, whereas the emerging touchscreen market nowadays is incompatible with the old behavior. And I don’t see many people buying two computers in order to get both.
It may solve some issues, but you won't be writing trade secrets and written porn on your computer anymore. Speech recognition is nice, but the "anyone can hear" argument is IMO the #1 reason why it never truly caught on (#2 being the poor quality of said recognition, especially in noisy environments).
Your point is perfectly and totally valid, and shame on me for not taking the time to describe this more carefully in my article. What I'm talking about is today's "pure touchscreen" model, the one used in iPhones, Androphones, tablets, iPads… Technology from tomorrow may totally change the whole situation. However, we would then be getting further and further from some of the core ideas of the current touchscreen thing: you interact with it directly with your hands, so it's simpler, and it's only a screen, so you can easily carry it around.
Maybe. I prefer to have some buttons myself, at least for typing and common features like task-switching, but I guess it's the same as with those people who prefer CLI over GUI for some purposes.
I think the article makes a good point but it’s using the wrong arguments.
General purpose means you can do whatever you want; the whole resolution argument has nothing to do with this. That just means a different interface; it doesn't limit what you can do with a device, just how you control it.
The cloud computing thing is a better argument: by putting your apps online, you are limiting the apps you can run to those supplied by the vendor, but that's only if you are limited to one vendor, which is not likely to be the case. In any case, web apps can be general purpose; the difference is in where you run them, not in what they can or cannot do.
However, there is a good point to be made about the end of general purpose computing. I don’t think it’ll be the end but I do think general purpose machines could become a niche.
I think the iPad shows what will happen: it'll replace PCs for many people, and it'll still be general purpose, but not in quite the same way.
I think PCs will become like SLR cameras are today: highly flexible devices that sell in reasonably high numbers. Point-and-shoot cameras are much less flexible and don't give the same results, but sell in vastly higher numbers.
The level of flexibility we want will still be available but you’ll probably pay rather more for it.
Wrong. I've used a 15-inch screen, then a 17-inch screen. For coding purposes, the second one is *much* better. And coding is not about pixel resolution, since text display is resolution-independent. When you have more space, you can do more things with it. With touchscreen-sized buttons, we'll effectively have less space.
Sure, it’s still possible to do the same thing using scrolling, tabs, and other tricks. But at some point it’s not practically usable anymore. You miss the “I see everything at once” thing, which is really important.
You're right, it's the "personal" in personal computing that's dying here. You don't own your computer anymore. You're a minion of the New IBM.
That's what I see happening too, if the touchscreen thing catches on. Hence this article. But you put this point much better.
And I call BS.
The calculation is really that simple. I don't see why I couldn't do anything on an upcoming MeeGo smartphone with HDMI and an LCD TV.
Soon phones will have 1.5 GHz CPUs and 1 GB of RAM… those things can do anything (as long as they run Linux :oP)
What's a WiDi? Some kind of pointing device that overcomes touchscreen limitations? If so, why not… If not, cf. the article and comments. It's not only about computing power and the number of pixels on the screen. If your pointing device (the smartphone) is still small, the precision-related issues remain…
http://www.engadget.com/2010/01/07/intel-announces-widi-hd-wireless…
For pointing, there could be a touchpad on the keyboard. Or you could use the phone's touchscreen as a touchpad (if you are cheap and the software is flexible).
Re speech recognition:
Personally, I'm not a fan; I have used it in the past.
Advances in technology could make it more of an option. How about voice recognition software that can detect your quiet speaking in a room full of people talking? What about the software recognizing your voice spoken just above a whisper? The person in the next cubicle won't understand you, but the software will. Better software/hardware and algorithms might be able to do this; you just never know. If there's a will…
You're right, I'm talking about today's speech recognition, plus perfect detection of the spoken words.
With some tiny directional piezo microphones close to your mouth and noise removal, I think this is already doable in hardware and software today. Prices just have to drop, which will happen someday.
That's possible on a longer time scale. Microphone and amplifier technology just has to get a lot more efficient. But if your ear can do that, a good mic can… Sound quality will be awful, but with good speech recognition algorithms, equivalent to those of the human species, who knows?
That's where I don't follow you. I think there's a psychological issue with talking about private things in a public place where anyone can go. But maybe I'm totally wrong…
All this touchable stuff will just evolve into something like a TV remote for general consumers…
While I agree that many of these devices have created greater convenience, I do not believe that we should be modularizing all of them; over-specialization can be a negative in many areas. One potential downside I can see is increasing costs, since you would have to purchase a separate device for each task you would like to complete. It would, however, be beneficial for large companies milking the consumer for every penny they can get.
Strange, what you’re describing reminds me of a company who produces portable media players…
OSNews is at it again.
Unrealistic nonsense.
1/ This was not written by the OSnews editorial team, even though its help was precious.
2/ To what extent is this nonsense? Can you please go into more detail?
(The author ^^)
I saw some guy using an iPad at Harbucks the other day.
He was clearly using way more effort than I was to surf the web. It was actually quite funny. I was using a wireless laser mouse and making very small movements, while he looked like he was frantically fingerpainting. He also had to lay the iPad flat to type on it.
Touchscreens are fine for phones, but general computing? Oh God, no. We're just in the middle of Apple hype when it comes to tablets. Most people would rather have a netbook than a tablet.
So, you figured he was struggling because he was making larger movements than you were? I can’t think of a more ridiculous argument.
It's like saying regular drivers struggle because their steering wheels take 2.5 turns lock-to-lock, compared to 0.9 turns in a Formula One car.
These are different computing paradigms. In fact, I would say that the person on the iPad struggles less, because he has multitouch, so he can interact with more than one screen element at a time, unlike a keyboard-and-mouse user. And needing to be less precise is actually a _good_ thing. One thing I hate about mice is that they force me to be more precise than one needs to be. And they require you to learn an unnatural hand-eye coordination to be able to use them.
OK, so here we go again with bedtime stories about general-purpose PCs flying out the window. And again, I'll come up with one (of many) arguments against it: transitioning completely to portable dumb terminals (phones, pads, whatever) and cloud-based apps presumes a constant high-bandwidth connection and enough available computing power in the cloud to host all the algorithms and workloads the user can come up with. Which is insane! Half the algorithms I'm dealing with require large amounts of data with real-time availability. Another one: what provider would be happy to see me stream multiple live high-resolution camera feeds into the "cloud" to run my algorithms on?
Again, there are so many arguments against the whole subject it’s not even funny.
Touch-screens are very appropriate for small mobile devices (smaller than a netbook). Though not as accurate as other pointing technologies, they work well with devices that get dropped, kicked, and generally beaten up during their lives.
As for virtual keyboards, they are very handy for texting or entering URLs. Again, with a mobile form factor, real keyboards stop working: keys fall off, gunk gets stuck between keys, etc. An on-screen keyboard solves these problems. Remember: people aren't going to be typing the Great American Novel(tm) on an iPhone. They're only keying in brief messages.
To address your comment about more general-purpose devices, GP computing will never go away. There will always be people who want an all-in-one device. People said mainframes would die off; while mainframes comprise a shrinking percentage of all computing devices out there, the installed base of mainframes is still increasing in absolute terms. Why? Because they're very good at what they do.