It’s time for another “OSNews Asks”, a blatant rip-off of just about every other website in existence. Anyway, today we want to focus on multitouch. The technology behind it has existed for a long time, but only recently have companies like Apple (iPhone, trackpads) and Microsoft (Surface, Windows 7) begun promoting it. We have a question for you about multitouch on desktops and laptops.
With Windows 7, Microsoft will be the first to bring system-wide multitouch frameworks and APIs to the desktop. If you follow the OSNews podcast, Kroc and I discussed this very subject in episode 4, and we came to the conclusion that by providing these APIs and frameworks, Microsoft is giving adventurous and talented developers the tools to design desktop applications with multitouch in mind, or maybe even solely for multitouch. So while “desktop multitouch” may make no sense now, Microsoft is laying the foundations for a whole new generation of applications that we may not even have thought about.
Personally, I can see a number of specific uses for multitouch on a desktop (or laptop, for that matter). For instance, in video players, it would allow for very accurate seeking, and with the appropriate gestures, you could walk frame-by-frame easily. Sure, you can do that with a mouse or keyboard, but seeking could be a lot more accurate using your own fingers instead of a mouse. Cutting out the middleman, so to speak.
I can also see uses in games. Card games come to mind, but it could also be handy in real-time strategy games, as a means to select and dispatch units across the map. I’m sure game developers (especially independent ones) will be all over the multitouch frameworks in Windows 7.
So, the question we’re asking you is this: what use do you see for multitouch on desktops and laptops? What applications could make use of this technology to make your life easier? Fire away in the comments!
…As you state, who knows what it might lead to. No doubt some great uses in applications, and some horrible. Either way it will be interesting.
To be honest, I do not want to be touching my monitor all day. Try this experiment: set a timer to go off every five minutes. When it goes off, jab at your monitor for a few seconds (say as long as it takes to move a few files and folders around) and then go back to work. See how long before you think it’s ridiculous.
In an arcade perhaps. On my hand held device, definitely. On my refrigerator or any other panel interface, for sure! But leave my desktop out of it!
Well, what if your computer had an additional display that was more ergonomically placed to allow you to multitouch with ease?
For the record, I’m still a multitouch skeptic. The only cool thing I can think of is computer-aided art: specifically, finger painting. So maybe there would be a CAD-related use.
How exactly is that any more ridiculous than wiggling a small plastic box around?
I have a tablet PC laptop, and even when I use it in normal laptop mode I often have the pen at hand and use it in conjunction with the mouse. So yeah, I’m basically using a touchscreen now, and I’d love to have a touchscreen on my desktop; I would definitely use it.
This discussion is about Multi-touch, which involves touching it with your fingers.
What you are doing is just using a pen to replace a mouse; the actual functionality and usage are not changed.
Multi-touch allows you to use gestures, multiple fingers, etc., which significantly changes the way we can interact with the computer. I think it’s a good idea, but I prefer Apple’s way, so that I don’t have to touch the screen while still having the advantages of a wide variety of gestures. Of course, my current laptop does not have a multi-touch trackpad, so I don’t know how well it works in practice.
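The classic example of such a gesture is pinch-to-zoom, which at its core just tracks the distance between two touch points over time. A minimal sketch of the idea (the function name and coordinate convention are my own illustration, not taken from any particular framework):

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Return the zoom factor implied by a two-finger pinch:
    the ratio of the final finger spread to the initial spread.
    Points are (x, y) tuples in screen pixels."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_end, p2_end) / dist(p1_start, p2_start)

# Fingers start 100 px apart and end 200 px apart: a 2x zoom in.
print(pinch_scale((0, 0), (100, 0), (0, 0), (200, 0)))  # → 2.0
```

A real framework would feed this per-frame from touch events and apply the factor to the view’s current zoom level, but the geometry is this simple.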
Multiple fingers?? I thought it was only 2 sensory inputs, so a third finger would be useless…
Although the iPhone only supports 2 fingers for gestures right now, Snow Leopard is supposed to have support for gestures with 3 fingers and perhaps more.
In computing my mantra is:
“The less you use your hands, the better.”
But who knows what a creative mind could bring to change my mind.
Engelbart, before inventing the mouse, tried out all sorts of interfaces, including touch displays. One thing I know is that touch won’t be fun on a big monitor; it would cause quite a lot of arm pain. On the other hand, touch would be quite great on small-screen laptops, where the touchpad is a tad insufficient.
That aside, I see benefits only in the “average user” segment — viewing pictures and documents, casual games — but that’s pretty much it. Most of that can be accomplished with single-touch, and generally it’s better done with a mouse or keyboard, IMO.
Personally I dream of getting a 10″ or 12″ TabletPC loaded with Win7..
EDIT:
There is ONE important thing: entering non-English characters. Writing them on screen is a lot easier than the other techniques I’ve known so far.
Which brings up another idea: MAYBE commit the sacrilege of using it in text editors. We use menus; maybe instead of relying on ctrl-alt-shift-style commands, we write the letter on screen.
resizing images!
oh yeah! and ‘throwing’ documents around and spinning cubes/spheres/arbitrary 3D shapes.
When I play an RTS, I’ve got one hand on the keyboard(for command short cuts) and one hand on the mouse for selecting units and targeting.
When manipulating a 2D space most of my fingers aren’t very useful, so it’s down to my pointer finger to do all the work and it’s big and inaccurate.
If I was going to use both hands as a pointing device I’d just get two mice.
Multi-touch is moderately interesting, but only on large devices. On a small device (iPhone), the interface does NOT lend itself well to people who have problems with their hands, like elderly, people with injuries or people who just have “fat fingers.”
On a larger device, I can see it being easier to use than a mouse because most people can point a finger from each hand to do something (like expand a picture), but who wants to touch a monitor and get it all streaked with fingerprints? Any time someone touches my monitor, I threaten to break their fingers and I’m almost OCD when it comes to streaks on my phone display (or monitors).
I wouldn’t hold my arm up all day; that would be too tiring. However, a large touchpad would be interesting, though I think I would still use my trackball.
I personally don’t like it.
1. Imagine accidentally brushing the screen and suddenly deleting hundreds of files.
2. Resize an image, then stare at a finger smudge.
3. Navigate a 21″ display or larger and watch yourself with both arms extended (hurting your back) while you punch and poke your screen. Talk about being worn out at the end of the day. Carpal tunnel and rotator cuff, anyone?
The software ought to be smart enough to recognize accidental touches. The iPhone handles that quite well.
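One plausible heuristic for that — a sketch of my own, not how the iPhone actually does it — is to discard contacts that are too fleeting or too large to be a deliberate fingertip tap:

```python
def is_deliberate_touch(duration_ms, contact_area_mm2,
                        min_duration_ms=40, max_area_mm2=150):
    """Reject contacts that are too brief (a brush past the screen)
    or too large (a resting palm or forearm) to be a real tap.
    The thresholds are illustrative, not taken from any real device."""
    return (duration_ms >= min_duration_ms
            and contact_area_mm2 <= max_area_mm2)

# A quick accidental brush (10 ms) and a palm press (500 mm^2)
# would both be filtered out; a normal fingertip tap passes.
```

Real drivers combine several such signals (pressure, contact shape, movement), but even a crude filter like this removes most accidental input before it ever reaches an application.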
I agree with you on the other two points.
hmmmmm … LCARS anyone???
I don’t think anyone ever saw someone inputting thousands of lines of text or code with an LCARS interface; most commands are given by voice, and fingers are mostly only used to activate displayed buttons. Today’s developers would lose 99% of their productivity if forced to use touch-based interfaces exclusively. I certainly would.
First we’ll need cheap hardware that supports multitouch otherwise it’s all pointless.
In any case, an onscreen keyboard will never replace my trusty IBM Model M keyboard. The screen will probably die before the keyboard anyways…
Touching a screen all day sounds terrible to me. I’m a trackball and trackpoint guy so my mouse hand stays stationary all day (except typing of course)…and I wouldn’t have it any other way. Ergonomics are far more important than looking cool.
A good trackball is far, FAR more accurate than touching a screen anyway, so the accuracy argument is garbage. Even a cheap standard mouse is better.
I really detest finger prints and finger smudges on my screen anyway. Maybe I just have especially greasy fingers but I wouldn’t want to clean my screen twice a day.
Also, what about modifier keys (e.g. ctrl+click) or even double-click vs. single-click? I wouldn’t want to learn some ‘secret tap’ sequence to do something that a mouse (or keyboard, for that matter) can do better, faster and smarter.
Maybe when voice recognition gets better and we can eliminate the keyboard altogether. Why would we want THREE input devices (keyboard, mouse and multitouch)?
My arms are tired just thinking about it.
If I can rest my arms and ‘touch’ (not as kinky as it sounds) it would be fine for some things.
I find swiping, etc great for navigation, but that’s about it.
It’s perfect input technology for everything where you need to combine slow, inaccurate and unergonomic control with extra filthy screen.
I believe touch screens are very useful in niches where it is either impractical or prohibitively expensive to have large screens which is compensated for by an iPhone-like interface on smaller screen real-estate.
Personally, I believe the laptop market is perfect for touchscreens, as a smaller screen is better for both portability and battery life. But without some clever way of emulating a larger screen surface, I have found myself making many usability compromises when working on small Asus eeePC-like 7-10″ screens. Emulating a larger surface area in a user-friendly manner would probably help make laptops better desktop replacements in general without sacrificing portability.
My wife is an artist (and noted technophobe). She usually finds it quite awkward to manipulate the mouse or trackpad to point to what she wants and reckons it would be far easier to be able to use a touch screen.
Also, I used to work for a process-control company that used touchscreens quite widely and successfully — this was in the late ’80s, when the technology was quite crude.
One of their products consisted of a touch screen set into a desk at a normal reading angle and was really easy and intuitive to use, so, yeah, I can certainly see uses even for desktop use (if it’s smear resistant!)
Lots of older folks are quite reluctant to use technology and wouldn’t go near a computer, but find things like digital picture frames quite cute. Well, how about a digital notepad that used handwriting recognition to translate normal writing into printed text? That technology exists now on PDAs, etc. If older folks could just sit and write a letter and have it sent instantly (I mean e-mail), that could be a real blessing to some.
I can see multitouch being used on desktops and laptops, probably more on the latter. I think the smaller the computer is, the more multitouch will be used. I can see it being adopted on the desktop, but it won’t replace anything. Multitouch on the desktop will probably mean selecting icons and files, hitting the ‘OK’ button, things like that. But the mouse will not be removed from the desktop. Could you imagine how annoying it would be to use Photoshop only by touching the screen? Plus, it is easier to use your hand at waist level than it is to always be pointing out and touching in front of you. At most you will see the mouse turned into a trackball and integrated into the keyboard.
Touchscreens are going to be more important on small devices like phones and on dedicated interfaces, such as some sort of interactive controller.
As with everything, it all depends on the application/intended use, or at least it should. Artists, gamers, video/photographers, etc. would probably have very nice uses for touch/multitouch interfaces on desktops, e.g. a sound studio where the table and the displays are all large multitouch screens. As for coders and developers, I’d expect much less productivity; I personally would not use them. But I’d very much like to use such screens as a tv/pc/multimediacenter/internetdevice combination, or as the door of my fridge, a calendar/photo display on the wall, and so on, and so forth.
I’m not sure if I’ll like using (multi-)touch screen until I’ve actually tried it. But I don’t like the “my arms will hurt” argument. A lot of people who do manual labour have to work like that for hours and hours. You don’t hear them complain as much as I’ve heard here (and elsewhere on the internet), especially not in advance. I never heard someone say “I’m going to work. I hope my arms won’t hurt.”
Stop whining. Making computers a little more physically involved might actually be a good thing.
Prolonged use of touch (or multi-touch) is only useful on devices you can hold in your hand. If the device is on a desk or on your lap, it’s a bad case of strain-induced injuries waiting to happen.
For kiosks in public places where you use the machine for like 2 minutes to get some information, it’s fine though.
I could imagine a large screen on my desk where I can work with both hands and probably all my fingers: drawing, moving, rotating and scaling objects, like I did many years ago when no screens, mice, etc. were available (but now with no eraser needed ;-)). The screen should not be vertical, but adjustable somewhere between horizontal and 45° (together with the desk surface). Something like a pen should be available for the details. At the bottom I’d like a compact keyboard for text input. There will be no need for anything like a mouse.
Quite a new experience of working. For me it seems more natural than being tied to a keyboard and a mouse.
In the further future I’d like this new workspace to be extended into the third dimension 😉
Imho, “touchability” on standard screens/PCS is useless.
People will never touch their screens for more than just some fun. Same problem as with speech recognition: a nice feature, but worthless and useless. “To have some fun” and “for geeks only”.
What would be usable is a sort of “external touchpad” to replace the mouse. With multi-touch capabilities, of course, at least to detect “click” and “right-click” events.
But… Who cares? Marketing rulezz!
Here’s my concept of a future “Touchbook”.
http://www.flickr.com/photos/30432567@N08/3467310539/
For myself I very much dislike to touch the screen in front of me because it’s not very ergonomic. But replacing the hardware keyboard with a virtual one and adding some Multitouch might be very handy…
http://www.geekwithlaptop.com/new-sharp-netbook-has-touch-screen-tr…
Virtual keyboards don’t work for volume typing because they hurt your hands.
Looks like the OLPC XO-2.
So far, most of the replies seem to be assuming that multi-touch for desktop applications would just mean adding touch-sensitivity to current monitors. I agree that such a setup would be pretty awkward – for the same reason that “light pens” were discarded in favour of the mouse (arm strain).
On the other hand, though, I can see multi-touch being quite useful with a large display setup like a drafting table. I think that would be very convenient for video editing – although you would probably still want a mouse and keyboard for anything requiring precision.
I could also see small multi-touch displays being useful as a third input device, complementing the mouse & keyboard (sort of like the touchscreen-touchpad on that new Sharp laptop, but in the form factor of a pen tablet). E.g., I’ve met many people who work with audio and prefer to have physical sliders for the volume control, cross-fading, etc – so a multi-touch screen could be used to provide sliders that can be adjusted with your fingertips.
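Such a virtual slider reduces to mapping the fingertip’s position along the control onto a value range. A hypothetical sketch (the function and its coordinate convention are my own, not from any audio toolkit):

```python
def slider_value(touch_y, slider_top, slider_height,
                 min_val=0.0, max_val=1.0):
    """Map a touch's y coordinate (screen pixels, y grows downward)
    onto a vertical slider's value range, clamping at both ends so
    dragging past the slider pins the value rather than overshooting."""
    frac = (slider_top + slider_height - touch_y) / slider_height
    frac = max(0.0, min(1.0, frac))
    return min_val + frac * (max_val - min_val)

# Touching halfway down a 200 px slider that starts at y=100
# yields the midpoint of the value range.
print(slider_value(200, 100, 200))  # → 0.5
```

With multitouch, one such mapping runs per finger, which is exactly what lets several faders move at once the way they do on a physical mixing desk.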
Many of the above posters only imagine a very narrow range of uses – as in “how can I use it within my current processes.” You wouldn’t use multitouch for working in vi, necessarily. For narrow desktop use, I don’t see keyboarding being effective on touch screens – mostly coarse object selection and manipulation, and multiuser scenarios.
Scenario 1 – someone above mentioned multitouch would limit modifier keys. Multitouch precisely enables that capability in touch screen devices.
Scenario 2 – Menus: a screen inset horizontally into a booth table. Each person can “drag off a menu” from an icon or edge so that it sits in front of them, and all can make selections at the same time: 1 person can have 1 menu, or 6 people can see 6 menus. After the last person hits submit, the order is processed to the kitchen/bar. No server involved, unless requested. If one person orders dessert while food trays and salt shakers are scattered over the surface, they can drag off a menu to their corner. Kids can also play games on it while mom and dad order. Obviously, the system has to be robust. Order refills easily.
While replacing laminated paper menus with a PC may not sound cost efficient, this allows 1 server to work far more tables, which recovers labor costs. The graphic person who designed the paper menus and logos could just as well do the screen layout, too. Updates are instantaneous and don’t need reprints.
Scenario 3: Control panels. Control panels of all kinds. I work in Air Traffic, so I see them everyday. In the home, a remote system for controlling my entertainment (and all kinds of other things) that is inset horizontally into a coffee table. One person can operate the output device(s) now-playing controls on one side of the touch screen remote table while someone else is browsing movies from a different orientation, or programming the DVR. It can also be turned into a tabletop gaming system.
Scenario 4: Desktop apps that can use it include apps like Visio or other block-type modeling apps. Presentation and conferencing applications can also benefit.
Scenario 5: I used SmartBoards for training and I can see the application here. A PC serves up the content that multiple students will work on/ manipulate on the board. The workspace is dynamically reconfigurable.
We’ve only really discussed basic touch itself, but the proximity and awareness features that MS demoed open up far more possibilities. Setting the camera on a surface to wirelessly import the pictures is just one example.
Scenario 6: This feature enhances gaming possibilities enormously. Imagine PC roleplaying games merged with the old table top elements. A DM can use a toolkit (ala Neverwinter Nights) to create their own adventure which all can see on the screen. Physical elements, augmented cards(?), can be added and subtracted from the surface to cause interactions.
Just ideas.
“all day long” … really … certainly there are applications where a real keyboard and mouse will rule, or a tablet, but touchscreen will be very useful as another input device (not replacing)
Touchscreen would be great for actions where it is quick / limited / and a mouse/keyboard gets in the way or is awkward.
How about standing by the computer and wanting to quickly mute it or pause a video, without having to bend over and awkwardly use the mouse while standing? And on a laptop, my hands really are not that far from the screen, so if it’s more intuitive to touch, then I will.
Perhaps a tablet for your coffee table, counter, or fridge that you just pick up, get some info – how easy and intuitive to use your finger to scroll through the news, through recipes, pictures, or even perhaps music or videos to play on your Media PC.
The drafting table mentioned already makes great sense.
so many possibilities (I really haven’t put all mine here… just start with a kitchen counter PC with touch screen and think of all the possibilities…)
oh and how about the return of the board game… OK so you don’t get to handle the monopoly money (which is half the fun) but multitouch surfaces would be great for interacting with other people with the computer being less of the focus
Specifically 2D artists used to a digital tablet device.
That’s the only real use I see. The rest of the 2D space is handled pretty well by the standard tools.
Oh and please let touchscreen keyboards die a horrible death–send them screaming into the bottomless Pit of Useless As Shit Fads.
I think Multitouch in desktop computers, and possibly in laptops as well, will go the way of the Virtual Boy.