Nowadays, smartphones, tablets and desktop/laptop computers are all siblings. They use the same UI paradigms and follow the same idea of a programmable, flexible machine that’s available to everyone. Only their hardware feature set and form factor differentiate them from one another. In this context, does it still make sense to treat them as separate devices as far as software development is concerned? Wouldn’t it be a much better idea to consider them as variations of the same concept, and release a unified software platform which spans all of them? This article describes what has already been done in this area, and what’s left to do.
Operating systems spreading across personal computers
The first step in that direction would be to create an operating system which works well on all of these platforms. As it turns out, the major players of the OS market have already taken some steps in that direction, probably having realized the benefits of this approach in terms of development resource usage, cost, and UI consistency.
Apple’s iOS is the most blatant example of this. First, because it reuses a lot of Mac OS X code at its core. Second, because the iPad version is only a slightly tweaked variant of the handheld version (and is even able to run the exact same applications, simply zoomed in). Third, because what the upcoming Mac OS X “Lion” brings to the table is essentially a big copy-and-paste of features from iOS, with the Mac App Store even including several ported iOS applications, clearly showing that both OSs share a common future.
For its part, Microsoft is also working on this, although more quietly. Its various OSs don’t share a common kernel for the time being, but the recently showcased port of Windows to ARM is a step in that direction. .NET is already a universal development platform across all Microsoft devices. Windows 7 comes with better touchscreen and pen input support, and the yet-to-be-released Windows 8 is rumored to take a step towards smartphone look and feel by including an application store.
Google is a much younger player in the OS market, but as a first-class citizen of the mobile OS space, it has decided to follow Apple’s track by porting its smartphone OS, Android, to tablets in its 3.0 “Honeycomb” release.
The sad state of third-party software
So the “unified operating system across all personal computers” concept has clearly made its way into the minds of the industry’s big players, and they are all working on it. Now, one of the core points of a personal computer is that it’s a flexible machine which adapts itself to the needs of its users, as long as its hardware is up to what they’re looking for. So having a universal operating system (or at least a universal development platform) is not enough. Creating a universal personal computing platform is also about having third-party software which works well everywhere, without needing some of its parts (like, say, its UI) completely redone each time a new sort of personal computer comes out…
And this is where current designs fall short.
Let’s first consider what is a priori the easiest path: making smartphone-oriented applications run on a heavier-duty device, like a netbook or a tablet. You have more hardware resources than before, so it should be trivial, right? Well, as it turns out, it’s not. Applications are designed with a specific form factor and fixed-size controls in mind. Positions and sizes are hard-coded, either in centimeters/inches or, even worse, in pixels. So the only way to make a phone application use more screen real estate without completely rewriting its UI is zooming and blurring: blindly multiplying positions and sizes by a factor without knowing what they represent, much like what Apple does when running iPhone applications on the iPad.
This kind of blind upscaling by the operating system, without any knowledge of what’s actually happening, is not a good idea at all. First, because it wastes screen real estate on gigantic buttons, menus, and text, while (hopefully) people’s fingers and eyes remain the same size. Second, because it destroys usability by forcing people to make gigantic gestures to go from one button to another, where they only had to move their thumb around on their phone. Third, and maybe most important of all, because it keeps a phone-oriented UI, simplistic to the point of being cumbersome in places because it was designed to fit on a 4″ screen, on a much larger screen where the space constraint no longer applies, making people wonder what’s the point of the larger screen at all if applications don’t benefit from it anyway.
The last issue in particular is interesting, in that it makes us realize that limited hardware capabilities cramp a developer’s creativity, by forcing them to adapt their software to the technical constraints of the hardware they’re writing it for. So while we’d spontaneously say that upscaling is the easiest process, that is not necessarily true. To fully make use of some hardware capability, software must have been designed with hardware that has this capability (or more) in mind. When put on more capable hardware, phone applications can’t invent new functions which weren’t useful or usable on a 4″ screen. On the other hand, it is easy to imagine hiding some complexity when porting a tablet application to a smaller device, so that it still works.
Since upscaling is not such a good idea after all, we must try to find a real-world example of downscaling in order to see if it actually works better. Thankfully, we have one: Microsoft has announced that Windows 7 should work properly on netbooks and touchscreen-based devices. However, in practice, it does not work that well either. While Windows itself and the applications bundled with it may work acceptably well (and even they feel a bit clunky), third-party apps are simply a usability disaster. Most controls (buttons, edit boxes, etc.) are way too small to be targeted with a finger, and hard to target even with a stylus. On a small screen, like that of a netbook or tablet, toolbars and menus run off the screen, requiring constant scrolling and digging through menus before finding the most common items. In short: everything feels messy, overly compact, and extremely complicated. These applications simply try to do too much on that hardware, and end up nearly unusable.
Who’s the culprit here? Again, the problem is that the OS (or, more exactly, its UI toolkit) is supposed to help applications adapt themselves to the device they run on, without having a single clue about what they’re doing, and without being able to do anything that would violate the set of specifications given by the applications. Controls have hard-coded sizes which cannot be overridden without completely messing up an app’s layout. Toolbars and menus are designed without prioritizing some features over others, on the assumption that everything will fit on the screen anyway. In short, applications rest on a very strong set of assumptions about the hardware they run on. If these assumptions do not hold, the result is a usability failure.
One possible way to solve this problem
So let’s sum up what we’ve concluded so far…
OSs are slowly starting to work across multiple kinds of personal computers, but third-party applications are lagging behind, as their user interface must still be redesigned on a per-device basis. This redesign must take place because they can neither use extra screen real estate wisely nor adapt well to a reduced screen size.
Fundamentally, UIs can hardly make use of hardware capabilities that weren’t there when they were designed. The only way applications can adapt to a wide range of hardware is to be designed for powerful machines first, then progressively shed UI functionality as they run on less capable computers, so that they remain easy and pleasant to use. Compromises imposed by reduced hardware capabilities should be handled at runtime by the operating system, not at design time by the developer.
Now, how could this work in practice?
To adapt user interfaces to the hardware they run on, the operating system first needs some freedom from the application developer. The developer should only specify the constraints which actually matter for the UI, and leave the rest to the operating system. For example, instead of specifying button positions and sizes in pixels by hand, they would say: “There’s a Cancel button in the bottom-right corner of this window, and an OK button to the left of it.” That’s all. It’s up to the operating system’s UI toolkit to decide how big the buttons are and where exactly they go, based on these constraints. When designing a game for touchscreen devices as a whole (and not for tablets or phones in particular), a possible constraint could be “these buttons should be on the edge of the screen (for finger accessibility), and close to each other (to quickly move the thumb from one to the other)”.
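To make the idea concrete, here is a minimal sketch of what such constraint-driven placement could look like. No OS offers this exact API; every name below is hypothetical, and a real toolkit would use a proper constraint solver rather than hand-written arithmetic.

```python
# Hypothetical sketch: the developer states relationships ("Cancel in the
# bottom-right corner, OK to its left"), and the toolkit computes pixel
# coordinates for whatever screen it finds itself on.

def layout_dialog(screen_w, screen_h, btn_w=80, btn_h=30, margin=8):
    """Resolve the two button constraints for a given screen size."""
    cancel = {
        "label": "Cancel",
        "x": screen_w - margin - btn_w,   # anchored to the right edge
        "y": screen_h - margin - btn_h,   # anchored to the bottom edge
        "w": btn_w, "h": btn_h,
    }
    ok = {
        "label": "OK",
        "x": cancel["x"] - margin - btn_w,  # "to the left of Cancel"
        "y": cancel["y"],
        "w": btn_w, "h": btn_h,
    }
    return [ok, cancel]

# The same declaration adapts to a phone screen...
phone = layout_dialog(480, 320)
# ...and to a desktop window, with no change to application code.
desktop = layout_dialog(1920, 1080)
```

The point is that the application never mentions coordinates: only the toolkit knows the screen, so only the toolkit computes them.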
Now that it has some power to decide how the UI will be rendered, the OS must be able to use this power wisely. Since we’re talking about moving applications from a bigger screen to a smaller one, the main task of the OS will be to remove controls from the application’s UI when their screen real estate cost is higher than the usability benefit of having them at hand. The question is: which controls should it remove, when, and how?
This is all a matter of having the UI designer define priorities. We all do this unconsciously when designing a toolbar for a desktop application: the rightmost buttons are the first to disappear when the window is shrunk, and the user only notices them after examining the leftmost ones (assuming they read from left to right), so we only put the most minor functions there. But for an app which is supposed to work on everything from a desktop PC to a touchscreen phone, that simple rule alone is no longer enough. Put a modern office suite on a phone without modification, and all you’ll get is a screen covered by a mess of menus, truncated toolbars and ribbons.
To avoid this, the OS must be able to hide whole toolbars, menus, and status bars. To collapse groups of buttons into a pop-up menu so that they take less screen space, giving direct access only to the most frequently used function. To not only hide, but sometimes even completely disable functionality in menus so that they become shorter. All these ways of freeing up screen space and simplifying the app will affect a wide range of controls, making our desktop app less discoverable in some way, so it is a compromise between screen real estate consumption and usability. Of course, there’s no way a computer program alone can make such decisions, so it must receive some help from the developers, who describe in some way which controls can be sacrificed first and what should be kept at all costs.
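One simple way the toolkit could act on those developer-supplied hints is a greedy pass: keep the highest-priority controls that fit, and collapse everything else into an overflow menu. The sketch below assumes a made-up data model (name, priority, width in pixels); real toolkits do not expose this.

```python
# Hypothetical sketch: each control declares a priority (higher = more
# essential) and a width cost; the toolkit keeps the most important
# controls that fit the available width and collapses the rest into
# a pop-up overflow menu.

def fit_controls(controls, available_width):
    """controls: list of (name, priority, width).
    Returns (visible, overflow), both in decreasing priority order."""
    visible, overflow, used = [], [], 0
    # Consider the most important controls first.
    for name, priority, width in sorted(controls, key=lambda c: -c[1]):
        if used + width <= available_width:
            visible.append(name)
            used += width
        else:
            overflow.append(name)  # collapsed into the pop-up menu
    return visible, overflow

# Made-up toolbar for a word processor.
toolbar = [
    ("bold", 9, 30), ("italic", 8, 30), ("font", 6, 120),
    ("styles", 3, 150), ("highlight", 2, 40),
]
visible, overflow = fit_controls(toolbar, 200)
```

On a 200-pixel-wide toolbar this keeps bold, italic and the font picker, and tucks the styles combobox and highlight button away, which is exactly the kind of decision the article argues the developer should delegate.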
Let’s take a word processor, as an example.
Obviously, the area displaying the actual document is the most important, and shouldn’t be hidden. Now, in terms of controls, let’s examine the various things a user has at hand but could do without if necessary:
- On a small device where all applications run full screen, it’s not necessary to display the name of the opened application in the titlebar. The menu in the top-left corner is also quite unnecessary.
- The menubar is used for functions which are only invoked infrequently, like page breaks. If needed, it could just as well be collapsed into a single button, say, an arrow button in the top-left corner.
- Most people only use the formatting toolbar in everyday work. We could ditch the other two if we need to save space.
- Within that toolbar itself, not everything is equal. Only advanced word processor users use styles, so we could reconsider dedicating a whole big combobox to them when we lack screen real estate, and leave just a button instead. Or even ditch it altogether if we really don’t have much room left.
- Then, if that is still not sufficient, we can drop the comboboxes for fonts and font sizes, and hide them behind a “font settings” button.
- Then we can hide the rulers and the increase/decrease indent buttons.
- Then we can hide the scrollbars and zooming controls, considering that small devices have multitouch screens which make dedicated scrolling/zooming controls superfluous. We could leave just a small indicator during scrolling, showing where we are in the document.
- Then we can merge the align left/right/center/justify buttons into a pop-up menu, only displaying the currently applied alignment.
- Then we can remove the highlight and background color settings.
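This progressive simplification is really just an ordered list of degradation steps that the toolkit applies one by one until the UI fits the device. A toy encoding of the list above (the function and constant names are invented for illustration):

```python
# Hypothetical sketch: the word processor's degradation steps, applied
# in order from the top until the UI fits the target device.

DEGRADATION_STEPS = [
    "hide application name in titlebar and top-left menu",
    "collapse menubar into a single button",
    "hide the two secondary toolbars",
    "replace styles combobox with a button, then drop it",
    "hide font comboboxes behind a 'font settings' button",
    "hide rulers and indent buttons",
    "hide scrollbars and zoom controls (rely on multitouch)",
    "merge alignment buttons into a pop-up menu",
    "remove highlight and background color settings",
]

def simplify(steps_needed):
    """Return the degradation steps a given device requires, in order."""
    return DEGRADATION_STEPS[:steps_needed]

# A tablet might only need the first few steps; a phone, all of them.
tablet_steps = simplify(3)
phone_steps = simplify(len(DEGRADATION_STEPS))
```

The developer writes this list once; each new device class just picks how far down it needs to go.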
At this point, what we get would be an acceptable smartphone office suite UI, so our software would in the end be capable of adapting itself to everything from a desktop computer to a smartphone without any UI rewrite.
So what is the key to cross-device software portability without modification? Defining, numerically, how important each feature is compared to the amount of space it takes on screen, then letting the OS do its auto-removal job based on those priorities and the screen real estate constraints. Though UIs would take more time to design (because of that additional priority attribution step), they would adapt themselves to a much wider range of devices once the design process is completed. And we could finally have a universal personal computing platform, definitively putting those silly “my smartphone is bigger than your desktop” debates to rest.
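That trade-off between importance and space cost can be stated formally: picking which controls stay visible is a knapsack problem, maximizing total usability value under a screen-space budget. A toy dynamic-programming sketch (all values and costs are made up for illustration):

```python
# Hypothetical sketch: choose the set of controls that maximizes total
# usability value without exceeding the screen-space budget (0/1 knapsack).

def pick_controls(controls, budget):
    """controls: list of (name, value, cost). Returns (best_value, names)."""
    best = {0: (0, [])}  # space used -> (total value, chosen controls)
    for name, value, cost in controls:
        # Snapshot the states so each control is taken at most once.
        for used, (val, chosen) in list(best.items()):
            new_used = used + cost
            if new_used <= budget:
                cand = (val + value, chosen + [name])
                if cand[0] > best.get(new_used, (-1, []))[0]:
                    best[new_used] = cand
    return max(best.values(), key=lambda t: t[0])

# Made-up numbers for a small screen with 70 units of space.
value, names = pick_controls(
    [("document area", 100, 50), ("toolbar", 30, 20),
     ("ruler", 5, 10), ("status bar", 8, 10)],
    budget=70)
```

With these numbers the document area and toolbar win out over the ruler and status bar, which matches the intuition from the word processor example: the OS keeps what pays its rent in usability per pixel.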