First of all, we should agree on what “ready for the desktop” actually means. For some of us it refers to a graphical user interface in which applications have icons and can be launched intuitively, without the need for complex commands. Even a Commodore 64 running Geos could be “ready for the desktop” by this definition, but the fact is that when we read “ready for the desktop” we understand “ready to replace Microsoft Windows”.
But this definition alone is not enough. Most people, and the mass media, understand that replacing Windows is not just about booting something other than Microsoft’s operating system; it also implies things such as commercial support, document compatibility, and the availability of office tools and other mainstream applications.
Above all, an operating system aspiring to replace Microsoft’s product must be able to provide at least the same commitment to the end user as Windows, in both political and software terms. This may sound like a contradiction in these days of security flaws, but I’ll develop the idea further in this article. We all agree that we can teach our parents how to browse the Web or write a “Word” document with Linux or, better yet, we can use some Windows theme that may fool more than one Microsoft veteran at first glance. This is not the point, though. By “ready for the desktop” we mean a system that can be used by someone without the help of a geek relative or a specialized magazine. More than that, we want to see Linux preinstalled by default by most of the top PC manufacturers. We want the latest games and hardware to be compatible with Linux. We want device drivers written by the same companies that produce the hardware, not by computer science students in their free time.
We know what “ready for the desktop” means, but what is Linux?
A kernel. Repeat after me: a kernel. No, it’s not Suse, and neither is it RedHat. Those are products that use the Linux kernel. But they are flavors of Linux, aren’t they? No, they are products that use specific Linux kernels. This means that an application compiled with one kernel in mind may not work with another one. For example, at the moment some distributions use the 2.4.x kernel while others use 2.6.x. An application targeting Suse Linux is thus not necessarily compatible with RedHat Linux, even though we read the word Linux in both products. Each distributor compiles and re-packages the mainstream applications for its own implementation.
The truth is that we don’t have Oracle or Java for “Linux” but for certain “certified” distributions, which are, in essence, conceptually different Windows contenders. So, at the end of the day, a “Linux application” is source code that you expect to compile on most distributions, and the kernel alone doesn’t guarantee that it will compile; the host will probably need a specific shell and a precise set of shell utilities. It’s not uncommon to find that a make script calls some shell utility that our distribution of choice doesn’t happen to have. When we refer to a “Windows” application, we refer to a program that we expect to run on any “Windows” flavor (unless it is specialized software that needs some special feature of the NT kernel series, such as certain server applications). If I have a CD-ROM encyclopedia for Windows I expect it to run without problems on Windows 95, 98, ME, XP, 2000, etc. If I have the same product for Linux, it will be compatible with very specific distributions, and if the software is in binary form it will probably break after a few years, because we all know that binary longevity in Linux is not guaranteed. I’m not talking about the ELF format; I mean, generically, that binaries rely on too many dynamic libraries that are usually tied to the target kernel release. Linux binaries usually don’t work out of the box; they often cry, complaining about the lack of dynamic libraries or, worse yet, glibc.
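The library complaint above can be made concrete with a small sketch. This is not a real dynamic loader, just a conceptual model of what happens at startup: a binary records the sonames it was linked against, and if the target distribution ships different versions, loading fails. All the names here (myapp, libfoo, the version numbers) are hypothetical.

```python
# Conceptual sketch (not a real loader): why a Linux binary "cries" about
# missing dynamic libraries. A binary records the sonames it was linked
# against; if the target system ships different versions, it cannot start.

def check_dependencies(required, installed):
    """Return the sonames the binary needs that the system cannot provide."""
    return [soname for soname in required if soname not in installed]

# Sonames a hypothetical binary was linked against on "Distro A".
myapp_needs = ["libc.so.6", "libstdc++.so.5", "libfoo.so.2"]

# Libraries actually present on "Distro B", which ships newer versions.
distro_b_has = {"libc.so.6", "libstdc++.so.6", "libfoo.so.3"}

missing = check_dependencies(myapp_needs, distro_b_has)
print(missing)  # the loader would refuse to start myapp
# → ['libstdc++.so.5', 'libfoo.so.2']
```

On a real system, `ldd ./myapp` performs this resolution and prints “not found” next to each unresolvable library, which is exactly the situation described above.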
KDE & Gnome
The point here is not which one is better. There are countless articles on the matter; the problem is that we have two of them. Please, let’s not talk about personal taste and freedom of choice. Let’s talk instead about incongruity, incompatibility and development effort. Should car drivers choose whether the gas pedal goes on the left or the right? No, they may want different colors or seats. We expect to apply the lessons learned while obtaining a driver’s license to all cars. Could we say the same about Linux? Will our private lessons on Lindows be of any help to our parents when they receive their new computer running Suse or Mandrake? If an old relative calls you long distance telling you that he runs Linux and that he can’t get onto the Internet, can you give him instructions as clear as “Press Start, choose Run…, type cmd and then ipconfig”? No, because a Linux desktop doesn’t have a standard way to open the command prompt. The most elemental tool, the shell, changes its icon location according to the window manager. Pressing some strange Alt+Ctrl combination to obtain the console mode is not an option; many LCD monitors don’t even support this text mode, especially when it’s not the standard 80×25.
There are ongoing efforts to integrate these two desktops, to make a Gnome application look like a KDE application and vice versa. It’s a good start, but if we intend to make both environments look the same, why should we have two different graphical APIs? There should be one “official” desktop for the end user. The remaining toolkits don’t need to die; they can be used for academic or hobby purposes.
Running Gnome & KDE at the same time is only good for a transitional period. The X environment is already heavy; what about loading all the libraries for both KDE and Gnome just because the developer wants to choose the API she likes? The soap opera doesn’t end here. What about when the developer chooses to use the latest API and asks the user to download and recompile the latest KDE/Gnome release? Horrible. Bundling the latest toolkit library, even statically compiled, is not necessarily a bad thing. The end user shouldn’t even need to know what a toolkit is; she just wants to download, double-click and go.
Poor low-level desktop integration
When Windows 95 arrived, everybody complained, claiming it was just a “mask” and that underneath it was pure MS-DOS. I too subscribed to this idea at the time, but X is much more of a mask over the command-line-based Linux distribution than Windows 95 ever was over MS-DOS. I won’t get into the details of whether this is good or not for system stability, but the facts are clear to everybody: the Linux desktop is slow and poorly integrated. A getPixel()/putPixel() call is much more expensive in Linux than in Windows. Raise your hand if you thought “but you can project the desktop over the network”. 99% of users don’t care about this; should we give them a desktop that is twice as slow just to leave open the option of sending a pixel write over the network? The Linux desktop must get low-level graphics integration as soon as possible. There are some projects on the matter, but half of the developers consider it not worth the price.
Besides graphics, the integration with the command-line environment is also poor. If you change a setting using the command line you are usually “on your own”, and you cannot expect to see these changes reflected in the graphical version of the tool. The graphical configuration tools are aimed at “those who don’t know how to edit config text files”, which is in my opinion an awful approach. Instead, graphical tools should provide an additional way of modifying these files. How many times have you found a script that calls another one, with a comment that says “don’t touch this, generated automatically by Kjoe”? The Linux desktop will never get far with this kind of hack. It is true that part of the problem is that many utilities, such as sendmail, have configuration files so badly designed that it is very hard to reconstruct them with a GUI parser, but hey, what about XML? Every application should be configurable either by hand using a text editor or by a GUI application, using ONE configuration file. Programmers should start to write “GUI friendly” configuration files.
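A “GUI friendly” configuration file could look something like the sketch below: one XML document that a text editor and a GUI tool can both modify, with no second, generated, “don’t touch this” file. The file layout and the settings (the network/interface structure, the hypothetical eth0 values) are invented for illustration; Python’s standard xml.etree.ElementTree module stands in for whatever parser a real GUI tool would use.

```python
# Sketch of a "GUI friendly" configuration file: one XML document that a
# text editor and a GUI tool can both modify without a second,
# machine-generated file. Settings and file layout are hypothetical.
import xml.etree.ElementTree as ET

xml_text = """<network>
  <interface name="eth0">
    <address>192.168.1.10</address>
    <mtu>1500</mtu>
  </interface>
</network>"""

root = ET.fromstring(xml_text)
iface = root.find("interface")

# The "GUI" changes the MTU programmatically...
iface.find("mtu").text = "1492"

# ...and can still read the address the user may have edited by hand.
print(iface.find("address").text)   # 192.168.1.10
print(iface.find("mtu").text)       # 1492
```

Because both paths read and write the same document, a hand edit and a GUI edit can never drift apart, which is exactly the single-configuration-file requirement argued for above.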
I don’t understand why so many people complain about the lack of applications for Linux. This is probably its strongest side. It is true that some killer applications such as Cubase (for audio production) or Photoshop are missing, but these applications will never be ported to Linux unless it first does some house tidying (defines a standard desktop and removes X, or uses it in a way that “doesn’t hurt”). Although a personal example is always subjective, I can say that while I use Windows XP as my primary desktop, I don’t use a single application that isn’t available on Linux; in fact most of them were born on Linux and have been ported to Windows: Mozilla and OpenOffice, just to name a few. Most of these applications run much slower on Linux; on my Windows machine (an AMD 2500+ PC), even OpenOffice opens in less than two seconds. I have Unix tools installed so I can use most of the common UNIX shell commands. Windows simply provides me a well-integrated, hardware-friendly desktop. I could be running these applications on Mac OS X as well.
What we need
Let’s leave aside those who want Linux as a hacker tool or as a “matter of choice” product; for those, Linux is already a stellar system. However, a system based 100% on open standards and open source software, free at least in its most basic form, and as easy and fast as Windows, is the dream of most of us. We want to develop for the general public, not just for other freaks like us. But everything comes at a price, one which many of the hardcore developers aren’t willing to pay, because many of them don’t understand that “better” is often the enemy of “good”. In my humble opinion, a Linux-based solution that aims to replace Windows should consider at least these ideas:
a) A Foundation Operating System
We have a kernel (Linux), not an operating system. An operating system contains a kernel and other applications, like the shell that runs “shell utilities” such as ls, cd, mv, etc. It also has a boot loading system (LILO in many cases). The kernel also accepts modules that extend it and allow the operating system to recognize base and new hardware. As Linux is just the kernel, let’s call this the FOS (Foundation Operating System). The FOS should provide a standard kernel (with a guaranteed set of drivers), one shell by default, a standard set of utilities and a configuration system for base services such as TCP/IP and hard drives. In practice, any distribution is already a FOS of its own, but what we need is a common foundation. The United Linux project seems to have this idea in mind, but not all distributions adhere to this initiative.
b) Binary longevity
Breaking binaries is a bad thing, and it’s hard to find an excuse to justify it. The promise of light-speed processor-specific applications didn’t materialize. Nobody wants to spend three days compiling an operating system just to gain 20% in speed when a 20% faster processor maybe costs just 50 bucks more. The end user should not need a development environment; it’s like saying that a car driver must have a workshop at home to service his car. It is OK, and also advisable, to include as many interpreters as possible, including Perl, Python, Mono and Java. Applications for these interpreters should be distributed using some sort of FOS standard, though. Newer kernels (and glibc libraries) should not break binaries compiled against older versions. Commercial software and games will never take off if binary longevity is not guaranteed. Who wants to buy an expensive encyclopedia that will potentially break with the next operating system upgrade? Binary longevity doesn’t mean that all applications must be 386-compatible. Many Windows applications include portions of code that are activated only if a given processor is detected. It’s also possible to bundle more than one binary as long as a default, compatible one is supplied. The only incompatible binaries will be those compiled for radically different processors, for example PowerPC or Sparc. Binary longevity is end-user commitment.
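The “bundle more than one binary” idea can be sketched in a few lines: ship a baseline build that runs everywhere plus optional optimized builds, and pick one at runtime. The flag names mirror what Linux exposes in /proc/cpuinfo, but the bundle layout and file names (myapp.sse2, myapp.i386) are hypothetical.

```python
# Sketch of binary longevity coexisting with processor-specific code:
# a launcher picks the best bundled binary the current CPU can execute,
# always falling back to a compatible default. Bundle layout is invented.

def pick_binary(cpu_flags, bundle):
    """Choose the first bundled binary whose CPU requirement is met."""
    # Entries are ordered most specific first; None means "no requirement".
    for required_flag, binary in bundle:
        if required_flag is None or required_flag in cpu_flags:
            return binary
    raise RuntimeError("bundle must contain a default binary")

bundle = [
    ("sse2", "myapp.sse2"),   # optimized build, needs SSE2
    (None,   "myapp.i386"),   # default build, runs on any x86
]

print(pick_binary({"fpu", "mmx", "sse", "sse2"}, bundle))  # myapp.sse2
print(pick_binary({"fpu", "mmx"}, bundle))                 # myapp.i386
```

This is essentially what the Windows applications mentioned above do internally; doing it at the launcher level keeps the default binary compatible forever while still rewarding newer processors.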
c) Standard driver system
Drivers should be properly registered in a given category (graphics, disk, etc). It should be straightforward to include and remove a driver either from the command line or from the desktop. All drivers should be able to properly describe themselves. There must be a user-friendly driver tree in the file system for storing driver modules. Although Windows has a rather good system to install, uninstall and find drivers, it is very hard to locate them using the command line: most of them are mixed together and have short, non-descriptive names such as NVD5443.SYS. It would be nice to be able to specify a driver at boot time when you need to include an unsupported device such as a SATA hard drive. It is very rude to ask for a “brand new kernel” just to be able to install the operating system.
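What “self-describing, categorized drivers” might mean in practice is sketched below: each module declares its category and a human-readable description, so both a GUI and the command line can list drivers by purpose instead of guessing from names like NVD5443.SYS. The registry format, module names and categories are all hypothetical.

```python
# Sketch of a self-describing driver registry: every module carries its
# category and description, so tools can browse by purpose. All entries
# here are invented for illustration.

drivers = [
    {"module": "nv_accel",  "category": "graphics", "description": "NVidia accelerated graphics"},
    {"module": "sata_ctl",  "category": "disk",     "description": "Generic SATA controller"},
    {"module": "rtl_ether", "category": "network",  "description": "Realtek Ethernet adapter"},
]

def list_category(category):
    """Return human-readable lines for every driver in a category."""
    return [f'{d["module"]}: {d["description"]}'
            for d in drivers if d["category"] == category]

for line in list_category("disk"):
    print(line)  # sata_ctl: Generic SATA controller
```

With metadata like this, “show me every disk driver” becomes a one-line query for the desktop and the command line alike, instead of a guessing game over a flat directory of cryptic file names.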
d) A common well-integrated desktop
I don’t know whether it should be KDE, Gnome or something else, but there must be only one. The most important thing is that the desktop should be tightly integrated with the underlying command-line environment. I don’t think it is necessary to demand the presence of the graphical desktop; a bare-bones command-line foundation is, in my view, a good thing (most users will probably never choose not to install the GUI, though). We all know how terrible it is to repair Windows when it doesn’t boot in graphical mode. The point is that although it is not necessary to allow 100% of the configuration through the GUI, the configuration files used should be the same as those used via the command line. For example, maybe you can’t change the MTU setting using the GUI, but if you touch the file with a text editor, you can still change the IP address using the GUI without having two configuration files and without breaking something else. If the user breaks the text configuration file, the GUI should suggest that she use the last working version. Standard folders such as “My Documents” must be provided. Applications should be packaged using a standard installer based on some sort of user-friendly script system. There should be a dedicated folder for applications; although the user may specify where to install them, a default “Program Files” folder is a must. Both on Windows and Mac OS X it is very clear where applications go once you install them; the same cannot be said about Linux. It is easy to locate the applications bundled by your distributor of choice, but if you download a new application and forget where you installed it, you are often lost.
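The “suggest the last working version” behavior is simple to sketch: the GUI keeps a copy of the last configuration it parsed successfully and falls back to it when the hand-edited file turns out to be broken. The configuration content is hypothetical, and a real tool would of course work with files on disk rather than strings.

```python
# Sketch of "offer the last working version": the GUI keeps a backup of
# the last configuration it parsed successfully and falls back to it when
# the hand-edited file is broken. Content here is invented.
import xml.etree.ElementTree as ET

last_working = "<network><mtu>1500</mtu></network>"

def load_config(text, backup):
    """Parse the user's config; on failure, fall back to the backup."""
    try:
        return ET.fromstring(text), False
    except ET.ParseError:
        return ET.fromstring(backup), True   # signal that we restored

# The user hand-edited the file and broke the markup (mismatched tag).
broken = "<network><mtu>1492</network>"
config, restored = load_config(broken, last_working)
print(restored)                  # True
print(config.find("mtu").text)   # 1500
```

The `restored` flag is what the GUI would use to tell the user “your edit didn’t parse; I’m showing the last working configuration instead”, rather than silently generating a second file.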
As you can see, most of the ingredients needed to turn Linux into a winning operating system exist and are available now. We need standards and sage political decisions to prepare a good product ready for widespread distribution. If Linux continues to be driven by students who believe in freedom of choice and anarchy rather than in standards, with companies fighting to become the de facto standard while proposing their own proprietary systems, we will never get there. I want to see the day when I can walk into a store and purchase a so-called “Linux application” that I will be able to install with the ease of any Windows or Mac OS X application. This is not possible at the moment, because “Linux” is just not ready for the desktop yet.
About the author:
I started computer programming in 1990. I have programmed in BASIC, C and Perl in the past, and I currently specialize in Java technologies. I have used many operating systems, including AmigaOS, Digital Unix, Slackware Linux and MS-Windows.
If you would like to see your thoughts or experiences with technology published, please consider writing an article for OSNews.