Linked by David Adams on Mon 24th Aug 2009 09:21 UTC
Linux A reader asks: Why is Linux still not as user friendly as the two other main OSes with all the people developing for Linux? Is it because it is mainly developed by geeks? My initial feeling when reading this question was that it was kind of a throwaway, kind of a slam disguised as a genuine question. But the more I thought about it, the more intrigued I felt. There truly is a large number of resources being dedicated to the development of Linux and its operating system halo (DEs, drivers, apps, etc). Some of these resources are from large companies (IBM, Red Hat, Novell). Why isn't Linux more user-friendly? Is this an inherent limitation of open source software?
Permalink for comment 380286
wannabe geek

I won't say anything new, just a summary of what I think are the deepest technical problems with GNU/Linux, and some hints on what to do about them.

I think the main problem is that the Unix model was developed for a very different computing environment than modern desktops, especially regarding hardware compatibility, graphical environments and third-party software.

In the big servers of old, hardware was rarely replaced or updated, let alone hot-plugged, so hardware incompatibility problems were not an everyday concern, and high-quality drivers were taken for granted. It was therefore acceptable to use monolithic kernels, where all drivers must be trusted, in exchange for higher performance.

An open-source operating system that must support drivers from a myriad of sources, sometimes of dubious quality, some of them FOSS, some of them proprietary, should be especially resilient when it comes to driver failure. A micro-kernel architecture, as in Minix 3 or the Hurd, seems better suited. Of course those systems have problems of their own, but I think it's worth exploring any trend towards more modular, resilient hardware support, such as managed kernels like SharpOS and Cosmos, or new architectures like Genode.
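To make the resilience argument concrete, here is a minimal sketch (in Python, purely illustrative) of the micro-kernel idea: a driver runs as an isolated, restartable component behind a supervisor, so a driver fault becomes a restart-and-retry rather than a kernel panic. All the names (DriverCrash, FlakyDriver, Supervisor) are invented for this example.

```python
class DriverCrash(Exception):
    """Stands in for a fault that would take down a monolithic kernel."""
    pass

class FlakyDriver:
    # Class-level counter: this driver crashes exactly once, then recovers.
    crashes_left = 1

    def handle(self, request):
        if FlakyDriver.crashes_left > 0:
            FlakyDriver.crashes_left -= 1
            raise DriverCrash("driver fault")
        return f"handled {request}"

class Supervisor:
    """Restarts a crashed driver in isolation instead of failing globally."""
    def __init__(self, driver_factory):
        self.factory = driver_factory
        self.driver = driver_factory()
        self.restarts = 0

    def request(self, req):
        try:
            return self.driver.handle(req)
        except DriverCrash:
            self.driver = self.factory()   # spawn a fresh driver instance
            self.restarts += 1
            return self.driver.handle(req)  # retry the failed request once

sup = Supervisor(FlakyDriver)
result = sup.request("read block 42")  # driver crashes once, is restarted
```

In a monolithic kernel the equivalent of DriverCrash propagates into shared kernel state; in the micro-kernel model it is contained to one server process, which is the whole point of the architecture.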

Then there's the fact that GUIs are treated like a luxury, as if anything beyond the CLI were superfluous eye-candy. That may be fine for a server, but when it comes to a desktop, the GUI is a basic service; most modern end-user applications don't even have a CLI version.

Recent experimental improvements like kernel mode-setting and rootless X look like steps in the right direction, but maybe the X windowing system should be replaced altogether. In any case, recovery from graphical problems should be as quick, automated and incident-free as possible.

Last, but not least, regarding third-party software: it's my understanding that in the '70s most software either came with the computer or was written directly by its users, often in the form of shell scripts and small C programs. There was no need to integrate dozens of applications using different versions of hundreds of libraries; package management was not really an issue. Most package management systems today favor memory efficiency at the expense of stability: they do a great job of replacing redundant libraries with slightly different shared versions, but they can't selectively leave a crucial application's libraries alone if the user so wishes, so updating one application can always break another. This is rare on other operating systems, where user applications are clearly separated from the system libraries on which they depend.
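The breakage mode described above can be modeled in a few lines (a toy model, not a real package manager): one shared copy of a library, and an upgrade driven by one app silently changing what another app links against.

```python
# A single shared library store: one version of "libfoo" for everyone.
shared_libs = {"libfoo": "1.0"}

def run_app_a():
    """app_a was built against libfoo 1.x and breaks on anything else."""
    if not shared_libs["libfoo"].startswith("1."):
        raise RuntimeError("app_a: incompatible libfoo")
    return "app_a ok"

def install_app_b():
    """app_b requires libfoo 2.0; the upgrade replaces the shared copy."""
    shared_libs["libfoo"] = "2.0"

before = run_app_a()   # works: libfoo is still 1.0
install_app_b()        # upgrades the one shared libfoo to 2.0
try:
    run_app_a()
    app_a_broken = False
except RuntimeError:
    app_a_broken = True  # installing app_b broke app_a
```

Nothing in the model is malicious; the breakage falls out of the single-shared-copy design itself, which is the bias against stability the paragraph above complains about.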

There is indeed some beauty to the notion of blurring the line between system libraries and user applications, as package management in GNU/Linux seems to imply, but then the system should be much more carefully thought out, so as to support multiple versions of libraries and maybe even different system and user configurations. Alternative distributions such as GoboLinux and NixOS, and systems like Zero Install, may be leading the way.
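The core idea behind those systems can be sketched the same way (again a hypothetical model, loosely inspired by the Nix/GoboLinux approach): libraries live under version-specific paths, and each application pins the exact version it was built against, so two versions coexist and no upgrade is ever a silent replacement.

```python
# The store holds every (library, version) pair side by side.
store = {
    ("libfoo", "1.0"): "libfoo-1.0 code",
    ("libfoo", "2.0"): "libfoo-2.0 code",
}

class App:
    """An application with exact (library, version) pins."""
    def __init__(self, name, deps):
        self.name = name
        self.deps = deps

    def run(self):
        # Each app resolves its dependencies by pinned version,
        # never through a single mutable shared copy.
        return [store[dep] for dep in self.deps]

app_a = App("app_a", [("libfoo", "1.0")])
app_b = App("app_b", [("libfoo", "2.0")])
```

Installing app_b here just adds ("libfoo", "2.0") to the store; app_a's pin to 1.0 is untouched, which is exactly the property the mainstream shared-library model lacks.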

Reply Score: 1