Linked by Eugenia Loli on Tue 30th Sep 2003 05:20 UTC
General Development Amid the renovation of freedesktop.org's web site, David Zeuthen announced the release of HAL 0.1. HAL is an implementation of a hardware abstraction layer, as defined by Havoc Pennington's paper. It encompasses a shared library for use in applications, a daemon, a hotplug tool, command line tools and a set of stock device info files. Carlos Perelló Marín also announced the design of a similar concept, but the two projects are expected to merge. More people are encouraged to join this innovative project. Elsewhere, Gnome's Seth Nickell is giving us a first taste of his effort to replace the Init system.
@ Rayiner Hashem
by oGALAXYo on Wed 1st Oct 2003 00:42 UTC

Look, I have an XP 1600 here, 256 MB RAM, and a 15 GB IBM hard disk. My entire system is compiled from source, which guarantees a consistent performance gain over the binaries some distros provide, because it uses the opcodes suited to the actual CPU rather than being limited to the i386 or i486 opcodes some distros compile their packages with.

The gain per program is minimal, but measured across the whole system it adds up to a substantial performance boost, and that with just the standard -O2 and the athlon-xp arch.
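For illustration, the whole difference is nothing more than the -march flag passed to GCC (the file names here are just placeholders):

    # what many distros ship: lowest-common-denominator opcodes
    gcc -O2 -march=i386 -o app app.c

    # what a source-based build on this box can use instead
    gcc -O2 -march=athlon-xp -o app app.c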

Furthermore, my hardware is in good shape: I don't encounter any IRQ conflicts, and my hard disk operates with 32-bit I/O and UDMA/66 DMA access. I'm running the entire system on 2.6.0-test6 as of now, using XFS from SGI. I think this system is quite decent. Probably not the best, but comparatively performant.
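To be concrete, that disk setup amounts to roughly the following hdparm invocation (assuming the disk is /dev/hda; the exact device differs per machine):

    # show the current I/O and DMA settings
    hdparm -c -d /dev/hda

    # enable 32-bit I/O and DMA, select UDMA mode 4 (= UDMA/66)
    hdparm -c1 -d1 -X udma4 /dev/hda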

When you load GNOME through GDM, only one main task is happening at any given moment (symbolically speaking): the kernel boots, the init process starts, init launches GDM and X, you log in, and you are presented with GNOME.
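That chain is visible in a SysV-style /etc/inittab; the following excerpt is only a sketch, since paths and runlevels vary per distro:

    # boot into runlevel 5 (graphical)
    id:5:initdefault:

    # init keeps the display manager running (GDM, which in turn starts X)
    x:5:respawn:/usr/sbin/gdm -nodaemon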

When you load one app, then another, you are doing single tasks as a user. You run an application, you use it, you run another one, and so on. During all this time the CPU is mostly idle, and the hard disk operates at a normal rate, syncing its cache to the device every now and then.

I must admit I was highly impressed by the huge improvements 2.5.x (2.6.x) brought after I switched from 2.4.x some months ago: the swap storms disappeared, the system became more responsive, and so on. But hardware is limited; there is a point beyond which you can't get more out of it. Especially not the hardware people have at home, and I doubt people go out every day and upgrade their systems. It's a dumb lie to say that if you want to use GNOME today you should buy the newest hardware junk you can get. Not everyone finds money lying on the street.

Now the limitations:

Say you copy (not move, which would only change the entries in the filesystem's tables) a huge amount of data from location A to location B while doing normal work in GNOME, e.g. listening to sounds or opening a GNOME Terminal. What happens? We know the new Linux kernel is performant and is built to manage resources well, but as soon as it runs into the physical limits of the hardware, things start to cause problems. Copying files means the data in the sectors at location A is physically written out again at location B, which we sometimes hear as the disk scratching away.

So the disk is busy scratching data from A to B, and now you turn back to your GNOME desktop (the scratching still going on because of the huge transfer) and start some GNOME applications. ld.so kicks in and tries to map all the libraries the application requires, and under GNOME it's not uncommon for an app to pull in 50-70 libraries. Simply run 'ldd gnome-terminal', for example: it spits out everything gnome-terminal alone drags in, and each of those libraries declares further dependencies of its own.

Say you also have a swap partition on the same disk and run out of memory: then you get even more scratching, GNOME becomes even less responsive, applications may crash and re-spawn every now and then, and you end up with the disk thrashing permanently because the heads have to keep jumping across the platters.
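You can see both levels of the dependency picture from the shell (the binary path is the usual one, but may differ on your system):

    # full set of libraries the dynamic linker must map at startup
    ldd /usr/bin/gnome-terminal

    # count them
    ldd /usr/bin/gnome-terminal | wc -l

    # only the libraries gnome-terminal itself links against directly
    objdump -p /usr/bin/gnome-terminal | grep NEEDED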

Of course you would answer that in a normal world everyone has a 7200 rpm hard disk, preferably RAID-based, no swap space, 2 GB of physical RAM and a 3 GHz Pentium 4. Dream on; this is most likely not the case.

Now, from reading this article, and from knowing the GNOME codebase myself: they have a lot of duplicated stuff just to stay operating-system independent, e.g. wrappers for functions that may be missing on BSD, some that may be missing on Linux, Solaris or Darwin (I don't know the details yet).

These duplicated things, and the dependency on ancient technology, are what cause the drastic speed losses. What we have now is a Linux kernel (speaking of it specifically) which does the hardware initialisation and exposes its devices through devfs, udev or plain static nodes (the old way). You can't control the hardware through a GUI system like GNOME because Linux wasn't designed that way, and they built the desktop on top of XFree86 because there wasn't much of an alternative when they started years back. What do we have now? A kernel that doesn't offer the 'hardware just works' paradigm, and an XFree86 system which is horribly bloated.
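For those who haven't dealt with 'the old way': static nodes are created once by hand with fixed major/minor numbers, while devfs and udev create them dynamically as the kernel detects the hardware. A classic example (major 3, minor 0 is the first IDE disk):

    # create the block device node for the first IDE disk by hand
    mknod /dev/hda b 3 0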

Now the GNOME people are trying to fix these problems by a) forking XFree86, b) writing wrappers around the libraries so they run on all systems, c) adding layer upon layer on top of existing solutions, e.g. replacing init or adding HAL in a place where it doesn't belong. All of this is a signal (with all respect to the authors) that something is seriously wrong. These are limitations they try to work around with bad solutions. Adding more complexity in libraries, development tools and the like gains nothing, and I ask myself whether it wouldn't be better for GNOME to write a kernel around their desktop that provides them exactly this:

a) staying platform independent (i.e. working on all hardware),
b) implementing their needs at the bottom layer of the kernel, which gains overall speed because it is done at the right level,
c) using a 2d/3d accelerated framebuffer for the graphics (see the sketch after this list).
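To illustrate how raw point c) already is today: with a framebuffer console driver loaded (vesafb, radeonfb, ...) the display is just a device node, so anything can paint on it; fbset reports the current mode:

    # query the current framebuffer resolution and depth
    fbset -i

    # crude demo: fill the visible screen with noise by writing raw pixels
    cat /dev/urandom > /dev/fb0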

This would push a lot of functionality down to the kernel level and would make a few libraries disappear entirely, or see those ideas done differently. And of course this is purely theoretical and will probably never happen (as a little side note, there was a GNOME-OS mailing list some time back).

Right now we can expect these hacks to make it into GNOME or KDE, and I want you to know that Linux and open source really have changed. These ideas are HUGE changes, not just to the desktop but to the entire philosophy of Linux itself. There isn't much choice either these days, because (for people who aren't blinded, and who use their brains) everything has settled between GNOME and KDE; there is no room for alternatives anymore, and all that remains is talking and defending ideals and ideas.

While I do agree that being able to control hardware from the desktop is desirable, I also see the reality that these solutions are not really sane. How many libraries and duplicated new layers do we actually need before we can use a nice desktop? Right now the number of libraries is overwhelming, an overkill. But I don't expect anyone here to understand these things. They only see fancy icons and recite the freedesktop prayer book as if everything in it were right.