“Once upon a time, a Linux distribution would be installed with a /dev directory fully populated with device files. Most of them represented hardware which would never be present on the installed system, but they needed to be there just in case. Toward the end of this era, it was not uncommon to find systems with around 20,000 special files in /dev, and the number continued to grow. This scheme was unwieldy at best, and the growing number of hotpluggable devices (and devices in general) threatened to make the whole structure collapse under its own weight. Something, clearly, needed to be done.” The solution came in the form of udev, and udev uses rules to determine how it should handle devices. This allows distributors to tweak how they want devices to be handled. “Or maybe not. Udev maintainer Kay Sievers has recently let it be known that he would like all distributors to be using the set of udev rules shipped with the program itself.” ComputerWorld dives into the situation.
Ah, the wonderful world of Linux standards… or rather, the lack of them. However, this is a step in the right direction. No, really? Let’s make all distros use the same udev rules, so device names are the same? Perish the thought! Somehow, I’m not surprised to see Debian at the forefront of the opposition–they seem to believe that anyone who doesn’t follow their guidelines is an idiot who needs to be educated. If their udev rules are so much more elegant than the default, I for one agree with the maintainer of udev: send them along upstream. I don’t understand why Debian has to hold out against just about every standard. But then, I guess it’s their choice to do so. I just hope that most distros abide by this request to standardize udev. Then, hopefully, we can standardize the rest… eventually.
What’s interesting, though, is that the Linux distros this is aimed at are the ones that patch everything anyway–Debian, Ubuntu, Fedora, SuSE, etc. Since they already patch the hell out of everything, including their kernels, it’s unsurprising they patch udev. I don’t use any of these distros anymore, as I got sick of the quirks introduced by all these so-called improvements (I started on Slackware, and I still prefer the lean and mean, mostly upstream-compliant distros). So, I believe this udev issue is simply the tip of the iceberg, though likely to set a precedent for how distros deal with patching and upstream in general.
Just FYI, Fedora’s policy is to submit all patches upstream: http://fedoraproject.org/wiki/PackageMaintainers/WhyUpstream
The sad reality is that it is quite tough to enforce rules of any sort in the open source world. We all build what we want to build, and if someone does not like it, they don’t have to use it. (Most projects have to operate this way because that’s the only way to attract developers.)
It is only when projects have significant support, led by benevolent dictators, that any sort of standards are enforced.
Take DeviceKit as an example. DeviceKit had promise: its goal was to be a ‘small’ replacement for hal. But now the DeviceKit developer has decided to make this system daemon dependent on the entire glib/GObject framework (because, I guess, he is also a GNOME developer). If my device does not require GObject, I either have to put up with the bloatware or rewrite/fork it. This is a system daemon that’s supposed to be all about interoperability!?!
No way we’ll ever get interoperability until the primary OSS corporate sponsors agree to it and enforce it among their own developers.
How ironic. First you complain about rule enforcement. But when someone does something that you don’t agree with, then you suddenly don’t want the rule to be enforced?
As for “bloat”: just how is this bloat? The DeviceKit author has two choices:
1. Write all the required functionality himself.
2. Use the functionality provided by a library, which is shared by many other applications and libraries.
(1) would logically result in *more* bloat because the developer ends up duplicating functionality that can already be found elsewhere.
And really, you call GObject/glib “bloat”? Dude, it’s a library of less than 2 MB, shared by a bazillion Linux apps! The C standard library is bigger than this! You do know that the memory used by glib is shared by all those apps, right?
Very true. Qt4 itself uses GObject so Gtk+ and Qt can use the same common main loop and thus exchange information.
Glib is simply a C library with data types such as sets, lists, etc, and a few abstractions such as tasks. It no more belongs to GNOME than freetype.
Qt 4 can be (optionally) plugged into the Glib main loop. It doesn’t use GObject itself.
It’s optional, but the default is to use the GLib event loop in Qt4/KDE4.
I agree with the idea that most mainstream distros should be using the same default udev rules, but I don’t think application writers should rely on the /dev filenames always being the same. The nice thing about udev and all the other various new Linux hardware APIs (e.g. HAL) is that they provide an abstraction above direct queries of the traditionally populated /dev filesystem. This is good for many reasons that I’m sure many people here are aware of, not least of which is extensibility.
The rules should be written for ease of understanding and maintenance by system administrators, and should remain flexible for individual sysadmins’ particular needs. Software writers should ensure their software works even when you play around with the rules. This system-by-system flexibility is where Linux is currently a lot stronger than Windows, and I fear that if these types of standards became too prevalent, app writers would start depending on facts that are virtually always true by convention, rather than always true by specification.
Edited 2008-08-23 16:46 UTC
Windows has had a HAL for over a decade now. Not the main thing I wanted to respond to, but the Linux HAL was far from revolutionary; it is something Linux was way behind on.
<rant>
Convention is almost always more important, and more useful, than specifications. Specs are usually a lowest common denominator, and typically convention goes far beyond specification. For example, the ANSI SQL spec has no way to do stored procedures in a database. SProcs are a vital feature of any halfway decent RDBMS, and any one worth its salt has a really good implementation of the feature. If you put out a database and didn’t have support for sprocs, chances are nobody would ever use your product. They would have good reason to.
The other thing is that specs are generally put out by consortiums of companies with competing interests, and what gets in or doesn’t tends to have as much to do with politics as technical merit. Conventions occur when everyone does something a certain way because that is the way that has proven itself to be the best.
</rant>
Edited 2008-08-24 04:05 UTC
The Windows HAL is not the same as HAL the program, even though they have the same name. HAL the program provides information to user programs about devices and suchlike (the kernel already provided this information; HAL just makes it easier to get it); hal.dll is the lowest level of NT, and performs I/O and handles interrupts for the NT kernel.
Quite right!
The Windows equivalent to the Linux “hal” is probably the SetupDi API set (which is not the most pleasant to use… so people wrap it).
Specification: This is what we want you to do.
Convention: This is how things are actually done in the real world.
Convention beats specification most every time. Almost by definition, really.
The standard udev rules may need to put up with some ugly cruft for a while, in the form of legacy device names. Of course, it’s easy for me to say that since I don’t have to support the additional maintenance overhead. But there is a notable (excruciatingly slow, but relentless and pragmatic) trend to eliminate arbitrary “differences which make no difference” in our Linux world. Someday, things’ll be brighter. And this issue will be a non-issue that we will all have to give newbies a historical perspective on before explaining the actual “problem”.
Our lives and our consciousnesses are but cursors on the progress bar of time.
Edited 2008-08-24 20:10 UTC
As others have said, the Linux HAL is something entirely different from the Windows HAL.
About your convention argument, it’s true that convention pretty much always is more important than specification in the real world. However, what I’m trying to argue is that application writers should depend on something that will *always* be true: for instance, they can find the first hard drive through hal, rather than assuming, for example, that the first hard drive is always /dev/hda. A poor example, but I think it illustrates what I mean.
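As a minimal sketch of that idea on a Linux box, a script can look at the stable /dev/disk/by-id/ symlinks (maintained by udev’s persistent-naming rules, where present) instead of hard-coding /dev/hda:

```shell
# List udev's persistent identifiers for block devices, if present.
# These symlinks stay stable across reboots and probe order, unlike /dev/hda.
if [ -d /dev/disk/by-id ]; then
    ls -l /dev/disk/by-id/
else
    echo "no persistent-naming symlinks on this system"
fi
```

Note the check: the script degrades gracefully on systems without those rules, which is exactly the “works even when you play around with the rules” property argued for above.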
In the case of SQL, if you intend for your program to work with every SQL implementation, you shouldn’t use SQL SPs. You could instead target your application at all SQL implementations that support SPs. I take it that most app developers are targeting all modern Linux systems, rather than ones configured in a particular manner.
The point is, while convention essentially overrides specification, trying to follow the specifications will enable users, administrators, packagers, etc. to be more sure that it’ll work properly on their system without being dependent on some unwritten convention.
Last time I checked udev rules could also call other scripts and executables. It seems to me all that has happened is a simple mechanism that became unwieldy was swapped out for a much more complex one which will probably eventually become unwieldy too.
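For what it’s worth, that script-calling mechanism looks roughly like this (a hypothetical rule; the script path and match keys are made up for illustration):

```
# When any SCSI/SATA partition appears, hand the event to an external script.
# %k expands to the kernel name of the device (e.g. sda1).
KERNEL=="sd[a-z][0-9]", ACTION=="add", RUN+="/usr/local/bin/on-disk-add.sh %k"
```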
Edited 2008-08-24 01:00 UTC
I’ll leave the developers vs. distributors debate to the relevant people, but what about custom rules used by the end users?
One simple, and I believe rather common, example: I have two iPods at home, an 80G Classic and a Rockbox-running first-gen Nano. Custom rules allow me to give each one a specific device name, which will stay the same regardless of the order I plug them in, when I do happen to plug them both in (using the serial numbers, no less). How is a set of rules chosen by the developers going to cover the zillion similar real-world uses of udev? Or am I missing something?
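Such per-device rules are typically just a couple of lines in /etc/udev/rules.d/; here is a sketch of what the poster describes, with made-up serial numbers:

```
# /etc/udev/rules.d/99-local-ipods.rules  (serial numbers are hypothetical)
SUBSYSTEM=="block", ATTRS{serial}=="A1B2C3D4E5F6", SYMLINK+="ipod_classic"
SUBSYSTEM=="block", ATTRS{serial}=="0F9E8D7C6B5A", SYMLINK+="ipod_nano"
```

Local rules like these extend whatever default set the distro (or upstream) ships, which is why standardizing the defaults needn’t affect them.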
I assume your custom rules will behave exactly as they do now. The udev developers aren’t objecting to end users using udev to customize their system to their needs – they’re objecting to the distros that don’t use the standard ones, with all the potential drawbacks when distros diverge from upstream.
It’s really bad when the author of a feature loses touch with reality. But… I’ll blame the freedesktop.org folks probably for this more than anyone. There’s a lot of very EVIL agendas out there right now. People have forgotten what free software is all about.
Oh.. hal is junk.. in case somebody was wondering where I stood on that one. But that doesn’t mean we should take something like udev, which is pretty flexible and turn it into something “proprietary” … if I can use the term to describe the situation.
I’d like to see improvements on udev… but I think most of the DE folks are clueless when it comes to what the user wants/needs and what is real/true.
Well, this is a known story and an expected move. Configurability was one of the keywords in advocating udev over devfs. Personally, I always thought that udev was nothing more than bloatware, so this step makes it clearer: we have something like devfs in userspace, with its rules written in a domain-specific language instead of C. That’s it.
You have a strange definition of bloat. On my desktop box, udev uses 344k, with 340k of that shared. That’s a net 4k of memory. And 0% processor. Presumably it uses a little bit of processor when it needs to do work. By comparison, a bash shell sitting at a prompt doing nothing is using 3556k, with only 1476k shared.
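Those numbers are easy to check yourself. A rough, Linux-specific sketch using /proc (substitute udevd’s PID, e.g. from `pidof udevd`, for the shell’s own PID used here for illustration):

```shell
# /proc/<pid>/statm reports memory in pages: size resident shared ...
pid=$$   # for illustration; use udevd's PID to reproduce the comparison
read size resident shared rest < /proc/$pid/statm
page_kb=$(( $(getconf PAGESIZE) / 1024 ))
echo "resident: $(( resident * page_kb )) kB, shared: $(( shared * page_kb )) kB"
```

The gap between the two figures is the memory the process actually adds to the system, which is the point being made about udev’s net 4k.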
Edited 2008-08-25 08:51 UTC
Exactly. People are quick to call everything “bloat”, no matter the facts.
This is not all. It does not count all the system structures required to run a new process, all the opened files and their caches, page directories, etc. The entire footprint is several times bigger. And memory is not everything either. What about boot time? In general, if we replace an object or a procedure with a daemon, that process may be small compared with Xorg, but if we repeat the same pattern every time, we can end up with Windows Vista.
DevFS was unmaintainable. That alone was a solid reason to replace it. The fact that such functionality doesn’t belong in the kernel (it isn’t in the kernel on any other OS) was another.
Udev is a hell of a lot simpler, doesn’t have the extra problems caused by being inside the kernel, and can implement policy in a configurable way, including consistent device names, loading firmware and drivers, notifying other programs (like HAL), and a bucketload of other stuff. All nicely consolidated in a (comparatively) simple little app.
It uses hardly any resources at all. On an Ubuntu desktop machine, it uses less than 1MB of RAM by itself, plus a few KB for the ramdisk that hosts /dev. This is on a machine that has ample memory (2GB). It sleeps until it’s needed, so it uses no CPU resources at all.
It can safely be paged out until it’s needed, so on a machine with little RAM it may have a memory footprint of 0. Since it wakes up so infrequently, this won’t even cause a performance problem.
The actual overhead of udev is probably 1MB of swap, but it does provide real, useful functionality, in addition to being a far more maintainable and flexible way to manage device nodes.
If this were 1995, I might be worried, but it isn’t. Unless you’re using an embedded system, in which case you won’t need the functionality provided by udev anyway, surely the cost is well worth it?
Well, this is a non-technical reason. And there are many technical choices. Why not fix it and maintain it after that?
This is something I would not agree with. /dev is an interface to the kernel.
Well, it has other problems from being in userspace.
Not so configurable, if we return to the post. That is the entire point. [u]It cannot be configurable[/u], because it is an interface, which should be consistent.
In fact, I do not use Linux much as a desktop machine–there are still better choices–but mostly on small hand-held devices, legacy systems, and embedded systems, where every megabyte counts. And hot-plug would be welcome, but without the overhead of udev or cron or any other crutches. The kernel should manage itself.
udev is one of the few things that prevent instant-on functionality, which is extremely important for the UMPC or hand-held scenario. Hibernation and sleep do not work well either. Fast boot would be the ideal solution.
See, we are already counting a few MB. What are we storing there? An interpreter for a domain-specific language? What for?
I would think the opposite. It is only now that you can have a system-on-chip that can run full Linux, with a several-GB file system on a chip nearby, and you need instant-on and several hours of battery life. So, for example, cron-based solutions do not do anything useful except drain your battery.
Why wouldn’t I need hot-plug functionality? What is the principal difference?