Linked by Thom Holwerda on Tue 3rd Jul 2007 19:35 UTC, submitted by Rahul
As the number of Linux kernel contributors continues to grow, core developers are finding themselves mostly managing and checking, not coding, said Greg Kroah-Hartman, maintainer of USB and PCI support in Linux and co-author of Linux Device Drivers, in a talk at the Linux Symposium in Ottawa Thursday. In the latest kernel release, the most active 30 developers authored only 30% of the changes, while two years ago, the top 20 developers did 80% of the changes, he said.
Thread beginning with comment 252756
RE: Monolithic...
by butters on Tue 3rd Jul 2007 22:41 UTC in reply to "Monolithic..."

First, I don't see how your subject relates to your post. If drivers were maintained out-of-tree, the kernel would still be monolithic.

Personally, I think that Greg argues the stable API debate backwards. He says that Linux doesn't need a stable API because everything is maintained in-tree, and that it's this lack of API stability that lets them improve the kernel at a faster rate.

This is true, but it doesn't explain why it's worth maintaining everything in-tree. For that, you have to consider the quality angle. Even if you have a stable API policy, there are always going to be bugs in kernel components, and their behavior may change in very subtle ways. In order to ensure a high-quality kernel, you have to test development builds as complete units, including the drivers.

At the end of the day, drivers and other modular components get loaded into the kernel address space and run in kernel mode. Their quality is as critical to system stability as the core kernel code, and they have to work together. That's why the world's most sophisticated distributed kernel development project maintains as much code as possible in-tree and highly encourages vendors to do the same.

Binary compatibility for userspace applications, on the other hand, would be a good thing for the Linux community. However, this is made difficult because of the same issues that made the Linux kernel project work so well. The userspace library stack is split amongst numerous projects with their own source trees and release cycles.

This is why package compatibility is only provided at the distribution level. There's no universal project for making sure the libraries that make up a Linux system are developed and released as a cohesive product. Each distributor fulfills this role, and none of them seems to be in a position to become the consensus unified Linux distribution.

Instead we have about 3-4 major Linux distributions, each with their strengths and weaknesses. If the kernel project embraced out-of-tree development, we'd likely have 3-4 major kernel distributions (and many minor ones), each with their strengths and weaknesses. I prefer the unified mainline kernel with its track record of rapid improvement and high quality.

Reply Parent Score: 5

RE[2]: Monolithic...
by Timmmm on Wed 4th Jul 2007 20:19 in reply to "RE: Monolithic..."

Actually there is binary compatibility. It is perfectly possible to create binaries that run on Gentoo, Debian, Fedora, Linspire, Arch Linux, Slackware, LFS, and whatever distro you can think of.

Maybe a simple C program with no dependencies, but for anything more complicated, no, you can't.

Of course you can occasionally run into missing .so files, but this is no different from problems with missing libraries in OS X or Windows ;)

Oh come on! When was the last time you had a DLL missing in Windows?
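For what it's worth, the Linux equivalent of a "missing DLL" check is easy to do by hand. This is just a minimal sketch; /bin/sh is used only as a convenient example binary:

```shell
# ldd lists the shared libraries a dynamically linked binary needs,
# marking any it cannot resolve with "not found".
missing_libs() {
    ldd "$1" 2>/dev/null | grep 'not found'
}

# Report whether the example binary's dependencies all resolve.
if missing_libs /bin/sh >/dev/null; then
    echo "some libraries are missing"
else
    echo "all libraries resolved"
fi
```

Run against a binary compiled on another distro, the "not found" lines show exactly which library versions the target system lacks.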

Having every driver in the tree means that needed API changes can be made quickly and all drivers can be updated easily.

Yes, but they have to be updated by the kernel maintainers! Hence the current situation.

Linux is a monolithic kernel

I was actually referring to their development model as monolithic. Sorry, that was very ambiguous.

In the Windows world there are many very badly written drivers from cheap companies which cause instability in the platform.

True, but at least you have the option of using them. Better than nothing!

If the kernel project embraced out-of-tree development, we'd likely have 3-4 major kernel distributions

I don't see why. It would be easy (or at least possible) to make it like Xorg is now: have a core package, then separate driver packages, and some way of automatically installing them. "You have inserted device X; this requires the package Y to be installed. Continue?"
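To make that concrete, every Linux device already exports a modalias string under /sys that identifies it, so a hotplug agent could map that string to a driver package and prompt the user. A minimal sketch, where the package names and the hard-coded table are purely hypothetical (a real system would query the distribution's package database instead):

```shell
# Hypothetical mapping from a device modalias string to a driver package.
# The patterns and package names below are invented for illustration.
suggest_package() {
    case "$1" in
        pci:v00008086*) echo "drivers-intel" ;;
        usb:v0BDA*)     echo "drivers-realtek-usb" ;;
        *)              echo "unknown" ;;
    esac
}

# On a real system the modalias comes from sysfs, e.g.:
#   cat /sys/bus/usb/devices/1-1/modalias
# and the result would feed an "install package Y?" prompt.
suggest_package "pci:v00008086d00001234"
```

The point is only that the lookup itself is trivial; the hard part is agreeing on who builds and ships the driver packages.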

Userspace binary compatibility should be their priority, though. Windows manages it! Windows XP can run Warcraft 2 with no help; try running something that old on Linux! Also try compiling something on Ubuntu and then running it on Debian stable. Impossible.

Reply Parent Score: 1

RE[3]: Monolithic...
by Redeeman on Thu 5th Jul 2007 03:26 in reply to "RE[2]: Monolithic..."

You can run much older software on Linux; you just may not be able to with a default installation. The problem is mainly older libc or libstdc++ versions, but if those are bundled with the application, it will work perfectly.
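The usual way to do that bundling is a wrapper script that points the dynamic linker at the application's private lib/ directory before anything else. A rough sketch, where "oldapp" and its directory layout are hypothetical (the exec line is commented out so the sketch stands alone):

```shell
# Build the library search path: bundled libs first, then any existing path.
build_library_path() {
    printf '%s' "$1/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
}

# Resolve the directory this wrapper lives in.
APP_DIR="$(cd "$(dirname "$0")" && pwd)"

# Prepend the app's private lib/ (with the bundled old libc/libstdc++)
# to the dynamic linker's search path.
LD_LIBRARY_PATH="$(build_library_path "$APP_DIR")"
export LD_LIBRARY_PATH

# Launch the real binary (hypothetical path, left commented in this sketch):
# exec "$APP_DIR/bin/oldapp" "$@"
```

This is exactly what many commercial Linux games of that era shipped: the binary plus a lib/ directory and a launcher script.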

Reply Parent Score: 1