Linked by Thom Holwerda on Wed 26th Jul 2006 17:41 UTC, submitted by elsewhere
Linux kernel maintainer Greg Kroah-Hartman has put the slides and a transcript of his OLS keynote online. The title speaks volumes: "Myths, Lies, and Truths about the Linux kernel". He starts off: "I'm going to discuss a number of different lies that people always say about the kernel and try to debunk them; go over a few truths that aren't commonly known, and discuss some myths that I hear repeated a lot."
Thread beginning with comment 146859
The ultimate trade off
by JeffS on Thu 27th Jul 2006 19:58 UTC
Member since:
2005-07-12

So many here want a stable API. Great. I like stable APIs, too.

So let's all send a petition to the kernel devs requesting them to start maintaining a completely stable API.

And let's assume they grant the request.

That could be a good thing: better support for proprietary commercial drivers, and fewer driver authors complaining.

The BIG BIG BIG trade off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API. And there will have to be multiple implementations of common APIs, to support multiple devices and multiple versions. And flawed drivers will be supported longer, decreasing efficiency and security. And the kernel will become bigger, slower, and more bloated.

So, are those willing to make that trade off willing to sign a petition?

And to those who said old Unix maintained a stable API, answer this question: can you take a device driver that runs on HP-UX and have it run seamlessly on AIX, SCO Unix, Solaris, old SysV, any of the BSDs, or even Mac OS X? Or even move it from an older version of HP-UX to a newer one? And take Mac OS X - how many hardware devices can you run on it that are not sold by Apple?

Also, name one *nix that supports more devices out of the box than Linux. I'm not talking about architectures, which NetBSD arguably runs on more of than Linux (depending on whom you talk to). I'm talking video cards, USB devices, sound cards, PCMCIA cards, network cards, speakers, scanners, printers, etc. Is there one *nix, with a stable API, that supports more of those kinds of things than Linux? Name one.

Also, is there a Live CD version of a *nix, with a stable API, that can run on as many things seamlessly as Knoppix? Name one.

And one more thing about stable APIs. Look at all the companies that support OSDL - IBM, HP, Oracle, Sony, Sun; the list goes on. Do you see any of them complaining about Linux's unstable APIs? No. Obviously for them, stability comes in at the distro level - a market opportunity that Red Hat filled quite nicely, as its bottom line proves. RHEL is on an 18-month release cycle and comes with (correct me if I'm wrong) a 3-5 year support period. A full RHEL release ships with one major kernel release, with security patches and backports, and the API of that one kernel release remains stable. The same is true at the application level.

So, with this, we get the best of both worlds. The kernel and the drivers in its tree get improved very rapidly, and API stability is provided by certain distros like RHEL or Debian stable. They can include newer kernel releases when they are ready, or when the market will bear it. Or they can backport features (something Linus Torvalds said is just fine and dandy) as needed.

In the meantime, what about the geeks who want the "latest and greatest" kernel, DEs, apps, etc., who always download the latest ISO from Ubuntu or whatever? Well, if you want to be on the bleeding edge, you don't get a stable API, and you have to put up with more bugs. And what about those who choose more stable distros, like the RHEL-based CentOS, Slackware, or Debian stable? Simple: just use hardware that is known to be supported.

Edited 2006-07-27 20:04

Reply Score: 3

RE: The ultimate trade off
by vtolkov on Thu 27th Jul 2006 20:45 in reply to "The ultimate trade off"
vtolkov Member since:
2006-07-26

"The BIG BIG BIG trade off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API."
I would rather think that it will stimulate some additional modularity, and some "intelligent design" will finally be involved.

"And there will have to be multiple implementations of common APIs, to support multiple devices and multiple versions."
Which means versioning - a feature we currently have a problem with. Old drivers are supported by an old compatibility layer; new drivers get the full benefit of the new architecture.
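The versioning idea above can be sketched in plain C. Everything here (drv_ops_v1, wrap_v1, and so on) is an invented illustration of the technique, not a real Linux kernel API: an old driver written against a v1 operations table keeps working through a small shim that adapts it to the new v2 table.

```c
/* Hypothetical sketch of a versioned driver interface; no names here
 * are real Linux kernel APIs. */
#include <stddef.h>

/* Old (v1) interface: read reports only a status code. */
struct drv_ops_v1 {
    int (*read)(char *buf, size_t len);
};

/* New (v2) interface: read also reports bytes transferred. */
struct drv_ops_v2 {
    int (*read)(char *buf, size_t len, size_t *transferred);
};

/* An existing v1 driver that we do not want to rewrite. */
static int legacy_read(char *buf, size_t len)
{
    if (len > 0)
        buf[0] = 'x';   /* pretend we read one byte */
    return 0;           /* v1 drivers only report success/failure */
}
static struct drv_ops_v1 legacy_driver = { .read = legacy_read };

/* Compatibility shim: presents a v1 driver through the v2 table. */
static struct drv_ops_v1 *shim_target;

static int shim_read(char *buf, size_t len, size_t *transferred)
{
    int ret = shim_target->read(buf, len);
    /* v1 cannot report a byte count, so the shim fills in a
     * best-effort value on behalf of the old driver. */
    *transferred = (ret == 0 && len > 0) ? 1 : 0;
    return ret;
}

struct drv_ops_v2 *wrap_v1(struct drv_ops_v1 *old)
{
    static struct drv_ops_v2 shim = { .read = shim_read };
    shim_target = old;
    return &shim;
}
```

The core of the kernel then only ever calls the v2 table; the cost of the old API is paid inside the shim, not scattered through every subsystem.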

"And the kernel will become bigger, slower, and more bloated."
Only if we continue to have a monolithic kernel. If it is modular, and you have no old drivers, then no compatibility layer needs to be loaded.
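The "pay only when you use it" point can be sketched the same way, again with invented names rather than real kernel interfaces: new-style drivers register directly, and the compatibility layer is pulled in only the first time a legacy driver shows up.

```c
/* Hypothetical sketch: the compatibility layer costs nothing unless a
 * legacy driver actually registers. Names are invented for
 * illustration, not real Linux kernel APIs. */
#include <stdio.h>

static int compat_loaded = 0;

/* Stand-in for loading a separate compat module on demand. */
static void load_compat_layer(void)
{
    if (!compat_loaded) {
        compat_loaded = 1;
        printf("compat layer loaded\n");
    }
}

/* New-style drivers register directly; no compat cost at all. */
void register_driver_v2(const char *name)
{
    printf("registered %s (native)\n", name);
}

/* Old-style drivers pull in the compat layer on first use only. */
void register_driver_v1(const char *name)
{
    load_compat_layer();
    printf("registered %s (via compat)\n", name);
}

int is_compat_loaded(void)
{
    return compat_loaded;
}
```

A system with only new drivers never executes load_compat_layer at all, which is the modularity argument in miniature.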

The problem with this article is that Linux developers are starting to believe, themselves, that they do not have a problem with drivers, while users consider it one of the main problems of Linux. Just go to CompUSA, buy a cool new device, and try to find a driver.

Reply Parent Score: 1

RE: The ultimate trade off
by Cloudy on Fri 28th Jul 2006 06:59 in reply to "The ultimate trade off"
Cloudy Member since:
2006-02-15

"The BIG BIG BIG trade off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API."

There's no reason to believe this is true. Past experience and the literature both indicate the opposite: you get more innovation in systems when the internal APIs are stable and you spend your time optimizing than when you spend your time reimplementing to support yet another API change.

I'm not going to sign any petition to change the way Linux is done. It is the way it is. I'd just prefer people not assign properties to it that it doesn't have.

Reply Parent Score: 1