Greg Kroah-Hartman has put the slides and a transcript of his keynote at OLS online. The title speaks volumes: “Myths, Lies, and Truths about the Linux kernel”. He starts off: “I’m going to discuss a number of different lies that people always say about the kernel and try to debunk them; go over a few truths that aren’t commonly known, and discuss some myths that I hear repeated a lot.”
Nice article, not at all technical or geeky, great slides even if you don’t agree with every point he makes. Even someone who knows very little about Linux would emerge from this having grabbed some key points.
Yes, we passed the NetBSD people a few years ago in the number of different processor families and types that we support now. No other “major” operating system even comes remotely close to Linux in platform support. Linux now runs on everything from a cellphone, to a radio-controlled helicopter, to your desktop, to a server on the internet, on up to a huge 73% of the TOP500 largest supercomputers in the world.
WOWW!! I didn’t know that the Linux KERNEL is an operating system. Or do we have Linux without GNU already? I don’t get it. And “supporting” some processor by running a plain kernel and drivers is not a “fully working operating system” IMHO.
Let’s not start this argument again. By many definitions, an OS == kernel. By many others, the kernel is only a small part. It just depends on how you want to look at it, so I think it is perfectly fine if someone chooses to call the kernel an OS.
Your humble opinion is wrong. A kernel is an operating system as it operates a piece of hardware and provides an interface. Granted, the user of a kernel is not Joe Sixpack, but an application software developer.
WordNet (r) 2.0 [wn]
operating system
n : (computer science) software that controls the execution of computer programs and may provide various services [syn: OS]
Jargon File (4.3.1, 29 Jun 2001) [jargon]
operating system n. [techspeak] (Often abbreviated `OS’) The foundation software of a machine; that which schedules tasks, allocates storage, and presents a default interface to the user between applications.
[Jargon definition was abbreviated]
The next issue is that anything Linux builds on, GNU should build on as well, simply because Linux is almost always built with GCC, and so are GNU’s utilities. How many GNU utilities are using asm these days?
How do you operate the computer with just a kernel?
The fact is it is a system of many parts, from the kernel for abstracting hardware access to the shell for allowing user-access to the kernel in order to control the hardware.
If you think about it for a moment kernel+shell=a single grain. It’s a metaphor that’s got lost as the terms became accepted in their own right.
The jargon file is more accurate (unsurprisingly), Linux is not an operating system on its own: it is an operating system kernel that can be used with other user-space utilities to create a full-featured operating system. At present, the only user-space utilities that work with Linux are those supplied by the GNU project, and even the BSDs use GNU tools (notably GCC).
Thus Linux is not an operating system. This also negates Greg’s comment about plug’n’play. A lot of users using the standard graphical shell on their machine (Gnome, KDE) cannot plug a device into a system and be certain that it works. Ubuntu, for example, won’t mount my USB hard-disk without low-level intervention from me: Windows will. The whole system does not yet work together seamlessly, as the person from Novell indicated.
The problem with your definition is that it is a constantly changing one. Is DOS an OS? How can you operate a computer without a desktop? In another 10 years will the systems of 2005 count as an OS?
When I was in school, we were told an OS was an interface to the hardware and a scheduler, nothing more. App programmers tend to include all the libraries, etc. on top of that, since they are pretty much linked and they use higher level functions that a kernel doesn’t directly provide. I see both sides of the argument, it just depends on your perspective.
Nearly every CS program taught in universities around the world teaches that Unix operating systems, including Linux, consist of three parts:
1. the kernel
2. the file system
3. the shell
Well, Kansas State University did not. And the professor who taught it was from Cal State, so I’m sure he taught the same thing there in the past. He taught us that the kernel was the OS and that the shell was merely a userspace program built on top of the OS.
Maybe every other university in the world is different though?
Ok, here are some links to lecture notes from Dr. David Schmidt, a professor at KSU:
http://www.cis.ksu.edu/~schmidt/300s05/Lectures/OSNotes/os.html
http://www.cis.ksu.edu/~schmidt/300s05/Lectures/ArchNotes/arch.html
If you don’t want to read the whole thing, let me point out some important parts:
“When the computer is started, part of the operating system, its kernel, is copied into primary storage and started.”
Clearly, he states that the kernel “is part” of the OS.
“The operating system is especially helpful at managing one particular output device — the computer’s display. The operating system includes a program called the window manager, which when executed, paints and repaints as needed the pixels in the display.”
Again, he states that the OS “includes a program called the window manager”. Or for the graphically challenged a command line interface should suffice I’m sure.
So… What do you think?
I never had Schmidt. Anyway, as I said, there are a million different definitions of what an OS is. Some of them say it is just a kernel and others say it is more. I think both are valid, just different points of view.
He taught us that the kernel was the OS and that the shell was merely a userspace program built on top of the OS
He may have done so. Such is a recently emerging and fairly atypical definition of OS, and not a particularly useful one.
There’s a reason why we came up with the word ‘kernel’ to delineate the central bits of an OS from the rest of it. If the ‘kernel’ had qualified as the OS, we wouldn’t have made that distinction, back in the day.
A Linux kernel, by the way, won’t run on bare metal without a minimal userland. It will, the last time I checked, fall over on its face.
So where there may be some people who define OS equal to kernel, the actual kernel in question is indistinguishable from a brick without a user land.
So where there may be some people who define OS equal to kernel, the actual kernel in question is indistinguishable from a brick without a user land.
Yes, I’m not disputing that. Clearly a kernel by itself is completely useless, and any definition that defines it as such is theoretical since any practical application requires some userland.
What I’m saying is this: Take a car. Remove the engine. I’d still call it a car, wouldn’t you? Now remove the battery. Still a car? Keep removing things. At some point it becomes nothing more than a heap of metal, but exactly when that tipping point occurs is going to be disputed by most people. Some people would say that as long as the car frame is intact it is still a car. Others will argue that it cannot be a car if it is useless, so it ceases to be a car the second that engine has been removed. Most people will probably fall somewhere in the middle.
My point? That there is no single definition that will satisfy everyone. What makes up an entire OS is quite subjective and often depends on what you use it to do.
The problem with your automotive analogy is that no one takes an engine out of a car and calls the engine ‘car’, but that’s exactly what people who want to take the kernel out of an OS and then call the kernel ‘OS’ are doing.
With its API, the same way the programs built on the kernel operate the kernel.
The Linux kernel, by itself, meets none of these definitions. You don’t think so? Then do the following: set up a Linux distro on a spare partition. Get it working nicely. Then cd / and remove everything except the kernel. You can even keep GRUB, if you’d like.
Now reboot. . .
That’s not an OS. That’s a kernel.
How many GNU utilities are using asm these days?
Well, the C library is, and there are machines for which there are kernel ports and no full ANSI C, which means a lot of stuff won’t compile for them.
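To give an idea of the kind of asm a C library has to carry on every port, here is a rough sketch of a raw write(2) call using the x86-64 Linux syscall convention (illustrative only; a real libc stub is usually generated per architecture and also deals with errno, cancellation and so on):

/* Illustrative sketch: write(2) issued directly with the x86-64 Linux
 * syscall convention (number in rax, args in rdi/rsi/rdx; the syscall
 * instruction clobbers rcx and r11). A libc needs a hand-ported stub
 * like this for every architecture it supports. */
static long raw_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile("syscall"
                     : "=a"(ret)
                     : "a"(1L),                    /* __NR_write on x86-64 */
                       "D"((long)fd), "S"(buf), "d"(len)
                     : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    static const char msg[] = "hello from a hand-rolled syscall stub\n";
    raw_write(1, msg, sizeof msg - 1);
    return 0;
}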
Let’s look at those definitions again:
software that controls the execution of computer programs and may provide various services
In other words, a scheduler. Which is part of the kernel.
The foundation software of a machine; that which schedules tasks, allocates storage, and presents a default interface to the user between applications
In other words, a scheduler, memory manager, file system, and some sort of shell. The kernel satisfies 3 out of 4, but not the last part.
Both have some merit.
then cd / and remove everything except the kernel. You can even keep grub, if you’d like.
Now reboot. . .
That’s not an OS. That’s a kernel.
That is an OS by some definitions, it just won’t do anything because you’ve deleted all the user applications that run on top of it.
That’s not an OS. That’s a kernel.
That is an OS by some definitions, it just won’t do anything because you’ve deleted all the user applications that run on top of it.
It’s not just user applications. The kernel’s no good without init…
and a definition of OS that allows “it’s a brick” is a very useless definition of OS.
It will boot fine, and stop at init.
And honestly, if you think it doesn’t meet the WordNet definition you really need to read up on something:
operating system
n : (computer science) software that controls the execution of computer programs and may provide various services [syn: OS]
The Jargon File definition is honestly a little harder to push, since it’s supposed to have a user interface; however, since a developer is a type of user and the kernel provides an API, the kernel provides a user interface.
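As a small illustration of that point (a sketch, assuming Linux and glibc’s generic syscall(2) wrapper; syscall numbers come from <sys/syscall.h> and differ per architecture), here is a program whose only “user interface” is the kernel’s system call API:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    static const char msg[] = "talking to the kernel through its API\n";

    /* write(2) and exit(2), issued through the generic syscall entry
     * point rather than any higher-level library convenience */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    syscall(SYS_exit, 0);
    return 0;
}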
It will boot fine, and stop at init.
It won’t find init. It’ll then fall over.
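For what it’s worth, the “minimal userland” being argued about here can be a single statically linked program. A sketch of roughly the smallest init that keeps the kernel on its feet (assuming you build it with something like gcc -static and install it where the kernel looks for init, or point the kernel at it with init=):

/* A do-nothing init: about the smallest userland that keeps a bare
 * kernel from falling over at boot. PID 1 must never exit, or the
 * kernel panics, so it just sleeps forever. */
#include <unistd.h>

int main(void)
{
    static const char banner[] = "tiny init: the kernel is up, nothing else is\n";

    /* may go nowhere if /dev/console is missing on the stripped root fs */
    write(1, banner, sizeof banner - 1);

    for (;;)
        pause();
}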
eager to jump in and supply device drivers
Which is why the kernel devs maintain their own drivers.
That said, the article does read a little bit like an advertisement. It’s not that it got anything wrong, just that it feels a bit… self-promotional, or something.
> Which is why the kernel devs maintain their own drivers.
Then they might not have the latest and greatest stuff.
3D video drivers – this might not be related to linux kernel
latest WiFi enhancement – MIMO, SuperG, G+ etc
Agreed; Linux needs to look at what the ‘others’ are doing, if they want their tongue-in-cheek ‘world domination’; OpenSolaris already has a desktop operating system based upon it (Nexenta), which has a stable driver API and lots of other goodies; on the FreeBSD front there is PC-BSD.
And the myth, ‘oh, it’ll remain static if we have a stable API’ – take another fellow open-source project, FreeBSD, and the fact that, yes, they have broken compatibility on occasion with the driver API, but they’ve always given timely advance notice and the reason behind it, and warned in the release that compatibility has been broken and a recompile of drivers will be required.
OpenSolaris, for example, has had MAJOR overhauls in its internal structures to improve the speed of memory reads and writes, cleaning up the source code etc., and yet has maintained driver API compatibility – what it tells me is that the issue with Linux isn’t that ‘it’s impossible to maintain compatibility without losing flexibility’ but rather the developers choosing not to maintain compatibility.
Now, if they wish to go down that route, then they’re quite entitled to it, but at the same time, let’s not try to spread misinformation about the real reasons why a stable API isn’t provided.
Yet of all the OSes you mentioned with “stable” APIs, Linux, the OS without a “stable” API, still supports the most hardware and architectures out of the box compared to any OS known to man. All of a sudden, the stable_api_nonsense.txt begins to make sense.
“Yet of all the OSes you mentioned with “stable” APIs, Linux, the OS without a “stable” API, still supports the most hardware and architectures out of the box compared to any OS known to man. All of a sudden, the stable_api_nonsense.txt begins to make sense.”
Having support for nonsense old hardware really doesn’t matter.
example: Windows doesn’t support CGA or MDA video cards; does that really matter?
It is the latest, greatest hardware that matters.
I think newer hardware is important too, although I’m not as dismissive of the old stuff as you are.
However, you didn’t respond to what he said. You praise Solaris, etc. for having a stable API and say that will provide more drivers. Yet, those OSs actually have less support than Linux does. This seems to suggest that having a stable API doesn’t matter, and that only by gaining market share will companies be forced to create Linux drivers.
Like USB 2.0, Wireless USB, SATA, etc? Linux supported these things out of the box much sooner than Windows did.
Also, the “latest, greatest” in what area? Linux is usually behind in the area of graphics cards, but in the embedded world, there’s lots of cutting-edge hardware that Linux supports and Windows doesn’t.
Basically, your complaint comes down to “waahh, Linux doesn’t support my X1900+ XT Platinum XXL so I can’t play World of Warcraft, waahh”.
Rayiner, the problem is not hardware support. Linux supports a lot of hardware, and esp. out of the box, it beats just about any other. There’s no denying that, and it’s one of the main reasons I like Linux so much.
What this is all about, is the stable driver API stuff. There is absolutely NO REASON for the Linux kernel devs to break driver API compatibility completely at will, at random, without any form of prior announcements or warnings.
Kaiwai put it really well in his post, so there is little reason for me to repeat his words:
http://www.osnews.com/permalink.php?news_id=15295&comment_id=146429
Linux has no stable driver API, but NOT because it stifles innovation or more of that marketing bogus. Linux does not have a stable driver API, breaking compatibility at will/at random without prior notice, because the kernel devs either simply cannot maintain compatibility like so many other operating systems do because it is too difficult, or because they are simply not willing to because it is boring, or a combination of both.
Commercial companies do some major spin-doctoring, but this stable driver API stuff is a classic example of the art as well. I just wish they’d be men about it, instead of sugarcoating their words.
“Linux has no stable driver API, but NOT because it stifles innovation or more of that marketing bogus. Linux does not have a stable driver API, breaking compatibility at will/at random without prior notice, because the kernel devs either simply cannot maintain compatibility like so many other operating systems do because it is too difficult, or because they are simply not willing to because it is boring, or a combination of both.”
The slide show did specifically say that Linux development is about evolution, and that maintaining a constant stable API would slow down evolution. It gave specific examples of merging existing drivers because, even though they were for different devices from different companies, they did essentially the same thing. Thus they could be merged, keeping the kernel smaller and more efficient.
Also, the slide show did point out how a stable API makes it so that security problems are not dealt with as efficiently. It makes it so the writer of the driver does not have to make changes, thus keeping in potential security problems longer.
Another thing, please remember that examples given about OS’s that do maintain stable APIs point out the fact that not maintaining a stable API evolves the kernel faster. Look at how much more hardware is supported by Linux (non stable API) as compared to BSD (stable API).
Finally, and perhaps most importantly, maintaining a stable API would probably encourage hardware providers to write their own proprietary drivers, rather than submit them to the kernel tree as GPL. This is in violation of the kernel license, and keeps problems in drivers much longer.
In theory, I agree with the idea of a stable API for drivers. You would think that it would cause greater support of hardware devices for the Linux kernel. But it seems that in practice, the complete opposite is true. A non stable API encourages Linux evolution, GPL’d drivers submitted to the tree, and a more stable, secure, efficient system. I’m looking at real world results here. Sure, occasionally there is a device that isn’t supported – usually a very exotic and/or brand new device. But even those end up supported, and supported very well, over time.
As the Linux eco-system and critical mass continues to grow (and grow it does), GPL’d drivers submitted to the kernel tree, to evolve with the kernel, will be an accepted norm, and any hardware device provider that does not conform to this will lose out, big time.
Look at how much more hardware is supported by Linux (non stable API) as compared to BSD (stable API).
Here is another “urban myth” about “Linux has more drivers than …” – that is not true – FreeBSD, for example, has more WiFi drivers than any Linux distribution these days.
I do agree that the “innovation” argument isn’t a very convincing one, but I don’t think that means that there are no good reasons for preserving compatibility.
Linux isn’t like any other OS. Consider: Solaris keeps binary compatibility. Of course, all Solaris developers work under one roof, and Solaris supports an order of magnitude less hardware and doesn’t run on everything from cell phones to supercomputers. Windows keeps compatibility, but the total number of Windows developers + Windows driver developers is an order of magnitude more than the number of Linux developers + Linux driver writers. They also have much more funding, large numbers of professional QA people, and a large amount of central organization.
As a result, the Linux developers just have to deal with things other developers don’t. In Linux, the driver API might need to be changed to support power management on PDAs. Do the Solaris developers ever encounter that situation? In the past, the Linux developers changed the driver API to support a new task queueing model for drivers. Should they have kept that new feature out, to maintain compatibility? Should they have added a compatibility layer, and supported both APIs? Who would want to maintain it? Given the particular constraints of the development model in Linux, it’s not obvious that they can do any better than what they’re doing now.
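To make the driver-side cost concrete, here is a purely hypothetical sketch of such a change (the names are invented for illustration and are not real Linux interfaces): when the argument list or callback signature moves, every in-tree and out-of-tree user has to be edited and recompiled.

/* Hypothetical "old" deferred-work interface a driver might use.
 * None of these names are real kernel APIs. */
struct defer_item {
    void (*fn)(void *data);
    void *data;
};
extern int queue_deferred_work(struct defer_item *item);

static void mydrv_poll(void *data)      { (void)data; /* touch the hardware */ }
static struct defer_item mydrv_item = { mydrv_poll, 0 };

static int mydrv_start(void)
{
    return queue_deferred_work(&mydrv_item);
}

/* Now imagine the interface changes: the callback is handed the item
 * itself and the queueing call grows a flags argument, e.g.
 *     void (*fn)(struct defer_item *self);
 *     int queue_deferred_work(struct defer_item *item, unsigned long flags);
 * Nothing above compiles any more; the driver must be edited, rebuilt
 * and retested even though its own behaviour did not change. */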
Given the particular constraints of the development model in Linux, it’s not obvious that they can do any better than what they’re doing now.
That is a very good point, and I understand where you are coming from. However, is even a warning in advance too much to ask? I think a lot of people would be less pissed off if the devs at least showed they care by sending out a few warnings, like, “hey guys, we’re breaking some things here and there, because we want to achieve this and that. Just so you know.”
As a result, the Linux developers just have to deal with things other developers don’t.
Ok, I am going to get killed for saying this here (I feel my next column taking shape), but here I go: your post just gave a few damn good examples as to why Linux needs to do what I proposed months ago on my weblog: split the kernel up. Don’t try to be a jack of all trades, but specialise. Create separate branches for embedded, server, and desktop use. This way, it will be much easier to maintain backwards compatibility on each individual branch, while not being burdened on the desktop branch by features needed on a Zaurus.
Linus has consciously chosen to keep the vanilla Linux kernel a jack of all trades, and this is fine, it’s his baby, it’s his party. However, that leads to a certain set of major drawbacks, which we (Kaiwai, you, and me) identified in our few posts. Now, what I dislike is how Linus & Co are making up bogus reasons as to why these disadvantages exist, instead of just being men about it and admitting the real causes.
If I understand what you’re saying, you’d like to see the kernel forking in different branches, so that each branch can keep compatibility with themselves.
That would mean that the branches diverge over time, and the fixes and additional features from the different branches would stay on their branch, or it would take a lot of effort to resync everything the whole time. Considering the resources available to the Linux devs, adding a lot more work for some vague advantage (what would “compatibility within a branch” really buy you) seems to be the wrong path to take in my opinion.
Very often the API breakage would be not only beneficial to one specific “branch” – they break the API to address real deficiencies that need addressing on all branches.
In a way, Linux devs already work a little bit with specialised branches (with GIT it’s easy these days). However, everyone prefers not to diverge too much from Linus’s tree. I think it’s for a reason – the effort needed to maintain an out-of-sync tree is prohibitive, and everyone would rather just work on their specialisation and merge it when it’s ready.
Feel free to ignore the silly rant that follows, I just want to clarify where I’m coming from:
I’m in the “drivers should be in the unified tree” camp. The only reasons why you’d need a stable API is to keep your driver out of the tree – which afaict means either one of two things:
1. you want to keep your driver out of tree because you can’t be arsed to bring the quality up to the standards needed for inclusion – a driver like that should only really be used as documentation for a better driver that _will_ be included in the kernel. What good is a driver that breaks if you want to use it with suspend, smp, … You don’t need a stable api to be able to act as hardware documentation 😉
2. you want to keep your driver closed. Considering most closed drivers end up being abandoned almost as soon as the product hits the stores (nvidia is quite good though), and is often of even worse quality than the drivers mentioned in (1) – I don’t feel very sorry for the pain those people have to go through to chase the API changes. Either they bear the pain, or the Linux devs bear the pain of dragging compatibility layers along for ages; I’d rather have the hardware guys who choose to keep things closed bear the pain.
If I look around in this room there’s a lot of perfectly good hardware lying about, that realistically only works with Linux, because there’s no proper driver support for other systems anymore (or there never was proper support in the first place). A stable api often ends up being the only needed excuse not to maintain or improve a driver.
Silly rant over
I’m in the “drivers should be in the unified tree” camp. The only reasons why you’d need a stable API is to keep your driver out of the tree – which afaict means either one of two things:
Your claim is based on the implicit assumption that the person who changes the API will make the necessary correction to your drivers if they’re in the tree. This has, in practice, not proven to be true for drivers that aren’t “PC mainline”.
I want a stable API because I don’t want to be burdened with having to change my drivers every few weeks when the API breaks again, as has been happening in recent times.
“I’m in the “drivers should be in the unified tree” camp. The only reasons why you’d need a stable API is to keep your driver out of the tree – which afaict means either one of two things:
1. you want to keep your driver out of tree because you can’t be arsed to bring the quality up to the standards needed for inclusion – a driver like that should only really be used as documentation for a better driver that _will_ be included in the kernel. What good is a driver that breaks if you want to use it with suspend, smp, … You don’t need a stable api to be able to act as hardware documentation 😉
2. you want to keep your driver closed. Considering most closed drivers end up being abandoned almost as soon as the product hits the stores (nvidia is quite good though), and is often of even worse quality than the drivers mentioned in (1) – I don’t feel very sorry for the pain those people have to go through to chase the API changes. Either they bear the pain, or the Linux devs bear the pain of dragging compatibility layers along for ages; I’d rather have the hardware guys who choose to keep things closed bear the pain.
If I look around in this room there’s a lot of perfectly good hardware lying about, that realistically only works with Linux, because there’s no proper driver support for other systems anymore (or there never was proper support in the first place). A stable api often ends up being the only needed excuse not to maintain or improve a driver.
Silly rant over “
Not a silly rant at all. In fact, that was perhaps the most sensible post in this entire thread.
I’ve been kind of arguing both ways (stable API vs non stable API), and now I’m thoroughly convinced that a non-stable API is the absolute best way to go for kernel development. This methodology has proven itself beyond a shadow of a doubt to be wildly successful (look at how much hardware Linux supports out of the box, and look at how fast Linux is, and look at how scalable Linux is, and look at how versatile Linux is, and look at how rapidly Linux is being adopted in business, and look at what a great desktop Linux is, and on and on and on).
The only real cost of a non stable API, as well as the GPL (which legally requires drivers be open source), is that the ultra competitive graphics chip makers don’t want to play (for fear of giving away any speed advantage they might have to their competitors).
Well, as far as I’m concerned, it’s worth it. For one, I’ve been able to run Linux on everything I’ve thrown at it, with full graphics supported (including 3D if the video card supported it), and I’m not a hard core gamer.
But that market doesn’t really matter anyway, because the game providers can’t write for Linux: it is so difficult to get a title out for even one platform (Windows), let alone two – it’s not economically viable for them to support multiple platforms.
But as Linux gradually gains more and more critical mass, the graphics chip providers are going to have to play the non-stable, driver in kernel tree, driver open source game. And they will benefit in the long run because they’ll have to put less effort into driver development (sharing the load with kernel devs), and they’ll get much better performance, and better stability and security. It’s a win win. No doubt about it.
>I’ve been kind of arguing both ways (stable API vs non stable API), and now I’m thoroughly convinced that a non-stable API is the absolute best way to go for kernel development.
Surely it makes it easier to do kernel development (in a bad way, though). But it makes it impossible to scale out driver development. So as a result, you have a limited set of drivers and no vendors, as has already been said multiple times.
Kernel development and driver development are different tasks with different skills. There is a small set of experts who can do good work on the kernel, and there is a huge set of hardware designers who can develop drivers for their hardware.
I’ve been kind of arguing both ways (stable API vs non stable API), and now I’m thoroughly convinced that a non-stable API is the absolute best way to go for kernel development. This methodology has proven itself beyond a shadow of a doubt to be wildly successful (look at how much hardware Linux supports out of the box, and look at how fast Linux is, and look at how scalable Linux is, and look at how versatile Linux is, and look at how rapidly Linux is being adopted in business, and look at what a great desktop Linux is, and on and on and on).
Doesn’t follow. Linux has a lot of device drivers because a lot of people are putting device drivers together for it. There’s no evidence that a lot of people are working on Linux device drivers because of unstable APIs.
I doubt, if you took a poll of device driver writers, they would say “why yes, I develop for Linux because it has an unstable driver API.”
On the other hand, I know of several devices that aren’t supported on Linux because the driver writer got tired of having to change the driver frequently just because of random API breakage.
Yes, an unstable API means a lot of work only for porting and testing drivers from the old API version to the new API version.
Time that ofc can’t be used for writing new drivers or fixing real problems with the drivers anymore.
Interestingly enough, Linux has a lot of drivers, but with the manpower invested in those and a stable API, it should have even more drivers, as a lot of time working on the old drivers would be saved.
However, time would have to be invested in designing a good interface, but with the amount of drivers in Linux, it should be far less than the time needed just to maintain the drivers.
The unstable API is how the Linux guys like it, but Linux has the number of drivers it has despite the unstable API, and surely not because of it.
Seen politically, the unstable API is perfect for Linux: what is the best way to keep out binary modules, which are unethical according to the presentation linked in the article? Make them stop working soon enough.
“I doubt, if you took a poll of device driver writers, they would say “why yes, I develop for Linux because it has an unstable driver API.”
That’s not what I was saying at all.
What I was saying is that including a driver in the kernel tree means others are looking at it, improving it, bug fixing it, optimizing it, and fixing it up when the API changes, as was said in the article. In other words, the kernel improves more rapidly, due partially to it not having the burden of maintaining a completely stable API, and the drivers are brought along with it.
If, on the other hand, there were a completely stable API, there would be very little incentive for driver writers, especially the proprietary hardware providers, to improve or optimize their driver, and the kernel would suffer, like having a ball and chain to hold it back because of thousands of lame, poorly written drivers that won’t keep up or improve.
So, in essence, the kernel constantly improving and occasionally breaking the API forces the drivers to keep up, and the original writers of those drivers get help (if the driver is submitted to the kernel tree).
The real world has proven me, and the slide presentation, to be correct.
I will admit, however, that some of the most current devices are not completely supported, or can be a hassle to get working, with Linux.
However, because Linux has such a huge mass of common drivers in the tree, the very positive trade off is that more legacy hardware is supported than it is with Windows. That’s important. I don’t want to have to throw away perfectly good hardware just because there isn’t an XP driver available for it.
So, while some of the most current and/or exotic hardware is not fully supported in Linux, I can still use more hardware with Linux, without being forced to waste my money needlessly on buying the latest and greatest graphics card or some stupid usb webcam.
Just look at Knoppix. You can stick Knoppix in any PC, modern or legacy, and it will “just work”. All of the essentials – video, sound, keyboard, mouse, networking, printing, etc, and most of the peripherals (except for some of the most current or exotic) will work, no fuss, no muss.
What I was saying is that including a driver in the kernel tree means others are looking at it, improving it, bug fixing it, optimizing it, and fixing it up when the API changes, as was said in the article. In other words, the kernel improves more rapidly, due partially to it not having the burden of maintaining a completely stable API, and the drivers are brought along with it.
That’s the theory. In my experience, that’s not how it works out in practice, except for drivers for the very most popular devices.
If, on the other hand, there were a completely stable API, there would be very little incentive for driver writers, especially the proprietary hardware providers, to improve or optimize their driver, and the kernel would suffer, like having a ball and chain to hold it back because of thousands of lame, poorly written drivers that won’t keep up or improve.
There are plenty of lame, poorly written drivers in open source. Many LAN drivers, for instance.
But you have the incentive backwards. Having to spend a lot of time modifying drivers to compensate for yet another unnecessary API change is a disincentive to do drivers at all. Having a stable API means spending less time doing that, and so having more time available to do the optimizations you’re hoping for.
So, in essence, the kernel constantly improving and occasionally breaking the API forces the drivers to keep up, and the original writers of those drivers get help (if the driver is submitted to the kernel tree).
In practice, with respect to device models, it’s often breaking the API with almost no improvement.
Also, in practice, it has been my experience that older drivers, even those in the kernel tree, tend to bit rot with the API changes. This is because the person who makes the API change may make a superficial attempt to modify drivers, but without the hardware or the understanding, they get it wrong more often than not.
I think a lot of people would be less pissed off if the devs at least showed they care by sending out a few warnings, like, “hey guys, we’re breaking some things here and there, because we want to achieve this and that. Just so you know.”
I think it is too much to ask. Kernel devs don’t want binary drivers therefore they don’t care about them. In the end it only makes things harder. What’s the point of supporting binary drivers that only work on one platform when Linux supports a multitude of platforms?
split the kernel up. Don’t try to be a jack of all trades, but specialise. Create separate branches for embedded, server, and desktop use. This way, it will be much easier to maintain backwards compatibility on each individual branch, while not being burdened on the desktop branch by features needed on a Zaurus.
Do you know the amount of manpower that would require? It’s simply not feasible.
Linus has consciously chosen to keep the vanilla Linux kernel a jack of all trades, and this is fine, it’s his baby, it’s his party. However, that leads to a certain set of major drawbacks, which we (Kaiwai, you, and me) identified in our few posts. Now, what I dislike is how Linus & Co are making up bogus reasons as to why these disadvantages exist, instead of just being men about it and admitting the real causes.
The real problem is that you and kaiwai don’t realize that the drawbacks of splitting up the kernel are even greater than the drawbacks of keeping it a “jack of all trades”. I already mentioned the manpower issue. You would also have to deal with severe duplication of effort, missing functionality in some kernels, and an even more difficult time programming for Linux, where you would have to port applications to different versions of the Linux kernel, which would be just as bad as having to port an application to a different operating system, because in effect they would be different operating systems.
Do you know the amount of manpower that would require? It’s simply not feasible.
It has been happening all along, in the form of various patch sets maintained by various people. As recently as six months ago you couldn’t build a working ARM kernel from the tip of Linus’ tree, because you would have needed Russell King’s ARM patch set.
It has been happening all along, in the form of various patch sets maintained by various people. As recently as six months ago you couldn’t build a working ARM kernel from the tip of Linus’ tree, because you would have needed Russell King’s ARM patch set.
As you have seen with many of these patchsets, if they are good enough, they end up in the mainline kernel. No one wants to maintain patchsets like that forever. That is one of the reasons that Linux came about, because people were maintaining Minix patchsets but they couldn’t redistribute a patched version. It is a lot easier to just have the patches become a part of the mainline kernel. It was better then and it is still better now.
There is also a huge difference between maintaining a patchset and maintaining an entirely different kernel, never mind several different kernels. Even if it was feasible with the manpower that we have now (and it clearly isn’t), that still doesn’t answer all the other issues that would arise.
If one person needs it, it matters.
Having support for nonsense old hardware really doesn’t matter.
example: Windows doesn’t support CGA or MDA video cards; does that really matter?
It is the latest, greatest hardware that matters.
Legacy is what x86 is all about. You can run 8086 software on a Core 2 Duo. Keeping support built-in is what allows old hardware to remain useful. Have you ever tried to dig up a Windows driver for some old piece of hardware? Plenty of sites are willing to take your money for what you should be able to run without installing anything. And if you get it, it’ll work with 9x systems.
Linux maintaining all its own drivers and Macs being end-to-end gave Windows something to be jealous of, besides the greater stability, which I think is why they finally got around to including their own drivers for most common things. I think Microsoft’s nVidia driver is better than nVidia’s, though mostly because I value compatibility.
Somewhere I got away from this being a reply specifically to you, but I think I’ve made my points. Third party support may not be the best way to get it done, and support for legacy hardware can save a lot of headaches.
His main point is, you can innovate while maintaining API stability, and if some API needs to go, some OSes have taken a more cautious and user/vendor-friendly way by giving advance notice.
This is a burden, but if Linux chooses not to take that burden, just be frank about it; don’t come up with excuses like “a stable API prevents innovation”.
There are reasons why Linux has more device drivers and supports more architectures; “unstable_api” certainly _isn’t_ one of them, would you agree?
And how long has Solaris been opened?
Is it so surprising that Solaris has fewer drivers than Linux? Or do you think it’s because Solaris has a stable API?
Unfortunately cheap shots get modded up.
The argument is that it stifles evolution. Guaranteeing a stable API sends a message to developers saying, “Our architecture is perfect and does not need to evolve or change even after we realize it’s a pile of poultry manure.” These aren’t excuses, these are facts backed up by practical experience. Linux’ USB stack is rumored to be the fastest and most efficient when compared to other operating systems. However, the USB stack was rewritten 3 times before the hackers got things right. If they had guaranteed API stability immediately after the first attempt, the Linux community would be paying the price of premature design to this day. Thankfully, Linux evolves with apathy and without sympathy for anyone. The kernel developers have also shown that they’ll continue to redesign things until they get it right. That’s a breath of fresh air, if you ask me.
Are you familiar with the difference between interface and implementation? No I guess not.
Interface and implementation splitting is all well and good, but in the end it actually harms performance to get the greater maintainability. And when you’re dealing with low-level entities like hardware and the structures of the kernel, too much implementation detail leaks through the interface.
The NT guys have done a pretty good job of separating out the interface, but you still need to follow a lot of arbitrary-seeming rules to write a driver that works in Windows even with the WDM because of leaking interfaces.
Are you aware that interfaces could be poorly designed even if implementation is perfect? No, I guess not.
That’s a bogus retort. It’s quite possible to get the API wrong and the implementation wrong. If you’re constrained to reuse your badly designed interface, then your subsequent implementations will be suboptimal…
Personally I think this whole discussion is pretty stupid. Windows – the main competition – does not have driver compatibility across releases. The main obstruction to drivers on Linux is market share.
And how long has Solaris been opened?
Is it so surprising that Solaris has fewer drivers than Linux? Or do you think it’s because Solaris has a stable API?
And what do you think about FreeBSD (with a mostly stable API and far fewer drivers) being in THE game even longer?
btw. My personal opinion is that the reason for that is not constant reinvention or time of existence, but the license, which is favoured by developers.
There is a difference between “supports” and “runs on”. You surely meant “runs on” because it is NOT supported on ANY hardware. There is also a large difference between “runs on” and “runs well on with no issues”. Again, you must have meant “runs on”, because that’s all it’s ever done for me.
The reason Linux doesn’t have stable APIs isn’t due to the technical inability, or any true “reason” other than there isn’t enough foresight put into the design phase. The reason projects like Solaris/OpenSolaris have stable APIs is somebody spent a LOT of time designing them to be extendable in the future, without changing existing functionality, EVEN IF you totally rip out the innards and rewrite them.
It’s the whole OO argument again. The people working on the kernel are obviously the “extreme coding” type folks who just start slamming stuff together and getting it out the door. That’s why you have broken network card support for some NICs that only partially works, but at the same time have drivers for just about every NIC out there. Lots of code flying out the door, very little thought in the design itself (which is an ugly, UGLY mess IMO).
I’d rather have a well designed system that is stable as a rock, but only runs on a limited set of hardware, but runs WELL on that hardware than a mish mash mixup of poopoo and mud slapped together and let cure under the sun.
“It’s the whole OO argument again.”
Part of the idea of OOP is to separate interface from implementation (encapsulation).
Maybe that’s part of the problem. The Linux kernel is written entirely in C (with small bits of assembly). C does not support OOP natively. And while OOP is possible with C, as evidenced by GTK+ and GObject, at the library level, it’s rather difficult and messy to do so, as compared to a language that supports it natively.
Is this a soapbox to say the kernel should adopt OOP, or a language that supports OOP natively? No. OOP is largely about abstraction, and with kernel development, or device driver development, abstraction can be a hindrance (the need to stay close to the metal).
What I am saying is that perhaps Linus and the kernel devs don’t have an OOP mindset, thus don’t fully understand/appreciate the power of encapsulation (separating interface from implementation), and thus the kernel driver API probably does not fully separate interface from implementation, thus the need to break the existing API when re-writing stuff for the purpose of optimization, or “evolution”. If there were complete separation of interface from implementation, it would be relatively trivial to completely re-write the implementation while still providing a stable API.
I don’t know. I’m not a kernel dev. Only they can answer the “encapsulation” question. But it seems logical to me that if there were true encapsulation on the API, maintaining a stable API while rewriting, innovating, evolving the implementation would be a complete non-issue.
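For the curious, here is a minimal sketch of that kind of encapsulation in plain C, with made-up names (this is not the actual kernel driver API): the header exposes only an opaque handle and functions, so the implementation behind it can be rewritten wholesale without touching callers.

/* widget.h -- the stable interface; callers only ever see this part */
struct widget;                                   /* opaque: layout hidden */
struct widget *widget_create(void);
long widget_read(struct widget *w, char *buf, unsigned long len);
void widget_destroy(struct widget *w);

/* widget.c -- one implementation; it can be gutted and rewritten at any
 * time without breaking code that only depends on the header above */
#include <stdlib.h>
#include <string.h>

struct widget {
    char scratch[64];                            /* private details */
    unsigned long used;
};

struct widget *widget_create(void)
{
    return calloc(1, sizeof(struct widget));
}

long widget_read(struct widget *w, char *buf, unsigned long len)
{
    unsigned long n = len < w->used ? len : w->used;
    memcpy(buf, w->scratch, n);
    return (long)n;
}

void widget_destroy(struct widget *w)
{
    free(w);
}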
Thank you for taking what I said and putting it into better words, I’m too tired today.
Of course, you’re correct. Call it what you will, whatever methodology, but the ENTIRE point of an API is to have a stable method for interfacing with the underlying architecture. Why have an API if it’s going to change daily? Just import the source directly and access its member functions directly instead of through some kind of interface.
It’s easy to abstract things and write a good API in C, it just takes effort and time in the design phase, as stated before this is NOT something the linux devs are known for spending much time on.
“Why have an API if it’s going to change daily? Just import the source directly and access its member functions directly instead of through some kind of interface.”
Why indeed. The whole purpose of an API is a consistent interface, for other software, or sections of code, to interact with.
Just look at Java. It has, indeed, “evolved” quite nicely. It continues to add powerful features, and continues to be optimized and improved. Any issues with the speed (or lack thereof) in which this happens can be attributed to the politics of the JSR. But the point is, Java is much better and more full-featured than it was in the early days, while still maintaining backwards compatibility, i.e., a stable API. But, alas, Java is fully an object oriented programming language.
Crap! I guess you have never been involved in a project where APIs actually change. No matter how much forethought you put into your design, your first few attempts will almost always suck! Welcome to reality where the waterfall approach to software development is all but legend.
The word you’re looking for is “layers.” That’s right, layers of abstraction to provide a common interface to various independent implementations. That’s what kernels need, that’s what Linux does, and that’s what the Linux kernel takes to the absolute extreme in many cases.
Often, these layers are implemented in part using C structures that contain data and function pointers. I assume that’s what you mean by OOP being possible in straight C. Not only is it possible, but it’s often easier (especially in a kernel) to do OOP-style implementations in C than in modern OOP languages because the semantics of C are so simple, powerful, and widely understood.
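Roughly, the pattern looks like this (an illustrative sketch with invented names, in the spirit of the kernel’s real ops tables rather than copied from them):

/* A "layer" expressed as a C struct of data plus function pointers.
 * Each implementation fills in its own table; upper layers only ever
 * call through the table, never into an implementation directly. */
struct blkdev_ops {
    const char *name;
    int  (*open)(void *dev);
    long (*read)(void *dev, void *buf, unsigned long nbytes);
    void (*release)(void *dev);
};

/* one implementation behind the interface: a pretend RAM-backed disk */
static int  ramdisk_open(void *dev)                              { (void)dev; return 0; }
static long ramdisk_read(void *dev, void *buf, unsigned long n)  { (void)dev; (void)buf; return (long)n; }
static void ramdisk_release(void *dev)                           { (void)dev; }

static const struct blkdev_ops ramdisk_ops = {
    .name    = "ramdisk",
    .open    = ramdisk_open,
    .read    = ramdisk_read,
    .release = ramdisk_release,
};

/* the layer above dispatches through the table, so swapping the
 * implementation underneath never changes this code */
static long upper_layer_read(const struct blkdev_ops *ops, void *dev,
                             void *buf, unsigned long n)
{
    return ops->read ? ops->read(dev, buf, n) : -1;
}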
Layers are why the API doesn’t break as often as this keynote might suggest. You’re right that layers make it easy to change the implementations independently of the interface, but what about when the interface needs to change? No matter how brilliant a kernel developer you are, you can’t anticipate the needs that hardware, software (kernel and userspace), and developers will have 5-10 years from now.
What Greg didn’t mention is that the kernel does have in-kernel interfaces that have been kept stable and/or backwards compatible for years, and sometimes this causes problems for the very reason I just mentioned. For instance, the developers of Reiser4 used their own interface design instead of that of the Linux VFS in order to better suit their implementation. The kernel devs still haven’t merged Reiser4, in part, because its developers initially refused to implement functions required by the VFS interface. This is a high-profile example of how stable interfaces, even fundamental ones largely derived from the days of BSD and System V, can hinder innovation.
While the kernel developers assert their right to change internal interfaces at will and without notice, it’s almost never in their best interest to do so. There’s no official policy (that I know of) for announcing interface changes in advance, but such announcements almost always happen whenever a breakage is planned. If this didn’t happen, the dev responsible for the change would likely be responsible for fixing broken parts of the kernel, and so they instead give the development community fair warning on the LKML.
The two keys to distributed open source development are modularity and communication. The Linux kernel development community has become masterful at both. That’s why they’re the distributed open source development project that (not literally but in many ways) started it all, and that’s why it’s still leading the way.
The kernel continues to evolve, but the development model and philosophy stays more-or-less the same. It’s hard to say whether or not stable APIs would hinder future progress, and it’s hard to say whether or not unstable APIs have been hindering progress all along. However, it’s undeniably true that the model is wildly successful: Linux is growing leaps and bounds by any metric. World domination is a joke, but those other operating system vendors don’t find it very funny anymore.
Yet of all the OSes you mentioned with “stable” APIs, Linux, the OS without a “stable” API, still supports the most hardware and architectures out of the box compared to any OS known to man. All of a sudden, the stable_api_nonsense.txt begins to make sense.
Which is completely irrelevant to what I originally wrote; the original article which I replied to claimed that if they created a stable API it would then create a situation where innovation would be stifled in that they would be more concerned about API stability rather than correcting fundamental flaws in the implementation of the said API in question.
I’ll use Solaris as an example: when USB was first implemented, it was implemented in a very ad hoc fashion, basically enough to get a basic mouse, keyboard and what-not functioning; but Sun clearly stated that a stable, long term USB DDK was in development; by the time Solaris 10 was released, it had a stable long term USB API which will allow any person to know that if they create a driver for Solaris 10’s USB API, it’ll work in 3 years’ time when, for example, Solaris 10.1/11 is released.
The Linux issue, in respect to the USB implementation, could have been avoided had they instead decided to properly design it from day one rather than rushing out an implementation simply to be able to say ‘look, we supported it first!’ – better to support something properly the first time than to have to re-invent the wheel 2 more times because of a cock-up in the original implementation.
When USB support was implemented, the first thing should have been to consider future-proofing it, to allow extensibility at a later date if the USB standard is either enhanced or significantly changed to suit a new set of requirements. Yes, it would have taken possibly up to 6 months, but then at least, when implemented, it would be in a position where it could remain stable for a few years rather than being in a constant state of flux.
Your unrealistic approach to real software development is exactly the theoretical bullshit the Linux kernel developers know well to avoid. Do you develop software? If so, has your first attempt at designing any API/infrastructure/libraries ever been perfect?
There is what you read in textbooks, and then there’s what actually works in the real world. Linux got its USB stack working first, they refined it with experience and now it’s arguably the most efficient. To add insult to injury it has the largest support for USB drivers out of the box of the whole bunch, including Solaris. Now tell me how its unstable API is going to be its demise? What advantage does Solaris stable API gain it over Linux? Today, in practical terms absolutely nothing. Instead Solaris has to maintain legacy code and if at any point they spot design flaws in their APIs, they are stuck with it.
Providing stable APIs is not a holy grail. It doesn’t automatically suggest better design, more drivers or even higher quality ones. It just gives developers blind faith and false sense of security.
Your unrealistic approach to real software development is exactly the theoretical bullshit the Linux kernel developers know well to avoid.
And yet it worked well for the Unix community for nearly 30 years.
Do you develop software?
yes.
If so, has your first attempt at designing any API/infrastructure/libraries ever been perfect?
Yes. But that’s not relevant. “stable” does not mean “perfect.”
Providing stable APIs is not a holy grail.
That’s correct. It’s good engineering practice, though.
It doesn’t automatically suggest better design, more drivers or even higher quality ones.
Starting from the assumption that you will not provide stable APIs, however, does automatically suggest poorer design and poorer quality implementations as the designer “evolves” the design through a random walk.
It just gives developers blind faith and false sense of security.
Nope.
And yet it worked well for the Unix community for nearly 30 years.
No, it didn’t. Unix’ hardware support has been a big joke for the last 30 years. Do you want me to list the devices I have that Solaris wouldn’t even recognize that Linux recognizes out of the box? A stable API means jack practically. Religiously, however, it makes some people feel better.
Starting from the assumption that you will not provide stable APIs, however, does automatically suggest poorer design and poorer quality implementations as the designer “evolves” the design through a random walk.
It doesn’t. It just means you don’t handicap the evolution of your project as a result of your initially premature design.
No, it didn’t. Unix’ hardware support has been a big joke for the last 30 years. Do you want me to list the devices I have that Solaris wouldn’t even recognize that Linux recognizes out of the box?
Which is why Unix has run on 1, 16, 32, 36, 48, and 64 bit processors with address bus sizes running from 15 to 96 bits, all possible endiannesses at the byte, 2-byte and 4-byte levels, and over a wider range of I/O architectures than even exist these days, right?
It doesn’t. It just means you don’t handicap the evolution of your project as a result of your initially premature design.
You handicap it with a poorly thought out implementation without a design, instead, then you take a random walk through the design space, trying to compensate.
Which is why Unix has run on 1, 16, 32, 36, 48, and 64 bit processors with address bus sizes running from 15 to 96 bits, all possible endiannesses at the byte, 2-byte and 4-byte levels, and over a wider range of I/O architectures than even exist these days, right?
Wake me up when it runs on my PowerBook, Xbox, PS2, or iPod. Forget those, wake me up when it recognizes my USB webcam. What a joke.
You handicap it with a poorly thought out implementation without a design, instead, then you take a random walk through the design space, trying to compensate.
Right, that’s why Linux has arguably the most efficient USB stack. It is a poorly thought out implementation that “evolved via random design spaces.” And Solaris’ well thought out implementation wouldn’t even recognize half of my USB devices. Given your “alternate reality” theory, I’ll take “poor implementation with random design spaces” any day, especially if it means I can use my hardware peripherals as opposed to salivating over waterfall theoretical BS.
Right, that’s why Linux has arguably the most efficient USB stack. It is a poorly thought out implementation that “evolved via random design spaces.”
With significant emphasis on “arguably”.
The Linux USB stack is fragile. It falls over if you do frequent enumerations and it enumerates unnecessarily.
That’s after the third redesign.
And it has support for less than half the USB devices on the market.
The Linux USB stack is fragile. It falls over if you do frequent enumerations and it enumerates unnecessarily.
Can you elaborate? How does that affect usage of my USB devices, which work flawlessly on Linux and have the highest transfer rates of any OS I’ve tested them on, while some UNIXes choke when I try to transfer data from my USB disk to my hard drive – that is, if the USB devices get mounted at all? Pointers to code and examples are welcome.
And it has support for less than half the USB devices on the market.
It supports all my USB devices, unlike some other UNIX. Also, conjuring statistics from thin air doesn’t lend credibility to your claims.
The stable API is a trap, it just leads to more binary-only modules being available, because right now one of three things happen (when some driver is available):
1) Companies provide some lame one-kernel-only binary-module (few, thank god)
2) Companies provide some blob of binary code with some open-source glue (like nvidia and ati do it now)
3) Open-source drivers are made, either by the community (with or without help/specs), or by the companies.
Which one would you think would increase if there was a stable API? If there was a way to write a driver and use it in all kernel versions instead of one?
Is it the best way to the freedom to use the hardware anywhere (OS/kernel/arch) you want?
Entertainingly enough, aside from a few very high profile cases, the big vendors seem to be quite good about driver support on Linux.
The high-profile cases, where vendors are reluctant to provide open-source drivers for Linux, don’t even have to do with a stable driver API. They have to do with IP issues (in the case of NVIDIA and ATI), and regulatory issues (in the case of wireless chipsets).
No stable API? Then don’t expect vendors to be eager to jump in and supply device drivers.
No one expects that. All kernel devs want are open specs.
No one expects that. All kernel devs want are open specs.
I’m a kernel developer and I want stable APIs. One gets tired of having to change a driver every couple of weeks because someone randomly changed the argument list to some interface, and yes, there have been periods in which the interface breakage was pretty random.
I’m a kernel developer and I want stable APIs. One gets tired of having to change a driver every couple of weeks because someone randomly changed the argument list to some interface, and yes, there have been periods in which the interface breakage was pretty random.
All I said is that kernel developers don’t expect hardware vendors to supply drivers, just open specs. How you turned that into kernel developers wanting a stable API, no one knows.
If most kernel developers wanted a stable API then there would be a stable API but obviously that’s not the case.
If most kernel developers wanted a stable API then there would be a stable API but obviously that’s not the case.
Believe me, Linux kernel development rules aren’t set by a majority vote. Nor is it “obvious” that “most” don’t want stable APIs.
I’m unaware of any survey of any sort of what “most” kernel developers want.
I’m also not calling for one. Linux is Linus’ toy, and he can play with it any way he wants.
I’m just trying to point out some problems in claims people have made about the consequences of the development model, and, in this case, some errors in assertions Greg made in his keynote.
I’m unaware of any survey of any sort of what “most” kernel developers want.
I’ll concede that point but I would say that the kernel developers that provide most of the code do not want a stable API. If they did then they would code it that way.
I’m just trying to point out some problems in claims people have made about the consequences of the development model, and, in this case, some errors in assertions Greg made in his keynote.
I’m just trying to point out the problems with the claims people make about how a stable API will benefit Linux when in fact it will hurt Linux. Allowing binary drivers will not only destroy the whole concept of having a free system, but it will destroy the kernel itself. No longer will it be small, clean, and fast. It will be just like windows, big, messy, and inefficient.
I’m just trying to point out the problems with the claims people make about how a stable API will benefit Linux when in fact it will hurt Linux. Allowing binary drivers will not only destroy the whole concept of having a free system, but it will destroy the kernel itself.
“binary drivers” is not the same as “stable APIs”, and the reality of the system is that without binary drivers, mostly in the form of graphics drivers, Linux penetration would be a tiny fraction of what it is now.
No longer will it be small, clean, and fast. It will be just like windows, big, messy, and inefficient.
It is big, messy, and inefficient. It’s been getting bigger, messier, and more inefficient for at least the last five years. Recent 2.6 has slower networking, for example, than earlier releases of 2.6 did. On some systems, with identical hardware, I’ve seen factors of two slowdown between 2.6.8 and 2.6.14.
If you want small, clean, and fast, take a hard look at plan 9.
Agreed. It combines what looks like an Impress presentation with commentary, both witty and informative.
“Agreed. It combines what looks like an Impress presentation with commentary, both witty and informative.”
Luckily, unlike Impress, it’s not eating all your system resources and running at 2 fps.
Just kidding.
Anyway, while I thought this was an interesting read, I fail to see his point about the lack of a need for a stable API. Yes, having one would force developers to keep backwards compatibility, but at least there would be some backwards compatibility, unlike in the existing setup, which forces closed source driver suppliers to provide hacked-together open source abstraction layers. As it stands, the kernel requires a recompile for something as simple as installing a driver, so it’s essentially impossible to write a driver and distribute it with a product, if a manufacturer were inclined to do so.
As it stands, the kernel requires a recompile for something as simple as installing a driver, so it’s essentially impossible to write a driver and distribute it with a product, if a manufacturer were inclined to do so.
That isn’t true, is it? I thought you could just ship it as a kernel module with your product and have it loaded at runtime. Of course, it might break if the user updates their kernel.
“As it stands, the kernel requires a recompile for something as simple as installing a driver, so it’s essentially impossible to write a driver and distribute it with a product, if a manufacturer were inclined to do so.
That isn’t true, is it? I thought you could just ship it as a kernel module with your product and have it loaded at runtime. Of course, it might break if the user updates their kernel.”
And that gives linux a bad image/perception
“That isn’t true, is it? I thought you could just ship it as a kernel module with your product and have it loaded at runtime. Of course, it might break if the user updates their kernel.”
You’d only have to recompile the kernel if a dependency (of the driver) is missing from the kernel image.
“As it stands, the kernel requires a recompile for something as simple as installing a driver, so it’s essentially impossible to write a driver and distribute it with a product, if a manufacturer were inclined to do so.
That isn’t true, is it? I thought you could just ship it as a kernel module with your product and have it loaded at runtime. Of course, it might break if the user updates their kernel.”
Of course this isn’t true, I install the nvidia drivers under Debian all the time (yes, even when I compile just the nvidia source package) and never have to recompile my kernel. It’s just a matter of using Module Assistant.
I fail to see the problem here. The best solution for the user as well as the developers is in-kernel drivers, so why should we care about a stable API? Hardware producers should write good enough code to get it accepted in the main tree, and stop whining about IP. New technology gets invented all the time, so how much can they really lose by opening up the specs, when the hardware is out of the market before competitors can get any advantage from it anyway?
Ouch, that’s what kernel.org says too, perhaps you should mail their webmaster to fix this up?
That is perfectly fine for hackers, tweakers, die hard users, but not for average computer users
The kernel doesn’t have to be compiled for new drivers. New drivers have to be compiled against the current kernel.
In practice, the distinction is irrelevant. Good distros handle driver updates transparently, through their repository.
This point cannot be stressed enough. Don’t install software or drivers manually on Linux and then complain when it’s hard. You’re not supposed to do that. That’s what the repository is for!
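To make the distinction above concrete, here is a rough sketch of what “compiling a driver against the current kernel” amounts to; the module name, file names, and messages are purely illustrative, and the kbuild invocation shown in the comments is the conventional one for out-of-tree modules.

    /* hello.c -- a minimal, purely illustrative out-of-tree kernel module.
     * It is built against the headers of the *running* kernel; no kernel
     * recompile is needed.  A one-line kbuild makefile is enough:
     *
     *     obj-m := hello.o
     *
     * and the usual build/load commands are:
     *
     *     make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
     *     insmod hello.ko
     */
    #include <linux/init.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

The resulting .ko matches only the kernel it was built against, which is exactly why distro repositories (or tools like Module Assistant) rebuild such modules whenever the kernel package changes.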
How does “Linux has a bazillion drivers” counter the “plug’n play is not quite there yet” myth? They are not really related. Plug’n Play is about being able to plug in a device and have the kernel notice its presence, allocate its resources, and have it ready for use. This happens with a lot of things in Linux, but not with everything. And it’s not consistent across environments (CLI, X11, KDE, GNOME, etc).
His logic for dispelling this myth is completely pointless and irrelevant.
CLI, X11, KDE, and Gnome are not Linux. Let me show you:
Linux
-> CLI
-> X11
->->KDE
->->Gnome
See the parent relationship for the dependency? How it’s handled up the tree has nothing to do with Linux…
Device handling in Linux, when the driver is there and working, is “studly.” TMK, NT does it just as well; it just doesn’t ship with as broad a set of drivers.
And your point? Other than being a pointless post, you do nothing to refute my statement, or to boost the article’s statement. IOW, it’s a pointless post.
His rebuttal to the “linux has no drivers” myth is completely illogical. I don’t care how many drivers linux has built in, I just care how many are available. I think it’s pretty safe to say that Windows has more available drivers than linux.
Did we read the same article?
In no way did I see that he was limiting the comparison only to what other OSes had “built-in”.
His consideration of the Linux kernel encompasses many different architectures, including embedded devices. Seems like a reasonable claim to me (I think that BSD would be the closest competitor in this arena).
So, what is the fact concerning Linux and devices these days? It’s this:
http://www.kroah.com/log/images/ols_2006_keynote_04.jpg
Yes, that’s right, we support more things than anyone else. And more than anyone else ever has in the past. Linux has a very long list of things that we have supported before anyone else ever did.
Quote from the image:
Linux supports more devices, “out of the box”, than any other operating system ever has.
Sounds like it could be taken your way, but in fact it is just half of what you think it is. (does that make sense, because I re-read it, and it doesn’t sound clear to me)
Let me use an example: “I drink more coffee before 8 am each day than my coworkers ever do“. Does that mean to compare what they drink before 8? Or all day?
That original statement is a bit ambiguous, but to assume a limiting clause on Windows is putting a few words in his mouth.
His reply was in response to the “myth” that linux has less drivers than windows. It seems pretty clear to me that he’s trying to refute this. He did prove that linux has more drivers, but he’s talking about built in, whereas most people only care if they’re available.
He did prove that linux has more drivers, but he’s talking about built in, whereas most people only care if they’re available.
Greg never “proved” anything, by any standard of proof in that talk. He claimed there were more drivers “out of the box” without saying what he meant by that phrase, and then he claimed that Linux was the first to support certain things.
His claims weren’t always correct, either.
Greg never “proved” anything, by any standard of proof in that talk. He claimed there were more drivers “out of the box” without saying what he meant by that phrase
I’d think he was talking about drivers that ship with the kernel/OS, in which case he’s correct. What I’m saying is that what he was talking about really had nothing to do with the “myth,” in the sense that nobody really cares whether drivers come out of the box or not.
Except that the “myth” he was trying to disprove is that Linux does not support PnP. Which has nothing to do with the number of drivers an OS has.
IOW, his entire ramble about drivers is illogical and does nothing to disprove the “myth”.
//I think it’s pretty safe to say that Windows has more available drivers than linux.//
You might think that … doesn’t mean your thinking bears any relation to reality.
Are you kidding me? Everything supports windows if it works on a PC. If you exclude other architectures, which has no bearing when you’re talking about drivers for pc devices, then by definition windows has more. Do you honestly think the kernel developers instantly create a new driver the minute a new device comes out?
“His rebuttal to the “linux has no drivers” myth is completely illogical. I don’t care how many drivers linux has built in, I just care how many are available. I think it’s pretty safe to say that Windows has more available drivers than linux.”
I will try to remember that the next time I try to install Vista and it dies because it doesn’t support my Promise SATA chip. Oddly, Linux supports it in x32 and x64.
Vista isn’t even released yet. Not only are you trying to compare a currently released OS to one that isn’t even released, you’re also trying to use circumstantial evidence to support a general claim!
Please try and provide less stupid arguments next time you decide to respond to me, mmkay?
“That said, the article does read a little bit like an advertisement. It’s not that it got anything wrong, just that it feels a bit… self-promotional, or something.”
And that’s bad? People need to know. It’s not that we are selling them something. Unlike other OS providers…
And that’s bad? People need to know. It’s not that we are selling them something. Unlike other OS providers…
Kroah-Hartman works at Novell… So yes, he is selling something.
It’s an operating system.
http://en.wikipedia.org/wiki/Operating_system
The shell and tools are a complement.
Linux, in fact, does support more hardware, devices, and peripherals out of the box than any other OS in existence.
And this has proven to be emphatically true in my experience installing several different Windows versions, and several different Linux distros.
In every case, the Linux distro supported more out of the box, without installing/adjusting anything, than Windows. Most of the time, you have to have a separate drivers CD, or motherboard CD, to get all the hardware working with Windows, or you have to go hunting on the internet.
And the above is especially true of desktop oriented distros, especially with live CDs, like Knoppix, Kanotix, Mepis, etc. With those, you just pop it into any PC, and everything just works, period.
Then add to that the incredibly wide range of hardware linux runs on, from cell phones, to embedded devices, to desktops, to big servers, to super computing, and everything else.
Thus, the Linux kernel development method, which does not maintain a stable API, is the absolute right way to go.
The only problem is when hardware providers don’t want to play by the GPL rules, and/or let the kernel hackers provide drivers by providing documentation of their devices. Fortunately, this is becoming less and less common. It’s such a huge win for everybody if a hardware provider does not make the driver closed source – they save tons in development costs, and users get better drivers, and the kernel is kept smaller and more secure, and the hardware provider gets more sales.
The old way is dying. Bring in the new.
As explained in the text: unstable api allows for better performance, less code and above all SAFER code.
That you have to recompile your kernel for new drivers is quite annoying, but your distro should provide new kernels as they come out, or update the patches to the kernel in use, as Ubuntu does, for example.
unstable api allows for better performance, less code and above all SAFER code
That’s the f–king stupidest thing I’ve read on osnews since yesterday.
Don’t ever become a developer.
Why not safer code? Surely less code means less vectors for a potential attack?
What is the difference between a kernel driver and an application? Both provide some special functionality; both interact with the kernel using an API. An application is not derived from the kernel, and neither is a driver. Using the API from a private module should not mean it is a “derivative”; otherwise all non-GNU Linux applications would violate the license.
So here I can see a promise to break my device driver all the time. And there is a requirement to include all private drivers in the tree, so someone will fix it (who?). What rubbish. This does not scale. Nobody needs my driver, nobody understands it, it is just for my private hardware. If we include all this crap in the tree, it will definitely contain the largest number of devices, which nobody needs.
So there is a point about compatibility, which is too far from reality. Half of my external devices at home do not work with Linux, because of driver problems. All of them work with Windows. Not a single distribution could ever detect and set up the correct video resolution and refresh rate on my system. All versions of Windows do, starting from Windows 98. There is no correct driver for my printer either. And all this hardware is just commodity stuff. The driver problem is one of the biggest problems in Linux. If I could have a layer reusing Windows drivers, it would be great. But creating such a layer is PROHIBITED! So I use Windows when working with these devices.
If Linux keeps going this way, I will definitely look for some other OS with a more reasonable license.
What is the difference between a kernel driver and an application? Both provide some special functionality; both interact with the kernel using an API.
Kernel drivers link with the kernel, applications do not.
What is the difference between a kernel driver and an application? Both provide some special functionality; both interact with the kernel using an API. An application is not derived from the kernel, and neither is a driver. Using the API from a private module should not mean it is a “derivative”; otherwise all non-GNU Linux applications would violate the license.
There is a big difference between drivers and applications. Applications do not link to the kernel, drivers do. Applications do not use kernel header files, drivers do, therefore drivers are derivatives. Drivers will only work with certain operating systems. Applications can be ported to a variety of operating systems and are not dependent on the kernel itself to be useful.
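To illustrate the line being drawn here, the application side of the fence looks roughly like the sketch below; the file name and program are just an example, and the point is only that an ordinary program talks to the kernel through the stable syscall interface via the C library and pulls in no kernel headers at all, whereas a driver (like the module sketch earlier in the thread) includes kernel headers and is linked into the running kernel at load time.

    /* read_version.c -- a sketch of an ordinary user-space program.
     * It uses only the C library and the stable syscall boundary
     * (open/read/close); nothing here links against the kernel or
     * includes kernel headers, which is the contrast with a driver. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[128];
        int fd = open("/proc/version", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("%s", buf);
        }
        close(fd);
        return 0;
    }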
So here I can see a promise to break my device driver all the time. And there is a requirement to include all private drivers in the tree, so someone will fix it (who?). What rubbish. This does not scale. Nobody needs my driver, nobody understands it, it is just for my private hardware. If we include all this crap in the tree, it will definitely contain the largest number of devices, which nobody needs.
It doesn’t hurt to have your driver included in the kernel. If it isn’t used it won’t be loaded. There really is no downside, and the driver will be updated to keep working.
So there is a point about compatibility, which is too far from reality. Half of my external devices at home do not work with Linux, because of driver problems.
How many devices don’t work and what are they? I find it hard to believe that most of your devices don’t work. Any external drive should work without a problem.
Not a single distribution could ever detect and set up the correct video resolution and refresh rate on my system.
Which distributions and when was the last time you tried? You know there is a utility for grabbing monitor specs from the monitor itself right?
There is no correct driver for my printer either.
That has nothing to do with the kernel. Printer drivers are in userspace.
>Applications do not link to the kernel, drivers do.
As I already wrote here, “linking” is a technical detail describing the way the driver is installed. The only legally important fact is that the software is installed on the machine.
> Applications do not use kernel header files, drivers do, therefore drivers are derivatives.
Really? A header file does nothing except describe an API. And any application, if it wants to use an OS API, also has to use some kind of header files when it is compiled.
Proprietary drivers do not use kernel header files for their internal HAL layers; they use an internal API whose codebase is usually shared between OSes. Only the top-level OS interface depends on kernel headers, and it can be easily distributed under the GPL.
>I find it hard to believe that most of your devices don’t work.
Well. I have been doing this test from time to time for about 7 years: installing one of the most popular Linux distributions and seeing if it works and if I can use it without switching to Windows. I do this on my notebook as well. Not yet. Close enough, but not yet. Buggy, folks. Buggy.
I do not see the point of listing all devices here. This is a big list. Just a few, for example: VBox USB HDTV tuner, lightscribe for DVD writer, KISS DVD Link software (if you know what it is), TV Camera on Sony VAIO, and so on.
>You know there is a utility for grabbing monitor specs from the monitor itself right?
Well. Windows does not ask me to know about a utility. It just works. I have 4 distributions installed on my system right now, and I had to edit the config file manually on every one of them.
>Printer drivers are in userspace.
I hope so. But the important thing from a business point of view is that I reboot into Windows when I need to print.
Really? A header file does nothing except describe an API. And any application, if it wants to use an OS API, also has to use some kind of header files when it is compiled.
I think the issue here is that you don’t understand what “derivative works” means. Basically what it means is that without the work it was derived from, it is useless. This is true of binary drivers because they are only there to work with the OS and the hardware. This is not the case with applications.
Proprietary drivers do not use kernel header files for their internal HAL layers; they use an internal API whose codebase is usually shared between OSes. Only the top-level OS interface depends on kernel headers, and it can be easily distributed under the GPL.
True, but you are missing one crucial thing. As soon as you compile the GPL wrapper with the binary-only module, it is no longer redistributable, because you are breaking the GPL license if you do. You cannot mix code (and then distribute it) when one piece of code is GPL and the other is closed source.
I do not see the point of listing all devices here. This is a big list. Just a few, for example: VBox USB HDTV tuner,
There are supported TV tuners out there, you know that right?
lightscribe for DVD writer
No offense but that is of minimal importance to most people right now, especially since my experience with lightscribe is that it doesn’t work too well anyway.
KISS DVD Link software
That has nothing to do with drivers. We’re not talking about userspace software here.
TV Camera on Sony VAIO
Not sure what that is. What does it do?
Well. Windows does not ask me to know about a utility. It just works. I have 4 distributions installed on my system right now, and I had to edit the config file manually on every one of them.
I guess I’ve been lucky because I’ve never even had to use that utility. I set up my monitor on Gentoo without an issue by using the xorg configuration program. With other distros the resolution has always been detected automatically for me.
I hope so. But the important thing from a business point of view is that I reboot into Windows when I need to print.
Well businesses should be using laser printers, which are well supported under Linux.
Mmmm… I guess he is selling Linux.
If everything this guy says is true, then it just goes to show that all the accomplishments of the linux community aren’t enough to dominate the OS world.
In other words, even if Linux is better in every way it still can’t win.
Microsoft must have the right magic, since their OS is so cr*ppy yet it still rules the world (ie 95% desktop share, second largest server share, etc…)
In other words, even if Linux is better in every way it still can’t win.
No problem here. Linux is a kernel promoting freedom. OpenSolaris is a kernel with a stable API/ABI welcoming hw vendors. And on top of them, the same software.
As long as *X && (OSX!=*X) wins, I’m voting for it.
OK, I’ll get voted down for this, but “cr*ppy”??? Are you in grade school?
haha linux is a d*mb OS for st*pid p*opl* thats why im h8in it
Wow, I actually agree with you:
linux is a d*mb OS for st*pid p*opl* thats why im h8in it
Yep. you’re right!
And since you can’t spell, maybe you should go back to grade school.
What an idiot you are, you didn’t even get the point of the original post.
“My favourite nemesis – plug and play, is still not at Windows’ level” – Jeffrey Jaffe, CTO, Novell
Debunked by Greg Kroah-Hartman of Suse/Novell! Jeffrey Jaffe has certainly been unimpressive as a CTO, and he’s supposed to be technical.
I can certainly agree with him on Linux having the best hardware support. You can actually get a kernel that will use the scaling of the latest CPUs to scale back processor speed from laptops to desktops. What do I have to do to make that happen on Windows for a Sempron or Athlon desktop machine I have? Download some drivers from AMD or elsewhere that may make my system bluescreen – and has done. Nice…..
He’s also right on the whole ABI/API interface issue. The vast majority of users want their hardware to work. Drivers should be seen and not heard, and preferably not seen. The way to do this is through proper testing of a kernel system as a whole so bugs can be ironed out, security issues can be passed on and solved and the performance of said drivers and hardware can work as advertised – all for the benefit of end users. Quality assurance through peer review and testing of a system as a whole. End users don’t give a flying F that their operating system has a stable ABI interface so developers of drivers for their hardware can sit in a darkened room and produce stuff that simply doesn’t work, and needs multiple updates before it even thinks about working.
Personally, I see Linux as a process where hardware can be made to work better and more reliably for end users, drastically improving the impression of computers in general. People think Windows drivers just work. They don’t.
What a mess it is!
After reading the article it is obvious that linux is nothing more than one BIG BAD UGLY PATCH !
It just kind of tweaks me, this claim that closed source modules are illegal.
I’m sorry, but desktop users USE OpenGL. I know the kernel devs are stuck in the CLI all day, but get with the times.
What am I supposed to do for OpenGL on my new nvidia video card? Take that OpenGL hardware and just shove it up my ass? Because of some FOSS people?
There are very good reasons for not opening up hardware specs; there are other competing companies out there that would love to get their hands on that data. Hence why it’s closed source. Sorry you FOSS people don’t get that.
If you want Linux to become more mainstream you need to open up a little more, and think about the desktop users.
“There are very good reasons for not opening up hardware specs; there are other competing companies out there that would love to get their hands on that data. Hence why it’s closed source. Sorry you FOSS people don’t get that.”
That’s a decent point. The advanced 3D graphics chip providers are in an intensely competitive market, and they are always trying to gain even the slightest of competitive performance advantages over their competitors. Opening up their specs, for the purpose of having GPL drivers for the Linux kernel, means giving away their performance advantages to their competitors. They’re not going to do it. Period. It’s business suicide.
This is where there needs to be some sort of a middle ground, like an LGPL abstraction layer on the API, where it is both legal and easy to provide a closed source driver (again a necessity for advanced graphics chips providers) for Linux.
Have you used the official, closed source 3D drivers in Linux? They could put the code into public domain and it still wouldn’t be used for anything other than an example of how not to write a driver.
>>What is a difference between kernel driver and application? Both provide some special functionality, both interact with kernel using API.
> Kernel drivers link with the kernel, applications do not.
“Linked” is not a word from the legal or business dictionary. “Linked” is a rather unimportant technical detail, which just reflects the current state of Linux kernel development. Saying that a driver violates the GNU license when it is “linked”, but does not violate it when it is “loaded as a module” or “works in user-space”, does not make sense. From a business point of view the driver “is installed”, whatever that means.
The same goes from a development point of view. If I make hardware that needs software to work, I design both the user-space and kernel-space software and ship them together. And I need a stable API on both sides, or I’ll ship my own version of the kernel to be sure that I control this part, and then I will have to provide my own security updates and everything.
When you develop for Windows, you do not need to worry about providing Windows security updates, do you?
The ivtv drivers I have to recompile every time a minor kernel update downloads itself to my Mythbox, or…
The kernel-subsumed bttv-driver.c whose audio broke with 2.6.15 when a well-intentioned v4l developer arbitrarily rewrote a portion of its code without checking it across a large enough sample of the cards using the chipset?
The reason you had to recompile the ivtv drivers for every kernel update is because the code quality wasn’t very good and the driver wasn’t very stable. Recently, it has gotten a LOT better, and the devs are now working on merging it into the kernel tree and massively cleaning up the code. See http://ivtvdriver.org/trac/roadmap to track progress.
As for the graphics BS that other people posted about…
It would make BUSINESS SENSE to open source the driver. Allow me to explain why.
A) Open source drivers allow EVERYONE to fix bugs. This means that the company producing the hardware does not need to employ as many people to work on diagnosing and fixing bugs as other people will do that for them. This will allow them to get a competitive advantage as their paid developers can be working on adding features and improving the driver.
B) Performance freaks in kernel development will now be able to work on the code for the driver and make sure that it is even more heavily optimized, giving a competitive advantage to that graphics card driver.
C) Thanks to the reduced hours that the company developers need to spend on the driver, more time can be put into improving the quality of the hardware itself, providing a competitive advantage.
Drivers can be reverse engineered and probably already are… but it takes time. Looking at open source code and trying to figure out a way to copy it and rewrite it into your own form and closed source code still takes time, which is money. The company with closed source drivers is spending more on development of drivers now and can’t afford to spend as much on the development of the hardware as a result.
If the second company goes and open sources their drivers, now they get all of the benefits, and then both of the companies continue to reduce their cost of developing the drivers as they are going to be able to share the code that really doesn’t matter and that everyone is doing the same way at this point anyways. They will still need to compete for hardware quality regardless of all of this.
Remember, open sourcing the driver doesn’t mean saying how the hardware works internally, just how to use the hardware for the best performance. That is a lot easier to reverse engineer then the hardware internals anyways.
What if the driver accesses parts of the hardware that are not known from the outside? The driver does know what registers are on the hardware, and the patterns used for accessing hardware resources could give big clues as to the architecture of the hardware itself.
So much of the graphics card game is actually in the drivers because it’s only feasible to do so much in hardware. The more you do in hardware rather than software, the higher your verification costs and the slower your time to market, because silicon has to be almost totally right the first time, while software can be revved.
Most of this article is the truth. I may not agree with the guy’s views on some things (ie. a stable API) but nevertheless. One thing I have to absolutely call bullshit on though is that Linux runs on more platforms than any other system. This is complete nonsense. Debian supports more architectures than any Linux distribution and they are currently at 11. NetBSD supports over 50, including all the embedded stuff LXers are so proud of and a friggin toaster.
Debian comes with far more packages than NetBSD. Furthermore, the NetBSD kernel isn’t as advanced as Linux in many areas, like power management, multimedia, etc. I mean, just take a look at the ChangeLog for 3.0:
http://www.netbsd.org/Releases/formal-3/NetBSD-3.0.html
Some of its “new” core features were added to Linux more than a year ago!
Now, I am fan of NetBSD and I really don’t want to badmouth the project. I am fully aware that both projects have different development/design philosophies… It’s also quite a feat that NetBSD can run on so many architectures with a limited team of developers.
Still, there might be no Linux distribution that is as versatile as NetBSD, but I believe GKH was spot on when it comes to the versatility of the Linux kernel. We are not only talking of single-processor configurations, but massive SMP systems too.
And there are many features in NetBSD that do not exist in Linux. Did you have a point?
I’ll agree that there are more packages available for Debian, but does that in any way change my original post?
And there are many features in NetBSD that do not exist in Linux. Did you have a point?
No matter how you try to spin it, Linux just has more features.
I’ll agree that there are more packages available for Debian, but does that in any way change my original post?
NetBSD might have more ports, but Debian is more complete, hence more useful out-of-the-box. Still, Debian doesn’t speak for all distributions, even less for the kernel.
Anyway, like I said, it’s just unfair to compare both projects since they are definitely not going the same way.
…One thing I have to absolutely call bullshit on though is that Linux runs on more platforms than any other system. This is complete nonsense. Debian supports more architectures than any Linux distribution and they are currently at 11. NetBSD supports over 50, including all the embedded stuff LXers are so proud of and a friggin toaster.
It may surprise and shock you, but: Debian runs (mostly) on top of a Linux kernel.
Linux Kernel != Linux Distro
It may surprise you even more, but Debian also runs on top of both NetBSD and FreeBSD:
http://www.debian.org/ports/netbsd/
http://www.debian.org/ports/kfreebsd-gnu/
not sure why you’d want to, but the bear does dance.
I’m well aware of these atrocities. I simply used Debian to make a point. One that has yet to be proven wrong.
When you take the Linux kernel code, and link or build with the header files against it, with your code, and not abide by the well documented license of our code, you are saying that for some reason your code is much more important than the entire rest of the kernel. In short, you are giving every kernel developer who has ever released their code the finger.
I think this almost actually made me laugh.
The truth is that kernel hackers don’t want a stable API. They want to be able to improve the kernel at will without having to worry about legacy code. If drivers are in the kernel then there isn’t a problem. If they are not in the kernel then the maintainers should attempt to have the drivers included in the kernel. If the drivers are binary then no one cares because Linux is a GPL system. It is actually a detriment to Linux to have the ability to load binary drivers across a wide range of kernels. This gives hardware vendors less incentive to open drivers, which in turn gives kernel hackers less leeway in improving and changing the way the system functions. The author gave a very good example of this when explaining Windows’ three different USB stacks. The code is now more bloated, less secure, and has fewer features, never mind being slower. The strength of the GPL is that when things change, everything dependent on those changes can be tweaked and recompiled. With proprietary and “stable” systems we have to deal with code mistakes made years ago. Linux, on the other hand, just keeps improving and leaving old and broken code behind. Personally I would rather have a more streamlined and secure kernel than one that lets me load 2 year old binary drivers.
The truth is that kernel hackers don’t want a stable API. They want to be able to improve the kernel at will without having to worry about legacy code.
The truth is I’ve been a kernel hacker for 30 years, and I’m a big fan of stable APIs.
It has been my experience, comparing Unix and its derivatives to Linux, that the stable APIs in Unix have made it easier to improve code there.
Besides, driver interfaces aren’t rocket science. If you can’t design a set of stable driver interfaces in one go, you really shouldn’t be designing kernel subsystems.
It has been my experience, comparing Unix and its derivatives to Linux, that the stable APIs in Unix have made it easier to improve code there.
OK, then what happens when the API is broken or needs to be improved? You’re stuck. The only option is to support multiple APIs like Windows does. Not only is that not feasible for the Linux kernel, but it is also less secure, less compatible, and more bloated.
It has been my experience, comparing Unix and its derivatives to Linux, that the stable APIs in Unix have made it easier to improve code there.
OK, then what happens when the API is broken or needs to be improved? You’re stuck. The only option is to support multiple APIs like Windows does. Not only is that not feasible for the Linux kernel, but it is also less secure, less compatible, and more bloated.
Stable is not ‘unchanging’. There are several options, none of which are rocket science. (How to support stable APIs was first figured out by the OS/360 team back in the 60s, after all.)
The most common approach is to deprecate an API over time, and differentiate the old version from the new when the new is introduced.
But even Linux/GNU supports multiple APIs out of the kernel, as with lseek and llseek. Do you really think it’s harder to do this in the kernel than out?
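For what it’s worth, the deprecate-and-wrap pattern described above is simple enough to sketch. This is not real kernel code; the names seek32/seek64 and the types are invented purely for illustration. The old, narrow-offset entry point is kept as a stable wrapper around the new wide-offset one and marked deprecated so new callers migrate.

    /* seek_compat.c -- illustrative only; seek32/seek64 are made-up names,
     * not actual kernel or libc interfaces.  The idea: introduce the new
     * API with the wider type, keep the old entry point as a thin wrapper
     * so existing callers keep working, and flag it as deprecated. */
    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t wide_off_t;

    /* the new interface */
    static int seek64(int fd, wide_off_t offset, int whence)
    {
        /* the real implementation would live here */
        printf("seek fd=%d offset=%lld whence=%d\n",
               fd, (long long)offset, whence);
        return 0;
    }

    /* the old interface, kept stable as a compatibility wrapper */
    __attribute__((deprecated))
    static int seek32(int fd, int32_t offset, int whence)
    {
        return seek64(fd, (wide_off_t)offset, whence);
    }

    int main(void)
    {
        seek64(0, 1LL << 40, 0);   /* new callers use the wide variant   */
        return seek32(0, 1024, 0); /* old callers still compile and work */
    }

The same idea applies inside a kernel: the old symbol stays around, documented as deprecated, for a release or two while callers move to the new one.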
The most common approach is to deprecate an API over time, and differentiate the old version from the new when the new is introduced.
But even Linux/GNU supports multiple APIs out of the kernel, as with lseek and llseek. Do you really think it’s harder to do this in the kernel than out?
When it comes to drivers it is definitely harder. Each driver must be implemented and reimplemented for each different API. That is not an easy task.
When it comes to drivers it is definitely harder. Each driver must be implemented and reimplemented for each different API. That is not an easy task.
Which is why driver APIs should be stable.
Greg’s anti-API arguments would make more sense if there was any evidence at all that Linux had benefited from the lack of a stable driver API, but compared to systems that have stable driver APIs, it’s no better.
Greg’s anti-API arguments would make more sense if there was any evidence at all that Linux had benefited from the lack of a stable driver API, but compared to systems that have stable driver APIs, it’s no better.
How do you figure? Linux supports more hardware than any other operating system. Obviously it is working out better than you think.
Greg’s anti-API arguments would make more sense if there was any evidence at all that Linux had benefited from the lack of a stable driver API, but compared to systems that have stable driver APIs, it’s no better.
How do you figure? Linux supports more hardware than any other operating system. Obviously it is working out better than you think.
What’s working out “better” is that Linux has attracted more people to driver writing than other systems. The bit rot of older drivers as API changes aren’t tested on them, coupled with the lack of evidence that Linux drivers are any more stable than others, coupled with the difficulty that Linux developers have keeping current with new hardware introductions, coupled with the increasing fragility of the USB subsystem, argues that the only thing it has is volume.
There’s certainly nothing about the Linux driver model that makes it easier to write drivers for than, say, BSD, for all the churn there’s been in the APIs, and I don’t find the drivers that do exist to be any more stable than those from other operating systems.
So what did we get out of all the API churn? API churn.
Where’s the “improvement”?
What’s working out “better” is that Linux has attracted more people to driver writing than other systems. The bit rot of older drivers as API changes aren’t tested on them, coupled with the lack of evidence that Linux drivers are any more stable than others, coupled with the difficulty that Linux developers have keeping current with new hardware introductions, coupled with the increasing fragility of the USB subsystem, argues that the only thing it has is volume.
Linux kernel developers do not have a hard time keeping current with new hardware. In fact they release some drivers before other operating systems do. The problem is not having open specs as I mentioned earlier. Your point about USB is moot to me because I use several different USB devices daily and have for years without a hitch on the Linux kernel. I hardly call that fragile. What would you call Windows’ mess of a USB subsystem? At least Linux’s USB stack is clean and fast.
There’s certainly nothing about the Linux driver model that makes it easier to write drivers for than, say, BSD, for all the churn there’s been in the APIs, and I don’t find the drivers that do exist to be any more stable than those from other operating systems.
Obviously Linux’s model is not hurting them considering it has better support for more hardware than any other operating system. I think the only two issues that remain are soundcards and wireless cards, and the wireless issues are about to go away because of the recent introduction of the broadcom driver and the devicescape wireless stack. Soundcards are a different story. I haven’t had trouble getting them to work so much as getting them to work with all their features.
So what did we get out of all the API churn? API churn.
No. We got cleaner, more stable, and more secure APIs and a smaller kernel.
Linux kernel developers do not have a hard time keeping current with new hardware. In fact they release some drivers before other operating systems do.
Sorry, but the second point doesn’t confirm the first, and neither is true. The first release of drivers for new hardware is by the hardware vendor, and it is inevitably for windows. There is a large amount of new hardware for which Linux drivers aren’t available right now, or, if they’re available at all, are only available in closed source form.
The problem is not having open specs as I mentioned earlier. Your point about USB is moot to me because I use several different USB devices daily and have for years without a hitch on the Linux kernel.
“moot to you” is not moot. The stack is fragile and getting more so. There are well known outstanding bugs against enumeration that are still present in the latest tree. Those bugs are very annoying to those of us who need to frequently plug and unplug devices.
I hardly call that fragile. What would you call Windows’ mess of a USB subsystem? At least Linux’s USB stack is clean and fast.
I don’t find it to be particularly clean. I find it to be particularly easy to make it fall over.
Obviously Linux’s model is not hurting them considering it has better support for more hardware than any other operating system.
“better support”? The existence of source code is not the same as support. You’re confusing quantity with quality.
Besides, being better is not the same as not being hurt by. How do you know they wouldn’t be better off still if they had taken the time to design stable APIs?
No. We got cleaner, more stable, and more secure APIs and a smaller kernel.
I’ve followed API churn in the kernel for a long time now. I’ve yet to see it produce cleaner, more stable, or smaller kernels. The vast majority of API churn has been to add features. There is evidence that 2.6 is becoming less stable over time.
I LOVED it. I cannot say anything else.
Even if you accept the assertion that Linux supports (and I use the word support here loosely) more devices you must admit that it does not support more current devices than Windows. And if you count all the devices that Windows has ever supported, then it would clearly be the winner.
Go to any major computer shop in the US (Frys, Best Buy, Circuit City, etc.) and take every computer device currently for sale. 100% of them will have a Windows driver included either in the box or in Windows already. Does anyone truly believe that Linux also will support 100% of these same devices?
The presentation implies that Linux will just work with anything you throw at it. I know from experience of working with Linux for more than a decade that this is not true. And this sort of propaganda will only go to make worse the very notions that the presentation attempts to address when they realize that things do not just work after installing Linux.
Even if you accept the assertion that Linux supports (and I use the word support here loosely) more devices you must admit that it does not support more current devices than Windows. And if you count all the devices that Windows has ever supported, then it would clearly be the winner.
Two points. Windows doesn’t support the device, the vendor’s driver supports the device. Windows just provides the framework for the driver to provide support. The same is true of Linux.
Second, I know for a fact that Windows doesn’t, and never has, supported any of my SBUS devices for my SPARCstation 20:)
For the record, I believe Linux should have a stable API. For devices such as wireless adaptors, there should be no barriers to stable APIs. Changing structures and function call parameters (calling conventions, even!) is utter nonsense, micro-optimization at best.
I think the key words were “out of the box”
“Linux supports more devices, “out of the box” than any other operating system ever has.”
Hey Greg? What does “out of the box” mean to you? You forgot to say.
But even if correct, the point is irrelevant, because Linux does not support more current off-the-shelf hardware than any other OS does, and given the short shelf life of computer hardware these days, what matters is how many of the devices released in the past 18 months you support.
“Linux supports more different processors than any other operating system ever has”.
Hey Greg? Why’d you add the caveat “major”? Do you have some definition of major that leaves out OSes that have supported more?
By the way, I guess by “support” you mean “has been known to have been run on once”? Because there’s sure a lot of bit-rotted “supported” code in the tree. (You know, like the devfs you keep trying to get rid of that’s supposed to finally go away in the next release…)
And remember, almost every different driver that we support, runs on every one of those different platforms.
Nonsense, as anyone who has worked in the tree knows.
This is something that no one else has ever done in the history of computing. It’s just amazing at how flexible and how powerful Linux is this way.
Nonsense, again. NetBSD is, in fact, better at abstracting architecture independent features out of drivers and making them portable between OSes.
But even within a single ISA, you can see a wide range of skill in the degree of portability of drivers. Compare the xscale drivers to the omap drivers, for example to see how much portability impact there is.
Linux wasn’t even the first, let alone the only OS to have that sort of flexibility.
An example of this, I recently plugged a new USB printer into my laptop, and a dialog box popped up and asked me if I wanted to print a test page on it. That’s it, nothing else. If that isn’t “plug and play”, I really don’t know what is.
Good for you. I recently plugged an old webcam into a SuSE distro with your USB subsystem on it and watched the box freeze up.
The reality of USB P’n’P support is that Linux does an adequate job for devices that have good implementation of class support for one of the class drivers, but that it tends, otherwise, to fall on its face; while vendors who make USB devices make sure they work with Windows, even if they don’t fit one of the class drivers.
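For readers wondering what “class support” means here: a Linux USB driver declares the devices, or whole device classes, it is willing to bind to in an ID table, which is why a device that correctly implements a standard class (mass storage, HID, and so on) can work with no vendor-specific driver at all. The following is a rough, illustrative sketch rather than a real driver; the class/subclass/protocol triple is the standard mass-storage “SCSI transparent, bulk-only” combination, while the driver name, vendor/product pair, and probe/disconnect bodies are made up.

    /* class_match.c -- illustrative sketch of USB driver matching,
     * not a working storage driver. */
    #include <linux/module.h>
    #include <linux/usb.h>

    static const struct usb_device_id sketch_ids[] = {
        /* match any interface implementing the standard class */
        { USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, 0x06, 0x50) },
        /* or one specific, hypothetical device */
        { USB_DEVICE(0x1234, 0x5678) },
        { } /* terminating entry */
    };
    MODULE_DEVICE_TABLE(usb, sketch_ids);

    static int sketch_probe(struct usb_interface *intf,
                            const struct usb_device_id *id)
    {
        dev_info(&intf->dev, "sketch driver bound\n");
        return 0;
    }

    static void sketch_disconnect(struct usb_interface *intf)
    {
        dev_info(&intf->dev, "sketch driver unbound\n");
    }

    static struct usb_driver sketch_driver = {
        .name       = "sketch",
        .probe      = sketch_probe,
        .disconnect = sketch_disconnect,
        .id_table   = sketch_ids,
    };

    static int __init sketch_init(void)
    {
        return usb_register(&sketch_driver);
    }

    static void __exit sketch_exit(void)
    {
        usb_deregister(&sketch_driver);
    }

    module_init(sketch_init);
    module_exit(sketch_exit);
    MODULE_LICENSE("GPL");

A device that doesn’t fit any class driver needs its own specific driver (or a quirk entry), which is exactly where the out-of-the-box experience diverges between vendors that ship class-compliant hardware and vendors that don’t.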
Linux is evolution, not intelligent design
Here I have to agree completely. Anyone who has followed the discussions of, say, the OOM issue, on LKML, will recognize that there was no intelligent design involved, but rather, a lot of throwing random hacks at the problem until it went away.
Hey Greg? Are you sure you wanna brag about not having any intelligence in your design?
Hey Greg? I’ve read stable_api_nonsense. It says “Hi, I don’t know how to design stable APIs, so no one should.” It ignores the fact that Dennis Ritchie taught us 30 years ago how to do stable APIs in kernels.
closed source linux modules are illegal
Hey Greg? Did you run this slide past Linus? Do you recall that he gave an example of a closed source linux module that would be perfectly legal?
I think you forgot to put IANAL in your presentation.
Hey Greg? What does “out of the box” mean to you? You forgot to say
No he didn’t, he was referring to the preceding slide about Plug and Play. You just have a hard time reading a simple keynote.
But even if correct, the point is irrelevant, because Linux does not support more current off-the-shelf hardware than any other OS does, and given the short shelf life of computer hardware these days, what matters is how many of the devices released in the past 18 months you support
This is BS and not true at all. He is right even for off-the-shelf hardware. He did say that some esoteric hardware won’t work. That means any new chip not supported by the kernel, or a product that breaks standard APIs. That means a graphics card with a new chip, for example, or a printer with a new chip.
Your new shiny SATA drive will work just the same, your new DVB-T USB dongle will work too (if it doesn’t have a brand new chipset), …
And I’m sorry to tell you Windows XP still does not even support SATA, so your point is even worse in the BS realm.
It just proves what matters is NOT how many devices released in the past 18 months you support, but how the IHV support your OS, which is not the same thing at all, with not the same actors at all.
By the way, I guess by “support” you mean “has been known to have been run on once”? Because there’s sure a lot of bit-rotted “supported” code in the tree. (You know, like the devfs you keep trying to get rid of that’s supposed to finally go away in the next release…)
If the best example you can come up with is devfs, then you’re not credible at all. devfs development (which means the kernel module and the necessary user-space daemon) was stopped by its own original developer, and has been deprecated for a long time. So I fail to see where you got the idea it was “supported”; it’s just false!
Nonsense, as anyone who has worked in the tree knows
That’s not nonsense at all. You seem to have missed the “almost”, and it’s a keynote, he can’t be specific and confuse people.
Of course, not all drivers can work on every architecture, or are as well tested on each.
Nonsense, again. NetBSD is, in fact, better at abstracting architecture independent features out of drivers and making them portable between OSes
He’s talking about making Linux work on these architectures (with all kinds of OS) …
Good for you. I recently plugged an old webcam into a SuSE distro with your USB subsystem on it and watched the box freeze up.
Which can be caused by the chipset on your motherboard, especially with an old webcam that can suck up too much power from the USB bus … Of course that didn’t occur to you. It could just be the driver for this webcam too. Of course, this has NOTHING to do with the fact that Linux is truly PnP, which GKH was talking about.
Nowadays, nearly every USB device comes with big red tape and warnings everywhere telling you to install some drivers and apps for Windows (which require a reboot after that) before connecting your device: not PnP at all.
The reality of USB P’n’P support is that Linux does an adequate job for devices that have good implementation of class support for one of the class drivers, but that it tends, otherwise, to fall on its face; while vendors who make USB devices make sure they work with Windows, even if they don’t fit one of the class drivers
That’s BS again. Reading you, we’d believe there is no problem on Windows, or very little. Reality is just different: Windows drivers are pretty buggy, with lots of updates and lots of angry or helpless customers. The reality of USB PnP support on Linux is that it raises the bar for everything else: you plug it in and it works.
At worst, you have to install the driver before it works, but it won’t require any reboot to work.
Hey Greg? Are you sure you wanna brag about not having any intelligence in your design?
He just did, you’re too late. What’s your point?
Hey Greg? I’ve read stable_api_nonsense. It says “Hi, I don’t know how to design stable APIs, so no one should.” It ignores the fact that Dennis Ritchie taught us 30 years ago how to do stable APIs in kernels
Look where that’s taken Unix. Then they went and did Plan 9. Nice way to show how you know how to design stable APIs: now we know it’s not useful at all.
Hey Greg? Did you run this slide past Linus? Do you recall that he gave an example of a closed source linux module that would be perfectly legal?
I think you forgot to put IANAL in your presentation
Wow, just 2 things:
– Linus is not a lawyer, but you seem to have a double standard as to who is not a lawyer
– GKH specifically said he is not a lawyer in the keynote
Overall, you seem a lot less reasonable than GKH, and have no worthy opinion at all.
He did say that some esoteric hardware won’t work. That means any new chip not supported by the kernel
Nice. Any new chip that is not supported by the kernel is esoteric.
or a product that breaks standard APIs.
Which ones? The same that are in constant flux?
And I’m sorry to tell you Windows XP still does not even support SATA, so your point is even worse in the BS realm.
Yeah sure, that is why I’m writing this from under Windows XP with the only hard disk installed that is SATA.
It just proves what matters is NOT how many devices released in the past 18 months you support, but how the IHV support your OS, which is not the same thing at all, with not the same actors at all.
Which is what end users actually care about: how the IHV support your OS. They support it poorly. The reasons are partly what Greg calls lies (unstable API) and partly within IP domain.
Nowadays, nearly every USB device comes with big red tape and warnings everywhere telling you to install some drivers and apps for Windows (which require a reboot after that) before connecting your device: not PnP at all.
So what? With Windows, you’re practically guaranteed to have a driver. With Linux you’re not. For example, I’m lucky to have a modem supported by Eciadsl driver, but not everyone is.
Look where that’s taken Unix. Then they went and did Plan 9.
So the stable APIs are the reason for Unix’s demise. Interesting! Can you please elaborate or give some links?
He did say that some esoteric hardware won’t work. That means any new chip not supported by the kernel
Nice. Any new chip that is not supported by the kernel is esoteric.
No, that is your flawed logic! One of the very first things you learn in CS logic courses (and even in math classes) is that A => B doesn’t mean B => A.
Granted, I should have added “and related to no standard” to “new chip”.
or a product that breaks standard APIs.
Which ones? The same that are in constant flux?
No again! Things like USB data storage keys that need a specific driver to work, for example.
And I’m sorry to tell you Windows XP still does not even support SATA, so your point is even worse in the BS realm.
Yeah sure, that is why I’m writing this from under Windows XP with the only hard disk installed that is SATA
Yeah sure, what you said just isn’t related to what I said. That you had to add a driver not provided with the OS (SATA wasn’t there before XP SP2, was it?) is something you have already forgotten, or it was installed but not by you (in which case your opinion has no value on this matter). But selective memory loss is a thing I see all the time with Windows users. The last user I refused to help with Windows had this problem, where he couldn’t install it on a new PC; that’s why I specifically gave this example. It is very telling, and it breaks your “past 18 months support” argument.
Which is what end users actually care about: how IHVs support your OS. They support it poorly. The reasons are partly what Greg calls lies (the unstable API) and partly in the IP domain.
Problem is, even Windows is poorly supported, so the reasons are not the ones you cited.
It just seems that people on Linux don’t take it for granted that their hardware doesn’t work properly, while on Windows, people just accept it and wait for the next driver release hoping it will solve the problem…
Look no further than the NVidia drivers. People used to a perfectly stable OS like Linux couldn’t stand having the only binary driver on their system crashing it all the time, and this lasted for more than a year.
So what? With Windows, you’re practically guaranteed to have a driver. With Linux you’re not. For example, I’m lucky to have a modem supported by the EciAdsl driver, but not everyone is.
Agreed. In the same way, with Linux, you’re practically guaranteed to have a driver that works well. With Windows you’re not.
For example, I was lucky to have few problems with my VIA chipset-based motherboard on Windows, but not everyone is.
Look where that’s taken Unix. Then they did Plan 9.
So the stable APIs are the reason for Unix’s demise. Interesting! Can you please elaborate or give some links?
No, you don’t understand. I mean the stable API didn’t help at all. Despite its unstable internal API, Linux still has more drivers.
Keeping a stable API in Unix means that when they want to do something new, they can’t, so they start another OS. As was said in the keynote, Linux is evolution: they won’t start a new OS, they will change the Linux they have.
So if the unstable internal API is such a drawback for driver support, Linux must be doing something amazingly well.
Having no chance of getting a driver at all is still worse than having to install a driver.
This is the reason a lot of people have no big problem with Windows supporting SATA through an additional driver rather than out of the box, but do have a problem with Linux not supporting device XYZ (like the printers above) at all.
For the problem on Windows there is a solution; some people might comment that it is not directly included with Windows, but it works.
With similar problems on Linux (no driver existing out of the box) you are often completely screwed.
If the Linux kernel developers could cover every piece of hardware in existence, it would work, but it is unlikely they ever will, given closed specs and the fact that drivers for new hardware would need to be ready (and included in the kernel) by the time the hardware is sold. So there may always be spots where you have no chance of ever getting some hardware to work. It is an idealistic goal, but a bit unrealistic in this world.
The common “solution” is to check your hardware for Linux compatibility – which fixes the symptom, not the cause.
Keeping a stable API in Unix means that when they want to do something new, they can’t, so they start another OS
Wrong again. The original Unix system didn’t support networking, window management, virtual memory, multiple processors, or multithreading within a process, for example.
Over time, each of these things was added, while keeping the APIs stable.
Plan 9 was a research project, aimed at investigating a different approach to OSes than Unix, but even so, there’s nothing in it that couldn’t have gone into research edition 8.
Many of the Plan 9 ideas actually originated in RE8.
Hey Greg? What does “out of the box” mean to you? You forgot to say.
No, he didn’t; he was referring to the preceding slide about Plug and Play. You just have a hard time reading a simple keynote.
Sorry, but even in the previous slides he doesn’t define “out of the box”.
And I’m sorry to tell you Windows XP still does not even support SATA, so your point is even worse in the BS realm.
Care to explain the SATA drives on the XP box I’m writing this on then?
It doesn’t help your credibility much when you’re calling other people’s comments BS while making claims that are utter nonsense.
But even if correct, the point is irrelevant, because Linux does not support more current off-the-shelf hardware than any other OS does, and given the short shelf life of computer hardware these days, what matters is how many of the devices released in the past 18 months you support
This is BS and not true at all. He is right even for off-the-shelf hardware. He did say that some esoteric hardware won’t work.
I’ve got a fairly large collection of ordinary USB devices that aren’t at all “esoteric” that XP works fine with that Linux doesn’t support.
I’ve got several mainline video cards that Linux doesn’t support “out of the box”.
I’ve got two different gig-e parts on Intel motherboards that Linux has drivers for that are badly broken.
All of this hardware works fine on XP.
None of it is “esoteric”.
If the best example you can get up with is devfs, then you’re not credible at all.
It’s not the best example, it’s the most amusing, because of Greg’s personal involvement in the issue.
Good for you. I recently plugged an old webcam into a SuSE distro with your USB subsystem on it and watched the box freeze up.
That can be caused by the chipset on your motherboard, especially with an old webcam that can draw too much power from the USB bus… Of course that didn’t occur to you. It could just be the driver for this webcam, too. Of course, this has NOTHING to do with the fact that Linux is truly PnP, which is what GKH was talking about.
Ah, you’re new around here. The old webcam works fine on the same hardware under XP. I have worse problems with new webcams and Linux. Also, as I’ve mentioned elsewhere on these forums, part of the problem has to do with an enumeration bug in the USB core that is independent of drivers and that Greg is aware of but hasn’t fixed.
And I was responding to Greg’s example with a similar example, so if you’ve got problems with the examples, take it up with Greg.
But even when you talk about web cams that have sort of working drivers, you don’t get true plug and play from the Linux kernel. You need video support from applications, and there are an amazing number of applications that only sort of support video on only some webcams.
That’s BS again. Reading you, we’d believe there is little or no problem on Windows. Reality is just different: Windows drivers are pretty buggy, with lots of updates and lots of angry or helpless customers. The reality of USB PnP support on Linux is that it raises the bar for everything else: you plug it in and it works.
At worst, you have to install the driver before it works, but it won’t require any reboot.
See above comment about SATA and XP. Given that you don’t even know that SATA works just fine on XP, you really shouldn’t be commenting on driver comparisons.
See any Linux users forum for ‘angry or helpless’ users with driver problems. It really is worse in Linux than Windows.
Wow, just two things:
– Linus is not a lawyer, but you seem to have a double standard as to who is not a lawyer.
– GKH specifically said he is not a lawyer in the keynote.
Yup, Greg did say he’s not a lawyer. My bad for claiming otherwise.
I didn’t cite Linus as a lawyer; I merely mentioned that Linus is aware of an example that demonstrates Greg’s claim to be false.
Greg is being disingenuous in his discussion of lawyers. The reality is that if you discuss the GPL with IP lawyers, you’ll find that they disagree on its interpretation, and that a reasonable case can be made that it’s unenforceable with respect to the Linux kernel anyway.
But every lawyer I’ve talked to or read on the subject agrees that Linus’ example is a valid example of a legal closed source kernel module.
Overall, you seem a lot less reasonable than GKH, and have no worthy opinion at all.
This from the person who claimed XP doesn’t support SATA.
And I’m sorry to tell you Windows XP still does not even support SATA, so your point is even worse in the BS realm.
Hmmm, how odd that just last week I was able to install Windows XP (using the original release CD) onto a system using an IDE CD-ROM and 160 GB SATA HD. Didn’t even have to use a third-party driver disk. No issues whatsoever.
So, who’s talking BS, again??
Cloudy, your input is often appreciated, but here you just sound like an arrogant, immature heckler.
Many of your posts make it seem as if you have a bone to pick with the Linux developer community. Frankly, these constant potshots at Linux are becoming tiresome.
Please accept that Linux is designed in a different way than other OSes have in the past, and that despite the fact that it may not be optimal in your eyes (and there’s some truth to this argument), it is *still* one of the most successful non-commercial software projects ever done.
Now, please, use your vast knowledge of OSes in order to provide constructive criticism, and not waste your time in pointless flamewars.
Cloudy, your input is often appreciated, but here you just sound like an arrogant, immature heckler.
I’m not the one calling other people names; nor am I the one violating the voting rules and voting other people’s opinions down simply for disagreeing with them.
Many of your posts make it seem as if you have a bone to pick with the Linux developer community. Frankly, these constant potshots at Linux are becoming tiresome.
Don’t read them.
I don’t have a bone to pick with the developer community, and when I do, I take it up in person, to individuals’ faces.
I do have an urge to help people on forums like this one understand both sides of the issue, and a tendency to want to counteract the hundreds of incorrect boosterism posts I see.
Please accept that Linux is designed in a different way than other OSes have in the past, and that despite the fact that it may not be optimal in your eyes (and there’s some truth to this argument), it is *still* one of the most successful non-commercial software projects ever done.
Linux isn’t non-commercial and hasn’t been since before Red Hat went public. A significant number of people, probably in the low thousands, are paid full time to work on the kernel.
Now, please, use your vast knowledge of OSes in order to provide constructive criticism, and not waste your time in pointless flamewars.
Greg made mistakes in his presentation. Pointing out errors isn’t flaming, and it helps reduce the extent to which misinformation is propagated.
Besides, it’s my time. I’ll waste it whatever way amuses me most.
or you maintain different revisions of an API (otherwise, it’s not really an API but an ad-hoc interface that just happens to be there).
A moving target is *not* an option for the poor soul who’s trying to write correct software (I write embedded software, including drivers, and I can tell you that working with a combination of hardware, firmware and software that is always changing in subtle ways is a real nightmare).
Linux is a very cool project, yet it’s the triumph of hacking over engineering…
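To make the point about maintaining different revisions of an API a bit more concrete, here is a minimal sketch in plain C (hypothetical names, not real kernel code) of one common way to do it: the driver’s operations table carries a version number, so drivers written against the old revision keep working while the core gains new entry points.

#include <stddef.h>

#define DRV_API_V1 1
#define DRV_API_V2 2

/* Hypothetical driver operations table; the version field tells the core
 * which revision of the interface the driver was written against. */
struct drv_ops {
    int api_version;
    int (*probe)(void *dev);
    int (*read)(void *dev, void *buf, size_t len);           /* since V1 */
    int (*read_timeout)(void *dev, void *buf, size_t len,
                        unsigned int timeout_ms);             /* added in V2 */
};

/* Core-side dispatch: a V1 driver leaves read_timeout NULL, so the core
 * quietly falls back to the original blocking call. */
static int core_read(struct drv_ops *ops, void *dev,
                     void *buf, size_t len, unsigned int timeout_ms)
{
    if (ops->api_version >= DRV_API_V2 && ops->read_timeout)
        return ops->read_timeout(dev, buf, len, timeout_ms);
    return ops->read(dev, buf, len);  /* timeout silently ignored for V1 */
}

The obvious cost is that the core has to keep the old code path around indefinitely, which is exactly the kind of bloat the kernel developers argue against.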
Not trying to be a troll here, but you can really tell that Linux has some issues, and this is just more propaganda to try to make Linux look better than it really is.
The Internet has become like the McDonald’s fast-food stores. It is open to anyone; no matter what FUD they are pushing, no matter what lies or how crazy, they can put up whatever they want and get noticed like an attention whore.
I guess if you put “Linux” on your web page or blog you will get millions of hits.
One look at his article and I knew he was really trying hard to pull the wool over my eyes. Linux fans are really trying hard. I mean, stop trying so hard; let the product sell itself.
‘Remember, no one forces anyone to use Linux. If you don’t want to create a Linux kernel module, you don’t have to. But if your customers are demanding it, and you decide to do it, you have to play by the rules of the kernel. It’s that simple.’
Hardware customers are also kernel customers, and a gun can always point at either side (as Salvor Hardin would say)… I’d like more humility on the kernel side (even if they were the best programmers in the whole world). I think it is up to both parties (kernel developers and hardware vendors) to try to reach a point where they can collaborate. I don’t like the attitude of “if you don’t like it this way, then bye bye”; things don’t work that way in the world.
When I read this in the article:
“Linux supports more devices, “out of the box”, than any other operating system ever has”
I cannot imagine it is true; actually, it is damn WRONG, folks!!!! I mean, I have two new printers and they don’t work with my Linux box even with the latest kernel version. There is no driver; sure, there are some Gimp-Print related drivers that can make my printer move, but that’s it: if I try to print, the result is a disaster. And this is true for most printers. Vendors like HP, Canon, or Epson do not support Linux; try printing from Linux to printers from those companies, it won’t work. That’s the reality of a user’s life with Linux, and this guy just does not get it.
Try to connect a camera from Sony: it won’t work. My Sony digital camera does not work either, and as for my Canon digital camera, don’t even think about it. A lot of PCI-X cards do not work, my Epson scanner does not work either, etc., etc.
So now he should really explain to me what this “out of the box” he is talking about means, or I guess he should think about what it really means.
Yes, Linux supports a ton of exotic devices, processors, platforms and so on, but the fact of the matter is that it does not work with the devices that people buy and use every day. This statement:
“Linux has a very long list of things that we have supported before anyone else ever did. That includes such things as:
USB 2.0
Bluetooth
PCI Hotplug
CPU Hotplug
memory Hotplug (ok, some of the older Unixes did support CPU and memory hotplug in the past, but no desktop OS still supports this.)
wireless USB
ExpressCard”
does not make much sense. For example, just because Linux supports USB 2.0 does not mean my USB 2.0 printer will work with Linux, and again, in my case it does not work. He should really understand this.
By comparison, my OS X computer works with every one of my devices, most of them out of the box.
So saying that Linux supports more devices than any other operating system ever has may be true, but again, it does not support the ones that people actually use.
“Look at the latest versions of Fedora, SuSE, Ubuntu and others. Installation is a complete breeze (way easier than any other operating system installation). You can now plug a new device in and the correct driver is automatically loaded, no need to hunt for a driver disk somewhere, and you are up and running with no need to even reboot.”
No, come back to earth, man: installing Linux is still a pain in the ass. How can he say that Linux is easier to install than any other OS when OS X offers a way better experience for any kind of installation? He should just get out of his Linux world and look at what is done elsewhere; he cannot talk like that while ignoring the reality of the facts.
But since the guy works at Novell, I am not surprised that what he wrote is more a sales pitch from Novell than a real technical discussion.
As usual on OSNews, the comments on this story are complete eye cancer.
I will refute one thing in your comment: HP does support Linux with drivers. WOW, SURPRISE! I am not going to sit and point out all the other parts of your comment that are incorrect. Do some investigation before you start spouting FUD.
http://h10018.www1.hp.com/wwsolutions/linux/drivers/
By comparison, my OS X computer works with every one of my devices, most of them out of the box.
Just try using hardware different from every piece of equipment your Mac came with “out of the box”.
So saying that Linux supports more devices than any other operating system ever has may be true, but again, it does not support the ones that people actually use.
Nvidia graphics cards, chipsets, etc.
Most TV cards.
Almost every SCSI adapter worth its salt.
SATA hard disks.
Your precious Apple has support tailor-made for all the hardware that Apple offers. That doesn’t say anything about OS X being capable of supporting a lot of hardware, now does it? Whereas Linux has always had to reverse-engineer existing drivers, and more often than not did a better job than the originals.
I agree with the author that Linux supports by far the most hardware out of the box, and I sincerely hope its popularity will increase further so we can finally get rid of all those driver CDs.
That’s why I like to use Linux. Compared to OS X and Windows, most of the time I don’t need a driver-install festival. On FC5 my sound, TV card, graphics card, motherboard drivers, SATA drivers, etc., all work seamlessly out of the box. I think with Ubuntu, SuSE, etc., the experience will be at least the same.
That’s my personal experience though.
I always stay 2-3 years behind when purchasing hardware. Besides saving me a lot of money, chances are good that Linux will support most of the hardware, and I don’t need any driver CDs, nor do I need to install all those apps from different CDs.
One install and everything I could possibly need is there, ready to use. But I think OS X is a very competitive OS too, no doubt about it. I just like to do most things myself.
It’s not just user applications. A kernel’s no good without init…
and a definition of OS that allows “it’s a brick” is a very useless definition of OS.
Once you get rid of all the userspace, you definitely won’t be using GNOME anymore. Duh. However, the in-kernel drivers are all functioning; it’s a totally functional computer with a totally functional OS on it. If you have a small embedded system that you’ve written as an in-kernel driver, then you really don’t need all that userspace cruft. It’s just meaningless.
You don’t need a display or UI to be an OS. Most cars’ computers run very minimal OSs, and even thermostats tend to. They probably don’t have the full GNU toolchain running at all times in such places…
Once you get rid of all the userspace, you definitely won’t be using GNOME anymore. Duh.
init isn’t in the kernel, and it isn’t GNOME. Duh.
However, the in-kernel drivers are all functioning; it’s a totally functional computer with a totally functional OS on it.
Nope. It’s a brick. It fell over when it couldn’t find init.
If you have a small embedded system that you’ve written as an in-kernel driver, then you really don’t need all that userspace cruft. It’s just meaningless.
You need, as a minimum, init, and something for init to do. Even the smallest embedded Linux systems use BusyBox to accomplish that.
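To illustrate just how small that minimum can be, here is a hypothetical sketch of a do-nothing PID 1 (not taken from BusyBox or any distro): mount a couple of pseudo-filesystems, start a shell, and never exit.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/mount.h>

int main(void)
{
    /* Without /proc and /sys even a shell is half blind. */
    mount("proc", "/proc", "proc", 0, NULL);
    mount("sysfs", "/sys", "sysfs", 0, NULL);

    pid_t child = fork();
    if (child == 0) {
        execl("/bin/sh", "sh", (char *)NULL);  /* the "something for init to do" */
        perror("execl");
        exit(1);
    }

    /* PID 1 must never exit; reap children, otherwise just sleep. */
    for (;;) {
        if (wait(NULL) < 0)
            pause();
    }
}

Even a system this tiny is already kernel plus userspace; without something like it, the kernel panics at boot because it cannot find init, which is the “brick” referred to above.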
So many here want a stable API. Great. I like stable APIs, too.
So let’s all send a petition to the kernel devs requesting them to start maintaining a completely stable API.
And let’s assume they grant the request.
That could be a good thing for more support of proprietary commercial drivers, and fewer driver authors will complain.
The BIG BIG BIG trade-off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API. And there will have to be multiple implementations of common APIs, to support multiple devices and multiple versions. And flawed drivers will be supported longer, decreasing efficiency and security. And the kernel will become bigger, slower, and more bloated.
So, are those willing to make that trade-off willing to sign a petition?
And to those who said old Unix maintained a stable API, answer this question: can you take a device (and its driver) that runs on HP-UX and have it run seamlessly on AIX, SCO Unix, Solaris, old SysV, any of the BSDs, or even Mac OS X? Or how about even moving from an older version of HP-UX to a newer one? And take Mac OS X: how many hardware devices can you run on it that are not sold by Apple?
Also, name one *nix that supports more devices out of the box than Linux. I’m not talking about architectures, which NetBSD arguably runs on more than Linux (depending on who you talk to). I’m talking video cards, usb devices, sound cards, pcmcia cards, network cards, speakers, scanners, printers, etc etc. Is there one *nix, that has a stable API, that supports more of those kinds of things than Linux? Name one.
Also, is there a Live CD version of a *nix, with a stable API, that can run on as many things seamlessly as Knoppix? Name one.
And one more thing about stable APIs. Look at all the companies who support OSDL – IBM, HP, Oracle, Sony, Sun, the list goes on. Do you see any of them complaining about Linux’s non-stable APIs? Well, no. Obviously for them, stability comes in at the distro level, which is a market opportunity that Red Hat filled quite nicely, and their bottom line proves this. RHEL is on 18-month release cycles, and comes with (correct me if I’m wrong) 3-5 year support periods. A full RHEL release ships with one major kernel release, with security patches and backports. And the API of that one kernel release remains stable. The same is true at the application level.
So, with this, we get the best of both worlds. The kernel and the drivers in its tree get improved very rapidly, and API stability is provided by certain distros like RHEL or Debian stable. They can then include newer kernel releases when they are ready, or when the market will bear it. Or they can backport features (something Linus Torvalds said is just fine and dandy) as needed.
In the meantime, what about the geeks who want the “latest and greatest” kernel, DEs, apps, etc., who always download the latest ISO from Ubuntu or whatever? Well, if you want to be on the bleeding edge, you don’t get a stable API, and you have to put up with more bugs. And then, what about those who choose more stable distros, like RHEL-based CentOS, Slackware, or Debian stable? Well, it’s simple: just use hardware that is known to be supported.
The BIG BIG BIG trade-off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API.
I would rather think that it would stimulate some additional modularity, and that some “intelligent design” would finally be involved.
And there will have to be multiple implementations of common APIs, to support multiple devices and multiple versions.
Which means versioning – a feature we currently have a problem with. Old drivers are supported by an old compatibility layer; new drivers get the full benefit of the new architecture.
And the kernel will become bigger, slower, and more bloated.
Only if we continue to have a monolithic kernel. If it is modular, then when you have no old drivers, no compatibility layer needs to be loaded; a rough sketch of what such a versioned compatibility layer could look like is given below.
The problem with this article is that the Linux developers are starting to believe their own claim that they have no problem with drivers, while users consider this one of the main problems of Linux. Just go to CompUSA, buy a cool new device, and try to find a driver.
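For what it’s worth, the compatibility-layer idea mentioned above could look something like this rough C sketch (all names are invented; none of this is real kernel code): legacy drivers call a thin shim that translates to the new interface, and the shim can be compiled out entirely when no old drivers are configured.

/* New-style interface: takes an extra flags argument. */
struct blockdev_v2 {
    int (*submit)(struct blockdev_v2 *dev, const void *buf,
                  unsigned int len, unsigned int flags);
};

#ifdef CONFIG_COMPAT_V1
/* Legacy entry point kept only for old drivers; it simply translates to
 * the new call.  If no V1 drivers are configured, this whole layer is
 * compiled out and costs nothing at runtime. */
static inline int blockdev_v1_submit(struct blockdev_v2 *dev,
                                     const void *buf, unsigned int len)
{
    return dev->submit(dev, buf, len, 0 /* default flags */);
}
#endif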
The BIG BIG BIG trade-off is that the rate at which the kernel is innovated/improved/optimized will decrease dramatically, no matter how well the implementation is encapsulated behind the API.
There’s no reason to believe this is true. Past experience and the literature both indicate the opposite. You get more innovation in systems when the internal APIs are stable and you’re spending your time optimizing than you do when you’re spending your time reimplementing to support yet another API change.
I’m not going to sign any petition to change the way Linux is done. It is the way it is. I’d just prefer people not assign properties to it that it doesn’t have.
In the Article:
“Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven’t taken a look at it yet.”
This deserves a response, because in the past I have seen Windows vulnerabilities hugely affect USB devices, even some of the most expensive ones, simply after a Windows corruption or after going online without a firewall or virus scanner, whereas other parts of Windows seem stable enough. On Linux, OS X, or Solaris I didn’t notice that at all, even after using them for six months or more.
I always said that the USB and sound subsystems must be completely rewritten in Windows, because they cause a lot of trouble for customers and technicians.
Another interesting thing I have noticed is that Windows Server 2003 didn’t suffer from the same problems that Windows 2000 or XP suffer from; nor did Vista build 5472, which is really the best Windows I have seen to date.
A harmless, nice, general, obviously slightly biased presentation, and all you have to do is get all “this is not completely correct” about it and post 100+ comments on APIs… lame, boring comments.
There is enough interesting stuff in that presentation to comment on, not just the… stable or not-so-stable APIs.
Yes, some did… but come on, why is everybody getting so emotional about the APIs?
Hey, the Linux Symposium papers… now there is a lot to discuss in them.
A good day to you all.
…had many more penguins on it for us to take world domination seriously.
It may surprise you even more, but Debian also runs on top of both NetBSD and FreeBSD.
It does not surprise me. I was not talking about whether it could run on the Linux kernel or a BSD kernel. I was pointing out that normally it runs on Linux.