Linked by Thom Holwerda on Sun 22nd Jul 2007 00:33 UTC, submitted by liquidat
Linux Linus Torvalds has merged patches into the mainline tree that implement a stable userspace driver API in the Linux kernel. The stable driver API was announced a year ago by Greg Kroah-Hartman. Now the last patches have been merged and the API is included in Linus' tree. The idea of the API is to make life easier for driver developers: "This interface allows the ability to write the majority of a driver in userspace with only a very small shell of a driver in the kernel itself. It uses a char device and sysfs to interact with a userspace process to process interrupts and control memory accesses."
Full circle...
by Almafeta on Sun 22nd Jul 2007 00:43 UTC
Almafeta
Member since:
2007-02-22

In the early days of Linux, it required Minix to run on top of, and it ran its drivers the way Minix did.

Then, they began to run them as part of the kernel, like Windows did.

Now, it's going back to the Minix model....

Reply Score: 2

RE: Full circle...
by jadeshade on Sun 22nd Jul 2007 01:07 UTC in reply to "Full circle..."
jadeshade Member since:
2007-07-10

Then, they began to run them as part of the kernel, like Windows did.


For so long, it was 'oh yeah, it shouldn't crash - unless you have a bad driver'; now that problem no longer exists. Not that Linux never crashes - I'd be surprised if some of those using bleeding-edge versions of software aren't spending more time fixing their machines than using them.

Also, what relation does this have to FUSE, if any?

Reply Score: 2

RE[2]: Full circle...
by Almafeta on Sun 22nd Jul 2007 02:03 UTC in reply to "RE: Full circle..."
Almafeta Member since:
2007-02-22

Also, what relation does this have to FUSE, if any?


You've got me. I hadn't heard of that until I Googled it just now.

Reply Score: 0

v RE[2]: Full circle...
by Bending Unit on Sun 22nd Jul 2007 12:08 UTC in reply to "RE: Full circle..."
RE[3]: Full circle...
by vegai on Sun 22nd Jul 2007 12:24 UTC in reply to "RE[2]: Full circle..."
vegai Member since:
2005-12-25

Did you understand that wrong? He implied that Linux sometimes crashes.

Reply Score: 2

v RE[4]: Full circle...
by stestagg on Sun 22nd Jul 2007 15:20 UTC in reply to "RE[3]: Full circle..."
RE: Full circle...
by somebody on Sun 22nd Jul 2007 01:26 UTC in reply to "Full circle..."
somebody Member since:
2005-07-07

I think it has more to do with politics than anything else.

Linus and the kernel developers were not really enthusiastic about GPLv3 and its restrictions. This way the kernel ends up with a model that is even more free (just as Linus was ranting not so long ago) than it would be even if they relicensed it as GPLv3. Linus has shown a lot of willingness to accommodate companies developing drivers for Linux, and this would be part of that. But without ABI stability, the source (or at least the wrapper part) still needs to be recompiled for each kernel version.

Reply Score: 5

RE[2]: Full circle...
by hobgoblin on Sun 22nd Jul 2007 01:49 UTC in reply to "RE: Full circle..."
hobgoblin Member since:
2005-07-06

well the wrapper is a lesser issue, as it will probably not have to change much to interface with the kernel.

i just wonder what this will mean for ati and nvidia drivers.

never mind, i see from the linked article that there are some issues with DMA and this system. too bad. but it's a start ;)

Edited 2007-07-22 01:51

Reply Score: 3

RE[2]: Full circle...
by MamiyaOtaru on Sun 22nd Jul 2007 23:22 UTC in reply to "RE: Full circle..."
MamiyaOtaru Member since:
2005-11-11

But without ABI stability there are still needs for source to be recompiled for each kernel version (emphasis added)

In the past, this has been considered by some to be desirable, including (I thought) Greg Kroah-Hartman (see http://www.mjmwired.net/kernel/Documentation/stable_api_nonsense.tx... , of which he is the author, or http://thread.gmane.org/gmane.linux.kernel/475654/focus=475727 ).

I'm just rather surprised to see GKH involved in an effort that has the side effect of making binary drivers easier. Re-reading the links I posted, it seems more like GKH supported efforts to allow only GPL drivers to be loaded because he is against binary drivers in kernel space, so it makes a little more sense now to see him working on allowing non GPL drivers in user space.

Seriously though, I have a hard time seeing the sense in it all. Intel's 3945 wireless driver ( http://ipw3945.sourceforge.net/ ) caught a lot of flak for relying on a binary userspace component. What is this new development if not an enabler for such behavior?

I obviously don't understand the issue as well as a lot of people, so I am quite happy I am not involved in the decisions. Curiosity remains however ;)

Reply Score: 3

RE: Full circle...
by butters on Sun 22nd Jul 2007 03:38 UTC in reply to "Full circle..."
butters Member since:
2005-07-08

Now, it's going back to the Minix model....

Not really... There's no message-passing component to this driver framework. It uses a kernel component to expose kernel data to a userspace component through a pseudo-device and a pseudo-filesystem. The userspace component can behave as though it has access to kernel memory, but the kernel component is actually validating accesses and providing the mappings.

So instead of a set of permitted operations, we have a set of accessible memory regions. The result is that programming userspace services is more like programming kernel services, and therefore more convenient. Furthermore, it doesn't require a programmatic interface between the components, so stability is very achievable.

In a sense, it rejects the basic premise of microkernel design, which is that unprivileged servers must be restricted in terms of what operations they may perform. We don't care about operations, their names, and their purposes. We only care that unprivileged code doesn't mess with kernel memory that it shouldn't be touching. That's all the isolation that really matters.

Edited 2007-07-22 03:39

Reply Score: 5

RE: Full circle...
by Carewolf on Mon 23rd Jul 2007 08:36 UTC in reply to "Full circle..."
Carewolf Member since:
2005-09-08

Actually Linux is only now starting to run its drivers like Windows (NT) does. The old model is more like DOS (and Win95-Me).

Now if they could only make a stable API for binary kernel-mode drivers, like graphics drivers, Linux could be on par with Windows.

Reply Score: 1

Fuse and now that...
by behemot on Sun 22nd Jul 2007 02:08 UTC
behemot
Member since:
2005-11-14

it appears that Linus finally will allow Linux to be more like a microkernel.

Reply Score: 4

I wonder...
by binarycrusader on Sun 22nd Jul 2007 02:10 UTC
binarycrusader
Member since:
2005-07-06

I wonder what all the pundits that have proclaimed that stable APIs are evil for years will say now that their beloved Linux kernel is providing one?

Reply Score: 1

RE: I wonder...
by kaiwai on Sun 22nd Jul 2007 02:26 UTC in reply to "I wonder..."
kaiwai Member since:
2005-07-06

I wonder what all the pundits that have proclaimed that stable APIs are evil for years will say now that their beloved Linux kernel is providing one?


They will claim that there is a difference between a stable API for drivers in user space and one for kernel space.

For me, I think the real reason why they don't have a stable driver API is that it would require them to actually knuckle down and design something, rather than just throwing things at a wall to see what sticks.

When things like the USB stack have been rewritten 3 times, people here say 'ooh, they're optimising', when in reality it has to do with a lack of planning - Linux kernel developers seem to ignore the cardinal rule that all programmers are taught regarding system design and analysis.

Edited 2007-07-22 02:40

Reply Score: 5

RE[2]: I wonder...
by Almafeta on Sun 22nd Jul 2007 02:36 UTC in reply to "RE: I wonder..."
Almafeta Member since:
2007-02-22

You seem to have left out an important verb. They seem to what the cardinal rule? ^^;

Reply Score: 1

RE[3]: I wonder...
by kaiwai on Sun 22nd Jul 2007 02:42 UTC in reply to "RE[2]: I wonder..."
kaiwai Member since:
2005-07-06

You seem to left out an important verb. They seem to what the cardinal rule? ^^;


Design, design, design, document, document, document - rinse and repeat until you're confident that the design is robust enough to be forwards compatible and to allow future development without needing to continuously throw out old code because the original design was flawed.

Reply Score: 5

RE[4]: I wonder...
by bnolsen on Sun 22nd Jul 2007 02:59 UTC in reply to "RE[3]: I wonder..."
bnolsen Member since:
2006-01-06

Umm... that mentality has big problems... nothing gets produced, or an over-engineered POS is designed.

There are always compromises to be made.

And there's a lot to be said about getting something out there that people can chew on and give you feedback on instead of just sitting around a table talking.

Reply Score: 4

RE[5]: I wonder...
by kaiwai on Sun 22nd Jul 2007 04:36 UTC in reply to "RE[4]: I wonder..."
kaiwai Member since:
2005-07-06

Umm...that mentality has big problems....nothing gets produced or an over engineered POS is designed.

There's always compromises to be made.

And there's a lot to be said about getting something out there that people can chew on and give you feedback on instead of just sitting around a table talking.


Oh, come on. The above is the equivalent of invading Iraq without doing the research beforehand - now look what has happened.

Same thing if you don't investigate, document and design - you end up with a giant cluster f--k that becomes so hacked, so badly managed, that you're forced to chuck out the whole thing and replace it - thus costing money, thus very inefficient.

If things are properly documented and properly designed, they can be maintained for the long term, rather than serving the short-term gratification of the programmer in question.

If Linux developers and users want it to become a viable desktop operating system, the above approach is completely ridiculous: simply throwing things out every couple of months or years because someone didn't do their homework.

Write a program, and do it right the first time.

Reply Score: 5

RE[6]: I wonder...
by henrikmk on Sun 22nd Jul 2007 09:06 UTC in reply to "RE[5]: I wonder..."
henrikmk Member since:
2005-07-10

bnolsen: And there's a lot to be said about getting something out there that people can chew on and give you feedback on instead of just sitting around a table talking.

Kaiwai: Oh, come on. The above is the equivalent of invading Iraq and failing to do the research before hand - now look what has happened.


Now, if the Iraqi invasion was prototyped first... :-)

Seriously, after getting stuck in crap code, I've had enough of the "code first and ask questions later" approach. I never had a formal education in real program design, so I don't know all the fancy theories, but it can't be that hard to see that leaving out planning is going to backfire in major and unexpected ways later, causing countless lost hours of creating workarounds to rushed or badly planned code.

Creating prototypes, whenever you can, is the best kind of feedback, I think, but it probably also depends on the context in which you are designing your stuff and how experienced you are in the matter.
I use an iterative approach, and if I have to do 10 prototypes before moving on and putting it into live production code, I'll do that, because I'm pretty sure that the 10th prototype will pay off later.

Reply Score: 2

RE[7]: I wonder...
by kaiwai on Sun 22nd Jul 2007 10:57 UTC in reply to "RE[6]: I wonder..."
kaiwai Member since:
2005-07-06

Now, if the Iraqi invasion was prototyped first... :-)


Iraq is more like a situation where the leader surrounds himself with ideologues and isolates himself from any contrary views.

Basically, it was a battle on two fronts, something that dooms to failure any country that has ever tried it (something Bismarck repeated over and over regarding the downfall of those who tried), and worse still, an attempt to do the invasion on the cheap - a Champagne lifestyle on a Coca-Cola budget.

Seriously, after getting stuck in crap code, I've had enough of the "code first and ask questions later" approach. I never had a formal education in real program design, so I don't know all the fancy theories, but it can't be that hard to see that leaving out planning is going to backfire in major and unexpected ways later, causing countless lost hours of creating workarounds to rushed or badly planned code.


Believe me - just look at the number of open source projects that have needed large rewrites because of inadequate code separation, inadequate infrastructure in the code design to allow for future expansion without a complete break in compatibility, and code that's so ugly it could crack a mirror.

The problem is that there are too many programmers who want to jump right into the code before doing the boring work - planning. The programming is the easy part; the difficult part is the planning. That's why there are so many programmers who avoid doing that leg work.

Creating prototypes, whenever you can, is the best kind of feedback, I think, but it probably also depends on the context in which you are designing your stuff and how experienced you are in the matter.
I use an iterative approach, and if I have to do 10 prototypes before moving on and putting it into live production code, I'll do that, because I'm pretty sure that the 10th prototype will pay off later.


Creating prototypes is good for 'first time' internal development within an organisation - generally speaking, to get feedback on the GUI. But once a trend has been established, 'best GUI practices' for internal programme development should be written up and made required reading for all programmers who work at the company before they start on any project.

Again, the problem is that people don't want to write documentation - they want to just fire code at a problem and hope that it works. Inadequate documentation, poor code quality and failure to follow internally written 'best practices' end up producing code which is unmaintainable over the long term.

Reply Score: 5

RE[4]: I wonder...
by Dima on Sun 22nd Jul 2007 07:29 UTC in reply to "RE[3]: I wonder..."
Dima Member since:
2006-04-06

Design, design, design, document, document, document

Developers, developers, developers!

Reply Score: 5

RE[4]: I wonder...
by vegai on Sun 22nd Jul 2007 12:31 UTC in reply to "RE[3]: I wonder..."
vegai Member since:
2005-12-25

Perhaps you are forgetting that this is not an ordinary company project. They can actually afford to throw out old code, and throwing out old code is sometimes a very good idea.

Reply Score: 2

RE[4]: I wonder...
by stestagg on Sun 22nd Jul 2007 15:26 UTC in reply to "RE[3]: I wonder..."
stestagg Member since:
2006-06-03

That's kinda what is happening. We're in one of your design phases at the moment (I'm not sure that we'll ever get to the document stage properly), and the public (that's you) is being invited to contribute to the design and redesign process. The point about community projects is that processes that usually happen behind closed doors happen in public. This helps prevent situations where software releases take 7 years and still have lots of trivial problems that need ironing out upon release.

Reply Score: 2

RE[2]: I wonder...
by TBPrince on Sun 22nd Jul 2007 03:03 UTC in reply to "RE: I wonder..."
TBPrince Member since:
2005-07-06

Completely agree. A stable API would require a better design, while it appears that development mostly runs in a semi-ad-hoc way.

However, this is good news. If they are able to keep it stable as promised, this could be a good evolution.

Reply Score: 4

RE[2]: I wonder...
by aent on Sun 22nd Jul 2007 03:31 UTC in reply to "RE: I wonder..."
aent Member since:
2006-01-25

That is kind of the whole argument for why no stable API is better. By just sticking to one design, and not trying anything else or allowing that design to evolve once there's a stable API, they may be stuck with the original poor design just as they are seeing how it is used. If they can redo the design a couple of times, they can end up with a better design.

Taking the USB stack as an example: if they had tried to keep the original design and make it a stable API, would that have been better than the evolved and better API they replaced it with as USB technology progressed, with things like USB 2.0 coming out? There are tradeoffs on both sides, and the argument was that, for Linux's architecture and development process, a stable API would hinder the development they wanted - being able to correct any API issues that weren't well thought out the first time around, without having to worry about obsolete or deprecated APIs.

Reply Score: 5

RE[3]: I wonder...
by kaiwai on Sun 22nd Jul 2007 04:50 UTC in reply to "RE[2]: I wonder..."
kaiwai Member since:
2005-07-06

That is kind of the whole argument for why no stable API is better. By just sticking to one design, and not trying anything else or allowing that design to evolve once there's a stable API, they may be stuck with the original poor design just as they are seeing how it is used. If they can redo the design a couple of times, they can end up with a better design.


On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure!

Take a look at Sun's own USB stack, for instance - it didn't need to be rewritten three times, and it performs as well as, if not better than, Linux's.

Heck, look at Sun's new network infrastructure, Nemo, for instance - nothing stopped them from pushing great ideas there either.

What you're saying to me is that progress without complete breakage is impossible. 25 years of commercial development by way of Windows and various UNIX's says otherwise.

Taking the USB stack as an example: if they had tried to keep the original design and make it a stable API, would that have been better than the evolved and better API they replaced it with as USB technology progressed, with things like USB 2.0 coming out? There are tradeoffs on both sides, and the argument was that, for Linux's architecture and development process, a stable API would hinder the development they wanted - being able to correct any API issues that weren't well thought out the first time around, without having to worry about obsolete or deprecated APIs.


And if they had their act together, they would have written their code modularly enough that the calls exposed to developers don't depend on some arbitrary condition or on decisions made further down the stack.

For example: if you expose an API and write it correctly, what is being done behind the scenes should not matter a donkey's diddle to the programmer concerned. He throws his request at the API and lets the appropriate libraries sort out the low-level work before regurgitating the result back to him. What happens inside that black box is none of the programmer's concern.

To say that, somehow, if you change the internal processes you're forced to change the external outcome of each call is to completely ignore a basic understanding of programming.

Reply Score: 5

RE[4]: I wonder...
by aent on Sun 22nd Jul 2007 06:17 UTC in reply to "RE[3]: I wonder..."
aent Member since:
2006-01-25

On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure!

I'm not saying based on today's specifications; I'm saying based on today's knowledge. Yes, we are able to predict some things - for example, that the speed of ethernet will become faster - however, before wireless networks appeared and started to be implemented according to the 802.11 standards, designing a proper API for today's networking capabilities would have been near impossible. Yes, the old systems could be extended through hacks, as both Windows and Linux did for a long time. Windows has to maintain several different networking stacks in its code because it has deprecated them in favour of better APIs and systems. The idea was that Linux doesn't want to have to maintain deprecated APIs, or ones that were poorly thought out. If you look at Windows, there are tons of obsolete and deprecated APIs all over the place, which Microsoft has kept, often at the cost of stability and definitely causing a bit of bloat.

What you're saying to me is that progress without complete breakage is impossible. 25 years of commercial development by way of Windows and various UNIX's says otherwise.

No, what I am saying is that 25 years of commercial development by way of Windows and various UNIX's says that a stable API is possible (I never denied that it was); however, there are disadvantages that come with a stable API. If you want an example of the problems a stable API brings, look at the products you mentioned. Many people have found better ways to implement the APIs than what was done by the commercial products in their first attempt at an intelligent design.

To say that, somehow, if you change the internal processes you're forced to change the external outcome of each call is to completely ignore a basic understanding of programming.

Again, you're misinterpreting what I said. If we were designing a networking API when the best technology available was 10mbit ethernet, and a few years later wireless technology showed up, and then a year after that wireless cards became able to connect to multiple access points and use multiple connections to make access faster, I wouldn't expect the original API to be able to handle all of these fundamental changes, or for an ideal API to be made by extending the existing one. (An API designed from scratch, based on the current knowledge of how things evolved, is bound to be better.)
Of course, if the speed is just increased to a gigabit for ethernet with some specification changes, I would expect the API to handle it.

Backwards compatibility is, of course, important; however, Linux's goal was to have all of the drivers open source and in the kernel, where a stable API wouldn't matter very much - as the API evolves, they'd be able to modify everything that uses it. Of course, this isn't a perfect world, and there are drivers developed outside of the kernel (or that aren't open source, for that matter), and I imagine that's why they decided to offer a stable API now.

The fact is that both a stable API and an unstable API come at a price; the question is which one is worth paying. Just because Microsoft and others paid the price for a stable API doesn't mean that it's poor design not to go that route.

Reply Score: 5

RE[4]: I wonder...
by TBPrince on Sun 22nd Jul 2007 11:43 UTC in reply to "RE[3]: I wonder..."
TBPrince Member since:
2005-07-06

On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure!

A stable API means you provide support for backward compatibility, not that you can't change the API anymore.

Simply put, the API must be "versioned", by extending the existing API rather than completely replacing it. This usually means that if an API gets published for v2.6.x, for example, it will stay stable until 3.x or at least 2.7.

If you need to extend the API, you can do that, but you won't remove the old API: you will just add new extensions. Software using the new API will usually claim itself compatible with newer kernels only (for ex. 2.7+), while software developed for an earlier version will keep being compatible.

That's how most systems work, and trust me: it's a lot better for attracting developers.

Reply Score: 4

RE[5]: I wonder...
by kaiwai on Sun 22nd Jul 2007 12:19 UTC in reply to "RE[4]: I wonder..."
kaiwai Member since:
2005-07-06

A stable API means you provide support for backward compatibility, not that you can't change the API anymore.

Simply put, the API must be "versioned", by extending the existing API rather than completely replacing it. This usually means that if an API gets published for v2.6.x, for example, it will stay stable until 3.x or at least 2.7.


I never said that it couldn't be changed - read what I wrote. The issue raised by some here is that if you have backwards compatibility, you magically prevent the developers from making changes and enhancements.

If you need to extend the API, you can do that, but you won't remove the old API: you will just add new extensions. Software using the new API will usually claim itself compatible with newer kernels only (for ex. 2.7+), while software developed for an earlier version will keep being compatible.

That's how most systems work, and trust me: it's a lot better for attracting developers.


I never argued against it. Again, I stress: read the WHOLE thread; don't just jump in halfway through the conversation making assumptions about what I or others have said.

Reply Score: 3

RE[2]: I wonder...
by butters on Sun 22nd Jul 2007 04:31 UTC in reply to "RE: I wonder..."
butters Member since:
2005-07-08

For me, I think the real reason why they don't have a stable driver API is that it would require them to actually knuckle down and design something rather than merely just throwing at a wall to see what sticks.

So you're one of those intelligent design people. Still not convinced that biology works by random chance and natural selection?

I think it's understandable to believe that your deity of choice had the insight to create existence, sit back, pop open a cold one, and watch it play out exactly as He (She?) intended. But attributing the same level of insight to humans, and software developers in particular, is stretching beyond the realm of my ability to understand different points of view.

Certainly there's more standing in the way of human prescience than our failure to knuckle down. It could have something to do with our shockingly arrogant assertion of greatness in spite of our historical failure to achieve mere civility.

We have to keep throwing stuff at the wall, because not everything sticks. To settle for anything less is to squander whatever potential we actually have.

If we're going to do this free software thing, let's not leave any good idea untried. Let's challenge our preconceptions and give it everything we've got. Let's have no regrets and no apologies. Let's throw the notion of meritocracy at the wall and see if it sticks.

Edited 2007-07-22 04:46

Reply Score: 4

RE[3]: I wonder...
by SReilly on Sun 22nd Jul 2007 06:58 UTC in reply to "RE[2]: I wonder..."
SReilly Member since:
2006-12-28

That is without a doubt the most elegant way of putting our shared goal into words that I have ever read. Bravo!

Reply Score: 1

RE[3]: I wonder...
by PlatformAgnostic on Sun 22nd Jul 2007 08:59 UTC in reply to "RE[2]: I wonder..."
PlatformAgnostic Member since:
2006-01-02

Evolution tends to conserve more than it changes. You can only have life when you have a core set of things that function to maintain homeostasis. Without this, you just have unconstrained chemical reactions which just dissipate energy and create no further development.

We may argue that different levels of homeostasis and transistasis are necessary for different classes of organisms, but most natural organisms evolve by duplicating genes and then mutating one of the replicas so that the old functionality continues working while the other replica either slowly mutates into oblivion or becomes something entirely different.

There's nothing wrong with a stable ABI with clear transitions, so that one revision can be deprecated over several releases while the new one is brought online. The testing and maintenance effort is slightly greater, but with hundreds of people involved in the Linux kernel's development, there really should be no excuse not to maintain a pair of ABI revs for major interfaces.

Reply Score: 3

RE[3]: I wonder...
by Oliver on Sun 22nd Jul 2007 09:44 UTC in reply to "RE[2]: I wonder..."
Oliver Member since:
2006-07-15

>Still not convinced that biology works by random chance and natural selection?

Natural selection is the American rationale for racism and the Third Reich. Biology doesn't work at random at all, and a computer doesn't work at random at all. I don't have to be full of faith to see the nonsense in this absolutist claim.
What you call "random" is something we don't understand at the moment. But it's nonsense, too, to invent some god as an explanation.

And if you don't see the analogy with Linux: it's chaos with almost no order, so it's weak and inferior. Without Linus at the top with his "dictatorship", there wouldn't be a Linux anymore. He is your "god", the mastermind behind it.

>If we're going to do this free software thing, let's not leave any good idea untried.

Yes, and don't forget to think about those "good ideas" - some people call this quality software engineering.

Reply Score: 2

RE[4]: I wonder...
by n0xx on Sun 22nd Jul 2007 12:38 UTC in reply to "RE[3]: I wonder..."
n0xx Member since:
2005-07-12

Natural selection is the American rationale for racism and the Third Reich.

Natural selection doesn't manifest itself through direct action, hence any violent action performed by Man against his fellow Man is not natural selection. It's human selection.

I don't have to be full of faith to see the nonsense in this absolutist claim.

Turn a light bulb on and off repeatedly. Eventually it will stop working. Predicting when is impossible, because there are variables at play over which we have no direct control. So yes, random events do occur in nature.

What you call "random" is something we don't understand at the moment

Yes, we do. A random event is whenever something happens by chance. Things can be purely random, as in quantum mechanics, or pseudo-random, as in the light bulb example I gave you previously, which comes down to unforeseen and uncontrollable variables.

And if you don't see the analogy with Linux: it's chaos with almost no order, so it's weak and inferior.

Inferior to what? You are aware that the Linux kernel is one of the most advanced, feature-rich and valued pieces of software in existence, miles ahead of what BSD or even Windows have to offer, aren't you?

Without Linus at the top with his "dictatorship", there wouldn't be a Linux anymore. He is your "god", the mastermind behind it.

Of course there would. He only chooses which patches are stable enough to apply to the main kernel tree. If he quit right now, there would still be regular kernel maintenance and updates. Plus, the kernel has always been replaceable. Back in the beginning, if Linus hadn't shown up with his kernel, someone else would have. And there was already work in progress on the HURD; if Linux had never happened, the HURD would have long since become production-ready.

Yes, and don't forget to think about those "good ideas" - some people call this quality software engineering.

Engineering is overrated. Read "The Cathedral and The Bazaar".

Edited 2007-07-22 12:45

Reply Score: 4

RE[5]: I wonder...
by silix on Sun 22nd Jul 2007 16:10 UTC in reply to "RE[4]: I wonder..."
silix Member since:
2006-03-01

Turn a light bulb on and off repeatedly. Eventually it will stop working. Predicting when is impossible, because there are variables at play over which we have no direct control. So yes, random events do occur in nature.

same for the microprocessor in everybody's PC - it's usually rated for a lifespan on the order of decades if working under conditions that match factory specifications, but it's a physical device after all, so it must be subject to physical phenomena such as electromigration affecting on-chip nanotracks and tearing them apart
but SW is an entirely different matter - an operating system, plus the applications running on it, all of them running on a digital device, is precision and mathematics for the most part
it's often said that SW design cannot attain true determinism - but that does not mean operating systems and applications are subject to the same rules as living beings (they would be unreliable computation tools if so); it means that, dealing with systems of vast (and steadily increasing) complexity, the state and execution flow cannot be fully mapped and analyzed in advance with the limited computing (for simulation and automated testing) and human (for design) resources available
this holds true in the case of concurrent execution, where the system can be in one of several possible states at any given moment - but again, which one it is in depends on the entire instruction sequence the processor(s) have executed until that moment; it is not "random" at all

there are only a couple of random elements in the system, the first being the RNG source device (if any), the second being the human factor, which adds the chance for the device or piece of SW to deviate from its design specifications and perform an unexpected operation, or go into an unexpected state, always or under certain circumstances

one of the tenets of SW design as i have been taught it is actually about confining the problem domain in order to cope with the limited and failure-prone resources devoted to solving it (making each single subdomain manageable with finite (and very low) resources, and reducing the impact of any bugs in the implemented code)
the failure is always on the human side, not on the machine...


Inferior to what? You are aware that the Linux kernel is one of the most advanced, feature-rich and valued pieces of software in existence, miles ahead of what BSD or even Windows have to offer, aren't you?

it is widely acknowledged that's not necessarily the effect of a good initial design or of the best programming practices - Linux has always had hundreds of times more available manpower than any other kernel on the planet, and that's a fact - which, intentionally or not, leads to a way of analyzing and solving problems that differs from what happens on other OSs (DFBSD, for instance - its developers admittedly are forced to get things right on their own, because they can't afford the same manpower as Linux to come along and fix bugs for them)

let's try to take away the implications of that fact (the abundance of drivers, the relative speed at which parts of the code are rewritten and parts are added) for a moment
are we sure the kernel would be that "advanced", had the hype that surrounds it now, not materialized, had all the people who contributed over the years, not done it, and had corporations not picked it up?
are we sure we actually have a better kernel than, e.g., Syllable's or Haiku's (and it is to be noted that the latter is hacked on by literally a handful of people) from a purely architectural point of view?

moreover, Linux as a development effort looks like it has been taken by a sort of catch-up frenzy - more and more experimental features are being added to the 2.6 branch every release, and releases seem to come at an increasingly tight pace - this can introduce regressions or new bugs, and often does, but the average user does not seem to complain...

Edited 2007-07-22 16:29

Reply Score: 0

RE[4]: I wonder...
by Almafeta on Sun 22nd Jul 2007 19:02 UTC in reply to "RE[3]: I wonder..."
Almafeta Member since:
2007-02-22

Natural selection is the american reason for racism and the Third Reich.


Guh... wait... Americans caused the Third Reich... because we believe in natural selection?

Ow. My head hurts. It's comments like this that make me wish OSNews had an ignore list. ;)

Reply Score: 1

RE[2]: I wonder...
by baadger on Sun 22nd Jul 2007 11:40 UTC in reply to "RE: I wonder..."
baadger Member since:
2006-08-29

As you know, the idea behind a fluid internal API, as well as a high-frequency release cycle, is to make the kernel more agile and see it support current technology more quickly.

Initially the result is a lack of nice clean interfaces and a higher barrier for those trying to get into kernel development (lack of documentation), but it certainly doesn't result in the loss of generic device layers.

As examples, look at the new wireless stack (which admittedly we have yet to see a lot of drivers move over to) and libata. Both are good examples of convergence of code and functionality in the kernel. These layers just take time to emerge and are introduced into mature areas of the kernel; there's really no point developing such a layer if over the next X years a whole new class of related devices is going to come along that you haven't foreseen in your design.

I put it to you: which is worse - a little continuous effort by driver developers to keep up, or developers writing hacks to shoehorn their 2007 hardware into an API that was designed in 2001 (think Windows XP)?

Isn't it true that most wireless drivers for XP have had to implement the majority of their own wireless stack?

Reply Score: 5

RE[2]: I wonder...
by abraxas on Mon 23rd Jul 2007 01:17 UTC in reply to "RE: I wonder..."
abraxas Member since:
2005-07-07

When things like the USB stack have been rewritten 3 times, people here point to 'ooh, they're optimising' when in reality it has to do with a lack of planning - Linux kernel developers seem to ignore the cardinal rule that all programmers are taught regarding system design and analysis

You would like to believe that is the case, but it isn't. The USB drivers are a lot better than the drivers available for Windows or OSX. The reason they are better is that they don't have to worry about backwards compatibility. Sure, the Windows USB drivers are stable, but they suck in comparison. Microsoft could probably write better drivers now if they wanted to, but they would have to sacrifice compatibility.

What people like you fail to realize is that a lot of the rewriting that goes on is re-factoring. There are advantages to stabilizing on an API but there are also advantages not to.

Reply Score: 3

my 2c
by nulleight on Sun 22nd Jul 2007 02:17 UTC
nulleight
Member since:
2007-06-22

It won't change anything for nvidia/ati drivers, since the new userspace drivers don't provide DMA transfers. One would also get many more context switches on interrupts from devices using this userspace API. So userspace drivers are pretty bad for things like network cards, SATA controllers or graphics cards. The possibility of closed-source drivers is nearly a side effect, as one can read in the original German article. Anyway, I think a stable kernel API would be bad, since we would get more monstrous drivers like fglrx which can't be fixed, since no one is allowed to fix them, and vendors won't be compelled to release open-source versions (even limited ones).

Reply Score: 5

The question that "matters"
by thjayo on Sun 22nd Jul 2007 03:16 UTC
thjayo
Member since:
2005-11-11

Will this help the current wireless or graphics drivers situation?

Reply Score: 2

=D
by Lazarus on Sun 22nd Jul 2007 04:00 UTC
Lazarus
Member since:
2005-08-10

Very interesting. Definitely something to keep an eye on.

The last few years Linux (the kernel) has seemed bloated, buggy and crufty to me, full of fun experiments in instability causing me much self-induced grief as I've played with it on various systems.

If this API becomes widely adopted by driver developers and proves to be as stable as I'd imagine it to be, I could almost see Linux being a system I'd run full time.

Not so much from the point of view of possibly getting binary only user-space drivers, but more due to getting the majority of free driver code out of the kernel, (hopefully) making the system more solid.

Reply Score: 3

Useful API? Useless API !
by kscguru on Sun 22nd Jul 2007 04:31 UTC
kscguru
Member since:
2006-01-21

THIS is a useful API? I looked at the patches; they (1) allow mapping of mmapped-I/O regions to userspace and (2) allow waiting on interrupts. That's it. Useless for anything more complex than a serial port.

Missing interfaces: DMA engines, PCI configuration spaces, fast synchronization primitives, privileged CPU instructions (like disabling of interrupts), fast thread switching (the reason Windows has the graphics subsystem in the kernel - you go 3x slower just leaving the kernel), access to the networking stack, all of this with good performance ... half of these are doable, but several important ones (hint: performance) are flatly impossible outside the kernel. (Smart people have been trying and failing for 40 years, starting with UNIX.)

Oddly enough, this useless API is entirely in keeping with the kernel hacker's thoughts - read Greg K-H's "driver API stability" note and it's quite clear he thinks all drivers do nothing more complex than poke I/O ports and receive interrupts. The 80% of devices that already have GPL drivers are this simple; the other 20% (3D graphics, wireless, and so on) are not.

The API might be useful for embedded devices where describing register layouts is too much information. It is not useful for consumer devices, and it will do nothing to reduce the usage of binary-only drivers.

Reply Score: 5

RE: Useful API? Useless API !
by butters on Sun 22nd Jul 2007 05:16 UTC in reply to "Useful API? Useless API !"
butters Member since:
2005-07-08

You make several good points. There's a fundamental trade-off between performance and isolation. The line in the sand between the kernel and userspace used to be the star of the debate, and now we have guest kernels to worry about.

A few of the shortcomings you mention can be addressed. For example, DMA and memory-mapped I/O can be emulated in userspace in much the same way as they are emulated in high-memory. Bounce-buffering strategies such as the Linux kernel's SWIOTLB service are pretty much the best-case scenario for virtualizing framebuffers and other memory apertures in software. The only way to improve on this is hardware acceleration via enhanced IOMMU functionality.

Isolation is a big deal in computer science today, and we'll no doubt see many innovations in the next decade that will allow hardware and software to manage memory protection in more sophisticated ways. Remember, some commercial UNIX systems still have a fixed segmented memory model. We're only taking the first baby steps toward flexible, high-performance memory protection. The rest will come in due time.

Reply Score: 5

RE: Useful API? Useless API !
by foobar on Sun 22nd Jul 2007 07:39 UTC in reply to "Useful API? Useless API !"
foobar Member since:
2006-02-07

THIS is a useful API? I looked at the patches; they (1) allow mapping of mmapped-I/O regions to userspace and (2) allow waiting on interrupts. That's it. Useless for anything more complex than a serial port.

Missing interfaces: DMA engines, PCI configuration spaces, fast synchronization primitives, privileged CPU instructions (like disabling of interrupts), fast thread switching (the reason Windows has the graphics subsystem in the kernel - you go 3x slower just leaving the kernel), access to the networking stack, all of this with good performance ... half of these are doable, but several important ones (hint: performance) are flatly impossible outside the kernel. (Smart people have been trying and failing for 40 years, starting with UNIX.)

Oddly enough, this useless API is entirely in keeping with the kernel hacker's thoughts - read Greg K-H's "driver API stability" note and it's quite clear he thinks all drivers do nothing more complex than poke I/O ports and receive interrupts. The 80% of devices that already have GPL drivers are this simple; the other 20% (3D graphics, wireless, and so on) are not.

The API might be useful for embedded devices where describing register layouts is too much information. It is not useful for consumer devices, and it will do nothing to reduce the usage of binary-only drivers.


True, but there are some drivers that do not interface with hardware.

Reply Score: 2

RE: Useful API? Useless API !
by draethus on Mon 23rd Jul 2007 14:26 UTC in reply to "Useful API? Useless API !"
draethus Member since:
2006-08-02

One incredibly useful thing about this API that everybody seems to be missing is that guest operating systems running in virtual machines are now able to access host PCI hardware. People have already done this a few times: the bochs x86 emulator apparently has/had a pcidev kernel module, and the Gelato guys also did some work in this direction, but this is officially in the kernel, so it will work without having to build any out-of-tree modules.

Reply Score: 1

gregorlowski
Member since:
2006-03-20

From a technical standpoint, this is great. It's great to have the option to design components of your driver to run in userspace or in kernelspace, depending on the application.

However, I'm worried that this will make it easier to write binary-only userspace driver components and that companies will see it as an opportunity to claim linux compatibility without releasing open drivers/specs.

I could be wrong about this (I guess the code would still be considered linked to GPL code, but then maybe you could more easily create some sort of bridge between your linked open-source userspace component and your nonfree userspace components).

I'm not saying that technological progress in free software that might encourage nonfree software development should be avoided, but it'll be interesting to see if this has an impact on nonfree software development in the linux world.

Reply Score: 4

question
by asdx24 on Sun 22nd Jul 2007 10:25 UTC
asdx24
Member since:
2007-05-17

Will we see a completely redesigned and rewritten Linux kernel with a micro-kernel design, message passing, and all the cool features some day?

Reply Score: 1

RE: question
by renox on Sun 22nd Jul 2007 20:06 UTC in reply to "question"
renox Member since:
2005-07-06

No to the first part: kernel developers prefer solid code to buzzwords. As for the second part, it already has many cool features.

Reply Score: 2

REMF
Member since:
2006-02-05

so that this stable API is useful for graphics drivers etc., as well as just the non-transfer-intensive driver types?

Reply Score: 1