Linked by Thom Holwerda on Sun 22nd Jul 2007 00:33 UTC, submitted by liquidat
Linus Torvalds has merged patches into the mainline tree that implement a stable userspace driver API in the Linux kernel. The stable driver API was announced a year ago by Greg Kroah-Hartman; now the last patches have been merged and the API is included in Linus' tree. The idea of the API is to make life easier for driver developers: "This interface allows the ability to write the majority of a driver in userspace with only a very small shell of a driver in the kernel itself. It uses a char device and sysfs to interact with a userspace process to process interrupts and control memory accesses."
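For the curious, here is a minimal sketch of what a driver built on this interface looks like, assuming a device already bound to a kernel-side UIO stub and exposed as /dev/uio0 (the device path and 4 KB mapping size are illustrative assumptions, not details from the announcement):

[code]
/* Minimal userspace-driver sketch against the UIO char device.
 * Assumes the kernel-side shell driver has registered /dev/uio0. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/uio0", O_RDWR);   /* char device from the kernel stub */
    if (fd < 0) {
        perror("open /dev/uio0");
        return 1;
    }

    /* Map the device's first memory region into this process. */
    volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    for (;;) {
        uint32_t irq_count;
        /* A blocking read() completes once an interrupt fires and
         * returns the running interrupt count. */
        if (read(fd, &irq_count, sizeof(irq_count)) != sizeof(irq_count))
            break;
        /* Service the interrupt entirely in userspace. */
        printf("interrupt #%u, first register = 0x%x\n",
               (unsigned)irq_count, (unsigned)regs[0]);
    }
    return 0;
}
[/code]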
RE[2]: I wonder...
by aent on Sun 22nd Jul 2007 03:31 UTC in reply to "RE: I wonder..."

That is essentially the whole argument for why not having a stable API is better. By sticking to one design, never trying anything else or letting that design evolve, a stable API can leave the developers stuck with the original, poor design even as they learn how it is actually used. If they can redo the design a couple of times, they end up with a better design.

Taking the USB stack as an example: if they had tried to keep the original design and make it a stable API, would that have been better than the evolved and improved API they replaced it with as USB technology progressed, with things like USB 2.0 coming out? There are tradeoffs on both sides, and the argument was that, for Linux's architecture and development process, a stable API would hinder the development they wanted: being able to correct any API decisions that weren't well thought out the first time around, without having to worry about obsolete or deprecated APIs.


RE[3]: I wonder...
by kaiwai on Sun 22nd Jul 2007 04:50 UTC in reply to "RE[2]: I wonder..."

[i]That is essentially the whole argument for why not having a stable API is better. By sticking to one design, never trying anything else or letting that design evolve, a stable API can leave the developers stuck with the original, poor design even as they learn how it is actually used. If they can redo the design a couple of times, they end up with a better design.[/i]


On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure!

Take a look at Sun's own USB stack, for instance: it didn't need to be rewritten three times, and it performs as well as, if not better than, Linux's.

Heck, look at Sun's new networking infrastructure, for instance: nothing stopped them from pushing great ideas such as Nemo.

What you're saying to me is that progress without complete breakage is impossible. 25 years of commercial development by way of Windows and various UNIXes says otherwise.

[i]Taking the USB stack as an example: if they had tried to keep the original design and make it a stable API, would that have been better than the evolved and improved API they replaced it with as USB technology progressed, with things like USB 2.0 coming out? There are tradeoffs on both sides, and the argument was that, for Linux's architecture and development process, a stable API would hinder the development they wanted: being able to correct any API decisions that weren't well thought out the first time around, without having to worry about obsolete or deprecated APIs.[/i]


And if they had their act together, they would have written their code modularly enough that the calls exposed to developers don't depend on some arbitrary condition or on decisions made further down the stack.

For example, if you expose an API and write it correctly, what is being done behind the scenes should not matter a donkey's diddle to the programmer concerned. He throws his request at the API and lets the appropriate libraries sort out the low-level work before handing the result back to him. What happens inside that black box is none of the programmer's concern.

To say that somehow, if you change the internal processes, you're forced to change the external outcome of each call, is to ignore a basic understanding of programming.
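To put that in concrete terms, here is a hypothetical sketch (invented names, not any real kernel API): the exposed call is the stable contract, and everything behind it can be rewritten between releases without callers noticing.

[code]
/* Hypothetical black-box API: the signature of net_send() never changes,
 * while its internals are free to be replaced wholesale. */
#include <stddef.h>
#include <stdio.h>

/* Stable, exposed interface -- the contract callers compile against. */
int net_send(int handle, const void *buf, size_t len);

/* Internal machinery, invisible to callers. Version 1 might have queued
 * straight onto a 10 Mbit Ethernet driver; this version routes through a
 * generic link layer instead. Callers never notice the difference. */
static int enqueue_generic(int link, const void *buf, size_t len)
{
    (void)buf;
    printf("link %d: queued %zu bytes\n", link, len);
    return 0;
}

int net_send(int handle, const void *buf, size_t len)
{
    return enqueue_generic(handle, buf, len);
}

int main(void)
{
    char pkt[64] = {0};
    /* Caller code written against version 1 still works unchanged. */
    return net_send(1, pkt, sizeof(pkt));
}
[/code]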


RE[4]: I wonder...
by aent on Sun 22nd Jul 2007 06:17 UTC in reply to "RE[3]: I wonder..."

[i]On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure![/i]

I'm not saying based on today's specifications, but based on today's knowledge. Yes, we are able to predict some things, for example that Ethernet will become faster; however, before wireless networks appeared and started to be implemented according to the 802.11 standards, designing a proper API for them from today's networking capabilities would have been near impossible. Yes, the old systems could be extended through hacks, as both Windows and Linux did for a long time. Windows has to maintain several different networking stacks in its code because it has deprecated them in favour of better APIs and systems. The idea was that Linux doesn't want to have to maintain deprecated APIs, or ones that were poorly thought out. If you look at Windows, there are tons of obsolete and deprecated APIs all over the place, which Microsoft has kept, often at the cost of stability and certainly at the cost of some bloat.

[i]What you're saying to me is that progress without complete breakage is impossible. 25 years of commercial development by way of Windows and various UNIXes says otherwise.[/i]

No, what I am saying is that 25 years of commercial development by way of Windows and various UNIXes shows that a stable API is possible (I never denied that it was); however, there are disadvantages that come with it. If you want an example of the problems a stable API brings, look at the products you mentioned: many people have found better ways to implement those APIs than what the commercial products settled on in their first attempt at a design.

[i]To say that somehow, if you change the internal processes, you're forced to change the external outcome of each call, is to ignore a basic understanding of programming.[/i]
Again, you're misinterpreting what I said. If we design a networking API when the best technology available is 10 Mbit Ethernet, and a few years later wireless technology shows up, and a year after that wireless cards can connect to multiple access points and use multiple connections at once to make access faster, I don't expect the original API to handle all of those fundamental changes, nor an ideal API to be reachable by extending the existing one. (An API designed from scratch, with the knowledge of how things actually evolved, is bound to be better.)

Of course, if the speed is simply increased to a gigabit for Ethernet with some specification changes, I would expect the API to handle that.
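A hypothetical sketch of that distinction (invented names, not any real networking API): a speed bump fits the old shape of the interface, but a model change does not.

[code]
/* Hypothetical API shaped around "one interface, one link". Raising the
 * link speed from 10 Mbit to 1 Gbit needs no signature change at all. */
#include <stddef.h>
#include <stdio.h>

int link_open(const char *ifname)
{
    printf("open %s\n", ifname);
    return 1;
}

/* What multi-access-point wireless actually needs: one logical connection
 * spread over several physical links at once. That is a different shape,
 * not a faster one, so no extension of link_open() expresses it cleanly. */
int bundle_open(const char **ifnames, size_t count)
{
    for (size_t i = 0; i < count; i++)
        printf("add link %s to bundle\n", ifnames[i]);
    return 1;
}

int main(void)
{
    link_open("eth0");                    /* the world the API was built for */
    const char *aps[] = { "wlan0", "wlan1" };
    bundle_open(aps, 2);                  /* the world that arrived later */
    return 0;
}
[/code]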

Backwards compatibility is, of course, important; however, Linux's goal was to have all of the drivers open source and in the kernel, where a stable API wouldn't matter very much: as the API evolves, they can modify everything that uses it in the same pass. Of course, this isn't a perfect world, and there are drivers that are developed outside the kernel (or that aren't open source, for that matter), and I imagine that's why they decided to make a stable API available now.

The fact is that both a stable API and an unstable API come at a price; the question is which one is worth paying. Just because Microsoft and others paid the price for a stable API doesn't mean it's poor design to choose not to go that route.


RE[4]: I wonder...
by TBPrince on Sun 22nd Jul 2007 11:43 UTC in reply to "RE[3]: I wonder..."

[i]On what basis? You design a stable API on the basis of future development; if you design it based solely on today's specifications, of course you're doomed to failure![/i]

A stable API means you provide support for backward compatibility, not that you can't change the API anymore.

Put simply, the API must be "versioned" by extending the existing API, not completely replacing it. This usually means that if an API gets published for v2.6.x, for example, it stays stable at least until 3.x or 2.7.

If you need to extend the API, you can do that, but you don't remove the old API: you just add new extensions. Software using the new API will simply declare itself compatible with newer kernels only (e.g. 2.7+), while software developed for earlier versions keeps working.
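A hypothetical sketch of that pattern (invented names, not an actual kernel interface): the old entry point survives untouched, and new capability arrives as an added symbol beside it.

[code]
/* Hypothetical versioned API: dev_read() was published in "2.6" and is
 * never removed; dev_read_timeout() is added in "2.7" as an extension. */
#include <stddef.h>
#include <stdio.h>

/* Published in 2.6: stays forever, so old drivers keep linking. */
int dev_read(int dev, void *buf, size_t len)
{
    (void)buf;
    printf("dev %d: read %zu bytes\n", dev, len);
    return (int)len;
}

/* Added in 2.7: coexists with the old call instead of replacing it. */
int dev_read_timeout(int dev, void *buf, size_t len, unsigned timeout_ms)
{
    (void)buf;
    printf("dev %d: read %zu bytes (timeout %u ms)\n", dev, len, timeout_ms);
    return (int)len;
}

int main(void)
{
    char buf[16];
    dev_read(0, buf, sizeof(buf));               /* driver written for 2.6 */
    dev_read_timeout(0, buf, sizeof(buf), 500);  /* driver claiming 2.7+ */
    return 0;
}
[/code]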

That's how most systems work, and trust me: it's a far better way to attract developers.
