Linked by Thom Holwerda on Sun 22nd Jul 2007 00:33 UTC, submitted by liquidat
Linus Torvalds has merged patches into the mainline tree which implement a stable userspace driver API in the Linux kernel. The stable driver API was already announced a year ago by Greg Kroah-Hartman. Now the last patches have been merged and the API is included in Linus' tree. The idea of the API is to make life easier for driver developers: "This interface allows the ability to write the majority of a driver in userspace with only a very small shell of a driver in the kernel itself. It uses a char device and sysfs to interact with a userspace process to process interrupts and control memory accesses."
RE[5]: I wonder...
by silix on Sun 22nd Jul 2007 16:10 UTC in reply to "RE[4]: I wonder..."

"Turn a light bulb on and off repeatedly. Eventually it will stop working. Predicting when is impossible, because there are variables at play over which we don't have direct control. So yes, random events do occur in nature."

same for the microprocessor in everybody's PC - it's usually rated for a lifespan on the order of decades if working under conditions that match factory specifications, but it's a physical device after all, so it must be subject to physical phenomena such as electromigration affecting on-chip nanotracks and tearing them apart
but SW is an entirely different matter - an operating system, plus the applications running on it, all running on a digital device, is precision and mathematics for the most part
it's often said that SW design cannot attain true determinism - but that does not mean operating systems and applications are subject to the same rules as living beings (they would be unreliable computation tools if so); it means that, when dealing with systems of vast (and steadily increasing) complexity, the state and execution flow cannot be fully mapped and analyzed in advance with the limited computing (for simulation and automated testing) and human (for design) resources available
this holds true in the case of concurrent execution, where the system can be in one of several possible states at any given moment - but again, which one depends on the entire instruction sequence the processor(s) have executed up to that moment; it is not "random" at all

there are only two random elements in the system: the first is the RNG source device (if any), the second is the human factor, which adds the chance for the device or piece of SW to deviate from its design specifications and perform an unexpected operation, or go into an unexpected state, either always or only under certain circumstances

one of the tenets of SW design, as i have been taught it, is actually about confining the problem domain in order to cope with the limited and failure-prone resources devoted to solving it (making each single subdomain manageable with finite (and very low) resources, and reducing the impact of eventual bugs in the implemented code)
the failure is always on the human side, not on the machine...

"Inferior to? You are aware that the Linux kernel is one of the most advanced, feature-rich and valued pieces of software in existence, miles ahead of what BSD or even Windows have to offer, aren't you?"

it is widely acknowledged that this is not necessarily the effect of a good initial design or of the best programming practices - Linux has always had hundreds of times more available manpower than any other kernel on the planet, and that fact, intentionally or not, leads to a way of analyzing and solving problems that differs from what happens on other OSs (DFBSD, for instance - the developers working on it admittedly are forced to get the design right on their own, because they can't afford the same manpower as Linux coming in and fixing bugs for them)

let's try to take away the implications of that fact (the abundance of drivers, the relative speed at which parts of the code are rewritten and parts are added) for a moment
are we sure the kernel would be that "advanced" had the hype that now surrounds it not materialized, had all the people who contributed over the years not done so, and had corporations not picked it up?
are we sure we actually have a better kernel than, e.g., Syllable's or Haiku's (and it is to be noted that the latter is hacked on by literally a handful of people) from a purely architectural point of view?

moreover, Linux as a development effort looks like it has been caught in a sort of catch-up frenzy - more and more experimental features are added to the 2.6 branch every release, and releases seem to come at an increasingly tight pace - this can introduce regressions or new bugs, and often does, but the average user does not seem to complain...

Edited 2007-07-22 16:29
