Linked by Thom Holwerda on Tue 9th May 2006 21:25 UTC, submitted by luzr
Torvalds has indeed chimed in on the micro vs. monolithic kernel debate. Going all 1992, he says: "The whole 'microkernels are simpler' argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. The whole argument that microkernels are somehow 'more secure' or 'more stable' is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure. And the argument that you can 'just reload' a failed service and not take the whole system down is equally flawed." My take: While I am not qualified to reply to Linus, there is one thing I want to say: just because it is difficult to program does not make it the worse design.
Permalink for comment 123358
RE[2]: My Take!
by Brendan on Thu 11th May 2006 04:25 UTC in reply to "RE: My Take!"

"No. You still have to do locking, because only one instance of the driver can drive the hardware at the same time. But now you have to do locking between processes, and not between threads, which actually makes it more complicated."

If there are multiple instances of a device driver (multiple devices of the same type), then each instance of the device driver would only access its own device, and there'd still be no need for locking in the device driver.

What you seem to be describing is multiple instances of a device driver that control the same device. This never happens (except for exo-kernels I guess).

The only case where you would need locking within a device driver is when that device driver is implemented as a multi-threaded process, but then the locks are "process specific" (just like in any multi-threaded user-level code).
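To illustrate that last point, here's a minimal sketch (not from any real kernel; the names `dev_state`, `driver_submit` and `run_driver` are made up for this example) of a driver implemented as one multi-threaded process. Because this process is the only one touching "its" device, an ordinary process-local mutex is all the locking it needs:

```c
/* Sketch: a device driver as a single multi-threaded process.
 * The lock protecting the (simulated) device state is an ordinary
 * process-local pthread mutex -- no cross-process locking, since
 * no other process ever touches this device. */
#include <pthread.h>
#include <assert.h>

static struct {
    pthread_mutex_t lock;
    int commands_issued;     /* stands in for real device registers */
} dev_state = { PTHREAD_MUTEX_INITIALIZER, 0 };

/* Each worker thread submits commands to the one device we own. */
static void *driver_submit(void *arg)
{
    int n = *(int *)arg;
    for (int i = 0; i < n; i++) {
        pthread_mutex_lock(&dev_state.lock);
        dev_state.commands_issued++;   /* serialized device access */
        pthread_mutex_unlock(&dev_state.lock);
    }
    return 0;
}

int run_driver(int threads, int per_thread)
{
    pthread_t t[16];
    assert(threads <= 16);
    for (int i = 0; i < threads; i++)
        pthread_create(&t[i], 0, driver_submit, &per_thread);
    for (int i = 0; i < threads; i++)
        pthread_join(t[i], 0);
    return dev_state.commands_issued;
}
```

Nothing here is visible outside the process, which is exactly the claim: the locking problem stays identical to ordinary multi-threaded user code.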

"What most microkernel people fail to understand (or avoid mentioning, as it defeats their argument) is that turning subsystems into independent processes doesn't change the fact that they still have to be synchronized somehow. If you can't share the underlying resources between threads without synchronization, you can't do it with processes either."

You're wrong - there is very little need for any synchronization between device drivers. The things that need synchronization are limited to "shared/global hardware" (PCI configuration space access, PIC and I/O APIC control, legacy ISA DMA controllers) and timing.

For "shared/global hardware", the device driver is not allowed direct access - either the kernel provides functions for device drivers to use or there's some form of "device manager". This provides synchronization and also provides a form of abstraction, such that device drivers don't need to care which PCI configuration mechanism is being used by the chipset, whether I/O APICs are being used instead of the old PIC chips, etc.
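A sketch of why PCI configuration space in particular needs this centralization: PCI configuration mechanism #1 uses two I/O ports - you write an address to 0xCF8, then read the data from 0xCFC. If two drivers interleaved those two steps, each would read the wrong register, so the kernel (or device manager) wraps the pair in one lock. Real port I/O is replaced below by a simulated address/data latch, and `pci_read_config` is an illustrative name, not a real kernel export:

```c
/* Sketch: serializing access to shared/global hardware.
 * The address-write + data-read pair must be atomic, so only this
 * one function (behind one lock) is allowed to touch the "ports". */
#include <pthread.h>
#include <stdint.h>

static pthread_mutex_t pci_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t sim_config_space[256];  /* stands in for the bus      */
static uint32_t sim_address_latch;      /* stands in for port 0xCF8   */

static void outl_cf8(uint32_t addr) { sim_address_latch = addr; }
static uint32_t inl_cfc(void)
{
    return sim_config_space[sim_address_latch & 0xff];
}

/* The only entry point drivers may use; the lock makes the
 * two-port sequence atomic with respect to other callers. */
uint32_t pci_read_config(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg)
{
    uint32_t addr = 0x80000000u | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11) | ((uint32_t)fn << 8)
                  | (reg & 0xfc);
    pthread_mutex_lock(&pci_lock);
    outl_cf8(addr);                 /* step 1: select the register */
    uint32_t val = inl_cfc();       /* step 2: read its value      */
    pthread_mutex_unlock(&pci_lock);
    return val;
}
```

Drivers calling `pci_read_config()` never see the two-step dance or the lock, which is the abstraction benefit described above.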

For timing, that is the scheduler's problem. In general you need a scheduler that handles "priorities" and has functions like "nanosleep()" and "sleep()". In addition the scheduler needs to be able to pre-empt lower priority things (like applications) when a higher priority thing (like a device driver) becomes "ready to run". AFAIK the Linux scheduler is more than adequate for this (if you use its "real-time" scheduling and if the "niceness" priority adjustment doesn't mess everything up).
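In POSIX terms, that combination looks roughly like the sketch below: the driver thread asks for a real-time priority (SCHED_FIFO needs privileges and may be refused, so this is best effort) and then gets its delays from nanosleep() rather than from its own locking. The function name and the millisecond figures are illustrative only:

```c
/* Sketch: a driver getting its timing from the scheduler.
 * Request real-time priority (best effort), then rely on
 * nanosleep() for delays; the kernel guarantees we sleep at
 * least as long as requested. */
#define _GNU_SOURCE
#include <sched.h>
#include <time.h>

/* Sleep for the requested number of milliseconds and return how
 * many milliseconds actually elapsed (always >= the request). */
long timed_sleep_ms(long ms)
{
    struct sched_param sp = { .sched_priority = 10 };
    /* Without CAP_SYS_NICE this fails and we stay at normal
     * priority, which is fine for the demonstration. */
    sched_setscheduler(0, SCHED_FIFO, &sp);

    struct timespec start, end;
    struct timespec req = { ms / 1000, (ms % 1000) * 1000000L };
    clock_gettime(CLOCK_MONOTONIC, &start);
    nanosleep(&req, 0);
    clock_gettime(CLOCK_MONOTONIC, &end);
    return (end.tv_sec - start.tv_sec) * 1000L
         + (end.tv_nsec - start.tv_nsec) / 1000000L;
}
```

The point of the "niceness" caveat is that SCHED_FIFO threads are outside the niceness mechanism entirely - they always pre-empt normal threads, which is what a driver's timing needs.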

"And there is also the stability thing... turning drivers into processes doesn't make the system more stable: if the driver for the network card crashes, the system may become unusable, just like on a monolithic kernel."

Here I agree slightly - a simple micro-kernel is no better than a monolithic kernel, unless it includes additional systems to handle failures.

Your networking example is a bad example - all real OSes (including monolithic ones) will handle "network service failure" because the network is considered unreliable anyway (e.g. the user could unplug the network cable, the Ethernet hub could have problems, or your ISP might drop your connection).

A better example would be code (and hardware) that handles swap space. A simple micro-kernel or a monolithic kernel would be entirely screwed if its swap space dies, but (with extra work) an "advanced micro-kernel" could provide additional systems to recover (e.g. redundant swap space providers).

The other thing "monolithic people" don't understand is that finding out what went wrong is very useful for bug fixing. For example, if a device driver is intermittently corrupting some of the kernel's data, it would be extremely difficult to figure out which device driver is faulty or when the data was corrupted. On a micro-kernel you'd get a page fault and you'd know exactly what happened as soon as it happens.
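The diagnostic advantage can be sketched with ordinary POSIX processes standing in for isolated drivers: a wild pointer kills only the offending process with a page fault (SIGSEGV), and a supervisor can name the culprit immediately. The "driver" here is just a function that writes through a null pointer, and `run_driver_process` is an illustrative name:

```c
/* Sketch: fault isolation via separate address spaces.
 * The buggy driver's wild write is caught as a page fault in
 * its own process; the supervisor learns which process died
 * and from which signal, instead of silent kernel corruption. */
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void buggy_driver(void)
{
    volatile int *p = 0;
    *p = 42;               /* the bug: a wild write, trapped as a page fault */
}

/* Run a driver in its own process. Returns the signal that killed
 * it (e.g. SIGSEGV), or 0 if it exited cleanly - so the supervisor
 * knows exactly which driver failed, and how. */
int run_driver_process(void (*driver)(void))
{
    pid_t pid = fork();
    if (pid == 0) {        /* child: the isolated "driver" */
        driver();
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : 0;
}
```

On a monolithic kernel the same wild write would land somewhere in shared kernel data, and the symptom might only appear much later, far from the faulty driver.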

Reply Parent Score: 1