Linked by Thom Holwerda on Tue 9th May 2006 21:25 UTC, submitted by luzr
Torvalds has indeed chimed in on the micro vs. monolithic kernel debate. Going all 1992, he says: "The whole 'microkernels are simpler' argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. The whole argument that microkernels are somehow 'more secure' or 'more stable' is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure. And the argument that you can 'just reload' a failed service and not take the whole system down is equally flawed." My take: While I am not qualified to reply to Linus, there is one thing I want to say: just because something is difficult to program does not make it the worse design.
Thread beginning with comment 123397
RE[6]: My Take!
by Brendan on Thu 11th May 2006 09:20 UTC in reply to "RE[5]: My Take!"
Brendan
Member since:
2005-11-16

I was responding to the claim that "if there are multiple instances of a device driver (multiple devices of the same type), then each instance of the device driver would only access its own device, and there'd still be no need for locking in the device driver."

What I'm trying to say is that if a device driver is implemented as a separate single-threaded process then it requires no locks at all, and if a device driver is implemented as a multi-threaded process any locks are confined within that process and don't need to be used by other processes.

"Storage drivers, especially those that implement things like software RAID, or that service the disk cache, even if they are implemented in separate processes, need to have access to a common pool of memory. That definitely requires them to synchronize."

Need?

I guess it does depend on how the OS is designed (rather than what sort of kernel is used, etc). I use asynchronous messaging, where messages can be large (e.g. 4 MB) rather than tiny (e.g. several dwords). For the "tiny messages" case you probably would need shared data, but shared data is completely inappropriate for distributed OSs so I don't use it.

For me, only one process ever has access to a "common pool of memory". For example, the "VFS process" would have a disk cache that only it has access to. If it needs something that isn't in this cache it requests the data from the appropriate file system code. When the VFS receives the data it inserts it into its disk cache and completes the original request.


RE[7]: My Take!
by Cloudy on Thu 11th May 2006 16:04 in reply to "RE[6]: My Take!"
Cloudy
Member since:
2006-02-15

"I guess it does depend on how the OS is designed (rather than what sort of kernel is used, etc)."

Yes. I wasn't being clear about context. I was limiting my comments to designs not based on message passing.

But even message passing doesn't entirely relieve the system of synchronization requirements; it merely embeds them in the message model.

"For the 'tiny messages' case you probably would need shared data, but it's completely inappropriate for distributed OSs so I don't use it."

Again, that depends on the type of OS. I've done single-address-space distributed OSes that share virtual memory across the distributed system. If properly designed, there can be advantages to such an approach, although it's definitely easier to achieve fault containment in a distributed system by establishing autonomy and relying on messaging.
