Linked by Thom Holwerda on Tue 9th May 2006 21:25 UTC, submitted by luzr
Torvalds has indeed chimed in on the micro vs. monolithic kernel debate. Going all 1992, he says: "The whole 'microkernels are simpler' argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. The whole argument that microkernels are somehow 'more secure' or 'more stable' is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure. And the argument that you can 'just reload' a failed service and not take the whole system down is equally flawed." My take: while I am not qualified to reply to Linus, there is one thing I want to say: just because something is difficult to program does not make it the worse design.
My Take!
by Brendan on Wed 10th May 2006 01:09 UTC

I would assume that Linus has been working with monolithic kernels for so long that he isn't fully aware of the possible ways that micro-kernels can be designed.

Firstly, the scheduler and basic memory management don't need to be handled by separate processes; they can be built into the microkernel itself. To be honest, I've never seen much point in using separate processes for these things.

As for complexity (of the OS, not just the kernel), consider any non-trivial device driver. In a monolithic kernel the device driver's code can run in the context of any number of processes on any number of CPUs at the same time. This means you must have re-entrancy locking, along with all of the deadlock/livelock, fairness, etc. problems that come with it. In a micro-kernel you can implement each device driver as a separate single-threaded process, which makes it impossible for any of its code to be running on more than one CPU at once - all of the re-entrancy issues disappear completely.
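To give a rough idea of what I mean, here's a sketch of such a driver in C. The IPC calls (ipc_receive, ipc_reply) and the message layout are things I've made up for illustration, not any real kernel's API - the point is that only one thread ever touches the driver's state, so there isn't a single lock anywhere:

/* Sketch of a single-threaded driver process in a hypothetical
   micro-kernel. Only this one thread ever touches driver_state,
   so no re-entrancy locking is needed. */
#include <stdint.h>

enum { MSG_READ = 1, MSG_WRITE = 2 };

struct message {
    int      sender;   /* which process sent the request */
    int      type;     /* MSG_READ, MSG_WRITE, ...       */
    uint64_t block;    /* block number for disk requests */
    void    *buffer;   /* caller-supplied data buffer    */
};

/* Assumed (made-up) kernel-provided IPC primitives. */
extern void ipc_receive(struct message *m);
extern void ipc_reply(int to, int status);

static struct { int busy; } driver_state;   /* never locked */

int main(void)
{
    struct message m;

    for (;;) {                   /* one request at a time, forever */
        ipc_receive(&m);         /* block until a request arrives  */
        switch (m.type) {
        case MSG_READ:
            driver_state.busy = 1;
            /* ...program the controller, wait for completion... */
            driver_state.busy = 0;
            ipc_reply(m.sender, 0);
            break;
        case MSG_WRITE:
            /* ...same idea... */
            ipc_reply(m.sender, 0);
            break;
        default:
            ipc_reply(m.sender, -1);
        }
    }
}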

This can make a huge difference in performance and scalability...

Consider the Linux boot process - normally it initializes each device driver one at a time. For example, I've got a dual-CPU server here with a SCSI controller that has a 15-second "disk drive startup" delay, during which the entire computer (including both CPUs) does absolutely nothing. The reason for this is that it's too complicated for poor Linus to figure out how to reliably initialize device drivers in parallel.

For a well designed micro-kernel, device detection, device driver initialization, file system and networking initialization and GUI startup can all happen in parallel without really adding any extra complexity.
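As a rough sketch of the idea, using plain POSIX threads to stand in for separate boot-time processes (the init functions are empty placeholders I've made up):

/* Sketch: overlapping slow start-up tasks so a 15-second drive
   spin-up no longer stalls the rest of boot. Compile with
   -pthread; the init functions are placeholders. */
#include <pthread.h>
#include <stdio.h>

static void *init_scsi(void *arg) { (void)arg; /* 15s drive spin-up  */ return NULL; }
static void *init_net(void *arg)  { (void)arg; /* bring up the NIC   */ return NULL; }
static void *init_fs(void *arg)   { (void)arg; /* mount file systems */ return NULL; }

int main(void)
{
    void *(*tasks[])(void *) = { init_scsi, init_net, init_fs };
    pthread_t tid[3];

    for (int i = 0; i < 3; i++)      /* start everything at once */
        pthread_create(&tid[i], NULL, tasks[i], NULL);
    for (int i = 0; i < 3; i++)      /* wait for all of them     */
        pthread_join(tid[i], NULL);

    puts("all services up");
    return 0;
}

Real boot-up has ordering dependencies (the file system needs the disk), but those are just dependencies - not a reason to serialize everything behind one slow controller.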

Now consider "run time" - imagine you're writing a simple text editor. When the text editor starts it needs to load DLLs, load user preferences, build its GUI (menus, etc.), load the text file being edited, and so on. On a monolithic system this would all be done one thing at a time - call "fopen", wait to acquire the VFS re-entrancy locks, try to find the file in the VFS cache, call the file system, wait to acquire the file system code's re-entrancy locks, wait for hardware delays while reading directory information, etc. On a micro-kernel you could just send an asynchronous "open file" request and continue processing (building menus, sending other requests, etc.) until the "open file status" reply is received - the requests themselves can be handled by processes running on other CPUs in parallel (no unnecessary context switches, no thread blocking, etc.).
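As a sketch (again, async_send and poll_reply are calls I've made up for illustration, not a real API):

/* Sketch of the asynchronous style: fire off the "open file"
   request, keep working, collect the reply later. */
enum { VFS_SERVICE = 1, OP_OPEN = 1 };

/* Assumed (made-up) asynchronous IPC primitives. */
extern int async_send(int service, int op, const char *path); /* returns a request id   */
extern int poll_reply(int request_id, int *status);           /* non-blocking, 1 = done */

static void build_menus(void)      { /* ... */ }
static void load_preferences(void) { /* ... */ }

void editor_startup(void)
{
    int req = async_send(VFS_SERVICE, OP_OPEN, "notes.txt");

    build_menus();        /* keep working while the VFS process,    */
    load_preferences();   /* possibly on another CPU, does the open */

    int status;
    while (!poll_reply(req, &status))
        ;                 /* a real editor would block in its event loop here */
}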

The problem is that device drivers run in the context of the caller. Of course on a monolithic kernel you could spawn separate threads to handle each separate operation. That way your main thread could keep working, but I don't think this is "simpler" (re-entrancy locks, synchronization, etc.) and there's overhead involved with spawning threads.
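For comparison, here's roughly what that threaded workaround looks like - note the extra machinery (thread creation, a shared result, a join) needed just to avoid blocking:

/* Sketch: pushing a blocking fopen() into a worker thread so the
   main thread can keep building the GUI. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

static FILE *opened;   /* shared result, written by the worker */

static void *open_worker(void *path)
{
    opened = fopen((const char *)path, "r");  /* may block on locks/disk */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, open_worker, "notes.txt");

    /* ...build menus, load preferences... */

    pthread_join(tid, NULL);   /* synchronize before touching `opened` */
    if (opened)
        fclose(opened);
    return 0;
}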

As for the complexity of the "interfaces between components", Linus is correct - having stable, well defined and well documented interfaces between components is much harder than pulling "ad hoc" creations out of your behind whenever you feel like it. Of course you can have stable, well defined and well documented interfaces between components in a monolithic kernel too, but it's not required and most programmers won't do it if they can avoid it.
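For what it's worth, a "stable, well defined" interface doesn't have to be complicated - it can be as small as a versioned message header that both sides promise not to change incompatibly (the field names here are purely illustrative):

/* Sketch of a versioned driver interface. The version number is
   only bumped for incompatible changes, so mismatched components
   can detect the problem instead of silently misbehaving. */
#include <stdint.h>

#define DRIVER_IF_VERSION 1

struct driver_request {
    uint16_t version;    /* must equal DRIVER_IF_VERSION      */
    uint16_t opcode;     /* read, write, ioctl, ...           */
    uint32_t length;     /* number of bytes in payload[]      */
    uint8_t  payload[];  /* operation-specific data (C99 FAM) */
};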
