First, a little history lesson. The microkernel versus monolithic kernel debate is anything but original. In 1992, Linus Torvalds (you know him?) got entangled in a famous 'flamewar' (I wish flamewars were still like that today) with Andy Tanenbaum, one of the men behind MINIX and author of a book on MINIX which played a major role in Torvalds' education in writing kernels. Where Tanenbaum had already proven himself with MINIX, a microkernel-based operating system which served as study material for aspiring kernel engineers, Linus was a relative nobody who had just started working on Linux, a monolithic kernel. The discussion went back and forth for quite a few posts in comp.os.minix, and it makes for a very interesting read.
Anyway, now that I've made it clear I'm not treading into uncharted territory, let's move on to the meat of the matter.
A microkernel (or 'muK', where 'mu' stands for the Greek letter µ, meaning 'micro') differs vastly from a monolithic kernel in that in a muK, everything that might potentially bring down the system runs outside of kernelspace, in individual processes often known as servers. In MINIX3, for instance, every driver except the clock (I can never help but wonder: why the damn clock? I'm going to email Tanenbaum about this one of these days) lives in userspace. If one of those drivers crashes because of a bug, tough luck, but it won't bring the system down. You can just restart the failing driver and continue as you were. No system-wide crash, no downtime. And there was much rejoicing.
This is of course especially handy during system updates. Say you finally fixed the bug that crashed the above-mentioned driver. You can just stop the old version of the driver, and start the improved version without ever shutting the system down. In theory, this gives you unlimited uptime.
In theory, of course, because there is still a part living in kernelspace that can contain bugs. To stick with MINIX3: its kernel has roughly 3800 lines of code, and thus plenty of room for mistakes. However, Andy Tanenbaum and his team believe that these 3800 lines of code can be made close to bug-free, which would bring eternal uptime a step closer. Compare that to the 2.6.x series of the monolithic Linux kernel, which has roughly 6 million lines of code to be made bug-free (there goes spending time with the family at Easter).
Another advantage of a microkernel is that of simplicity. In a muK, each driver, filesystem, function, etc., is a separate process living in userspace. This means that on a very local level, muKs are relatively simple and clean, which supposedly makes them easier to maintain. And here we encounter the double-edged sword that is a microkernel: the easier a muK is to maintain on a local level, the harder it is to maintain on a global level.
The logic behind this is relatively easy to understand. I'd like to make my own analogy, were it not for the fact that a CTO already made the best analogy possible to explain local complexity vs. global complexity:
"Take a big heavy beef, chop it into small morsels, wrap those morsels within hygienic plastic bags, and link those bags with strings; whereas each morsel is much smaller than the original beef, the end-result will be heavier than the beef by the weight of the plastic and string, in a ratio inversely proportional to the small size of chops (i.e. the more someone boasts about the local simplicity achieved by his microkernel, the more global complexity he has actually added with regard to similar design without microkernel)." [source]
That explains it well, doesn't it? Now, this global complexity brings me to the second major drawback of a muK: overhead. Because all of those processes are separated, and need to communicate with one another through message passing and context switches rather than simple function calls, a muK will inevitably be slower, performance-wise, than a comparably featured monolithic kernel. This is why Linus chose the monolithic model for Linux: in the early '90s, computers were not all that powerful, and every possible way to limit overhead was welcomed with open arms.
However, I think circumstances have changed a lot since those days. A monolithic design made sense 16 years ago, but with today's powerful computers, with processors acting as $300 dust collectors most of the time, the advantages of a muK simply outweigh its inevitable, minute, and in terms of user experience probably immeasurable, performance hit.
That's the technical, more objective side of the discussion. However, there's also a more subjective side to it all. I prefer it when an application or device does one thing, and does it well. It's why I prefer a component HiFi set over all-in-one ones, it's why I prefer the GameCube over the Xbox/PS2, it's why I prefer an ordinary TV plus a separate DVD recorder over a media centre computer, it's why I fail to see the point in 'crossover' music (why combine hiphop with rock when you suck at both?), it's why I prefer a manual gearbox over an automatic one (in the Netherlands, we say, 'automatic is for women'; no offence to the USA where automatic gearboxes are much more popular, since you guys across the pond have longer distances to cover), it's why I prefer a simple dinner over an expensive four-course one in a classy restaurant (the end result is the same: replenishing vital nutrients), it's why I prefer a straight Martini Bianco over weird cocktails (again, the end result is the same, but I'll leave that up to your imagination), it's why I prefer black coffee over coffee with cream and sugar, etc., etc., etc.
That leaves me with one thing. Remember how I mentioned that car crash? If you read the accompanying blog post, you might remember how it was caused by a miscalculation on the other driver's part.
Now, would that have happened if the human brain were like a muK?