“I recently had the opportunity to interview Andrew S. Tanenbaum, creator of the extremely secure Unix-like operating system MINIX 3. Andrew is also the author of Operating Systems Design and Implementation, the must-have book on programming and designing operating systems, and the man whose work inspired Linus Torvalds to create Linux. He has published over 120 works on computers (including manuals, second and third editions, and translations), and his work is known all over the world, having been translated into a variety of languages for educational use. He is currently a professor of computer science at the Vrije Universiteit in Amsterdam, the Netherlands.”
Interview with Andrew Tanenbaum, Creator of MINIX
About The Author
Follow me on Twitter @david_adams
2008-08-13 7:08 pm hobgoblin
heh, even if they realize it, such questions make for blogger traffic, hopefully with links back to the original article, and that means ad revenue.
still, i find it more likely that people will switch to hurd (if that ever gets to a linux-like level of support) than to minix.
MINIX 1 and 2 were intended as teaching tools; MINIX 3 adds the new goal of being usable as a serious system on resource-limited and embedded computers, and for applications requiring high reliability.
2008-08-13 6:55 pm Piranha
Yes. It’s a true microkernel, with only about 4,000 lines of code. Everything (the file server, the device drivers, the UNIX server, and so on) runs in user space rather than kernel space. This means that nearly everything can be upgraded while the system is live.
The kernel itself handles only basic hardware access and IPC (inter-process communication), which lets it control which ‘modules’ can access which pieces of hardware and which parts of other modules. The end result is that, in theory, a buffer overflow bringing down the whole system is a thing of the past.
With Linux, most BSDs, Windows, etc., drivers are compiled into and run in kernel space. That lets ANY driver access ANY piece of hardware, or poke at other drivers. The fundamental flaw of this route is that a single coding error (even one line) in a driver can bring down the whole system. Running drivers in user space lets the reincarnation server kill a misbehaving driver, restart it, and log what it did, all without taking the system down. Roughly 60-80% of Windows crashes are caused by bad driver code, and drivers are believed to carry about 3-7x more bugs than any other code in the OS.
Windows NT tried going this route when it was first released, but backed off after poor results. Vista is another attempt at moving in this direction (to each his own on how well that is going). DragonFly BSD and Darwin (the Mac OS X kernel) are examples of “hybrid” kernels, where some parts (like certain drivers) run in kernel space while other parts run in user space.
Hope that helps.
edit .. FUSE on Linux runs filesystems in user space.
Linus seems to dislike microkernels considerably, claiming they cause much unnecessary overhead. Yet 5,000 IPC calls per second on a 2.2 GHz Athlon can be expected to use only about 1% of the CPU (not very much at all). Everyone has their own opinion on the performance vs. security/stability trade-off. I’m not campaigning against monolithic kernels (I’m a big believer in OpenBSD and how that project is run and audited), but as more and more drivers and features are added to a kernel, more and more code comes along, and more and more bugs with it.
2008-08-13 7:01 pm hobgoblin
vista is just another incarnation of NT, iirc…
and i don’t know if drivers running in user space makes things more secure. that would depend on what access rights those user-space drivers have…
if a driver has the equivalent of root access, it’s only marginally better than linux or similar, in that these days it’s less interesting to crash something than to zombify it.
also, microsoft tried a microkernel approach, found performance to be too bad for some of the drivers, moved those back into kernel space, and has now moved some out again in vista.
i think the big issue with drivers is when they are closed source, so that when a driver does something bad, one can’t figure out what it did, only that it did something.
that’s why one has tainted-driver markers in the linux kernel log, for when something like the closed-up nvidia drivers does something bad.
as for linus’s opinion on micro-kernels, maybe it has changed over the years, maybe not. but i would say that using improvements in hardware to justify lower performance in software is lazy at best, the dark side at worst (see current vista hardware demands)…
2008-08-13 10:33 pm islander
Thank you for this post. I learned a lot; it was very informative.
2008-08-14 9:50 am segedunum
Linus seems to dislike microkernels considerably, claiming they cause much unnecessary overhead. Yet 5,000 IPC calls per second on a 2.2 GHz Athlon can be expected to use only about 1% of the CPU (not very much at all).
It’s funny. Even after all this time (sixteen years after the famous Tanenbaum-Torvalds exchange), people just don’t get that IPC in a kernel is colossally expensive. Until we get hardware with unlimited resources, where adding layers to a system costs nothing, microkernels are a total waste of time in all but a very small set of uses, and their value is debatable even there. The embedded case (devices with limited resources, such as the TV in Tanenbaum’s example) is even funnier: there, the cost of IPC matters even more, and it will keep mattering for a very, very long time.
IPC and distributed systems are also unbelievably complex to get right, as anyone who has built them knows, which is why I have always laughed at the simplicity arguments of microkernel proponents. You also have to do lots of ludicrously expensive things, such as copying data between address spaces. There are lots of implications involved.
Think about this: the reason people talk about microkernels as great is that they are assuming components will fail, assuming it before a single line of code is written, and then assuming everything will be fine because there is a microkernel underneath. A kernel is a unique piece of software (it’s where everything starts, obviously), and that kind of attitude there is a tad dangerous. It really is a self-fulfilling prophecy.
I have never seen a single piece of evidence showing that, across all the uses kernels have in the world, microkernels make an appreciable positive difference in reliability, especially when weighed against more immediate performance concerns.
That really strikes at the heart of the matter: there is no evidence that microkernels actually matter or make a difference in pretty much any of the uses out there. The part of the article where we see that Tanenbaum still has no practical sense whatsoever is where he talks about using a microkernel to keep a TV running. It’s one of those hypothetical, academic ideas of no practical use. People have been building such devices for a long time now, and what they generally do is cut a kernel (usually Linux these days) down to the bare minimum to run on hardware with a limited set of uses, which is how QNX is used anyway. If we saw a microkernel run on systems that do many different things, then we’d see how reliable it really is. But we don’t.
Certainly in a device like a TV, what is of more paramount importance is the general responsiveness of the system, and the failure of a TV set is almost always the fault of the software running on top (or the hardware), not the kernel; the software on top is the part with all the functionality. Even then, if one part of your kernel fails it is still a failure (it’s a kernel!) which usually takes down most of the system regardless, and this is what Andrew still fails to get conceptually. You just end up going round in large circles of complexity, looking for mythical holy grails of simplicity, reliability and security.
I actually feel pretty sad for him that he still thinks that way after all this time, and sad for most microkernel proponents, who have paid attention to academic theory rather than practice.
2008-08-14 3:49 pm JrezIN
microkernels are a total waste of time in all but a very small context of uses, and their use is even debatable there.
I don’t think that theory and debates via e-mail are the best way to test these claims… benchmarks and real-world usage are probably a lot more relevant.
Some things should be on the table too, like the fact that we’re living in a world where the type of information processed, the way processors (CPU and GPU/GPGPU) work, and the way this processing is done have all changed: it’s no longer just predictable data, but live data from the network and the Internet, where scheduling plays a really large role in the final performance.
As for benchmarks, BeOS/Haiku is a nice contender for the tests.
The BeOS/Haiku example is even nicer because its network stack was originally all in user space, but eventually got moved into kernel space.
The thing is, there is no single answer for everything, only better answers for each situation. If you have a highly predictable use scenario, a monolithic kernel could be the best answer, as its complexity will probably be smaller. If you have a highly complex system, with hundreds of different pieces of software connecting together, and complexity is your main issue, you may prefer to isolate each process and run as much as you can in user space. Depending on your use scenario, you may not see any performance loss, and you may find the whole system a lot easier to maintain and a lot easier to debug, extend, etc…
Again, BeOS/Haiku may show you the kind of “perceivable performance” gains and harmonious utilization of system resources by various processes…
2008-08-14 5:12 pm JeffS
QNX is a fabulously successful microkernel OS used in the embedded market, where uptimes have to be tremendously long (as in: never crash). That’s where microkernels come into use.
Also, AFAIK, Symbian is a microkernel.
But true, for general server and desktop usage, monolithic kernels are more practical.
But microkernels have their place.
2008-08-14 10:11 am segedunum
DragonflyBSD and Darwin(Mac OSX Kernel) are examples of “Hybrid” kernels where some parts (like certain drivers) are run in kernelspace, leaving other parts running in userspace.
They’re not hybrid kernels; they are monolithic kernels. Being a microkernel implies a specific structure, and running a few things in userspace is not a qualifier. None of the kernels above are microkernels, because their developers discovered that the performance and complexity suck; those are problems that Apple has simply bypassed in Mach.
Whenever you see the name ‘hybrid kernel’, it’s a kernel that generally started off with a lot of idealistic microkernel ideas and then discovered that, practically speaking, they suck in the real world. Either that, or it’s a monolithic kernel that wants to claim some of the marketing advantages of microkernels. That’s an easy definition.
Windows crashes, about 60-80% of the time or so, are caused by bad driver code. It’s also believed that drivers carry about 3-7x more bugs than ANY other piece of code in the OS.
If true, that should tell you something about the driver and kernel development model there. A microkernel isn’t going to help you, because focusing on individual drivers and worrying about what each one does drags down the system as a whole.
2008-08-13 9:13 pm ebasconp
The article confuses MINIX 3 with OpenBSD:
the idea behind MINIX 3 is “highly reliable”, not “secure”.
2008-08-13 9:19 pm hobgoblin
well in that case…
2008-08-13 9:29 pm Piranha
Agreed. However, auditing the code on a continual basis and fixing security bugs (buffer overflows, etc.) makes the kernel run more reliably too. Still, any update to the kernel requires a reboot.
The same can be said for MINIX 3: making it run more reliably by restricting modular access CAN also make the OS more secure, by disallowing modules from taking over the system and doing whatever they’d like. Less chance for exploitation.
Andrew says (on who would win between the Linux and MINIX mascots): Raccoons are quite aggressive. Penguins are not. There would be chicken for dinner.
And Linus once said: Some people have told me they don’t think a fat penguin really embodies the grace of Linux, which just tells me they have never seen an angry penguin charging at them in excess of 100mph. They’d be a lot more careful about what they say if they had.
So Andrew and Linus should make the experiment; I go for the penguin. 😀
2008-08-13 9:03 pm TommyD
Oh, PUH-lease. Beastie takes them both down with one horn tied behind his back!
2008-08-13 9:08 pm Piranha
And he takes them down .. down.. down into the earth. He even owns their souls after they’re dead =P
2008-08-14 12:05 am Xenu
Pffft… none of them stand a chance against the HURD’s graph thingy. Those vertices sure are badass.
2008-08-14 11:49 am orestes
I do believe Glenda the Plan 9 bunny could take all of them, perhaps in Pythonesque fashion
2008-08-14 2:19 pm hibridmatthias
Or like BunBun from Sluggy Freelance
2008-08-14 2:52 pm mmu_man
Actually, it appears to me that some gaming company stole the Plan 9 bunny design for some really well-known game… Didn’t they sue them over the copyright violation yet?
What do you think?
2008-08-13 9:58 pm ebasconp
Linux is a very pragmatic OS, first implemented by a very pragmatic guy and built on classical OS concepts in its early days.
I thought (before reading the interview) that it would be a long, in-depth interview (like the ones published in the “A-Z programming languages” series).
Interview with Andrew Tanenbaum, Creator of MINIX
Andrew Tanenbaum, Creator of MINIX
Andrew did not create MINIX.
He built MINIX 3.
MINIX has been around for 20 years.
I had a copy at Bell Labs twenty years ago.
2008-08-14 4:02 pm TommyD
Welcome to the Dubyas!
Here is a short quote from Wikipedia:
“Andrew S. Tanenbaum created MINIX at Vrije Universiteit in Amsterdam to exemplify the principles conveyed in his textbook, Operating Systems Design and Implementation (1987). ”
The rest of the article is at:
Take a look at some of the bright questions the interviewer asked:
“Do you expect a lot of Linux users to switch over to MINIX?”
Did the interviewer not realize that MINIX is for research, not for the desktop? If he had, this question would never have arisen. It seems the interviewer did not read what the interviewee wrote, especially the part where Tanenbaum says, “…I decided if I wanted a UNIX-like system to teach, I’d have to write one myself.”