A little history lesson might be prudent here, since I'm not sure how many of you understand the significance of MINIX. In and of itself, MINIX is not something you'll encounter in production environments - let alone on popular electronics, where you'll mostly find Linux. Traditionally, MINIX has been an educational tool, used in combination with the book 'Operating Systems: Design and Implementation'.
On top of that, Linus Torvalds used MINIX as an inspiration for his own Linux kernel - although he took a completely different approach to kernel design than Tanenbaum had done with MINIX. MINIX is a microkernel-based project, whereas Linux is monolithic (although Linux has adopted some microkernel-like functionality over the years). This difference in approach to kernel design led to one of the most famous internet discussions, back in 1992 - the Tanenbaum-Torvalds debate.
MINIX stalled for a while, but in 2005 the project, which is hosted at the university I recently got my Master's degree from, was fired up again with the release of MINIX 3, a new operating system with its roots in the two versions that came before it. Since then, Tanenbaum has had three people working on the project at the VU University, ensuring steady progress.
The team is currently focussing on three things, according to Tanenbaum: NetBSD compatibility, embedded systems, and reliability. NetBSD compatibility means MINIX 3.2.0 will have a lot of headers, libraries, and userland programs from NetBSD. "We think NetBSD is a mature stable system. Linux is not nearly as well written and is changing all the time. NetBSD has something like 8000 packages. That is enough for us," Tanenbaum explains.
The reliability aspect is one where MINIX really shines and hikes up its skirt to demonstrate the microkernel running inside - MINIX can recover from many problems that would cause other operating systems to crash and burn. A very welcome side effect of all this is that all parts of the system, except the tiny microkernel itself, can be updated without rebooting the system or affecting running processes.
"We also are working on live update. We can now replace - on-the-fly - all of the operating system except the tiny microkernel while the system is running and without a reboot and without affecting running processes. This is not in the release yet, but we have it working," Tanenbaum states, "There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can't do this."
Another area the team is working on is multicore support, and here, too, they are taking a different approach than conventional operating systems. "Multicore is hard. We are doing essentially the same thing as Barrelfish: we regard our 48-core Intel SCC chip as 48 separate computers that happen to be physically close to one another, but they don't share memory," Tanenbaum explains, "We are splitting up the file system, the network stack and maybe other pieces to run on separate cores. The focus is really more on reliability but we are starting to work seriously to look at how multicore can improve reliability (e.g., by having other cores check on what the primary cores are doing, splitting work into smaller pieces to be divided over cores, etc.)."
The interview also focusses on Linux, which I guess is inevitable considering the relationship between Tanenbaum and Torvalds. However, I personally don't think there is any real animosity between the two - we're just looking at two scholars with different approaches to the same problems. Add to that the fact that both Tanenbaum and Torvalds appear to be very direct, outspoken, and to the point - there's no diplomatic sugar-coating going on with these two.
For instance, Torvalds was interviewed recently by LinuxFr.org as well, and in that interview, Torvalds gave his opinion on microkernels - an opinion anyone with an interest in this topic is probably aware of. "I'm still convinced that it's one of those ideas that sounds nice on paper, but ends up being a failure in practice, because in real life the real complexity is in the interactions, not in the individual modules," Torvalds told LinuxFr.org, "And microkernels strive to make the modules more independent, making the interactions more indirect and complicated. The separation essentially ends up also cutting a lot of obvious and direct communication channels."
Tanenbaum was asked to reply to this statement. "I don't buy it. He is speculating about something he knows nothing about," he replied, "Our modules are extremely well defined because they run in separate address spaces. If you want to change the memory manager, only one module is affected. Changing it in Linux is far more complicated because it is all spaghetti down there."
There's a lot more interesting stuff in there, so be sure to give it a read. In any case, those of you who have been hanging around OSNews for a long time will know that my personal preference definitely lies with clean microkernels. I honestly believe that the microkernel is, in almost every way, better than the monolithic kernel. The single biggest issue with microkernels - a slight performance hit - has pretty much been negated by today's hardware, and you get so much more in return: clean design, rock-solid stability, and incredible recovery.
But as we sadly know, the best doesn't always win. Ah, it rarely does.