You all know MINIX – a microkernel operating system project led by Andrew Tanenbaum. The French Linux site LinuxFr.org has an interview with Andrew Tanenbaum about MINIX's current state and future. There's some interesting stuff in there.
A little history lesson might be prudent here, since I'm not sure how many of you understand the significance of MINIX. In and of itself, MINIX is not something you'll encounter in production environments – let alone on popular electronics, where you'll mostly find Linux. Traditionally, MINIX has been an educational tool, in combination with the book 'Operating Systems: Design and Implementation'.
On top of that, Linus Torvalds used MINIX as an inspiration for his own Linux kernel – although he took a completely different approach to kernel design than Tanenbaum had done with MINIX. MINIX is a microkernel-based project, whereas Linux is monolithic (although Linux has adopted some microkernel-like functionality over the years). This difference in approach to kernel design led to one of the most famous internet discussions, back in 1992 – the Tanenbaum-Torvalds debate.
MINIX stalled for a while, but in 2005, the project, which is hosted at the university I recently got my Master’s Degree from, was fired up again with the release of MINIX 3, a new operating system with its roots in the two versions that came before. Since then, Tanenbaum has had three people working on the project at the VU University, ensuring steady progress.
The team is currently focussing on three things, according to Tanenbaum: NetBSD compatibility, embedded systems, and reliability. NetBSD compatibility means MINIX 3.2.0 will have a lot of headers, libraries, and userland programs from NetBSD. “We think NetBSD is a mature stable system. Linux is not nearly as well written and is changing all the time. NetBSD has something like 8000 packages. That is enough for us,” Tanenbaum explains.
The reliability aspect is one where MINIX really shines and hikes up its skirt to demonstrate the microkernel running inside – MINIX can recover from many problems that would cause other operating systems to crash and burn. A very welcome side effect of all this is that all parts of the system, except the tiny microkernel itself, can be updated without rebooting the system or affecting running processes.
“We also are working on live update. We can now replace – on-the-fly – all of the operating system except the tiny microkernel while the system is running and without a reboot and without affecting running processes. This is not in the release yet, but we have it working,” Tanenbaum states, “There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can’t do this.”
Another area the team is working on is multicore support, and here, too, they are taking a different approach than conventional operating systems. "Multicore is hard. We are doing essentially the same thing as Barrelfish: we regard our 48-core Intel SCC chip as 48 separate computers that happen to be physically close to one another, but they don't share memory," Tanenbaum explains, "We are splitting up the file system, the network stack and maybe other pieces to run on separate cores. The focus is really more on reliability but we are starting to work seriously to look at how multicore can improve reliability (e.g., by having other cores check on what the primary cores are doing, splitting work into smaller pieces to be divided over cores, etc.)."
The interview also focusses on Linux, which I guess is inevitable considering the relationship between Tanenbaum and Torvalds. However, I personally don't think there is any real animosity between the two – we're just looking at two scholars with different approaches to the same problems. Add to that the fact that both Tanenbaum and Torvalds appear to be very direct, outspoken, and to the point – there's no diplomatic sugar-coating going on with these two.
For instance, Torvalds was interviewed recently by LinuxFr.org as well, and in that interview, Torvalds gave his opinion on microkernels – an opinion anyone with an interest in this topic is probably aware of. "I'm still convinced that it's one of those ideas that sounds nice on paper, but ends up being a failure in practice, because in real life the real complexity is in the interactions, not in the individual modules," Torvalds told LinuxFr.org, "And microkernels strive to make the modules more independent, making the interactions more indirect and complicated. The separation essentially ends up also cutting a lot of obvious and direct communication channels."
Tanenbaum was asked to reply to this statement. “I don’t buy it. He is speculating about something he knows nothing about,” he replied, “Our modules are extremely well defined because they run in separate address spaces. If you want to change the memory manager, only one module is affected. Changing it in Linux is far more complicated because it is all spaghetti down there.”
There’s a lot more interesting stuff in there, so be sure to give it a read. In any case, especially those of you who have been hanging around OSNews for a long time will know that my personal preference definitely lies with clean microkernels. I honestly believe that the microkernel is, in almost every way, better than the monolithic kernel. The single biggest issue with microkernels – slight performance hits – has pretty much been negated with today’s hardware, but you get so much more in return: clean design, rock-solid stability, and incredible recovery.
But as we sadly know, the best doesn’t always win. Ah, it rarely does.
I know, microkernels definitely are the nicer architecture, but it's horrifying to read people's attempts to find a practical advantage for the cleaner structure.
Reliability is the one named most often. Something like "we can replace everything during runtime, so our computer never has to be rebooted, which is why it can run forever"… So what? Can MINIX deal with burning CPUs or RAM banks? Every sane person who has to deliver extraordinary uptimes will go for a distributed system, where nodes can go up and down dynamically without impacting the availability of the whole cluster. And when you have this ability, it doesn't matter at all whether you have to reboot that one node for an upgrade or not. In such environments failing nodes are not an exception, but the rule.
Hmm, maybe you think too much about servers in certain environments. I am definitely not into this topic, but what about, for example, embedded systems that need a rapid update and can't simply/cheaply be taken offline?
For example, everything that is space based, but also robots/drones or some bigger infrastructure (be it for telecommunication or measuring <something>) where you don't want to physically visit (or even restart) everything when you need to update. I don't really think Minix targets the server market, and with Linux, BSD, and to a certain degree Solaris and even Windows, there are more than enough options available. However, they all develop in a certain general-purpose direction that may be fitting in most situations, but certainly not in all of them. It can be a huge relief to find something that "just fits" in a certain situation, and in some cases this may be Minix.
In some situations lots of backup systems can be too costly.
In other words I am a huge fan of diversity.
You can do online kernel updates without requiring a microkernel arch. However, it's obviously more difficult.
http://www.ksplice.com/
The only market MINIX targets is education. Its sole purpose in life is teaching OS internals, micro-kernel internals, and similar topics. It's small, easy to understand, and teachable. Nothing more.
There’s virtually no software available for it.
Somebody obviously hasn’t looked at the MINIX web page[1]. Your contention was true for version 2 (and presumably version 1), but:
And as for where you say
Except that it’s POSIX compliant, so well-written Linux/BSD software should (theoretically) be just a compile away. (I’m guessing that Your Mileage May Vary, though.) In particular, the site lists Emacs, which is certainly 75% of what I need.
Don’t get me wrong, I won’t be switching to MINIX anytime soon. But the reasons you brought up aren’t valid ones for not switching.
[1] http://www.minix3.org
In _theory_ MINIX should be able to run the same software as Linux or BSD if the user is willing to compile. In practice that’s far from the truth. A lot of software, even trivial software, won’t compile and run “as is” on MINIX. A while back I tried to port some small apps from Linux to MINIX. Eventually I got them to compile, but they wouldn’t run properly. A lot of little things are different enough to make porting a hassle.
MINIX is an interesting little system, but it doesn’t really offer anything over Linux or FreeBSD, except as a learning tool.
Microkernels are going to be all crap until the research specifies exactly how to do them. You can read a ton of literature out there on how to organize everything, and nobody agrees on any particular solution.
This is not a problem monolithic kernel designs have. _VERY_ cut and dried.
Why is this important?
It is important because what they don't tell you in a lot of these articles is that without an agreement on how to do microkernels, hardware manufacturers like Intel won't invest the billions in hardware to speed them up.
Which is why microkernels can't hold a candle to monolithic ones at the moment.
So in my opinion, if the research community really thinks microkernels are better, a consensus on how to do them would emerge.
I do not see that in the research at the moment.
It is a great idea, but until the hardware manufacturers are sure they are not taking a huge risk in making orphaned hardware to support those ideas, the Microkernel will remain at a huge disadvantage to Monolithic kernels.
-Hack
As opposed to all the consensus on how to design a monolithic kernel…?
Microkernels offer the possibility of system stability should one of their components fail. The cost is performance. Despite Thom's claims that the performance degradation is 'slight', anyone who knows how micro-kernels operate understands that there's nothing 'slight' about this loss of performance.
Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data.
As to whether or not this stability is worth the performance loss, it all depends on how important this stability is, and of course how unstable the more performant non-micro-kernel designs are.
Now obviously the market has shown that the non-micro-kernel based operating systems are stable enough that people would rather have the performance. There are certainly cases where extreme stability is of utmost importance, and in those areas micro-kernels certainly have a lot to offer, but for general OS demands it's obviously not worth the loss in performance, as the demand for micro-kernels is very low.
Now, from a purely architectural standpoint I find micro-kernels more elegant; from a practical standpoint I prefer the performance, since my non-micro-kernel operating system isn't prone to crashing. And it seems the vast majority of the computing world agrees.
Who said that microkernel-based OSs cannot use shared memory?
As long as shared memory blocks are handled with an appropriate amount of care (it is untrusted data from the outside world, it should be used in a thread-safe way, etc…), and as long as shared blocks can survive the crash of a subset of their owners without being lost for other owners, I have no problem with it myself.
Valhalla,
"Microkernels offer the possibility of system stability should one of their components fail. The cost is performance."
“Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data.”
It depends on the IPC mechanisms used. Writing into pipes is slow, but who says we couldn’t use shared memory to do IPC, allocated specifically for that purpose?
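For what it's worth, here is a minimal sketch of that kind of shared-memory IPC on a POSIX system, using shm_open/mmap. The region name and the writer/reader roles are made up for illustration, and a real exchange would also need a semaphore or futex to synchronize the two sides – the point is only that "sending" data can be a plain memory write rather than a copy through the kernel.

```c
/* Sketch: two processes exchange data through a POSIX shared-memory
 * region instead of copying it through a pipe. The region name is
 * illustrative. Build with: gcc shm_demo.c -lrt -o shm_demo */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define REGION "/ipc_demo"   /* hypothetical region name */
#define SIZE   4096

int main(int argc, char **argv) {
    /* Both sides open the same named region; the OS maps the same
     * physical pages into both address spaces. */
    int fd = shm_open(REGION, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, SIZE);

    char *buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    if (argc > 1 && strcmp(argv[1], "writer") == 0) {
        /* "Sending" a message is just writing into the region:
         * no copy into and back out of the kernel. */
        snprintf(buf, SIZE, "hello from pid %d", (int) getpid());
    } else {
        printf("reader sees: %s\n", buf);
    }

    munmap(buf, SIZE);
    close(fd);
    return 0;
}
```

Run `./shm_demo writer` in one terminal and `./shm_demo` in another; the data crosses the process boundary without ever being copied through a pipe or socket.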
Also, in one of Neolander's OS articles, we talked about using a managed language to provide microkernel semantics within a shared kernel address space. The language would enforce isolation, but objects could be explicitly transferred between modules without incurring any of the traditional context switching costs.
So I do think microkernels have a chance to be competitive on performance, but I doubt they can make any inroads into the already crowded market since the established macrokernels are good enough.
That's all well until you consider that XNU (Mac OS X) and NTOSKRNL (among others, Windows 7) are both NOT monolithic kernels. They are not microkernels either; they are what some call hybrid kernels. But by your logic that would just mean that both monolithic and microkernels should get the shaft. They don't, because the hardware does not care. Micro, hybrid, mono – it's all the same to your garden variety AMD64 CPU, and you know why.
You have to be more precise with the NT kernel:
in the beginning it was not a real microkernel, but pretty close to one;
with NT4 they went more monolithic,
and since Vista they have been moving back to the micro side.
Today Win 7 even survives a crash of the graphics driver.
Tbh, it survives it most of the time. I had Google Maps crash an Intel display driver yesterday… first time I have seen a graphics driver crash take down Win 7.
For some reason the laptop had a Windows Vista driver on a Win 7 machine… updating seemed to fix it.
Lucky you, I get those crashes twice per day.
Get better hardware.
Yeah… WindowsXP works like a charm. Hardware it is then…
That is because Windows XP isn't using your card for acceleration the way the DWM display manager does.
Anyway, I am pretty convinced you made this up.
Yes… I made this up. I make everything up. /s
No hardware profile… just saying that you report some random crashes, while the majority of users, who don't have an agenda to make Windows look bad (unlike yourself), don't have problems.
It is funny how the hardcore Linux supporters always have really odd problems with Windows … but are willing to fuck about endlessly when using Linux and thinking editing random text files is okay … but reinstalling an .exe is unacceptable.
.. Just saying like ….
lucas_maximus,
“It is funny how the hardcore Linux supporters always have really odd problems with Windows … but are willing to f–k about endlessly when using Linux and thinking editing random text files is okay … but reinstalling an .exe is unacceptable.”
Not that I'd recommend Linux to everyone, but let's not pretend Windows doesn't have issues. All of the tech support I do on desktops is for Windows machines, and I know Windows users do have tons of problems. Someone asked me how to get "switch user" working on their new Windows 7 Home Basic computer – obviously MS disabled it on this version. However, they were having a very real problem: the logged-in person would step away and the computer would wake up to the "computer locked" screen. Other users had no way to get on. The result – they had to force a hard shutdown on the laptop and reboot.
I searched but I couldn’t find a way to fix it. My solution was to disable the password prompt entirely after hibernation, not acceptable from a security point of view, but better than the alternative of hard-powerdowns. Clearly it’s a very stupid limitation which causes real problems. If you have a better solution I’d love to know it.
With the right hacks and tools, I can solve most of the problems windows users come to me with, but those users still need me to do it, it’s certainly not without usability problems.
I used to be a windows user myself, I ended up switching because of the problems that I had to deal with over and over and over again. I have problems with linux too, but I was so fed up with windows that it drove me away.
Interestingly enough, I moved to Linux only after I got tired of f***king with Windows over too many things. And beyond tweaking the config files that I have to as part of my direct job responsibilities (setting up the development environment for me and my teammates), I have done less configuration work with Linux than I ever did with Windows.
And people like you get defensive, but that is not a surprise because you have a lot invested into the platform.
I'm only reporting what I see every day. Just like now: I'm looking at my screen, where a popup menu item remains on top of all windows (I clicked it 2 minutes ago). I don't care who is to blame, just like you don't care who is to blame for what's wrong in Linux.
The other point is that I'm just using the same logic that die-hard Windows fanboys use when talking about Linux problems. Let me remind you: it doesn't matter who is to blame for driver/software/system support in Linux, it's still Linux's problem.
You'll be happy to know that I'm still happy to apply absolutely the same criticisms to Ubuntu (my distro of choice).
If you figure it’s graphics card related, you could try a good gpu benchmarking utility and see if the heavy load or full range of function use finds a crash. You might also confirm if the GPU manufacturer has a solid Win7 driver. If win7 is that crashy using more of the GPU than WinXP’s 2D GPU needs then it could very well be the manufacturer’s driver.
Not saying I don’t have my own win7 issues but they are not related to crashy hardware.
The interesting part is that TeamFortress2 works really well. The crashes are quite often when I’m at the office, doing work stuff.
(Intel 3000 GPU)
I’ve posted this numerous times before on this board, but here it goes again…
1. According to Dave Cutler (NT Chief Architect), quoted in numerous interviews, NT is not, was not, and was never intended to be a microkernel. If anything it was based loosely on VMS, which was certainly not a microkernel. That label got thrown about by marketing people for totally invalid reasons (the quotes are hard to find because they were in print journals, but I have seen at least 2 myself).
2. Tanenbaum himself has stated unequivocally that NT was never a microkernel: http://www.cs.vu.nl/~ast/brown/followup/
“Microsoft claimed that Windows NT 3.51 was a microkernel. It wasn’t. It wasn’t even close. Even they dropped the claim with NT 4.0.”
3. By the commonly accepted definition of a microkernel, it simply doesn't come close and never did. The VM subsystem, the file systems, and numerous other subsystems are kernel mode, and always were kernel mode. They do not run in userland, never did run in userland, and were never intended to run in userland. It was in no significant way different from Linux or any other monolithic kernel from the point of view of memory separation.
4. In 3.51, the graphics subsystem (the window manager and GDI, hosted in the CSRSS process) DID run in userland, along with its drivers. This was done to protect the kernel from driver faults. In practice this had 2 problems. First, it was slow. Second, it more often than not didn't work – if a fault put the subsystem in a state where it could not be restarted, the whole system had to be rebooted. They reversed this in 4.0 and moved it back into the kernel (win32k.sys). Regardless, this does not make it a microkernel – they simply chose to run this one subsystem this way. Moving it back to kernel mode required pretty massive changes – if it were designed as a microkernel it would have been simple…
I post this because Microsoft marketing was so successful at calling NT something it was not that even 16 years later this misinformation still manages to propagate. There is nothing wrong with NT – it is a well designed monolithic kernel. But it is not a microkernel and never was.
To your 3rd point: I never saw a definition that says all "modules" attached to a micro-kernel have to run in user mode.
This is a common misconception. People think the definition of a microkernel hinges on “everything must run in userland”. This is not true. You can have a microkernel plus ALL of its modules running in kernelspace, and it’d still be a microkernel.
Exactly my point 🙂
Everything must be capable of running in userland… Not the same thing.
If you don’t have code isolation and memory protection (when possible) you do not have a microkernel. If code in your file system can step directly on kernel memory then what is the point?
Exactly.
Then I'll augment my post to clarify… Change this
"The VM subsystem, the file systems, and numerous other subsystems are kernel mode, and always were kernel mode. They do not run in userland, never did run in userland, and were never intended to run in userland."
to this:
"The VM subsystem, the file systems, and numerous other subsystems were inseparable from the rest of the kernel."
Does that work?
You can have memory protection even if you run in supervisor mode. Only supervisor code is “capable” of changing the MMU/MPU to enhance its rights.
But if the supervisor code is proven (big word I know) to be correct (either mathematically or by design/review: Check IEC61508), then there is no problem.
But for sure, the more software is in userland, the easier it is to protect the kernel and other parts of the system.
You are confusing the issue. In simple layman's terms, a microkernel is simply a kernel which implements only the minimum functionality needed to build an OS on top of it. Generally, this includes low-level memory management and IPC. All of the other pieces (device drivers, file systems, network stacks, etc.) are implemented so that they communicate with the microkernel and each other through IPC (or a functionally equivalent abstraction).
The point is that those other pieces do NOT interact with the microkernel directly – they do so through some form of IPC – separation of concerns and all that…
Technically you do not have to run these other parts in usermode – it may in fact be desirable to run them in kernel mode. But it should be possible to run them in usermode with very little or no change – that is kind of the entire point.
So no, all modules do not have to run in usermode. But if your kernel runs virtually _everything_ in a shared address space with function calls intertwining between all the various subsystems and no protections between them then you do not have anything close to a microkernel.
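To make the structure concrete, here is a schematic of such a server in C. The message layout and the ipc_receive/ipc_reply calls are invented stand-ins for whatever primitives a real microkernel (MINIX, QNX, …) provides – they are stubbed out below so the sketch compiles standalone. The point is structural: the server touches nothing outside its own address space, so running it in user mode or kernel mode becomes a deployment decision rather than a redesign.

```c
/* Schematic microkernel-style server: a "file system" that talks to
 * the rest of the system only through messages. ipc_receive() and
 * ipc_reply() are hypothetical; stubbed here so the sketch runs. */
#include <stdint.h>
#include <stdio.h>

enum { FS_OPEN = 1, FS_READ, FS_CLOSE, FS_SHUTDOWN };

typedef struct {
    int      sender;    /* endpoint of the calling process */
    int      type;      /* requested operation */
    uint64_t args[4];   /* operation-specific parameters */
} message;

/* Stub: a real kernel would block here until a message arrives. */
static int ipc_receive(message *m) {
    static int calls = 0;
    m->sender = 1;
    m->type = (calls++ == 0) ? FS_READ : FS_SHUTDOWN;
    return 0;
}

/* Stub: a real kernel would copy the reply back to the caller. */
static int ipc_reply(int endpoint, const message *m) {
    printf("reply to endpoint %d, type %d\n", endpoint, m->type);
    return 0;
}

int main(void) {
    message m;
    for (;;) {
        ipc_receive(&m);             /* the only way requests come in */
        if (m.type == FS_SHUTDOWN)
            break;
        switch (m.type) {
        case FS_OPEN:  /* look up the file, allocate state */ break;
        case FS_READ:  /* fill a reply buffer */              break;
        case FS_CLOSE: /* release per-file state */           break;
        }
        ipc_reply(m.sender, &m);     /* the only way results go out */
    }
    return 0;
}
```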
Just to make it clear:
supervisor mode != shared memory
I just designed a system where processes run in supervisor mode and have _no_ way to interact with other processes' memory or even the OS memory.
A customer of ours even increased separation so that some (supervisor) code does not even see any memory/code other than its own.
Really? QNX is crap?
Wow, Tanenbaum is pretty arrogant actually. Linux not a success because less than 5% of this website's visitors use Linux?
Wow.
Check out the server market, andy.
And the smartphones =)
Actually, Tanenbaum is a really nice guy and very approachable. His talks are very interesting.
Add to it the Android devices that are nothing but derivatives of Linux.
If you lined up Andrew Tanenbaum, Linus Torvalds, and all the arrogant guys in the world, Linus would win by a lot!!
You took the words right out of my mouth. He is sounding more and more like a sourpuss to this reader, sour that his baby, or the horse he bet on, didn't finish first. wah! I want my bsd! wah! I want my bsd. Time to put the baby to bed.
Dave
That’s irrelevant to the end user.
But as we sadly know, the best doesn’t always win. Ah, it rarely does.
The best isn't microkernels, but OSs running managed code (single address space OSs). All the benefits of microkernels, without any of the many downsides (intramodule complexity being #1).
Not that it is likely to “win” in the short run, but it’s clearly the way of the future.
BS, the perfect system would only use verified code.
But as with microkernels, “it just doesn’t work in the real world”(tm) for most use cases.
And Tanenbaum is just a bitter old man who is full of it. Dog-slow Minix on embedded systems… yeah right.
Head over to LWN to read the other side of the story ( http://lwn.net/Articles/467852/#Comments )
Bullshit is not the future. Managed (verified, as you say) code is. Moron. Read what I said.
If you want to see a moron look into a mirror.
http://en.wikipedia.org/wiki/Formal_verification
http://en.wikipedia.org/wiki/Managed_code
Maybe you will grok the difference but I doubt it.
Please respect the old guys; you are walking on the roads they built for us!
About microkernels: XNU (the Mac OS X microkernel base) shows they are completely viable; QNX is also a viable option.
XNU is not a microkernel, and a million posts by AFs will not change that. Sure, some parts of XNU were based on Mach (http://en.wikipedia.org/wiki/Mach_kernel) long ago, but combined with all the FreeBSD stuff, Apple ended up with something that is definitely not a microkernel. It is even more a monolith than it is a hybrid. The difference between kFreeBSD and XNU is not that great.
Sure, managed OSs which run in a single address space are great, until the day your interpreter of choice starts to exhibit a bug that gives full system access to a chunk of arbitrary code.
Consider the main sources of vulnerabilities in the desktop world, and you will find the JRE, Adobe Reader, Flash Player, and Internet Explorer near the top of the list. All of these programs are interpreters, dealing with a form of managed code (Java, PDF, SWF, HTML, CSS, and JavaScript in these examples).
Interpreters are great for portability across multiple architectures, but I would be much more hesitant to consider their alleged security benefits as well-established.
Neolander,
"Consider the main sources of vulnerabilities in the desktop world, and you will find the JRE, Adobe Reader, Flash Player, and Internet Explorer near the top of the list. All of these programs are interpreters, dealing with a form of managed code (Java, PDF, SWF, HTML, CSS, and JavaScript in these examples)."
Well, to be fair, these are all internet facing technologies which have been tasked with running arbitrary untrusted code. Non network facing tools, such as GCC, bison, libtool, etc could also have vulnerabilities (such as stack/heap overflows), but these are far less consequential because these tools aren’t automatically run from the internet.
An apples-to-apples comparison would have web pages serve up C++ code to be compiled with G++ and then executed. In this light, the security of the JRE, JS, and Flash all come out far ahead of GCC, which has no defensive mechanisms at all.
I think highly optimized managed languages would do very well in an OS. Even if there are some exploits caused by running untrusted code, it’s not like a responsible admin should go around injecting untrusted code into their kernel.
There are other reasons a managed kernel would be nice, I know we’ve talked about it before.
Or something. I’ll just leave this here. From EuroBSDcon 2011.
http://tar-jx.bz/EuroBSDcon/minix3.html
It’s Mach with a bunch of other stuff running in kernelspace, which means you get the downsides of the Mach architecture and the failure proneness of a monolithic kernel. Add in the fact that Mac drivers are often an afterthought (in some cases even moreso than Linux drivers!), and you have a recipe for kernel panics.
How do you mean “afterthought”? I/O Kit has been around for longer than Mac OS X itself. It provides common code for device drivers, provides power management, dynamic loading, …
I mean that the OS X driver is often an afterthought compared to the Windows driver (or Linux driver, if we’re lucky) for a given peripheral. Are you being deliberately obtuse?
“There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can’t do this.”
Doesn’t Linux have on-line patching via Ksplice? So the question isn’t about can’t, it’s about not in the main development plans.
I find it amusing that the poster would have such an unbalanced opinion: Linus claims that microkernels add complexity, so they wouldn't be the best here. Of course, as the author of a monolithic kernel he could be biased, but given the not-so-successful history of micro-kernels, he may also be right…
*hehe* the best technique doesn't always win the race.
Writing code for a microkernel with a clear message-based interface is, for most programmers, a very different paradigm from what they are used to.
So you get more guys working the old way, but that does not prove it is the better way.
BTW: Most embedded RTOSs could be seen as micro-kernels, and there are a lot of them around. Far more than Linux installations!
Examples?
For what now?
* "best technique" Classic example: Betamax <-> VHS
* embedded RTOS: a well-known µkernel is QNX (with Neutrino as the kernel); others like OSE (RTOS + middleware) or SCIOPTA (RTOS + middleware) can IMHO also be seen as OSs with a µkernel.
All those (can) have memory protection and use message passing as IPC.
Something has always bothered me when people say that modern hardware has essentially taken away the performance hit; the idea is to get all you can out of the system. Sometimes I do agree that you need to take a hit to get a better system, but it just bothers me that "modern hardware" is being used as an excuse. Anyway, I've never used a microkernel system before – definitely something I should look into. One of the things I'm worried about is trying to interact with one, but I'll figure this out sometime soon.
The microkernel proponents want to argue that on modern systems, the overhead of message passing isn’t very much (because CPUs are fast now), and moreover, people have gotten cleverer with the design of message passing interfaces so as to make the relative overhead smaller as well.
If so, why do we keep seeing poor performance numbers for microkernels?
One possibility is that the message-passing overhead is higher than they think.
But I think a bigger factor has to do with optimizations elsewhere in the kernel. Linux has so many people working on it, thinking up smarter ways to optimize every little thing, that it's kicking the crap out of less-supported OSs in areas having nothing to do with communication. Linux has really clever process and I/O schedulers.
FreeBSD also has a lot of developers, and as such, they too have optimized the heck out of things, and this is why it and Linux are in the same league.
But something like Minix is a toy project. It’s something written by academics as a teaching tool, and as a result, it lacks many of the optimizations that would obfuscate what they’re trying to teach. Thus, when you do comparative benchmarks, it sucks. But this has nothing to do with it being a microkernel, and they’re not trying to make a fast OS. They’re trying to make something whose design is simple and transparent.
Comparisons between microkernels and monolithic kernels are all much too abstract, and when you do benchmarks, it's not fair, because you're comparing too many things not related to this particular architectural choice. I ASSUME that, if all other things were equal, message passing adds enough overhead compared to function calls that we would notice it in benchmarks. But that is just a guess, and not a very well-informed guess.
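That guess is at least easy to probe crudely: time a direct function call against a pipe-based round trip between two processes, the pipe being a rough stand-in for message passing. The sketch below is exactly that, a back-of-the-envelope probe and not a rigorous benchmark – real microkernel IPC is far better optimized than a Unix pipe, and the numbers vary wildly by machine and kernel.

```c
/* Crude comparison: direct function call vs. an IPC round trip over
 * a pipe (a stand-in for message passing). Build: gcc -O2 bench.c */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define N 100000

static long long now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static int work(int x) { return x + 1; }      /* the "service" */

int main(void) {
    int to_child[2], to_parent[2];
    pipe(to_child);
    pipe(to_parent);

    if (fork() == 0) {                        /* child = "server" */
        close(to_child[1]);
        close(to_parent[0]);
        int x;
        while (read(to_child[0], &x, sizeof x) == sizeof x) {
            x = work(x);
            write(to_parent[1], &x, sizeof x);
        }
        _exit(0);
    }

    long long t0 = now_ns();                  /* direct calls */
    volatile int v = 0;
    for (int i = 0; i < N; i++) v = work(v);
    long long t1 = now_ns();

    int x = 0;                                /* message round trips */
    for (int i = 0; i < N; i++) {
        write(to_child[1], &x, sizeof x);
        read(to_parent[0], &x, sizeof x);
    }
    long long t2 = now_ns();

    close(to_child[1]);                       /* let the child exit */
    printf("call: %lld ns/op, pipe round trip: %lld ns/op\n",
           (t1 - t0) / N, (t2 - t1) / N);
    return 0;
}
```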
Really, the argument here isn't microkernel vs. monolithic. That's a red herring. The debate stems from a more deeply-rooted philosophical difference between academics and industry engineers. Engineers are willing to do things that work, even if they're ugly (to a purist of some sort), things that academics won't touch because it's not how they think people should be trained. That isn't to say that Linux has a lot of hacks (although I'm sure it has some), but there are cases where the KISS principle is violated for pragmatic reasons, while the academics want to start from an elegant theory and produce an implementation that maps from it 1-to-1.
I’ve worked as an engineer for a long time, and I’m also working on a Ph.D., and the motivating philosophies are night-and-day different.
Well, a problem with performance discussion is the multitude of performance metrics.
As an example, in my WIP OS project, I would not claim to beat mainstream OSs on pure number-crunching tasks. If that happened, it would be an accident. But I bet that I can beat any of them on reactivity, foreground vs background prioritization, and glitch-free processing of I/O with real-time constraints…
Which is important? It depends on the use cases. If you want to build a render farm or a supercomputer, then Linux or something even more throughput-optimized like BareMetalOS would be best. But if you want to install something on your desktop/tablet for personal use, what I want to prove is that there are different needs which may require different approaches.
Recently, Mozilla have been bragging about how they're back in the JS performance race. But they have quickly realized that the reason people say Firefox is slow is its UI response times. And now they are working on that.
If MINIX people are successful
in the mimicry
of the 48-core Intel SCC chip
then we will all be very happy
to have
a new kid on the block.
Micro-kernels wedding multi-core
is a natural fit.
Quite amazing work, Andrew;
you will need a bigger team.
🙂
There is more than one reason why Linux is successful, but one of them is being practical. Microkernel design took much longer to crystallize so that it wouldn't have race conditions and would be efficient. Linux got implemented much faster and gained component separation later, where it matters – that is, on the driver side. By the way, Linux has supported replacing the kernel on the fly for many years; it's called kexec.
zimbatm,
“Microkernel design took much longer to crystallize so that it wouldn’t have race conditions and be efficient.”
I think you are right that early on in a kernel’s development, a macrokernel takes less work. As it gets more and more complex though, a microkernel should theoretically pull out ahead by being easier to manage.
Microkernels are a natural fit for contract-based programming, where independent developers can work on different components without stepping on each other. This is absolutely a shortcoming of Linux today, where each kernel release causes new things to break for out-of-tree developers, and modules have to be routinely recompiled in unison or they'll break.
“By the way, Linux supports replacing the kernel on the fly since many years, and it’s called kexec.”
I don't believe this is what was meant by not rebooting. What was meant was updating the kernel in place without losing state, such that applications won't notice the upgrade. So, for instance, all the running applications and all their in-kernel handles and sockets need to be saved and restored right back where they left off after being patched. Supposedly Ksplice does it.
Well said.
It's exactly what I meant. It's feasible to build an efficient and robust micro-kernel, but contracts are hard and should be put in the right places so as not to impact performance too much.
Another aspect was that personal computers didn't have hot-swappable components (even today, except for SATA and USB). Once the bugs are ironed out of the drivers, there is little use for compartmentalization if you need to reboot your computer anyway. Moreover, if the CPU, RAM, bus or disk fail, there is little you can do.
In the end I believe that, micro or macro, all practical kernels (as in, not for research) tend to go in the same direction even if they didn't start at the same point. Darwin, for example, has a micro-kernel (Mach 3) base but got augmented with some BSD code. Linux adds compartmentalization where needed.
That said, I’m not an expert so what I’m saying might be bullshit
I read this article and he sounds jealous of Linux's success.
The BSD lawsuits had nothing to do with Linux success. That’s just an excuse for saying “BSD didn’t succeed”.
Linux succeeded on its own merits, and that doesn't have anything to do with any lawsuits.
Andrew Tanenbaum is just a jealous man.
I beg to differ.
One of the reasons Linux took off is because it was commercially friendly. With lawsuits hanging over the *BSDs, businesses became wary of them.
That, and companies could hire people to work on Linux, and there was little or no barrier to getting their work included, for the most part.
Linux development is so rapid because an army of people get paid to work on it. There’s nothing like a wage to help accelerate the amount of work one can do on a project.
Too much fuss over stuff people don't care about. I mean, most users don't even know what a kernel is, and I doubt they are interested in computing and operating systems theory, be they Apple, Windows, or non-tech Linux users.
At the end of the day, what really matters is that your system works properly and you have a nice software selection to fulfil your needs; everything else is "blah, blah, blah… my system is better than yours".
Yehoodi,
“Too much fuss for stuff people don’t care about…”
Nobody said normal people care about it, but then many of us are here on osnews because *we* do.
I like the idea of an OS that blurs the distinction between kernel code and user code, which is kind of what microkernels do – in theory there’s no need for userspace and kernel space development to be foreign to one another.
For example, I like "fuse" file systems under Linux, and I like file system kernel modules under Linux, but I find it rather unfortunate that the code needs to be implemented twice to do the exact same thing from user or kernel space.
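For anyone curious what the userspace half looks like, below is a minimal read-only FUSE file system exposing a single file, shaped after libfuse's classic hello example. This assumes libfuse 2.x on Linux; mount it on an empty directory and `cat` the file it exposes.

```c
/* Minimal read-only FUSE file system exposing one file, /hello.
 * Assumes libfuse 2.x.
 * Build: gcc fs.c `pkg-config --cflags --libs fuse` -o fs
 * Use:   ./fs /tmp/mnt && cat /tmp/mnt/hello */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <string.h>

static const char *msg = "hello from userspace\n";

static int fs_getattr(const char *path, struct stat *st) {
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
    } else if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = (off_t) strlen(msg);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int fs_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                      off_t off, struct fuse_file_info *fi) {
    (void) off; (void) fi;
    if (strcmp(path, "/") != 0) return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, "hello", NULL, 0);
    return 0;
}

static int fs_read(const char *path, char *buf, size_t size, off_t off,
                   struct fuse_file_info *fi) {
    (void) fi;
    if (strcmp(path, "/hello") != 0) return -ENOENT;
    size_t len = strlen(msg);
    if ((size_t) off >= len) return 0;
    if (off + size > len) size = len - (size_t) off;
    memcpy(buf, msg + off, size);
    return (int) size;
}

static struct fuse_operations ops = {
    .getattr = fs_getattr,
    .readdir = fs_readdir,
    .read    = fs_read,
};

int main(int argc, char *argv[]) {
    /* fuse_main parses the mountpoint from argv and runs the loop;
     * each kernel VFS request arrives here as a callback. */
    return fuse_main(argc, argv, &ops, NULL);
}
```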
I know what you mean, I love computer science too and that is why I am here, reading this page regularly.
But it was Tanenbaum in that article who talked about success stories, giving all sorts of OS usage ratios just to justify his own point of view. As far as I know, successful software is software that is widely used; otherwise I would just call it a nice proof of concept but a practical failure.
Either way this is my own opinion and, of course, yours may differ…
Tanenbaum is quoted as saying:
“There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can’t do this.”
I say that anybody who says 24/7/365 is innumerate.
24 hours a day
7 days a week
365 weeks a WHAT?
Oh shit, I know… a 0.7 decade!
The single biggest issue with microkernels – slight performance hits – has pretty much been negated with today’s hardware, but you get so much more in return: clean design, rock-solid stability, and incredible recovery.
But as we sadly know, the best doesn’t always win. Ah, it rarely does.
Thom, I need evidence: give me an example of a pure microkernel OS (not hybrid: as per Tanenbaum's design) that was in use in production systems.
If you can't provide that, then Linus' stance on microkernels is true: "Good on paper, rarely usable in practice." We have evidence for this: just download Minix, install it anywhere you like, and tell us about the usability experience with it.
I will bookmark this date, and then wait five years or more to see if Minix becomes the next big thing in smart devices.
allanregistos,
"If you can't provide that, then Linus' stance on microkernels is true: 'Good on paper, rarely usable in practice.' We have evidence for this: just download Minix, install it anywhere you like, and tell us about the usability experience with it."
Linus may or may not be right, but it is a fallacy to suggest that just because microkernels have a small market share, they are unusable.
The biggest reason independent operating systems out of academia don’t have much to offer in general usability is because they don’t receive billions of dollars in investment every single year. It’s somewhat of a catch 22, but it really doesn’t mean the technological underpinnings are bad, some of them may be genius.
Now I can’t deny that Tanenbaum appears to be extremely jealous, but I do think he is correct when he said that non-technical attributes have far more to do with a project’s success than technical merit.
(For the record, I don’t know anything about Minix in particular).
This might be true with respect to Windows vs. Unix on servers; the success of any OS deployed in production might include the factor of non-technical attributes and ignore the importance of technical superiority. But for kernel design, I think many factors come into play. Since I am not an expert in any of this, this is just my opinion.
Yes, Linus could be wrong. But philosophically, I find Linus’ stance to be more acceptable than the professor’s.
Visiting the minix3 site, I found a confusing statement:
Meanwhile, there is lots of work for developers at http://wiki.minix3.org/en/MinixWishlist,
which is more important than porting the kernel to different architectures. I might be missing something here.
Alfman:
I consider myself an inexperienced desktop developer.
I am also an audio/multimedia user and use applications such as Ardour and JACK.
If you are a microkernel expert or any of you here reading this, I have a question.
Can a microkernel-based OS such as Minix3 be good enough to scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with the -rt patches?
Since I believe this is where the microkernel's future lies. Regardless of the efficiency, stability, and security of a microkernel system, if it isn't useful to a desktop developer doing his work, to an Ardour/JACK user, or to any other end user, it will be nothing but a toy.
allanregistos,
"Can a microkernel-based OS such as Minix3 be good enough to scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with the -rt patches?"
I am afraid it is out of my domain.
I know that PulseAudio recently underwent a shift away from using sound card interrupts to using higher-resolution sources like the APIC clock. This inevitably caused numerous problems on many systems, but nevertheless the goal was to get lower latencies by having the system write directly into the memory being read, a moment later, by the sound card.
I don't see why any of this couldn't also be done with a micro-kernel driver. In fact, I think the audio mixing for PulseAudio under Linux today already occurs in a user-space process using "zero-copy" memory mapping. I've never looked at it in any detail though.
That is enough for me, Alfman. I believe that the current monolithic structure of OS kernels will be modified in the future to scale to new innovations in hardware architectures. Thank you for the insights into PulseAudio; I am not able to respond to you regarding the technical side of it.
As an end user and an OS hobbyist, I think I will need some guidance in the future about what OS is best for my desktop needs. I think today's operating systems (except for the Mac) were designed with servers in mind first and the desktop as an afterthought. The fact that the Linux kernel has -rt patches proves that.
QNX? Symbian?
Tanenbaum has a longer list on his website, although it takes some tricky moves to reach it: http://www.cs.vu.nl/~ast/reliable-os/ (section "Are Microkernels for Real?")
Neolander,
That’s an excellent link.
I’m not entirely in agreement with everything he says, but he makes some strong points.
I disagree with him quite strongly that microkernel IPC should be limited to byte/block streams. I'd strongly prefer working with objects directly (i.e., being atomically transferred). Object serialization over pipes is inefficient and often difficult, particularly when the encapsulated structures need to be reassembled from reads of unknown length. I find it ironic that he views IPC pipes as the equivalent of OOP. Sure, they hide structure, but they also hide a real interface.
I know Tanenbaum was merely responding to Linus' remark about how microkernels make it extremely difficult to manipulate structures across kernel borders. In a proper OOP design, one shouldn't be manipulating structures directly. Arguably, Linux components wouldn't break as often if they didn't.
There are good arguments for either approach. But I do think microkernels have more merit as systems become more and more complex.
I also take this paper with a significant grain of salt, but for different reasons. While I agree with the need for standard interfaces, I do not agree with the pure OOP vision that data structures cannot constitute an interface and that their inner information should always be hidden away like atomic weapon plans. In some situations, a good data structure is better than a thousand accessors.
I feel the same with respect to shared memory. Okay, it's easy to shoot yourself in the foot if you use it improperly, but it is also by far the fastest way to pass large amounts of data to an IPC buddy. And if you make sure that only one process is manipulating the "shared" data at a given time, for example by temporarily marking the shared data as read-only in the caller process, it is also perfectly safe.
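On a POSIX system, one way to implement that temporary read-only handoff is with mprotect. The sketch below compresses both sides into a single process just to show the mechanism (it assumes Linux for MAP_ANONYMOUS); in real use the mapping would be shared with the IPC partner, and the protection flips would bracket the message exchange.

```c
/* Sketch of revocable write access on a shared buffer: the owner
 * drops its mapping to read-only while the other side works on the
 * data, then takes write access back. Single-process demo of the
 * mechanism; a real system would pair this with IPC. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t size = 4096;
    char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "request data");    /* fill the buffer */

    /* Hand off: this process can now only read these pages; a stray
     * write here would fault instead of corrupting in-flight data. */
    mprotect(buf, size, PROT_READ);
    printf("handed off: %s\n", buf);

    /* Reply received: take write access back. */
    mprotect(buf, size, PROT_READ | PROT_WRITE);
    buf[0] = 'R';
    printf("reclaimed: %s\n", buf);

    munmap(buf, size);
    return 0;
}
```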
I concede that I might be wrong here.
I am interested in trying Minix as an OS hobbyist (I am not an OS developer or any kind of low-level language user).