I know, microkernels definitely are the nicer architectures, but it's horrifying to read people's attempts to find a practical advantage for the cleaner structure.
Reliability is the one that gets named the most. Something like "we can replace everything at runtime, so our computer never has to be rebooted, which is why it can run forever"... So what? Can MINIX deal with burning CPUs or RAM banks? Every sane person who has to deliver extraordinary uptimes will go for a distributed system, where nodes can go up and down dynamically without impacting the availability of the whole cluster. And once you have that ability, it doesn't matter at all whether you have to reboot that one node for an upgrade or not. In such environments failing nodes are not the exception, but the rule.
Hmm, maybe you think too much about servers in certain environments. I am definitely not into this topic, but what about, for example, embedded systems that need a rapid update and can't simply/cheaply be taken offline?
For example everything that is space-based, but also robots/drones or some bigger infrastructure (be it for telecommunication or measuring <something>) where you don't want to physically visit (or even restart) everything when you need to update. I don't really think MINIX targets the server market, and with Linux, BSD and to a certain degree Solaris and even Windows there are more than enough options available. However, they all develop in a certain general-purpose direction that may fit most situations, but certainly not all of them. It can be a huge relief to find something that "just fits" in a certain situation, and in some cases this may be MINIX.
In some situations lots of backup systems can be too costly.
In other words I am a huge fan of diversity. Edited 2011-11-21 12:18 UTC
You can do online kernel updates without requiring a microkernel architecture. However, it's obviously more difficult.
The only market MINIX targets is education. Its sole purpose in life is to make teaching OS internals, micro-kernel internals, and similar topics easier. It's small, easy to understand, and teachable. Nothing more.
There's virtually no software available for it.
In _theory_ MINIX should be able to run the same software as Linux or BSD if the user is willing to compile. In practice that's far from the truth. A lot of software, even trivial software, won't compile and run "as is" on MINIX. A while back I tried to port some small apps from Linux to MINIX. Eventually I got them to compile, but they wouldn't run properly. A lot of little things are different enough to make porting a hassle.
MINIX is an interesting little system, but it doesn't really offer anything over Linux or FreeBSD, except as a learning tool.
Microkernels are all going to be crap until the research specifies exactly how to do it. You can read a ton of literature out there on how to organize everything, and nobody agrees on any particular solution.
This is not a problem monolithic kernel designs have. _VERY_ cut and dried.
Why is this important?
It is important because what they don't tell you in a lot of these articles is that without an agreement on how to do microkernels, hardware manufacturers like Intel won't invest the billions in hardware to speed them up.
Which is why microkernels can't hold a candle to monolithic ones at the moment.
So in my opinion, if the research community really thought microkernels were better, a consensus would emerge on how to do them.
I do not see that in the research at the moment.
It is a great idea, but until the hardware manufacturers are sure they are not taking a huge risk in making orphaned hardware to support those ideas, the Microkernel will remain at a huge disadvantage to Monolithic kernels.
Microkernels offer the possibility of system stability should one of their components fail. The cost is performance. Despite Thom's claims that the performance degradation is 'slight', anyone who knows how micro-kernels operate understands that there's nothing 'slight' about this loss of performance.
Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data.
As to whether or not this stability is worth the performance loss, it all depends on how important this stability is and, of course, how unstable the more performant non-micro-kernel designs are.
Now obviously the market has shown that the non-micro-kernel-based operating systems are stable enough that people would rather have the performance. There are certainly cases where extreme stability is of utmost importance, and in those areas micro-kernels certainly have a lot to offer, but for general OS demands it's obviously not worth the loss in performance, as the demand for micro-kernels is very low.
Now, from a purely architectural standpoint I find micro-kernels more elegant; from a practical standpoint I prefer the performance, since my non-micro-kernel operating system isn't prone to crashing. And it seems the vast majority of the computing world agrees.
Who said that microkernel-based OSs cannot use shared memory?
As long as shared memory blocks are handled with an appropriate amount of care (it is untrusted data from the outside world, it should be used in a thread-safe way, etc...), and as long as shared blocks can survive the crash of a subset of their owners without being lost for other owners, I have no problem with it myself. Edited 2011-11-21 21:47 UTC
"Microkernels offer the possibility of system stability should one of their components fail. The cost is performance."
"Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data."
It depends on the IPC mechanisms used. Writing into pipes is slow, but who says we couldn't use shared memory to do IPC, allocated specifically for that purpose?
Also, in one of Neolander's OS articles, we talked about using a managed language to provide microkernel semantics within a shared kernel address space. The language would enforce isolation, but objects could be explicitly transferred between modules without incurring any of the traditional context switching costs.
So I do think microkernels have a chance to be competitive on performance, but I doubt they can make any inroads into the already crowded market since the established macrokernels are good enough.
Tbh, it survives it most of the time. I had Google Maps crash an Intel display driver yesterday... first time I have seen a graphics driver crash take down Win 7.
For some reason the laptop had a Windows Vista driver on a Win 7 machine... updating seemed to fix it. Edited 2011-11-21 17:50 UTC
Get better hardware. Edited 2011-11-22 13:17 UTC
Yeah... Windows XP works like a charm. Hardware it is then...
If you figure it's graphics card related, you could try a good gpu benchmarking utility and see if the heavy load or full range of function use finds a crash. You might also confirm if the GPU manufacturer has a solid Win7 driver. If win7 is that crashy using more of the GPU than WinXP's 2D GPU needs then it could very well be the manufacturer's driver.
Not saying I don't have my own win7 issues but they are not related to crashy hardware.
I've posted this numerous times before on this board, but here it goes again...
1. According to Dave Cutler (NT Chief Architect), quoted in numerous interviews, NT is not, was not, and was never intended to be a microkernel. If anything it was based loosely on VMS, which was certainly not a microkernel. That label got thrown about by marketing people for totally invalid reasons (the quotes are hard to find because they were in print journals, but I have seen at least 2 myself).
2. Tanenbaum himself has stated unequivocally that NT was never a microkernel: http://www.cs.vu.nl/~ast/brown/followup/
"Microsoft claimed that Windows NT 3.51 was a microkernel. It wasn't. It wasn't even close. Even they dropped the claim with NT 4.0."
3. By the commonly accepted definition of a microkernel, it simply doesn't come close and never did. The VM subsystem, the file systems, and numerous other subsystems are kernel mode, and always were kernel mode. They do not run in userland, never did run in userland, and were never intended to run in userland. It was in no significant way different from Linux or any other monolithic kernel from the point of view of memory separation.
4. In 3.51, the windowing and graphics subsystem (the window manager and GDI) DID run in userland, along with the display drivers. This was done to protect the kernel from driver faults. In practice this had 2 problems. First, it was slow. Second, it more often than not didn't work - if a fault put the subsystem in a state where it could not be restarted, the whole system had to be rebooted. They reversed this in 4.0 and moved it into the kernel (win32k.sys). Regardless, this does not make it a microkernel - they simply chose to run this one subsystem this way. Moving it back to kernel mode required pretty massive changes - if it were designed as a microkernel it would have been simple...
I post this because Microsoft marketing was so successful at calling NT something it was not that even 16 years later this misinformation still manages to propagate around. There is nothing wrong with NT - it is a well-designed monolithic kernel. But it is not a microkernel and never was. Edited 2011-11-21 19:14 UTC
To your 3rd point: I never saw a definition that says all "modules" attached to a micro-kernel have to run in user mode.
Exactly my point :-)
You can have memory protection even if you run in supervisor mode. Only supervisor code is "capable" of changing the MMU/MPU to enhance its rights.
But if the supervisor code is proven (big word, I know) to be correct (either mathematically or by design/review: see IEC 61508), then there is no problem.
But for sure, the more software there is in userland, the easier it is to protect the kernel and other parts of the system.
Just to make it clear:
supervisor mode != shared memory
I just designed a system where processes run in supervisor mode and have _no_ way to interact with other processes' memory or even the OS memory.
A customer of ours even increased separation such that some (supervisor) code does not even see any memory/code other than its own.
Really? QNX is crap?
Wow, Tanenbaum is pretty arrogant actually. Linux not a success because less than 5% of this web site's visitors use Linux?
Check out the server market, Andy. Edited 2011-11-21 12:32 UTC
And the smartphones =)
Add to it the Android devices that are nothing but derivatives of Linux.
If you lined up Andrew Tanenbaum, Linus Torvalds and all the arrogant guys in the world, Linus would win by a lot!!
You took the words right out of my mouth. He is sounding more and more like a sourpuss to this reader, sour that his baby, or the horse he bet on, didn't finish first. wah! I want my bsd! wah! I want my bsd. Time to put the baby to bed.
That's irrelevant to the end user.
But as we sadly know, the best doesn't always win. Ah, it rarely does.
The best isn't microkernels, but OSs running managed code (single address space OSs). All the benefits of microkernels, without any of the many downsides (intramodule complexity being #1).
Not that it is likely to "win" in the short run, but it's clearly the way of the future. Edited 2011-11-21 12:36 UTC
BS, the perfect system would only use verified code.
But as with microkernels, "it just doesn't work in the real world"(tm) for most use cases.
And Tanenbaum is just a bitter old man who is full of it. Dog-slow Minix on embedded systems... yeah, right.
Head over to LWN to read the other side of the story ( http://lwn.net/Articles/467852/#Comments )
Bullshit is not the future. Managed (verified, as you say) code is. Moron. Read what I said.
If you want to see a moron look into a mirror.
Maybe you will grok the difference but I doubt it.
Please respect the old guys; you are walking on the roads they built for us!
About microkernels: XNU (the Mac OS X microkernel base) shows they are completely viable; QNX is also a viable option.
XNU is not a microkernel, and a million posts by AFs will not change that. Sure, some parts of XNU were based on Mach (http://en.wikipedia.org/wiki/Mach_kernel) long ago, but combined with all the FreeBSD stuff, Apple ended up with something that is definitely not a microkernel. It is even more a monolith than it is a hybrid. The difference between kFreeBSD and XNU is not that great.
Sure, managed OSs which run in a single address space are great, until the day your interpreter of choice starts to exhibit a bug that gives full system access to a chunk of arbitrary code.
Interpreters are great for portability across multiple architectures, but I would be much more hesitant to consider their alleged security benefits as well-established.
Well, to be fair, these are all internet facing technologies which have been tasked with running arbitrary untrusted code. Non network facing tools, such as GCC, bison, libtool, etc could also have vulnerabilities (such as stack/heap overflows), but these are far less consequential because these tools aren't automatically run from the internet.
An apples to apples comparison would have web pages serve up C++ code to be compiled with G++ and then executed. In this light the security of JRE, JS, flash all come out far ahead of GCC because it has no defensive mechanisms at all.
I think highly optimized managed languages would do very well in an OS. Even if there are some exploits caused by running untrusted code, it's not like a responsible admin should go around injecting untrusted code into their kernel.
There are other reasons a managed kernel would be nice, I know we've talked about it before.
Or something. I'll just leave this here. From EuroBSDcon 2011.
It's Mach with a bunch of other stuff running in kernelspace, which means you get the downsides of the Mach architecture and the failure-proneness of a monolithic kernel. Add in the fact that Mac drivers are often an afterthought (in some cases even more so than Linux drivers!), and you have a recipe for kernel panics.
How do you mean "afterthought"? I/O Kit has been around for longer than Mac OS X itself. It provides common code for device drivers, provides power management, dynamic loading, ...
I mean that the OS X driver is often an afterthought compared to the Windows driver (or Linux driver, if we're lucky) for a given peripheral. Are you being deliberately obtuse?
"There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can't do this."
Doesn't Linux have on-line patching via Ksplice? So the question isn't about can't, it's about not in the main development plans.
I find it amusing that the poster would have such an unbalanced opinion: Linus claims that microkernels add complexity, so they wouldn't be the best here. Of course, as the author of a monolithic kernel he could be biased, but given the not-so-successful history of microkernels, he may also be right...
*hehe* the best technique doesn't always win the race.
Writing code for a microkernel with a clear message-based interface is, for most programmers, a very different paradigm from what they are used to.
So you get more guys working the old way, but that does not prove it is the better way.
BTW: Most embedded RTOSs could be seen as micro-kernels, and there are a lot around. Far more than Linux installations! Edited 2011-11-21 21:16 UTC
Something has always bothered me when people say that modern hardware has essentially taken away the performance hit; the idea is to get all you can out of the system. Sometimes I do agree that you need to take a hit to get a better system, but it just bothers me that "modern hardware" is being used as an excuse. Anyway, I've never used a microkernel system before - definitely something I should look into. One of the things I'm worried about is trying to interact with one, but I'll figure this out sometime soon.
The microkernel proponents want to argue that on modern systems, the overhead of message passing isn't very much (because CPUs are fast now), and moreover, people have gotten cleverer with the design of message passing interfaces so as to make the relative overhead smaller as well.
If so, why do we keep seeing poor performance numbers for microkernels?
One possibility is that the message-passing overhead is higher than they think.
But I think a bigger factor has to do with optimizations elsewhere in the kernel. Linux has so many people working on it, thinking up smarter ways to optimize every little thing, that it's kicking the crap out of less-supported OS's in areas having nothing to do with communication. Linux has really clever process and I/O schedulers.
FreeBSD also has a lot of developers, and as such, they too have optimized the heck out of things, and this is why it and Linux are in the same league.
But something like Minix is a toy project. It's something written by academics as a teaching tool, and as a result, it lacks many of the optimizations that would obfuscate what they're trying to teach. Thus, when you do comparative benchmarks, it sucks. But this has nothing to do with it being a microkernel, and they're not trying to make a fast OS. They're trying to make something whose design is simple and transparent.
Comparisons between microkernels and monolithic kernels are all much too abstract, and when you do benchmarks, it's not fair, because you're comparing too many things not related to this particular architectural choice. I ASSUME that, if all other things were equal, message passing adds enough overhead compared to function calls that we would notice it in benchmarks. But that is just a guess, and not a very well-informed guess.
Really, the argument here isn't microkernel vs. monolithic. That's a red herring. The debate stems from a more deeply-rooted philosophical difference between academics and industry engineers. Engineers are willing to do things that work, even if they're ugly (to a purist of some sort), that academics won't touch because it's not how they think people should be trained. That isn't to say that Linux has a lot of hacks (although I'm sure it has some), but there are cases where the KISS principle is violated for pragmatic reasons, while the academics want to start from an elegant theory and produce an implementation that maps from it 1-to-1.
I've worked as an engineer for a long time, and I'm also working on a Ph.D., and the motivating philosophies are night-and-day different.
Well, a problem with performance discussion is the multitude of performance metrics.
As an example, in my WIP OS project, I would not claim to beat mainstream OSs on pure number-crunching tasks. If that happened, it would be an accident. But I bet that I can beat any of those on reactivity, foreground vs background prioritization, glitch-free processing of I/O with real-time constraints...
Which is important ? It depends on the use cases. If you want to build a render farm or a supercomputer, then Linux or something even more throughput-optimized like BareMetalOS would be best. But if you want to install something on your desktop/tablet for personal use, what I want to prove is that there are different needs which may require different approaches.
Recently, Mozilla have been bragging about how they're back in the JS performance race. But they have quickly realized that the reason why people say Firefox is slow is its UI response times. And now they work on it.
If MINIX people are successful
in the mimicry
of the 48-core Intel SCC chip
then we will all be very happy:
a new kid on the block.
micro-kernels wedding multi-core
are a natural.
Quite amazing work, Andrew;
you will need a bigger team.
There is more than one reason why Linux is successful, but one of them is being practical. Microkernel design took much longer to crystallize so that it wouldn't have race conditions and would be efficient. Linux got implemented much faster and gained component separation later, and where it matters, that is on the driver side. By the way, Linux has supported replacing the kernel on the fly for many years; it's called kexec.
"Microkernel design took much longer to crystallize so that it wouldn't have race conditions and be efficient."
I think you are right that early on in a kernel's development, a macrokernel takes less work. As it gets more and more complex though, a microkernel should theoretically pull out ahead by being easier to manage.
Microkernels are a natural fit for contract-based programming, where independent developers can work on different components without stepping on each other. This is absolutely a shortcoming of Linux today, where each kernel release causes new things to break for out-of-tree developers, and modules have to be routinely recompiled in unison or they'll break.
"By the way, Linux has supported replacing the kernel on the fly for many years; it's called kexec."
I don't believe this is what was meant by not rebooting. What was meant was updating the kernel in place without losing state, such that applications won't notice the upgrade. So, for instance, all the running applications and all their in-kernel handles and sockets need to be saved and restored right back where they left off after being patched. Supposedly Ksplice does it.
It's exactly what I meant. It's feasible to build an efficient and robust micro-kernel, but contracts are hard and should be put in the right place so as not to impact performance too much.
Another aspect was that personal computers didn't have hot-swappable components (even today, except for SATA and USB). Once the bugs are ironed out of the drivers, there is little use for compartmentalization if you need to reboot your computer anyway. Moreover, if the CPU, RAM, bus or disk fail, there is little you can do.
In the end I believe that, micro or macro, all practical kernels (as in, not for research) tend to go in the same direction even if they didn't start at the same point. Darwin, for example, has a micro-kernel (Mach 3) base but got augmented with some BSD code. Linux adds compartmentalization where needed.
That said, I'm not an expert so what I'm saying might be bullshit
I read this article and he sounds jealous of Linux's success.
The BSD lawsuits had nothing to do with Linux's success. That's just an excuse for saying "BSD didn't succeed".
Linux succeeded on its own merits, and that doesn't have anything to do with any lawsuits.
Andrew Tanenbaum is just a jealous man.
I beg to differ.
One of the reasons Linux took off is because it was commercially friendly. With lawsuits hanging over the *BSDs, businesses became wary of them.
That, and companies could hire people to work on Linux, and there was little or no barrier to getting their work included, for the most part.
Linux development is so rapid because an army of people get paid to work on it. There's nothing like a wage to help accelerate the amount of work one can do on a project.
Too much fuss over stuff people don't care about. I mean, most users don't even know what a kernel is, and I doubt they are interested in computing and operating systems theory, be they Apple, Windows or non-tech Linux users.
At the end of the day, what really matters is that your system works properly and you have a nice software selection to fulfil your needs; everything else is "blah, blah, blah... my system is better than yours".
"Too much fuss over stuff people don't care about..."
Nobody said normal people care about it, but then many of us are here on osnews because *we* do.
I like the idea of an OS that blurs the distinction between kernel code and user code, which is kind of what microkernels do - in theory there's no need for userspace and kernel space development to be foreign to one another.
For example, I like "fuse" file systems under linux, and I like file system kernel modules under linux, but I find it rather unfortunate that the code needs to be implemented twice to do the exact same thing from user or kernel space.
I know what you mean, I love computer science too and that is why I am here, reading this page regularly.
But it was Tanenbaum in that article who talked about success stories, giving all sorts of other OS usage ratios just to justify his own point of view. As far as I know, successful software is software that is widely used; otherwise I would just call it a nice proof of concept but a practical failure.
Either way this is my own opinion and, of course, yours may differ...
Tanenbaum is quoted as saying:
"There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can't do this."
I say that anybody who says 24/7/365 is innumerate.
24 hours a day
7 days a week
365 weeks a WHAT?
Oh shit, I know... a 0.7 decade!
The single biggest issue with microkernels - slight performance hits - has pretty much been negated with today's hardware, but you get so much more in return: clean design, rock-solid stability, and incredible recovery.
But as we sadly know, the best doesn't always win. Ah, it rarely does.
Thom, I need evidence: give me an example of a pure microkernel OS (not hybrid; as per Tanenbaum's design) that was in use in production systems.
If you can't provide that, then Linus' stance on microkernels is true: "Good on paper, rarely usable in practice." We have evidence for this: just download Minix and install it anywhere you like, and tell us about the usability experience with it.
I will bookmark this date, and then wait for five years or more if the Minix will become the next big thing in smart devices.
"If you can't provide that, then Linus' stance on microkernels is true: 'Good on paper, rarely usable in practice.' We have evidence for this: just download Minix and install it anywhere you like, and tell us about the usability experience with it."
Linus may or may not be right, but it is a fallacy to suggest that just because microkernels have a small market share, then microkernels are unusable.
The biggest reason independent operating systems out of academia don't have much to offer in general usability is that they don't receive billions of dollars in investment every single year. It's somewhat of a catch-22, but it really doesn't mean the technological underpinnings are bad; some of them may be genius.
Now I can't deny that Tanenbaum appears to be extremely jealous, but I do think he is correct when he said that non-technical attributes have far more to do with a project's success than technical merit.
(For the record, I don't know anything about Minix in particular).
I consider myself an inexperienced desktop developer.
I am also an audio/multimedia user and use applications such as Ardour and JACK.
If you are a microkernel expert or any of you here reading this, I have a question.
Can a microkernel-designed OS such as Minix3 be good enough to scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with -rt patches?
Since I believe this is where the microkernel's future lies. Regardless of the efficiency, stability and security of a microkernel system, if it isn't useful to a desktop developer doing his work, to an Ardour/JACK user, or to any other end user, it will be nothing but a toy.
"Can a microkernel-designed OS such as Minix3 be good enough to scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with -rt patches?"
I am afraid it is out of my domain.
I know that PulseAudio recently underwent a shift away from using sound card interrupts to using higher-resolution sources like the APIC clock. This inevitably caused numerous problems on many systems, but nevertheless the goal was to get lower latencies by having the system write directly into the memory being read simultaneously a moment later by the sound card.
I don't see why any of this couldn't also be done with a micro-kernel driver. In fact, I think the audio mixing for PulseAudio under Linux today already occurs in a user-space process using "zero-copy" memory mapping. I've never looked at it in any detail, though.
QNX ? Symbian ?
Tanenbaum has a longer list on his website, although it takes some tricky moves to reach it: http://www.cs.vu.nl/~ast/reliable-os/ (section "Are Microkernels for Real?") Edited 2011-11-23 09:01 UTC
That's an excellent link.
I'm not entirely in agreement with everything he says, but he makes some strong points.
I disagree with him quite strongly that microkernel IPC should be limited to byte/block streams. I'd strongly prefer working with objects directly (i.e. being atomically transferred). Object serialization over pipes is inefficient and often difficult, particularly when the encapsulated structures need to be reassembled from reads of unknown length. I find it ironic that he views IPC pipes as the equivalent of OOP. Sure, they hide structure, but they also hide a real interface.
I know Tanenbaum was merely responding to Linus' remark about how microkernels make it extremely difficult to manipulate structures across kernel borders. In a proper OOP design, one shouldn't be manipulating structures directly. Arguably, Linux components wouldn't break as often if they didn't.
There are good arguments for either approach. But I do think microkernels have more merit as systems become more and more complex.
I also take this paper with a significant grain of salt, but for different reasons. While I agree with the need for standard interfaces, I do not agree with the pure OOP vision that data structures cannot constitute an interface and that their inner information should always be hidden away like atomic weapon plans. In some situations, a good data structure is better than a thousand accessors.
I feel the same with respect to shared memory. Okay, it's easy to shoot yourself in the foot if you use it improperly, but it is also by far the fastest way to pass large amounts of data to an IPC buddy. And if you make sure that only one process is manipulating the "shared" data at a given time, as an example by temporarily marking the shared data as read-only in the caller process, it is also perfectly safe. Edited 2011-11-23 17:59 UTC