The micro vs. monolithic kernel debate is now very much alive. Not too long ago, I wrote an article on the merits of microkernels, while a week later we featured a retort. Now, the greatest proponent of the microkernel steps in: yes, Andy Tanenbaum writes: “Microkernels – long discarded as unacceptable because of their lower performance compared with monolithic kernels – might be making a comeback in operating systems due to their potentially higher reliability, which many researchers now regard as more important than performance.” Now, we only need Torvalds to chime in, and it’s 1992 all over again.
In other words, he’s saying, “I was wrong 16 years ago, but THIS TIME I’m right!”
When Minix even begins to have a sliver of the *NIX market, then I’ll believe him.
I don’t believe Minix was ever intended to capture any real-world market share. For what Minix is (or was), basically a tool to teach students about OS design, it does a formidable job.
Tanenbaum’s arguments are fundamentally right, but his vision of the future rested on flawed assumptions such as:
-GNU Hurd would be available soon. 15 years later it still isn’t.
-People would ditch x86 for superior architectures. 15 years later, it’s clear that x86 won, at least in the personal computing market. But you can hardly blame him for failing to foresee the rise of Microsoft and the formidable pull of legacy software.
His version of the future was a bit idealized.
His version of the future was a bit idealized.
Yes, but let’s be real: EVERYONE thought that way back then. Read the AST/Linus flamewar, and you’ll notice that Linus too thought the i386 architecture was going to be ditched soon enough.
Amazingly enough, AST and Linus do not qualify as EVERYONE.
By 92, the handwriting was on the wall, and only people who needed to believe otherwise couldn’t see it. Workstation makers, such as Intergraph, had gotten out of the processor business and moved to x86 already, and other companies, such as HP had started making plans for the eventual demise of their home-grown ISAs.
> By 92, the handwriting was on the wall, and only people
> who needed to believe otherwise couldn’t see it.
> Workstation makers, such as Intergraph, had gotten out
> of the processor business and moved to x86 already, and
> other companies, such as HP had started making plans
> for the eventual demise of their home-grown ISAs.
Hardly. Hindsight may make it seem that way, but there was still much development ahead in 1992.
1992 saw the release of the Alpha processor and preceded the first PowerPC chips. MIPS had just gone 64-bit with the R4000, and PA-RISC and SPARC were yet to enter the 64-bit world.
The fastest x86 was the i486 DX2-66, which was humiliated by all the RISC CPUs of the time (including SPARC). It was by no means clear that Windows would be as dominant as it turned out to be, with OS/2 still on the scene and MS hanging on using illegal means, so the future of the OS market was still potentially up for grabs.
Merced was but a glimmer in Intel’s and HP’s eyes. And Itanium is hardly an x86 processor anyway.
And where are Intergraph now? Basically just another Wintel OEM.
Only one company actually makes x86 competitive: AMD. Without competition, x86 would suck even more than it does now.
OK, I overestimated the ability of people to read the handwriting on the wall.
In 92, it was clear to the people I worked with. I had just helped Intergraph finish the Clipper C4 processor and see the light, and had gone to HP and was involved in what would become Merced. To us it was clear that Intel had won.
And it wasn’t about immediate performance, it was about market economics. We all knew how much it cost to turn out a new processor, or to support our Unix variants, and we all knew how many copies of that processor or Unix we were going to sell. The math made it clear that eventually Intel would catch up and bypass all the in-house processors, and the market made it clear that no one was doing very well selling their ISAs to other companies.
I also knew how management inside Fortune 50 companies was responding to the FUD Microsoft was spreading about NT, and how badly fragmented the Unix-variant marketplace had become.
I believe there are still people out there who might tout OS/2 as being technically superior to ALL Windows versions up to Windows NT 4.0, and possibly even past that point in time…
However, the market share of OS/2 never was the argument for its strengths and it never will be.
Instead, its merits were the object-oriented design of the Workplace Shell (WPS) and the stability and performance of the kernel’s multitasking and multithreading capabilities.
The same thing might very well be the case with Minix vs Unix – Minix might do things RIGHT in certain areas where Unix fares worse; however, market share has never been the compelling argument.
…In such a case, Windows would be considered better than Unix at any given time… Which I firmly believe it is not!
That’s not necessarily a meaningful measure considering how rarely things win out in the marketplace on the basis of technical superiority.
Do you actually understand the terms you use? Software being free of charge doesn’t mean it’s open source: take Winamp, for example. And open source doesn’t have to be free of charge: take Linspire, for example.
True, but there is always one thing holding back the technically superior option; in the case of Betamax, had Sony allowed any company to create Betamax video cassette players without paying royalties, you would have seen Betamax overtake VHS.
Sony made the same mistake again with MiniDisc; they kept the format to themselves, they never sold it as recordable media for stereos, and they never released music on MiniDisc. Like Betamax, it is slowly dying out in favour of something that is ‘good enough’.
The article is pretty poor, coming from such respected researchers. It starts off with a lame rhetorical question: Why are TV sets, DVD recorders, MP3 players, cell phones, and other software-laden electronic devices reliable and secure but computers are not? The unstated implication seems to be that it’s because PCs run complex monolithic kernels, while the specialised devices don’t.
Well, I have seen plenty of MP3 players and cell phones crash. Not as often as PCs, and no TV sets or DVD players yet, but of course all these specialised devices do a lot less than PCs. In the next line the authors mention this very reason, but they don’t address it at all. The thing is, if you used your PC as only a TV, or only a DVD or MP3 player, and NOTHING ELSE, it would be as reliable as a stand-alone TV, DVD player or MP3 player. This has very little to do with kernel design.
TVs, DVD players and MP3 players are secure because they are typically not connected to any type of network. Just leave your PC permanently unconnected to any network and never let it open any files except media files; then we can honestly compare security. Cell phones would be the only real comparison, but the networks have a LOT more incentive to tighten network security than PC OS writers.
Finally, let’s see a general-purpose microkernel OS that has as many users and applications, as much hardware support and real-world testing as Linux, OS X (XNU is NOT Mach!) or even Solaris, and see if it is indeed significantly more reliable/secure. Otherwise microkernels have only been proven better in the labs and on paper.
and real-world testing as Linux, OS X (XNU is NOT Mach!) or even Solaris
QNX is deployed in settings that either of the above-mentioned can only dream of. I’m sorry, but I wouldn’t trust medical equipment if it ran on Linux. Luckily, most of it runs QNX. Just like the space shuttle’s arm.
QNX is deployed in settings that either of the above-mentioned can only dream of. I’m sorry, but I wouldn’t trust medical equipment if it ran on Linux. Luckily, most of it runs QNX. Just like the space shuttle’s arm.
Then you design it for that purpose, and in the name of redundancy QNX is the correct choice. Something else could be reliable 99.9999% of the time, but a failure only has to happen once.
However, Torvalds and others were not designing a system for general use that needed to power medical equipment or a space shuttle arm. Practical considerations had to take priority or absolutely nothing would ever have got done (look at Hurd). Even on modern hardware the performance penalties are pretty severe and are not justified by any perceived benefits – certainly not by end users on desktops.
The 10% performance loss that Tanenbaum talks about with a microkernel is an absolute eternity. Conversely, on any modern hardware you’ll simply always be able to get more done with a monolithic kernel, and the pendulum will swing back in that direction – as it has always done. It will only be when we get a real Turing machine and something like a quantum computer with infinite resources that we’ll be able to even think about having our cake and eating it.
In the wider world microkernels lost the argument many, many years ago. Although microkernels have a place, and even people like Torvalds will freely admit that, let’s just learn from the past and let the matter rest in peace rather than re-inflating people’s egos.
QNX is deployed in settings that either of the above-mentioned can only dream of. I’m sorry, but I wouldn’t trust medical equipment if it ran on Linux. Luckily, most of it runs QNX. Just like the space shuttle’s arm.
Why not? I used to write POS systems that ran for years without crashing on DOS. Does that mean DOS is much more reliable/secure than Windows/Linux/OS X/Solaris/whatever? Should I switch back to DOS on the desktop? Of course not; it only means that it’s much easier to stay stable if you build single-purpose devices, or run a specific small set of applications. Which is exactly the point that the authors of the original article ignore in their ridiculous TV and DVD example.
I’d have no problems at all trusting medical equipment or space shuttle arms running on Linux or DOS for that matter. I wouldn’t even be surprised if some already do. But Linux is not a RTOS so it’s probably not the best fit here. BTW, would you trust VxWorks in your medical equipment? Plan 9? OS/2? Kadak AMX?
Anyway, nobody uses QNX as a general purpose desktop/workstation OS. Do you suppose a hypothetical desktop OS, say OS X style, built on top of QNX would be much more reliable/secure than it is now?
You missed the point. He’s not saying that the devices are secure because they run microkernels, he’s saying that users want the reliability of such devices from their PCs.
However, you inadvertently make an excellent point. Doing only one thing and doing it well is better than doing a lot of things at once. It seems to me that’s a prime benefit of microkernels. Each program (server) only does one thing, instead of providing all the services needed by the system.
The most secure microcomputer OS is OpenBSD:
— Is OpenBSD a microkernel? Nope.
The most secure mini-computer OSs are VMS, OS/400:
— Is VMS or OS/400 based on a microkernel? Nope.
The most secure mainframe OS is zOS/MVS:
— Is zOS/MVS based on a microkernel? Nope.
The LEAST secure OS is MS Windows:
— Is MS Windows based on a microkernel? YES
The LEAST secure OS is MS Windows:
— Is MS Windows based on a microkernel? YES
What are you doing here if you think Windows’ security problems stem from its kernel?? Are you mad?? The NT kernel is very highly regarded. It’s the Windows USERLAND that causes the security problems, my friend, NOT its kernel.
I’m baffled someone can post such nonsense.
Not to mention the fact the NT kernel is *not* a microkernel. It’s a hybrid.
Not to mention the fact the NT kernel is *not* a microkernel. It’s a hybrid.
Well, that’s debatable. Do we call Linux a hybrid because it has modules? The NT kernel is still monolithic because it doesn’t exhibit enough of the characteristics of a true microkernel.
The NT kernel is still monolithic because it doesn’t exhibit enough of the characteristics of a true microkernel.
It seems you are misunderstanding the definition of a microkernel, because you do not seem to make the distinction between “in kernelspace” and “in the kernel”.
As an example: QNX is built out of the Neutrino muK, with various servers, drivers, and filesystems running in userspace. Now, if all those servers etc. were moved into kernelspace, QNX would still be a muK. The same applies to the NT kernel: it is a hybrid because it has certain parts literally in the kernel, but other parts simply in kernelspace. In other words: in kernelspace != in the kernel, and therefore a muK can still be a muK (or a hybrid still a hybrid), even when it has its servers running in kernelspace. (A small sketch of such a userspace server follows below.)
I know too little of the characteristics of kernel modules in Linux to give a sensible comment on them. I’ll dive into that one of these days; it bugs me I know so little of them.
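To make the point about userspace servers a bit more concrete, here is a minimal sketch of a QNX Neutrino-style server answering requests over the kernel’s message-passing primitives. This is my own illustration, not from the thread or from QNX documentation: the Request/Response structs and the trivial “service” are invented, while ChannelCreate/MsgReceive/MsgReply are the standard Neutrino channel calls. The kernel’s only job here is to shuttle messages between address spaces; whether a server like this sits in user space or kernel space is a deployment decision, not what makes the kernel a muK.

// Minimal sketch of a QNX Neutrino-style userspace server (illustrative only).
// Error handling is trimmed; Request/Response are invented for the example.
#include <sys/neutrino.h>
#include <cstdio>

struct Request  { int op; int value; };
struct Response { int result; };

int main() {
    // The server owns a channel; clients ConnectAttach() to it and MsgSend().
    int chid = ChannelCreate(0);
    if (chid == -1) { perror("ChannelCreate"); return 1; }

    while (true) {
        Request req;
        // Block until a client sends a message; rcvid identifies that client.
        int rcvid = MsgReceive(chid, &req, sizeof req, nullptr);
        if (rcvid == -1) continue;

        Response resp{};
        resp.result = (req.op == 0) ? req.value + 1 : -1;  // a trivial "service"

        // Reply unblocks the client. If this server crashes instead, only its
        // clients are affected; the kernel itself keeps running.
        MsgReply(rcvid, 0, &resp, sizeof resp);
    }
}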
The same applies to the NT kernel: it is a hybrid because it has certain parts literally in the kernel, but other parts simply in kernelspace.
It doesn’t at the moment (and NT has traditionally lumped all and sundry into its kernel), but Vista apparently will change things. It’s a hotchpotch. As a whole, the thing isn’t a microkernel. You can’t just yank some stuff out of the kernel, put it into userspace and call it a microkernel.
It doesn’t at the moment (and NT has traditionally lumped all and sundry into its kernel), but Vista apparently will change things. It’s a hotchpotch. As a whole, the thing isn’t a microkernel. You can’t just yank some stuff out of the kernel, put it into userspace and call it a microkernel.
I didn’t. You said the NT kernel is monolithic– utter nonsense as NT is a hybrid. It is a hybrid in XP, and it will be a hybrid in Vista.
utter nonsense as NT is a hybrid. It is a hybrid in XP, and it will be a hybrid in Vista.
Microsoft has some pretty pictures at http://www.microsoft.com/technet/archive/ntwrkstn/reskit/archi.mspx… that label some parts of the NT kernel “microkernel”, and even appear to show device drivers moving into user land.
The problem with the picture is I could easily draw a similar picture for just about any monolithic operating system, at that level of (lack of) detail.
“Hybrid” microkernel is an oxymoron. Either the ‘microkernel’ is in a separate protection domain from the rest of the code that runs in supervisor mode, or it’s not a microkernel.
And on Intel hardware, if it is in a separate protection domain, then context switch overhead will do a fine job of consuming your cpu and heating your house.
I didn’t. You said the NT kernel is monolithic– utter nonsense as NT is a hybrid.
It is not a microkernel, or even a hybrid. At its heart it is still a monolithic kernel. Vista will exhibit some microkernel aspects with audio drivers amongst other things being pulled into userspace. That’s only because it’s flavour of the month though. I daresay you can do the same with Linux, and people have even done so. Just because someone has bunged an out-of-kernel driver on, does that make Linux a microkernel, or even a hybrid?
You could tentatively call Windows a hybrid (it just doesn’t have anywhere near enough microkernel aspects about it) but it doesn’t make any difference. At its heart it is still monolithic. NT definitely is monolithic in every sense. As I stated, you can’t just yank stuff out of the kernel, bung it into userspace and call it a microkernel or even a hybrid. You can’t be a little bit pregnant.
You’re just trying to come up with the existence of a hybrid to try and give the impression that microkernels are actually happening. What Microsoft is doing with Vista does not make it a microkernel in any sense.
I’m baffled someone can post such nonsense.
I’m pretty sure he was making the point that the kernel isn’t the most important part of security, the very point you try to make again.
The NT kernel is very highly regarded. It’s the Windows USERLAND that causes the security problems, my friend, NOT its kernel.
The kernel might be good, but unfortunately the drivers run in kernel space, and when they have bugs, they bring down the whole “very highly regarded” kernel. So, for the nastiest of the bugs/crashes (and there are plenty), kernel space is to blame (as opposed to userland/userspace).
Anybody designing an OS today would choose a microkernel approach….
And not the old monolithic style like Linus chose.
http://www.qnx.com is one great example of what a great OS can do…
Linux suffers greatly from the bad graphics
Anybody designing an OS today would choose a microkernel approach….
I’m designing one right now. It’ll be the fifth time. None of them have ever been microkernels, and the sixth one wouldn’t be either, if I’m still in the business that long.
He chimed in fourteen years ago, and in his usual robust manner made it pretty starkly clear what was wrong with Minix. Tanenbaum was wrong then and he’s wrong now, as the subsequent success of Linux has proved.
Many microkernel concepts would be wonderful and fantastic in a perfect world. Even Linus admitted that microkernels from a technical and aesthetic point of view are nicer. But we don’t live in a perfect world, not now and not then, and sooner or later we have to grow up and realise what is actually going to be practical. I always chuckle at this statement:
“I still maintain the point that designing a monolithic kernel in 1991 is a fundamental error. Be thankful you are not my student. You would not get a high grade for such a design :-)”
You can’t just do nothing and produce nothing that works practically for over fifteen years, then come back and say “Oh, I was right all along” when you’re not.
Tanenbaum was wrong then and he’s wrong now, as the subsequent success of Linux has proved.
If quantity is the basis of quality… Where does that leave Windows?
If quantity is the basis of quality… Where does that leave Windows?
Missing the point. Tanenbaum had nothing but scorn for Linus and Linux in those early discussions. If Andy was right, Linux should be a non-entity by now. Just because he’s written (by Linus’ own admission) a good textbook on the subject doesn’t make Tanenbaum the ultimate OS design guru. After all, if he was as good as he fancies himself, then he should be applying his talents in the private sector where the common user needs it the most. They can always find CS professors to teach using his textbook.
If quantity is the basis of quality… Where does that leave Windows?
Used on 90%+ of the world’s computers as an operating system that proves to be just about reliable enough, with other systems with monolithic kernels being much more reliable and better quality than it :-).
Used on 90%+ of the world’s computers as an operating system that proves to be just about reliable enough,
Of course, with their zero percent share on supercomputers, they have 90%+ of the world’s computers…. Please provide your data and its source.
What exactly does the success of Linux prove wrt monolithic kernels, and how does that compare to what the success of Win9x proves about monolithic kernels?
Commercial success measures nothing!
Many microkernel concepts would be wonderful and fantastic in a perfect world. Even Linus admitted that microkernels from a technical and aesthetic point of view are nicer. But we don’t live in a perfect world, not now and not then, and sooner or later we have to grow up and realise what is actually going to be practical.
Well, it’s like a rerun of the RISC vs. CISC arguments that raged through the 1980s and 1990s – don’t look at today’s RISC as what RISC was; the RISC of today is a bastardised version of what it used to be and what was described on paper.
Going by the pure paper theory, RISC was supposedly the superior solution, but when reality came into the mix and real work was thrown at these processors, they didn’t stand up to the challenge.
So here we are with so-called ‘RISC processors’ which are nothing more than RISC processors with strategically added pieces from the CISC world to improve performance; branch prediction, OOE and the like were never meant to be the domain of RISC.
RISC was meant to be like EPIC/Itanium is today: stripped down, bare basics, high number crunching, and relying on the clock cycle to make up for any inefficiencies resulting from unpredictable workloads – the reality is, it didn’t work out.
The same goes for monolithic vs. micro; monolithic has adopted some things that make sense from the micro world, whilst maintaining the good parts of monolithic kernel design. Micro has been relegated to niche areas, just as those ‘pure’ RISC designs have.
Oh, and the current crappy performance of microkernels has more to do with the crappy design of the x86 than with any fundamental flaws in the design itself; Tru64, which ran on Alpha, is an example of an OSF microkernel with M:N threading and all the ‘can’t be done’ features, and yet, in terms of performance, it was great.
So ultimately, the outcome of the OS is dictated as much by the actual design as by the underlying hardware, and how it operates.
RISC was meant to be like EPIC/Itanium is today: stripped down, bare basics, high number crunching, and relying on the clock cycle to make up for any inefficiencies resulting from unpredictable workloads – the reality is, it didn’t work out.
No. EPIC is another strategy altogether (Explicitly Parallel Instruction Computing). It should also be much easier (i.e. requiring less logic and circuitry) to make “broader” architectures and fully utilize OoO etc. on a RISC architecture.
Why are TV sets, DVD recorders, MP3 players, cell phones, and other software-laden electronic devices reliable and secure but computers are not?
Um, Andy? Pay attention. Every one of those devices has had implementations that crash in interesting ways, and cell phones have been hacked already.
They’re not all that reliable, nor all that secure.
If I’m not mistaken, FreeBSD uses a microkernel. My experience with it shows it to be both reliable and fast. I’m not saying it’s better than Linux though, merely different.
You are mistaken.
so what does it use? Just curious.
Monolithic http://en.wikipedia.org/wiki/FreeBSD
It would be interesting to get Theo’s spin on micro/monolithic kernels, given OpenBSD’s extensive work on privilege revocation/separation.
It already has been done. OpenBSD: it’s free, functional and secure. Only one remote hole in the default install in more than 8 years!
The mono-micro debate is a lot more interesting now in this age of more advanced micro designs. Linux vs Mach — eh, not a great competition. Linux vs EROS or L4 or K42? Much more interesting. Also, don’t let the commercial success of mono-kernels close you to the possibilities. It is not unusual for a technology to be initially rejected by the market, only to surface again later when conditions are more appropriate. Consider forward-swept wings, which are a cutting-edge technology for modern fighters. The design was actually first implemented by the Germans in the 1940s; it just wasn’t practical until the advent of modern composite materials.
There are a few points that I think he either oversimplified or just got plainly wrong.
Tanenbaum compared TVs, remotes and cell phones with a computer. There are two major characteristics of these devices that don’t match the PC.
1. Flexibility. If you lock down a PC to the original configuration you got it with, 99% of the time it works. There has been hardware that locks down computer kiosks, for example, and those work wonderfully – those computers hardly crash. The same goes for hardware: most of Windows’ hardware problems come from bad drivers. If you have a finite set of hardware and have it rigorously tested (like Apple), you have far fewer problems.
2. Network capability. The other side of the reliability problem comes from network capability. A Windows 95/98 machine, however vulnerable, would not be infected if it’s kept isolated, with no floppy/USB stick connected and no network access. Cell phones are the only exception on the list he gave.
Then again, cell phones happen to be among the most locked-down devices ever (with ROM, carrier locks, etc.). On the flip side of the coin, the newer PDA phones do get unreliable. So the problem with not-so-reliable cell phones actually points back to point #1, which means that the added flexibility of PDA phones causes instability.
So the bottom line is that it’s not monolithic or microkernel that’s at issue here. It’s how much one locks down one’s device to achieve reliability. So, based on Tanenbaum’s arguments, we should all be using locked-down devices.
As a side note for the QNX debate, I work with QNX in an academic environment and I can say that QNX crashes just as badly as Linux (if not worse) if the students are not careful. QNX works better than Linux in the shuttle/medical environment because it’s true real-time (Linux is not), it’s lightweight and it can be stripped down. It’s only the last point where Linux loses ground somewhat.
Just because QNX is a microkernel doesn’t mean it’s reliable.
The key thing about DVD players and their reliability has little to do with the OS design. It has more to do with the stability of the hardware and the software.
As a person earlier mentioned, POS systems written for DOS could run for years. If you’re dedicating a machine to a well-defined task, it’s much easier to keep it stable and running long-term than a general-purpose machine capable of running general-purpose tasks.
If you have an unlimited number of inputs (including potentially aggressive and malicious software), it is much more difficult to harden one of these systems.
Finally, the overall environment that the system is designed for affects the software design and implementation. People look at telnet and ftp today and worry about the security considerations of the protocols, but when those protocols were designed, it simply wasn’t an issue. Similar arguments can be made for DOS and even the Windows releases.
Granted there’s no excuse today.
The #1 source of bugs, and thus instability, in OSes is the GUI; next come the device drivers. The kernel is not the most important piece to watch to improve the stability of the OS. But still, of course, microkernels are better than monolithic ones in the stability arena. Let’s look at a simple example: SunOS 5.11.37 on x86. Before, it was a simple CLI (command line interface); then they stepped up to a GUI with CDE (Common Desktop Environment), and now they have taken GNOME and mimicked it with their JDS (Java Desktop System). Everything was fine with the CLI until CDE came, and they took years to debug it, and today with JDS every bug, roach, fly and insect has erupted from Solaris. Now whose problem is it, the kernel or the GUI?
Note: Last time I checked, the bug reports on http://www.opensolaris.com were on the order of 30,000!!!
What he says is perfectly correct.
If only IPC and memory management run in the kernel (code which can be complete and bug-free, since it is small, and which won’t need frequent changes), and everything else (the remaining drivers), whose code might change even daily, runs at user level, the result will be a well-secured and reliable OS.
Frankly, Linux had nothing special when it started. It was just a weak kernel which was not at all portable.
The only thing he did which made Linux take off was changing its license to the GPL and building his kernel with all the GNU utilities, like the powerful GCC, and nothing else. Since at that time many eager programmers were waiting for a free OS, and GNU/Linux was the only one available, they started coding on this GNU/Linux.
If Linus had written a microkernel (as suggested by Tanenbaum) at that time, even a buggy and not at all portable one, and joined it with GNU, by now it would have been a great, sparkling OS. Since he was only a student, he could do only easy coding, and that was the reason he wrote the Linux kernel.
A monolithic kernel is very easy to write. Anyone who knows a little systems programming and some kernel concepts can write a monolithic kernel, but only a good programmer with high creativity can write a microkernel.
The big mistake Tanenbaum made was that he didn’t realise the power of GNU and how it would change the future. If he had joined hands with GNU at that time, everyone would now be running an OS with 100% perfection, portable and secure.
Another interesting person was Jochen Liedtke, who was also a highly creative and talented programmer, like Tanenbaum.
I would say the highly talented programmers with great creativity are:
Tanenbaum, Richard Stallman, Jochen Liedtke
A small, ordinary programmer who got his name and fame because of the GNU GPL is:
Linus Torvalds
Even now, if everyone started coding on the MINIX code with GNU’s utilities and concentrated on it, surely within a year the same great OS could appear. It’s possible, but for that Tanenbaum and Richard Stallman, and all the good people, would have to plan well.
Hope this happens !!!
Frankly, Linux had nothing special when it started. […] Since at that time many eager programmers were waiting for a free OS, and GNU/Linux was the only one available, they started coding on this GNU/Linux.
What about BSD and GNU/Hurd? At that time, the former was tangled up in legal trouble and the latter was far from completion, but they were still available.
As for the rest, I believe you have an over-optimistic view of microkernels… In reality, the reliability of a microkernel-based OS will depend on the maturity of its services. Your kernel might be “100% perfect”, but that doesn’t mean the rest is. Your OS might survive a fs driver crash, but the safety of your data could still be in jeopardy.
They have nice advantages, but they are not the best thing since sliced bread. Monolithic and hybrid kernels are not that bad, after all.
I won’t pretend to be an expert on this, but reading the arguments I get the feeling that microkernels are trying to solve a problem that may not exist. Yes, I’m willing to accept they’d probably be even more stable, but my monolithic Linux kernel is more than stable enough for me already. I’ve used Linux full-time for over a year and a half now, and the kernel has never let me down. I’ve managed to screw up non-kernel things so badly once or twice that just hitting the reset button was the quickest way out, but the kernel hasn’t failed once.
So for me the question is not whether a micro kernel might be more stable or more secure, but whether I should take a performance hit when the system I’m using is already stable and secure enough.
I reread the article – and for me I would say that *gasp* MS Singularity’s solution is the best.
I have worked with QNX and a bit of Minix, and I find that the IPC overhead is quite significant. Remember that as processor speeds increased, memory latency became a bigger and bigger issue. The same goes for IPC. The more IPCs you have, the more queues you have for IPC communications. The overhead of context switches and the delays in those instances impact the performance of the system (a rough way to time this kind of round trip is sketched below).
Besides, as I have seen with students using QNX, a bad program can still hose a microkernel system. I started programming with Pascal myself and I would say that type safety does away with a lot of programmer mistakes.
The only problem with Singularity is that a whole new language (and MS’ proprietary nature) will prevent wide adoption. I would say that out of the 4 methods outlined, I prefer language safety the best. I am uncertain about the performance impact of type safety checks. As I know from computer engineering, you can never get performance for free; there are always some tradeoffs.
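To put a rough number on the round-trip cost mentioned above, here is a small ping-pong sketch of my own. It uses ordinary POSIX pipes and fork rather than any microkernel’s native message-passing primitives, so it only illustrates generic cross-process IPC plus scheduling overhead on a stock OS; a message-passing design pays something of this order on every driver or server call.

// Illustrative ping-pong: time a cross-process round trip over POSIX pipes.
// Each round trip forces at least two context switches, which is the cost
// being discussed above; results vary wildly by OS and hardware.
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <chrono>
#include <cstdio>

int main() {
    const int rounds = 100000;
    int to_child[2], to_parent[2];
    if (pipe(to_child) == -1 || pipe(to_parent) == -1) return 1;

    pid_t pid = fork();
    if (pid == 0) {                        // child: echo one byte back per round
        char b;
        for (int i = 0; i < rounds; ++i) {
            if (read(to_child[0], &b, 1) != 1) break;
            write(to_parent[1], &b, 1);
        }
        _exit(0);
    }

    char b = 'x';
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < rounds; ++i) {     // parent: send, then wait for the echo
        write(to_child[1], &b, 1);
        read(to_parent[0], &b, 1);
    }
    auto end = std::chrono::steady_clock::now();
    waitpid(pid, nullptr, 0);

    double ns = std::chrono::duration<double, std::nano>(end - start).count();
    std::printf("avg round trip: %.0f ns\n", ns / rounds);
    return 0;
}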
Do you work for Microsoft? I ask this because, at this stage, who outside of Microsoft knows anything about Singularity that Microsoft hasn’t told them?
Does Singularity really prevent compatibility?
True, the Singularity kernel uses a single address space to avoid the cost of TLB flushing, but legacy applications could be run in a different address space using normal HW protection for compatibility.
While it means that legacy applications won’t see much difference, it’s IMHO the only way to gain real-world usage; otherwise, if every application must be rewritten, it’ll remain a research toy.
The IPC overhead is becoming a solved problem now. The newer microkernels have very efficient IPC and process scheduling. Even with L4, they managed to get performance to near the level of Linux.
Remove the user (^_^)
Heh, by far the most insightful (and if not insightful then at least correct) comment here.
Reverse engineering drivers is such a huge waste of time and causes reliability issues.
It would be better to use windows binaries instead.
Except then you get to reverse engineer the behavior of the nt kernel that said windows drivers expect (including bugs and quirks), so you’re back at square one with the same issues.
While I like the microkernel idea (a lot, actually), I just want to address the idea that stability is more important than performance: it’s utter and complete bollocks, assuming the performant OS is stable enough. Researchers are not the ones having to tell their customers their server is not as performant, but extremely stable, compared to that other server vendor who uses a very fast but not quite as stable OS. Of course, stability must be good enough(tm), but tell me which of today’s major monolithic/hybrid kernels are not stable enough for production use? (Please don’t bring up hobby OSes X or Y, because those are irrelevant.)
A microkernel is just another idea, not a fundamentally different idea from monolithic. The main conceptual difference between a microkernel and a kernel is that a microkernel is a portable/virtual CPU over which the OS is built. The OS runs on the virtual, versatile CPU and gains more modularity. Device drivers and protocol stacks these days seem to be very complicated, and there is a misleading notion that a microkernel would ease the development of the OS.
I believe, and please tell me if I am wrong, that in the case of either a monolithic kernel or a uKernel, the use of a subset of C++ for development could prove very useful. C++ without templates and the STL could organize the code and provide an easy means of sharing/inheriting interfaces. Take into consideration that C++ can be used for low-level programming. Namespaces, exceptions, polymorphism and multiple inheritance can be beneficial for an OS. Of course we would sacrifice some speed, but how much, actually? C++ can be very good for device drivers. ObjC with manual allocation/deallocation plus namespaces (where are they, actually? Who chose to get rid of them?) could also provide an alternative, C99-compatible solution. If you want my opinion, ObjC plus namespaces is the way forward for kernel programming; C++ can be another direction.
I think this is the real problem, not the uKernel vs monolithic debate. Both have disadvantages and advantages. I favour monolithic, for legacy and performance reasons, with an enhanced ObjC. However, there should be uKernel alternatives for RAD and prototyping; it is easier to work there and port back to monolithic.
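As a rough sketch of the “restricted C++ subset” idea above: namespaces plus plain virtual dispatch, with no templates, no STL and no exceptions. This is my own illustration, not code from any real kernel; the kern::BlockDriver interface and the RamDisk stand-in for real hardware are invented names.

// Hypothetical driver interface in a restricted C++ subset: namespaces and
// polymorphism only -- no templates, no STL, no exceptions.
#include <cstddef>
#include <cstdint>
#include <cstring>

namespace kern {

class BlockDriver {
public:
    virtual ~BlockDriver() {}
    virtual int init() = 0;                                              // probe/attach
    virtual int read(std::uint64_t off, void *buf, std::size_t n) = 0;   // n bytes at off
    virtual int write(std::uint64_t off, const void *buf, std::size_t n) = 0;
    virtual const char *name() const = 0;
};

// A trivial RAM-backed implementation standing in for real hardware.
class RamDisk : public BlockDriver {
    std::uint8_t storage_[4096];
public:
    int init() override { std::memset(storage_, 0, sizeof storage_); return 0; }
    int read(std::uint64_t off, void *buf, std::size_t n) override {
        if (off + n > sizeof storage_) return -1;
        std::memcpy(buf, storage_ + off, n);
        return 0;
    }
    int write(std::uint64_t off, const void *buf, std::size_t n) override {
        if (off + n > sizeof storage_) return -1;
        std::memcpy(storage_ + off, buf, n);
        return 0;
    }
    const char *name() const override { return "ramdisk0"; }
};

} // namespace kern

int main() {
    kern::RamDisk disk;
    disk.init();
    const char msg[] = "hello";
    disk.write(0, msg, sizeof msg);
    char out[sizeof msg];
    disk.read(0, out, sizeof out);
    return std::strcmp(out, msg);   // 0 on success
}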
“Linux suffers greatly from the bad graphics”
What? You totally lost me here.