The microkernel vs. monolithic debate, whether you boys and girls like it or not, rages on. After Tanenbaum’s article and an email from Torvalds, another kernel developer steps up, this time in favour of the muK. A developer of the muK-based Coyotos writes: “Ultimately, there are two compelling reasons to consider microkernels in high-robustness or high-security environments: there are several examples of microkernel-based systems that have succeeded in these applications because of the system structuring that microkernel-based designs demand, [and] there are zero examples of high-robustness or high-security monolithic systems.”
I personally don’t think that “micro vs. monolithic” is a primary determining factor in whether or not a project will be successful. Most successful kernels are monolithic, but I imagine that the correlation is at least partly coincidental.
I agree with you – to be fair, no matter how much we discuss this, the REAL problem is userspace programs (excluding ukernel daemons), not anything else.
I mean, why would I care that my kernel is super-stable and reliable (be it micro or monolithic) if the apps that I run (GNOME, KDE, Firefox) are not reliable? Sure, it doesn’t crash your system, but if you’re in an important videoconference, for you a kernel crash is just as serious as an app crash
For effect you mean:
“for you an app crash is as serious as a kernel crash.”
This is my argument against the microkernel idea. If the base kernel is amazing, no one cares (except those in high security situations possibly). If my disk driver crashes I don’t care if the kernel crashes too; neither is acceptable!
If I’m hosting a website and my network system keeps crashing that’s utterly unacceptable.
I think that anything being designed by a kernel team is there not just because it makes sense there; but because it’s as important for it to not fail as it is for the memory management to not fail (or very close).
So, the trouble is that with a muK, people seem to think it will be acceptable to have buggy user-space kernel components: this won’t be acceptable.
I wonder if anyone has researched designing hardware with microkernels in mind. It seems to me that current hardware may be one of the speed and design limitations that make muK less practical.
This is my argument against the microkernel idea. If the base kernel is amazing, no one cares (except those in high security situations possibly). If my disk driver crashes I don’t care if the kernel crashes too; neither is acceptable!
If the disk driver crashes on a space probe, do you want the whole system to go down? No! You want the disk driver to silently restart itself while keeping everything else going.
A big problem with this discussion is that people have gotten so used to traditional operating systems, that they do not even realize that it might be possible to design systems that can not only withstand a crash of an important component, but can gracefully recover.
Very few systems are so critical that recovery is not possible in the case of their failure.
1) If the disk driver crashes, open files should be kept in memory, the disk should be remounted and checked for consistency, and pending writes should be flushed back to disk. It should be possible for a disk driver crash to not even be noticed by the user.
2) If the network driver crashes, existing connections should be held open until it can be restarted. The user shouldn’t notice anything more than a pause in their browsing.
3) If the display server crashes, things get harder, but it should still be recoverable. The running applications presumably haven’t crashed, so they could be instructed to save their data, the display server could be restarted, then applications could be instructed to regenerate their display. Heck, applications already have to do this on windowing systems where the OS doesn’t guarantee that window contents are preserved (GDI, X11).
4) If the input driver crashes, it should just be restarted. Input drivers don’t have a whole lot of state to begin with.
The simple fact is that large parts of the system can go down if the recovery mechanisms are good enough. On BeOS, the net_server was buggy as hell, but even when it crashed and restarted, the impact on the system wasn’t huge.
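To make the restart idea concrete, here is a minimal sketch, in plain POSIX C, of a supervisor that re-spawns a user-space driver process whenever it dies. It only illustrates the general multiserver idea (it is not BeOS's or any particular microkernel's actual mechanism), and the /sbin/net-driver path and the one-second back-off are invented for the example.

```c
/* Minimal sketch of a user-space driver supervisor in plain POSIX C.
 * Assumptions for the example: the driver is an ordinary process at
 * DRIVER_PATH that re-opens its device and re-registers itself whenever
 * it starts; the path and the back-off delay are invented here. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define DRIVER_PATH "/sbin/net-driver"    /* hypothetical driver binary */

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: become the driver process. */
            execl(DRIVER_PATH, DRIVER_PATH, (char *)NULL);
            _exit(127);                   /* exec failed */
        }
        if (pid < 0) {
            perror("fork");
            return 1;
        }

        int status;
        waitpid(pid, &status, 0);         /* block until the driver dies */

        if (WIFSIGNALED(status))
            fprintf(stderr, "driver crashed (signal %d), restarting\n",
                    WTERMSIG(status));
        else
            fprintf(stderr, "driver exited (status %d), restarting\n",
                    WEXITSTATUS(status));

        sleep(1);                         /* crude back-off before restart */
    }
}
```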
Hell, even on a desktop: I’d rather the disk driver crash and silently restart itself than wait around for a full reboot.
Exactly, and this lies at the base of why I like the muK design; contrary to many, I think the muK is suited much better for desktops than a monolithic design. The overhead problems of yesteryear are nullified by today’s powerful computers. We have processors which are acting as expensive dust collectors most of the time anyway– why not live with a 1% performance decrease if it gives back a much more robust system, a system where bugs in drivers will actually not cause the system to crash?
There’s a reason why I like QNX so much. One of these days, I want to interview people at QSS (I got the important email addresses collected already, so step 1 is taken, and they know me after a short email convo) and ask them if they ever thought about QNX’ opportunities as a desktop operating system. Their PhotonUI graphical user interface (which does not use X) is modelled after the muK principle (highly modular) and has its own 3D framework and 3D drivers.
It’s a business opportunity for which QNX needs to do little. The entire OS is there, a lot of apps are as well (thanks to QNX’ high POSIX compliance, much higher than Linux’s), and they have an advanced package manager as well. A little more promotion is all they need.
http://www.osnews.com/story.php?news_id=8911
There.
Yeah, but what really is the feasibility of QNX on the desktop? Wouldn’t one have to say that the stability of QNX is due largely to its limited uses? If it had to do as many things as Linux or BSD and run on as many different types of hardware, would it still be as stable?
Yeah, but what really is the feasibility of QNX on the desktop? Wouldn’t one have to say that the stability of QNX is due largely to its limited uses? If it had to do as many things as Linux or BSD and run on as many different types of hardware, would it still be as stable?
Err, why would it need to run on every toaster out there? The fact Linux runs everywhere is nice, but what does it mean to the normal desktop user? QNX runs on PowerPC, ARM/StrongARM/xScale, x86, MIPS, and SH-4– not bad at all, I’d say.
And since when is OS/kernel quality measured by the number of platforms it can run on?
It’s not just a matter of different platforms, but device configurations as well. Can I install QNX on any x86 and have it recognize all my hardware? OS quality isn’t measured by the number of platforms, but the fact remains that the more versatile it is, the more problems it’s going to have. People hacking OS X to run on other x86 machines have their work cut out for them, do they not? Nobody questions whether OS X is a good OS or not…as long as it’s running on Mac hardware.
Actually QNX’s hardware support is pretty good. A lot of it is basic (VESA drivers for example) as the desktop isn’t its primary target, but it works quite well. They were famous for their desktop-on-a-floppy a few years back (the OS and desktop fit on a 1.7MB floppy).
Your comments aren’t making any sense. QNX – the operating system – is enormously well engineered and reliable, and powers everything from BMW nav-systems, to developer workstations, to nuclear power-stations. The number of platforms supported by an OS has nothing to do with how good it is, though QNX supports quite a few. Most of them you’ve probably never heard of, but it’s likely you’ve used them and never noticed (ATMs, etc.)
I think what you mean is that while Linux may be 70% device drivers because it runs on everything including the toaster, QNX is probably much less because it runs on a smaller subset of hardware.
But the issue is moot because that’s kind of the point of micro-kernels. To make bad drivers, strange configurations and such tolerable when they fail.
The overhead problems of yesteryear are nullified by today’s powerful computers.
Actually it’s the opposite. As computers become more powerful the overhead gets bigger, so to speak. The difference in performance between a pure muK and a monolithic design increases as CPU performance increases.
QNX is not a pure muK. There is no such thing today. BeOS is not a pure muK either, yet it still has poor performance. The fact it does yield a nice desktop experience is more due to the modularity in user space than the kernel itself, and _can_ be achieved with monolithic systems. On the other hand, the performance of monolithic systems _can_ be achieved with muK-like designs, if and _only_ if some elements of a pure muK design are abandoned.
Linux and the Windows kernel are not monolithic kernels, nor muKs. The same goes for BeOS, AmigaOS, Syllable and QNX.
dylansmrjones,
“QNX is not a pure muK.”
Says which reliable source? The QNX kernel only has CPU scheduling, interprocess communication, interrupt redirection and timers built into it. If you reduced the kernel’s set of tasks any further, it wouldn’t be a kernel. (A rough sketch of what that tiny message-passing interface looks like follows below.)
Thom,
Here’s hoping QNX management can see an opportunity for widespread use on general purpose machines. I think QNX would be very suitable for use in robotics and control automation running on generic personal computers.
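For a sense of how small that kernel interface is, here is a rough sketch of QNX Neutrino's synchronous message-passing calls. It is written from memory, so the exact prototypes may differ slightly from the current QNX headers and documentation; treat it as illustrative rather than authoritative.

```c
/* Rough sketch of QNX Neutrino-style synchronous message passing, written
 * from memory; check <sys/neutrino.h> and the QNX docs for the exact
 * prototypes. In a real system, serve() and ask() run in separate processes. */
#include <sys/neutrino.h>
#include <sys/types.h>
#include <stdio.h>
#include <string.h>

struct ping { char text[32]; };

/* Server: create a channel, then block in MsgReceive until a client sends. */
void serve(void)
{
    int chid = ChannelCreate(0);
    struct ping msg, reply;

    for (;;) {
        int rcvid = MsgReceive(chid, &msg, sizeof msg, NULL);
        if (rcvid < 0)
            continue;                              /* receive failed, retry */
        strcpy(reply.text, "pong");
        MsgReply(rcvid, 0, &reply, sizeof reply);  /* unblocks the client */
    }
}

/* Client: attach to the server's channel, send, and block until replied to.
 * This send/receive/reply rendezvous is essentially the whole kernel API. */
void ask(pid_t server_pid, int chid)
{
    int coid = ConnectAttach(0 /* local node */, server_pid, chid, 0, 0);
    struct ping msg, reply;

    strcpy(msg.text, "ping");
    if (MsgSend(coid, &msg, sizeof msg, &reply, sizeof reply) != -1)
        printf("server said: %s\n", reply.text);
}
```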
I like QNX for what it does – but for a “business” opportunity to occur for QNX they do have to drop one of their strengths – the O(1) scheduler.
QNX is a hard RTOS so everything that’s done can be analyzed and guaranteed to be completed in real time. (That’s what the powerful profiler is for).
The scheduling is done in such a way that it may not always guarantee the most responsive system (depending on the priority of drivers and things like that). Granted that there is a new scheduler already in the works, I am not sure QNX is viable as a desktop system.
For example, there are few accelerated video drivers. Networking drivers are few and far between. Wireless drivers are non-existent (except for the Prism one, which I couldn’t get to work after days of debugging). QNX does have a nice model for writing your own drivers, but who among us has the skills (or the time) to do so?
For some to say that Linux is not fit for the desktop because of device support, QNX is even farther behind.
On top of all that, there is the issue of Photon UI as Thom has mentioned. It’s lightweight and fast. However, each application, be it Firefox or Thunderbird, has to be ported with a different set of UI calls. You could use X11 on QNX but it’s not “officially” supported.
I love QNX for what it does as a development platform for embedded and mission-specific systems. I just don’t think it’s ready for the masses – I would say that Linux back in 1999 was more ready for the masses than QNX is now.
Why are you talking about desktop systems? We’re talking about areas where reliability and security are the most important things.
This is a discussion about design, not marketability.
Probably because other posters earlier mentioned QNX and microkernels in relation to desktop systems.
“However, each application, be it Firefox or Thunderbird, has to be ported with a different set of UI calls.”
I don’t think QNX lends itself to porting consumer applications. I was picturing using it to build custom solutions for scientific and engineering applications on generic hardware. Even if that generic hardware had to be certified, it would still be easier to get things done, so to speak. Not everything in the world of engineering runs on embedded hardware, so I think this is a great space for QNX.
“Hell, even on a desktop: I’d rather the disk driver crash and silently restart itself than wait around for a full reboot.”
Exactly, and this lies at the base of why I like the muK design; contrary to many, I think the muK is suited much better for desktops than a monolithic design.
This, once again, confuses two separate implementation issues as if they were one. I routinely restart crashed device drivers on monolithic systems, most notably USB devices on Linux.
You may want a disk driver to crash and restart silently, but I do not, because I understand both the fault model of disk devices and the fault containment issues with disk drivers.
You are assuming that when a disk driver fails it’ll do so in an obvious way with almost no side effects. This is called “fail fast, fail silent” in the literature. But it’s not typically the way drivers fail. Disk drivers fail because the hardware did something unexpected by the designer, as often as not, and I, for one, want to know what that was. (Real disk devices fail in intermittent ways, and the damage can be long done before the driver falls over.)
This, once again, confuses two separate implementation issues as if they were one. I routinely restart crashed device drivers on monolithic systems, most notably USB devices on Linux.
It’s not an implementation issue, it’s a design issue. If the driver and the kernel are co-located, and the driver crashes, you cannot restart it. It might be possible, but you have no guarantees about whether the kernel was compromised during the crash.
You are assuming that when a disk driver fails it’ll do so in an obvious way with almost no side effects. This is called “fail fast, fail silent” in the literature. But it’s not typically the way drivers fail. Disk drivers fail because the hardware did something unexpected by the designer, as often as not, and I, for one, want to know what that was. (Real disk devices fail in intermittent ways, and the damage can be long done before the driver falls over.)
That’s fine, the device can log the fault or whatever, and ask the user whether they want to remount the device. The recovery mechanism you want isn’t really pertinent here. What is pertinent is whether recovery is possible. In a monolithic design, recovery is not possible, not in any trustworthy or robust fashion.
It’s not an implementation issue, it’s a design issue. If the driver and the kernel are co-located, and the driver crashes, you cannot restart it. It might be possible, but you have no guarantees about whether the kernel was compromised during the crash.
It’s both a design and an implementation issue, and for some kinds of crashes you can make such guarantees, which is why monolithic kernels do have restartable drivers.
The recovery mechanism you want isn’t really pertinent here. What is pertinent is whether recovery is possible. In a monolithic design, recovery is not possible, not in any trustworthy or robust fashion.
A microkernel does not increase the trustworthiness nor robustness of driver recovery. All it does is change the way in which faults fail to be contained.
A driver with an error in bad block handling may well pass a lot of bad data on to the file system layer before it falls over, for instance, and whether the driver is message passing and in a separate address space or not has zero impact on the system’s ability to contain that kind of error. (Such an error took ebay down for three days a couple of years ago, getting them front page coverage in the local press…)
Address space separation contains exactly one kind of fault, which while being a particularly difficult fault to debug isn’t even in the top ten list of ways in which broken drivers cause propagation of corrupted data.
And yes, there are ways to reduce your exposure to that particular kind of fault without going to separate address spaces. See, for instance, sparse data address space models or self-repairing data structures.
>> thought about QNX’ opportunities as a desktop operating system.
Desktop operating systems. There’s a market with a lot of growth.
Home users shut their computers off once a day.
Business users just reboot.
People want apps and compatibility. Got Skype for QNX?
Pissing away capital trying to muscle into a settled market would be an elaborate way for QNX to commit suicide.
1.) But if you can write crappy disk drivers, how often is this going to start happening?
2.) This one’s easier; NICs aren’t as damageable as disks.
3.) Yes they have, they’re child processes of the WM which is a child of the server. This is really unfortunate, but I think Microsoft has actually worked around this in Windows; but they have a different architecture.
4.) Definitely, that makes sense.
My concern is that this ideology makes it OK to write shitty drivers. Monolithic kernels seem to provide a nice idiot proof layer where idiots have a hard time getting a driver to work at all.
1.) But if you can write crappy disk drivers, how often is this going to start happening?
That’s a facetious argument. There is no reason disk drivers for microkernels should be any less reliable than ones for macrokernels. Indeed, there is reason to believe it should be the other way around — microkernels tend to enforce very simple kernel-driver interfaces (because everything must go through IPC), and make the resulting drivers less complex. (A hypothetical sketch of such a narrow interface follows at the end of this comment.)
3.) Yes they have, they’re child processes of the WM which is a child of the server. This is really unfortunate, but I think Microsoft has actually worked around this in Windows; but they have a different architecture.
The WM is not a child process of the server, and apps are not child processes of the WM. The reason your apps die when the X server does is because they have a pipe open to the server, and don’t handle the signal
My concern is that this ideology makes it OK to write shitty drivers.
It’s not an ideology that makes it “OK to write shitty drivers”. It’s an ideology that says “bugs are inevitable, shit happens, so let’s try to minimize the damage when it does.”
There are practical ramifications to this that go beyond ideology. Linux, as a monolithic kernel, cannot handle a network stack crash without crashing the system. Even BeOS, a hybrid-microkernel that wasn’t particularly designed for robustness, can do that. In QNX, you can take out drivers left and right without crashing the system. That is not true of almost any monolithic kernel. Linux is also not an OS that has any hope of achieving EAL7 certification (or something along those lines). Verifying all that kernel-level code would be impossible. Systems like EROS and Coyotos, with their tiny kernels, have at least some hope of achieving something like an EAL7 certification.
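To illustrate the narrow, IPC-only driver interface mentioned above, here is a purely hypothetical message format for a user-space block driver; the names and fields are invented for the example and are not taken from QNX, MINIX, or any other real system.

```c
/* Hypothetical wire format for a user-space block driver: the names and
 * fields below are invented for illustration, not taken from any real OS.
 * Because the only way in or out of the driver is a message like this,
 * the whole kernel-driver contract fits in two small structs. */
#include <stdint.h>

enum blk_op { BLK_READ, BLK_WRITE, BLK_FLUSH };

struct blk_request {
    enum blk_op op;         /* what to do                        */
    uint64_t    block;      /* starting block number             */
    uint32_t    count;      /* number of blocks                  */
    uint8_t     data[512];  /* payload for writes                */
};

struct blk_reply {
    int32_t status;         /* 0 on success, negative on failure */
    uint8_t data[512];      /* payload for reads                 */
};

/* The driver's entire main loop is then: receive a blk_request over IPC,
 * touch the hardware, and send back a blk_reply -- nothing else. */
void handle(const struct blk_request *req, struct blk_reply *rep)
{
    switch (req->op) {
    case BLK_READ:  /* program the controller, fill rep->data */  break;
    case BLK_WRITE: /* program the controller from req->data  */  break;
    case BLK_FLUSH: /* drain the write cache                  */  break;
    }
    rep->status = 0;
}
```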
The “shit happens” ideology is going to help how? Seriously, you’re completely failing to address the concern that lower quality drivers will be shipped because you can get away with shipping lower quality drivers.
It’s really less a concern in a Linux system where people are going to notice, be embarrassed, and not get drivers accepted anymore.
But on a system like Windows, already plagued with terrible drivers, the profit motive of getting drivers out the door quicker may become more and more important when you can get away with it.
I suppose this isn’t a design issue as much as a political issue surrounding the design; but I think it’s something you have to address more and more the more you add fault tolerance.
You’re making a really stupid argument. Should we stop making cars safer because it might encourage people to drive more recklessly? If you’re going to go to all that trouble to design a secure and stable OS, you’re not going to put a bunch of crappy drivers on top of it.
Have you ever used Windows ME?
You’re missing my point. My point is that the difficulty provides a barrier of entry, sort of like a college admissions office, which keeps the people incapable of writing really stable code from trying to.
A better counterpoint, to make your argument for you, is that you might be able to improve debuggability in your muK, allowing those “better” developers to write even more reliable code. Of course, the counterpoint to that is the concern of it becoming a crutch and an excuse (sort of like: I don’t need to worry about getting every free in there, I’ve got a memory debugger to find it for me later).
Considering that one of the express purposes of the microkernel implementation strategy, besides increasing runtime flexibility, is to increase the reliability of the system, why would its adherents feel that buggy software was more acceptable because the system could recover? The idea is simply to mitigate the damage done in the event that a simplistic error occurs. You might as well argue that languages that raise exceptions when an array index is out of bounds promote the development of buggy software.
You could more easily say that it doesn’t do enough to develop correct software, and because that is your intended goal, that it is largely unimportant when compared to approaches that do.
The idea isn’t just to prevent further damage, but to allow recovery. The problem with a monolithic design is that if the disk driver segfaults, you have absolutely no idea whether the integrity of the rest of the system has been compromised. Thus, you have no recovery option — all you can do is bail out. By subdividing everything into protected domains, you can ensure that if something goes down, the other domains are not compromised. That allows you the option of recovery.
The idea isn’t just to prevent further damage, but to allow recovery. The problem with a monolithic design is that if the disk driver segfaults, you have absolutely no idea whether the integrity of the rest of the system has been compromised. Thus, you have no recovery option — all you can do is bail out. By subdividing everything into protected domains, you can ensure that if something goes down, the other domains are not compromised. That allows you the option of recovery.
This is fault containment, and it’s hard to do even when you have protected domains. Message passing funnels fault propagation but doesn’t prevent it. Also, as in Brevix, you don’t need a microkernel to use protected domains; you can do it just fine in a ‘monolithic’ system.
Brevix isn’t a monolithic kernel. Whether it is a microkernel depends on your use of the term. If you insist that microkernels depend on message passing, then no, it is not. But then again, neither is L4 or MINIX3, which is built, like Brevix, on RPC. The control flow between a multiserver and a Brevix-like design is a little different, but at the core of Brevix is a privileged piece of code that handles threads, interrupt-handling, and RPC — a microkernel.
which is built, like Brevix, on RPC
Brevix was definitely not based on RPC. And no, ‘at the core’, Brevix was not a ‘privileged piece of code’.
Brevix is entirely agnostic about which parts of the system live on which side of the user/supervisor boundary, except that those parts that the hardware demands live on the supervisor side because they have to execute privileged instructions.
> The idea isn’t just to prevent further damage, but to allow recovery.
Is it really possible?
If a disk driver crashes, it may damage the filesystem and the data, so to allow full recovery you have to do an fsck and use an FS which journals both data and metadata..
And that’s not really enough: what if a buggy driver locks up the PCI bus (on Windows I remember that Creative sound cards could create problems by abusing PCI timings), or puts garbage in the DMA engine, which may perhaps corrupt any part of memory..
AFAIK current hardware does not allow full separation, so let’s not be naive and think that a microkernel really allows fault recovery: it helps, sure, but it’s not magic either.
Except that those domains aren’t protected from other hardware on the computers we’re discussing.
One problem is that moving a device driver into a separate process doesn’t mean that it will recover gracefully from executing erroneous code. That erroneous code can leave the system in an unrecoverable state in a number of ways without ever causing a “crash” that will be prevented by the MMU of a PC.
You are right for the most part, but not all. Think of what happens when a disk driver causes an access violation. It is important to crash the whole system rather than let it continue and damage the disk. In fact it is advisable to not let that driver run again, because it can cause further disk corruption and destroy even the remaining data on the disk.
Network drivers, yeah, they should be restartable. Who gives a damn if your network connections die? Design your socket libraries to have sufficient resiliency.
I think that except for the disk driver, there are hardly any other drivers that can’t be restarted easily. Windows is doing this now by moving many drivers, like audio and such, to user mode with UMDF.
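To illustrate the "sufficient resiliency" suggestion above, here is a hypothetical sketch of a send wrapper that reconnects and retries when the connection has gone away underneath it, for instance because the network stack or driver was restarted; the reconnect callback and the retry limit are invented for the example.

```c
/* Hypothetical sketch of a "resilient" send wrapper: if the connection has
 * died (for example because the network driver or stack was restarted
 * underneath us), rebuild it and retry instead of surfacing the error.
 * The reconnect callback and the retry limit are invented for the example. */
#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

typedef int (*reconnect_fn)(void);   /* supplied by the app: returns new fd */

ssize_t resilient_send(int *fd, const void *buf, size_t len,
                       reconnect_fn reconnect)
{
    for (int attempt = 0; attempt < 3; attempt++) {
        ssize_t n = send(*fd, buf, len, 0);
        if (n >= 0)
            return n;                             /* success */
        if (errno != ECONNRESET && errno != EPIPE && errno != ENOTCONN)
            return -1;                            /* a real error: give up */

        close(*fd);                               /* stale endpoint */
        *fd = reconnect();                        /* ask the app to rebuild */
        if (*fd < 0)
            return -1;
    }
    return -1;
}
```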
“On BeOS, the net_server was buggy as hell”
– Heck, I can top that one…
I actually physically ‘jiggled’ the sound card (truly catastrophic for most OSes), and after the audio barfed I restarted the media_server – and the friggin’ thing worked! Now I don’t know many specific details as to what exactly happened, but after I did this, my curiosity (and pluckiness) was piqued – so I tried it ten more times. Seven out of the ten times, I could restart the audio without any problems and without rebooting. I have some serious doubts that Windows or possibly even Linux could survive such a flagrant type of physical system abuse. Maybe the muK has some part to play in this stability, or maybe I’m just lucky. If someone else has also experienced this phenomenon, let me know.
PS. Seriously, don’t try this yourself on Windows or anything else (or at least don’t blame me for anything). After presenting this to a friend, he stupidly tried something similar while burning a CD in XP, and trashed his filesystem. He got mad at me, so I told him to get mad at himself instead. What a dummy.
I personally don’t think that “micro vs. monolithic” is a primary determining factor in whether or not a project will be successful. Most successful kernels are monolithic, but I imagine that the correlation is at least partly coincidental.
I agree entirely.
IMHO the 2 most important factors for “success” are applications and developers. A brilliant OS with no applications and no developers won’t be successful, while a crappy OS with heaps of applications and developers has a much higher chance of success.
Saying “let’s do a boring UNIX clone” is completely different to “let’s do something innovative” – a large number of developers will understand how things work before you even start, and all of the applications for all of the other UNIX clones can be ported quickly.
This gives 3 different methods – do something different (e.g. a micro-kernel) and struggle trying to get native applications and developers, do something boring and take advantage of everything else that is similar (e.g. a UNIX clone), or have millions of dollars (e.g. Windows).
Of course there’s always the possibility of doing something different and then having an ugly “compatibility” layer for non-native software. This doesn’t work so well though – it adds overhead while hiding the benefits, and no-one bothers writing native software because it’s easier to port existing non-native software.
Trying to build the perfect microkernel has cost the GNU project enormously. We now call an operating system (GNU) after its kernel (Linux). Not that I am an FSF fan(atic) or anything, I am just pointing out that getting out a usable kernel instead of a theoretically better one has proven to be highly successful for Mr Torvalds and the Linux developers.
Few people want to develop HURD. It has had as its basis an anachronistic microkernel that researchers have mostly passed by since maybe 1996-1997 in favor of lighter approaches. The less research-oriented folk are more interested in practical usage problems, and Linux has much more inertia in this area. The use-only people are far more interested in pragmatic things like running programs to do work than kernel implementation techniques. This affects all other alternative operating systems pretty heavily, even if they have as a basis the most monolithic of kernels.
Even now, when people look for personalities to slap on top of their light-weight microkernels, they just adapt Linux to run on them and focus on developing other personalities for domain-specific tasks that interest them.
It’s not so much a matter of whether it’s theoretically better or not, it’s a matter of inertia. Developing an entire operating system from scratch with all of the user-visible functionality of, say, Windows, no matter what approach is taken, is really difficult. It takes a lot of manpower, and most of that manpower today is devoted to developing Linux.
I think that the technical issues are one thing, but what really matters is the leadership. While RMS continues to do a brilliant job, he chose not to focus on kernel issues. Instead, we have a LBT, who continues to do a superior job on Linux. My argument is that if Hurd had a Torvalds, we’d not be squabbling so much about “Linux” vs. “GNU/Linux” as a symbol representing the whole system.
The site Linus posted his response on seems to be down still.
There is a mirror of his comment here:
http://www.mirrordot.org/stories/3f6b22ec7a7cffcf2847b92cd5dec7e7/i…
It looks like he posted a couple of other times also under the same thread, but I haven’t had a chance to read them yet.
seriously, this “debate” is so stale…
seriously, this “debate” is so stale…
Why? Because this discussion is actually about something, instead of the usual pointless and childish everyone vs. Microsoft or KDE vs. GNOME?
Why? Because this discussion is actually about something, instead of the usual pointless and childish everyone vs. Microsoft or KDE vs. GNOME?
Or the childish monolithic vs microkernel? This has been done. Apart from a very few select areas where a microkernel makes sense for the functions being performed, the monolithic argument has won because of what has happened in the real world.
Yet almost all academic research is being performed on micro/exo kernels. Monolithic kernels are going the way of the dodo — it’s just a matter of how long it takes. The commercial world catches up to academia eventually, it’s just usually a few decades late.
Research is happening on exo and micro-kernels because it’s an unknown area compared to monolithic designs. Basically a monolithic kernel is an engineering problem, not a research problem.
Kinda like how building a bridge today is an engineering problem, because all the research into stuff like what structures work for what problem has already been done…
Research is happening on exo and micro-kernels because it’s an unknown area compared to monolithic designs. Basically a monolithic kernel is an engineering problem, not a research problem.
Kinda like how building a bridge today is an engineering problem, because all the research into stuff like what structures work for what problem has already been done…
You betray two common confusions. First, in most disciplines, engineering usually predates scientific research, not the other way round. Second, there’s still a lot of research being done in statics, especially with respect to issues like corrosion and fatigue.
Engineering predates research because it’s in engineering that new problems are first discovered (and often solved, in some way). But then research steps up to take over the asking of “why” and “how” it worked, so that the next time the problem shows up, the engineers don’t have to fly by the seat of their pants…
On the topic of flying, the Wright brothers ended up doing a whole lot of research when their engineering efforts failed because of bad data.
So yes, engineering often predates research. But it’s still research that does the theoretical heavy lifting. They are not in conflict with each other; they fill each other out. The question is when to apply one over the other. And by the looks of it, micro and exo kernels need more research and less engineering.
Corrosion and fatigue are long-term effects. My thinking was basic bridge designs, like when to use wires rather than arches and so on. That is where micro-kernels are right now (from the view I’m getting): trying to go beyond the old “roman” arch and into new ways of building bridges.
Yet almost all academic research is being performed on micro/exo kernels.
That doesn’t in any way prove that monolithic kernels are on the way out. In fact, nothing proves that. And in fact, nothing proves the opposite either. So, what’s your point? And, take it from someone who works in academia, not everything academia does is something that should be followed and/or used in the real world.
Monolithic kernels are going the way of the dodo — it’s just a matter of how long it takes.
Yay, more academic projects. The only way microkernels are going to make it is if we end up getting a Turing computer system with unlimited resources. No matter how much you think you can run a microkernel on a modern system adequately, the boundaries will always be pushed and the resources will always be filled. That’s the way it is.
Apart from a very few select areas where a microkernel makes sense for the functions being performed, the monolithic argument has won because of what has happened in the real world.
You play the success-card in each of these threads, Segedendum, and every time the retort you get is: “does that also mean Windows is the best operating system?”
Every time, people reply to that with: “no, Windows got where it was not because it is the best, but because of factor a, b, c, and d– but NOT because it was the best.”
Why is Windows excluded from the “real-world” argument, but monolithic-ness is not?
Disclaimer: I know Windows got there not because it’s the best, but I needed this for the argument.
You play the success-card in each of these threads, Segedendum, and every time the retort you get is: “does that also mean Windows is the best operating system?”
It was the same topic when you played the same card and got struck the same way.
Windows is not the best, neither is Linux, nor Solaris and god forbid OSX.
Success can be measured by usage. Something that is almost not being used is obviously not successful; being used enough means at least that. Look at forks: why do they go down?
“does that also mean Windows is the best operating system?”
No, but other monolithic OS systems are far better and are still far more successful than any microkernel based system in the wider world.
“no, Windows got where it was not because it is the best, but because of factor a, b, c, and d– but NOT because it was the best.”
If they’d stuck to a microkernel architecture like hanging on to apron strings then no, Windows would not have got where it was. That much is clear.
I don’t know where you’ve been, but this childish debate pre-dates both KDE and GNOME.
In fact, this discussion may well be a whole lot more useless.
When it comes to desktop issues, at least everyone’s opinion matters. And if they do decide to switch, good for them.
I can’t say the same for someone with years of experience writing ‘Hello, World’ in PHP.
ROFL, you are trolling on your own site….
The guy never said KDE vs GNOME or MS bashing are useful arguments or that this piece of news about micro vs monolithic shouldn’t be posted on OSNews.
All the guy is saying is that trying to shove one solution as the best for all applications is a fruitless debate that should end.
“There are zero examples of high-robustness or high-security monolithic systems. ”
By what criteria? OpenBSD is a very secure OS, and it isn’t a microkernel. What about high-reliability OSes like AIX, Solaris and OpenVMS? Some VMS systems have been up with no downtime for 10 years! Please, article writers, don’t confuse opinion with fact.
The type of systems Dr. Shapiro is talking about are in an entirely different league than OpenBSD. He’s talking about systems robust enough to run nuclear reactors, and systems secure enough to use for the most confidential data.
For those not familiar with his work, Dr. Shapiro was one of the implementors of EROS. His goal with his current project, Coyotos, is to design a secure operating system that is not only high-performance (building on some ideas from the L4 projects), but also highly secure and analytically verifiable. The last bit is interesting, because it applies some of the current trendy research into verifiable programming languages to an operating system kernel. Coyotos is being written in BitC, a language designed to lend itself to analysis, with the goal of being able to mathematically verify some of the security properties of the resulting system.
The type of systems Dr. Shapiro is talking about are in an entirely different league than OpenBSD. He’s talking about systems robust enough to run nuclear reactors, and systems secure enough to use for the most confidential data.
And yet he appears to be unaware of such systems as already exist that are robust enough to run reactors or secure enough to reach A status.
It’s amusing to see the clock spin back around to the 70s, but we’ve been over this ground before, and it was sterile the last time.
Solaris has been deployed in both of those cases. It has EAL4+ for RBACPP and CAPP, and, when the Trusted variant is used, LSPP as well. For confidential data, try up to and above the military Top Secret classification; what’s more, it is trusted to enforce data separation and authorised flow between classified and unclassified data.
There are companies trying to do the same with SE Linux (and they will succeed) and there were several other UNIX systems that had labeled data protection functionality in them.
I’m sick and tired of hearing about how robust Microkernels are. In all fairness, microkernels are a really neat idea on paper. In practice however, monolithic systems can be made just as secure and I haven’t seen a single study to show how many man-hours were required to make a secure micro vs. monolithic kernel system.
OpenBSD is very secure, SELinux is quite secure.
Good design and good code are what’s necessary here, and microkernels don’t force either on you any more than monolithic kernels do. A microkernel may have fewer interfaces, but it can be poorly designed nonetheless.
muK against monolithic crap-passing all over again :)
Questions those 3 should really answer are:
Tanenbaum:
1. Why do you think MINIX is not a success (i.e. the people using it can be counted on your fingers)?
2. You can’t forget the fact that Linux (and Linus) won public opinion, can you?
Linus:
1. Why don’t you just say that you like the monolithic design, and that you simply stand behind your choice?
2. I’ve seen bad monolithic designs too; does this make monolithic, should we say,… crap?
Jonathan Shapiro:
1. WTF is Coyotos and who uses it?
2. Many successful muK designs? Which real muK design is still alive and successful (OS X is based on a hybrid, not on a muK), Hurd maybe? Name one single success deployed in millions.
I really hope they all stop pissing against the wind targeting the neighbour’s foot.
Name one single success deployed in millions.
QNX. Symbian. Symbian alone probably has many more users than Linux. I also need to name MINIX – maybe not successful in the number-of-installs kind of way, but definitely successful in that MINIX and its accompanying book are probably read by every aspiring kernel designer.
QNX. Symbian
Massively, yes. But I was taking his sentences about success AND security together. My bad if I was not clear enough.
And those two would be example of security, how?
And those two would be example of security, how?
Well, QNX is secure enough to power parts of the International Space Station, a lot of medical equipment, and the Space Shuttle’s robotic arm. Symbian is secure enough to power loads and loads and loads of mobile phones.
Well, QNX is secure enough to power parts of the International Space Station, a lot of medical equipment, and the Space Shuttle’s robotic arm. Symbian is secure enough to power loads and loads and loads of mobile phones.
1. What you’re saying is called robust and stable, not secure.
2. Windows is running on most computers, but this doesn’t make it secure
3. I wouldn’t mention phones and secure in the same sentence
1. What you’re saying is called robust and stable, not secure.
I don’t think NASA would use insecure software.
2. Windows is running on most computers, but this doesn’t make it secure
What does Windows have to do with this?
3. I wouldn’t mention phones and secure in the same sentence
I’ll give you that.
I don’t think NASA would use insecure software.
Read security bulletins about QNX. Then speak.
What does Windows have to do with this?
[your own words] Symbian is secure enough to power loads and loads and loads of mobile phones. [/your own words] Now, think… let me help you connect the question: loads and loads means secure? Well, this is what Windows has to do with your security :)
“Security” is relative to the environment in which the object exists. NASA could take FreeDOS, put it on a small device, and run calculations on this device in orbit. Is this device secure or insecure? As another example, recall that NASA has done exactly this with Debian. Was that particular release of Debian secure or insecure? In what environment, right? With what usage parameters, right?
I don’t think NASA would use insecure software.
We sure did when I was there and nothing has changed.
I don’t think NASA would use insecure software.
Uh… QNX’s big focus is to be a real time deterministic OS. The examples you gave (ISS, robo-arm, medical equipment) are all applications where the OS needs to be able to interact in real time with the real world. I think that has a lot more to do with the choice of QNX than security.
> I don’t think NASA would use insecure software.
someone there is using Windows 98 or ME to do something mission-critical, you can be sure of that
1. What you’re saying is called robust and stable, not secure.
I don’t think NASA would use insecure software.
What world do you live in? What does security have to do with using it on robotic arms or special hardware elements?
“I don’t think NASA would use insecure software.”
Never heard of VxWorks crashing on the Mars rover because the OS couldn’t load the FAT tables into memory?
You don’t think security is important on mobile phones?! Can’t wait to get a virus on your phone, eh?
You don’t think security is important on mobile phones?! Can’t wait to get a virus on your phone, eh?
When did I say that? At least read the comment before spewing nonsense.
I said phones as they are now are too insecure to be mentioned in the same sentence as security.
“I said phones as they are now are too insecure to be mentioned in the same sentence as security.”
Perhaps you can provide a URL to that comment.
This is the comment I read:
“3. I wouldn’t mention phones and secure in the same sentence”
But now that you clarified your statement, which phones? The ones running Windows CE?
Perhaps you can provide a URL to that comment.
Google is your friend
This is the comment I read:
Well, you should read the complete conversation, not just sentences. How do you read books then? Pick page 157, read the 3rd sentence, and that’s it? You’ve read the book?
But now that you clarified your statement, which phones? The ones running Windows CE?
Pick any phone; the more functions it has, the worse it is. It will be a while more before phone makers start considering security as selling practice. Phones in this era are just the same as OSes in 95 (security? what is that?).
“Google is your friend”
Right. As if Google has already cached your comment within a day. In other words, you didn’t say it.
“Well, you should read complete conversation not just sentences.”
I read the entire thread. You didn’t say it. It’s obvious you didn’t say it if you can’t even provide a link to it. Your obfuscation isn’t helping anyone.
“Pick any phone”
Why should I pick any phone? I’m not the one who made the claim. But to humor you:
http://www.symbian.com/phones/foma_d702i.html
Quote from site: “This is the first 702i series to offer an automatic Security Scan function, which downloads security updates to the phone as and when they become availiable from DoCoMo.”
“Phones in this era are just the same as OSes in 95 (security? what is that?).”
You were saying?
Seriously, are you really so retarded?
With “google is your friend” I was telling you to search for the information yourself. Not that I specified a link; I wasn’t.
In translation: I won’t even bother, since you haven’t provided even a simple sign of intelligence.
Yep, and PR for Windows 95 was what (or any PR? Did you ever read PR which would say our product sucks?)? When the hell was PR to be believed? If you think it is, then you can simply commit suicide and end your life before it becomes miserable.
I’m saying what? Phones and security don’t walk together on the same street. For now they walk on different planets.
‘With “google is your friend” I was telling you to search for the information yourself.’
You know I’m not looking for “information.” You know I’m looking for where you actually said what you claimed you said. Do you really think people reading what you’re saying right now are that stupid? Do you take everyone for a fool? You made one unclear statement about phones and security and claim you said something else. It’s as simple as that.
‘When the hell was PR to be believed?’
That’s beside the point of your assertions so far. You said, “It will be a while more before phone makers start considering security as selling practice.” Not only does the PR go against your state of denial, but you have no proof that the security is not real. Insulating yourself in delusions of expertise isn’t helping you.
You know I’m not looking for “information.” You know I’m looking for where you actually said what you claimed you said. Do you really think people reading what you’re saying right now are that stupid? Do you take everyone for a fool? You made one unclear statement about phones and security and claim you said something else. It’s as simple as that.
Get some oxygen into your brains. You seriously lack it.
[from my previous comment] With “google is your friend” I was telling you to search for the information yourself. Not that I specified a link; I wasn’t. [/from my previous comment] Even Thom, with whom the original discussion started, agreed on that one without me needing to provide any link.
That’s beside the point of your assertions so far. You said, “It will be a while more before phone makers start considering security as selling practice.” Not only does the PR go against your state of denial, but you have no proof that the security is not real. Insulating yourself in delusions of expertise isn’t helping you.
Ok, let’s take on security. Go to Bluetooth hacking pages (find them with Google; you’re just not worth it). They specify how and which models you can hack into.
Second thing, my last phones were in for service three times because of security bugs, which caused them twice to lose all memory (and thus need to be reset) and once to stop working. The second one reacted badly to some text messages, which caused it to set the loudness to almost zero. A firmware update always solved the problem. That’s enough personal phone trouble for me, without needing to search for the same stories on the net to prove to anybody that I’m right.
Java runs on phones. And it is quite common to find Java phone viruses. Check how many there are in the wild.
Simply said, get serious and get some oxygen into your brains with a touch of reality.
That’s beside the point of your assertions so far. You said, “It will be a while more before phone makers start considering security as selling practice.” Not only does the PR go against your state of denial, but you have no proof that the security is not real. Insulating yourself in delusions of expertise isn’t helping you.
Thing you don’t seem to understand is… For now security on phones is a FEATURE, not a MUST BE. As soon as it becomes a MUST BE, security will become selling practice. Until then it is just one of the phone features. So,… no, it doesn’t invalidate what I said even a little bit.
When the MUST BE is introduced (and with it security as selling practice), you’ll see phone makers competing on security like OSes do now.
Learn the difference between those two. It is a giant leap. Look at the Win95 security features, they are specified. But, were they really adequate? Are any?
Now the summary, the dumbest translation, which you might even understand. I said this once or twice already: you don’t provide good enough questions for me to be searching for links to back my comments. If you want to check whether I’m right or not, check it yourself. I simply don’t see the reason why I would bother to prove something to someone who can’t even pose a good question (good question != verbal confrontation or nit-picking). I don’t feel the need to prove anything to you. It is just you who wants this from me.
Ask a better question and I might even bother to look for the backing. Until then,… not worth it.
P.S. I wouldn’t bother to answer if I didn’t enjoy the childish stupidity your comments present. So you can expect an answer every time you desire to continue your senseless beating of the fog.
“Until then,… not worth it.”
Yeah. Not worth the time to prove you’re right, but worth the time to add line upon line of defamatory remarks. Very original. Very believable. Keep polishing that skill of yours.
“So you can expect an answer every time you desire to continue your senseless beating of the fog.”
Here’s your chance, thou without source of worth.
Yeah. Not worth the time to prove you’re right, but worth the time to add line upon line of defamatory remarks. Very original. Very believable. Keep polishing that skill of yours.
Yeah, it’s simple.
Unintuitive conversation? Not worth it.
An enjoyable pissing contest without any target, since the other side doesn’t even know what it’s talking about? At least it’s fun to toy around with people like you.
Since my answers fall into the second category…:)
On the other hand, I think you noticed I took my time to dispute the second part of your comment, which was at least trying to be informative. You just completely missed what “selling security as practice” means in the rest of the world without including your brain.
Here’s your chance, thou without source of worth.
My god, will we be having this pissing contest in poetry now? Have to prepare for the next time then:)
“3. I wouldn’t mention phones and secure in the same sentence”
I assume you are referring to the Symbian viruses, but first, they’re all for the S60 interface, and second, they rely on user dumbness to infect the system: you must accept an unknown file from an unknown device via Bluetooth (which must be on and discoverable) and install the file after being warned that it is not secure, even though the very name of the file reeks of danger (e.g. x23fGr8d.sis).
And as for Symbian Bluetooth vulnerabilities, they all (still?) live in the S60 interface.
There isn’t a single virus for the UIQ Symbian interface, and this version of the OS has really rock-solid stability; it can run for months without a restart, and there are no known vulnerabilities AFAIK.
So, the problem, as someone else said, remains in userspace, at least for this particular example.
…and secure enough for Cisco Internet backbone routers.
QNX.
QNX is a system designed for a specific purpose. That doesn’t mean that the rest of us mere mortals should be using a similar design.
Symbian
Yay. And what a fantastically responsive system that thing is. I still don’t know of any reason why Symbian has actually needed a microkernel and what advantages it gives to the system. Yes, the minimum is necessary and you can add to it, but you can easily add modules in the same way as Linux does. There is not a plus point anywhere for the microkernel design in it.
Forks have many more users than Ferraris; this doesn’t mean a simple fork is better than a Ferrari. User base alone is not a good factor for judging the overall success of an implementation.
I sense this debate is heading in a childish “my implementation has more users than your implementation” direction.
Perhaps the mu-kernel supporters should lead by example and create this fabled uber-kernel. Show us the future, do not tell us the future!
Perhaps the mu-kernel supporters should lead by example and create this fabled uber-kernel. Show us the future, do not tell us the future!
Best… comment… ever… on this one:)
Summary of this topic
I at least know of successful AND secure monolithic kernels; muKs, no.
If I would have friggin’ mod points I would mod you up.
Depends on your needs. You’ll have a hard time eating with a Ferrari… I assure you the simple fork is a better choice.
For that reason, µKs might be great, but they are not the ultimate solution. They are elegant, but their complexity and their lack of performance can explain the lower number of OSes based on them.
Furthermore, I don’t really have major issues with the stability of the available solutions, be it micro, macro or hybrid… That said, I have issues with the stability/quality of applications, something a µK cannot really help.
µK are nice, but they won’t bring peace on Earth or stuff like that.
Name one single success deployed in millions.
QNX.
“1. Why do you think MINIX is not a success (i.e. the people using it can be counted on your fingers)?”
That is, how do you define success? Do you use your moving target of how many users? Is more users than OS X a success, or does it have to be more than Windows?
If you use Tanenbaum’s goal for the project, teaching students about operating systems, then it is a very successful project. You can only measure success if you state the measurement of your success before you do the project.
Minix was originally used for educational purposes. It was quite successful in that area, and has been used across the globe to teach undergraduates how to write various parts of a kernel. If you mean contemporary Minix used for embedded purposes, then you really can answer this question for yourself easily enough: why develop Minix when other people will develop Linux for you, or when you can license third-party operating systems that have the various certifications necessary for your task?
I could probably comment further on other aspects, but I’d like to minimize my involvement in this subject.
I want to try minix 3.
http://www.minix3.org/
“2. Many successful muK designs? Which real muK design is still alive and successful (OS X is based on a hybrid, not on a muK), Hurd maybe? Name one single success deployed in millions.”
I don’t know if it counts, but QNX? Maybe Neutrino as well.
It might be useful to realize that Coyotos is still a gleam in someone’s eye.
oh, and apparently, its developers aren’t particularly familiar with the history of operating systems:
“there are zero examples of high-robustness or high-security monolithic systems.”
is easily refutable by a quick literature search.
I think this is a more honest piece than what Tanenbaum wrote.
When I learned QNX I did use shared structures, but that was more for ease of use than for “being right”. I would have failed the ukernel concept by using shared memory.
That said, I think Linus is still more or less right about the security of monolithic kernels. This latest piece doesn’t change that.
Like many others have said, when you compare the same subset of services that a system provides, be it monolithic or ukernel, you’ll end up with similar results. A ukernel PLUS the extra services/code outside the ukernel that provide those services would just work the same as a monolithic kernel. So, comparing the bare ukernel with a monolithic kernel that contains so much more is comparing apples and oranges. The only way to make this comparison equal is to add the drivers/services outside of the ukernel into the equation.
On the issue of secure systems, a “highly successful/highly secure” ukernel system works by stripping down to only the “mission essential” components outside the ukernel. A monolithic kernel like Linux could be stripped down to the bare minimum too – and I suspect that the result would be of comparable robustness.
You know, I’m troubled by the fact (and I hate playing sides and discrediting, but) that both first paragraphs are incorrect (this article’s, and Tanenbaum’s):
This article first:
Well, it appears to be 1992 all over again. Andy Tannenbaum is once again advocating microkernels, and Linus Torvalds is once again saying what an ignorant fool Andy is. It probably won’t surprise you much that I got a bunch of emails asking for my thoughts on the exchange. I should probably keep my mouth shut, but here they are. Linus, as usual, is strong on opinons and short on facts.
When did Linus call Andy an ignorant fool? Linus certainly ranted, and probably called ideas idiotic, but I don’t remember any libel against Tanenbaum. But I’m sure he was just exaggerating with memories of the 1992 war.
Tanenbaum’s:
When was the last time your TV set crashed or implored you to download some emergency software update from the Web?
I’ve heard of quite a few modern HD devices requiring firmware upgrades to get things working correctly. The fact that they only receive signals from relatively trusted hosts makes the analogy completely misleading, but that’s beside the point that he doesn’t seem to have a lot of experience with modern TVs. They’re a complex nightmare of incompatibility and broken features, much like Windows 95.
Of course, Tanenbaum’s article is supposed to be for everyone to read and think about. But I still don’t think bad analogies help, although I can understand why he’s using them.
Tanenbaum is starting to sell me on muK, though. But I’m not yet convinced that it’s practical.
Maybe he wasn’t thinking about those tellys and smart(ish) phones that run the Windows OS suite.
I think overall he was analogizing with large complex modularised systems where one component failure is contained and has no _direct_ effect on another component – including the ship analogy, a complex modularised analog electronic system, or a modularised digital system.
Even with the ship analogy (the toilet getting blocked), the overall system (the ship) must still allocate resources (send a guy down with a plunger) to reboot that module, and even though there is no direct effect, there are still indirect effects (like a nasty whiff or poo flowing down the corridor) which should be correctable, for example by flushing pipelines or queues etc. (perfume spraying and a swift corridor mopping).
But the other components and the library above the microkernel would be designed to be prepared for a flush or loss of data, and could thus correct for it as best as possible.
It is not as though every module and library would be designed in the same way as a monolithic kernel or library. For this reason, the microkernel is complex but should ultimately be more stable.
“Well, QNX is secure enough to power parts of the International Spacestation, a lot of medical equipment, and the Space Shuttle’s robotic arm. Symbian is secure enough to power loads and loads and loads of mobile phones.”
OK, a hypothetical: I’ve got an AMD64 machine. I want an OS that’ll let me browse the web, listen to music and watch video, work with various peripherals (digital camera, music player). Which microkernel based operating system should I install?
I’m genuinely curious – most of the ones I hear about seem to be for embedded devices rather than for use on a general-purpose machine.
My take: do some research.
Thom, you obviously don’t know what you’re talking about. Please do some research before stating such things. I don’t know about QNX’s stability, but it’s certainly not a model of security (have a look at some iDefense advisories about QNX from not long ago).
Furthermore, stability and security are not necessarily the same goals. In order to mitigate the impact of a vulnerability, it’s often a better strategy to let an application crash (or even the whole system), instead of leaving a running system to the attacker. That’s the idea behind most OpenBSD exploit mitigation techniques. In the end, security of operating systems depends on code quality and there always will be bugs. A bloated and overengineered microkernel design won’t help.
This whole discussion about the security of microkernels is mostly academic. Every system is secure as long as nobody tries to attack it (maybe simply because it’s not interesting enough…).
I’m really interested in Matt Dillon’s take on this story. I’m not a developer, but to me it seems that the concepts Dillon introduces in Dragonfly are similar in some respects to the concepts behind microkernels. He also comes from an Amiga background, so it would be interesting to see what he thinks of this debate…
Thom?
Is there a working example of a highly-robust and highly-secure mu-kernel?
XTS-400 is basically a microkernel (they call it a “security kernel”): http://niap.nist.gov/cc-scheme/st/ST_VID3012a-ST.pdf
Calling XTS-400 a uKernel would be entirely incorrect. Really, it’s a monolithic kernel with hardware-enforced 4-ring layering. It doesn’t share many of the common architectural features of microkernels – e.g., servers that perform primary system functions – and it doesn’t focus heavily on message passing.
OSS is different from a server how? And microkernels don’t necessarily have to use message passing — L4 is built on synchronous RPC.
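To make the distinction concrete, here is a minimal sketch of the call/reply pattern in plain POSIX C. This is emphatically not L4’s actual API – just an illustration under the assumption that a “synchronous RPC” is a send immediately followed by a blocking receive, so the message exchange behaves like a procedure call:

/* Sketch of synchronous call/reply over message passing (NOT the L4 API). */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

struct msg { int op; int arg; int result; };

int main(void) {
    int req[2], rep[2];
    if (pipe(req) < 0 || pipe(rep) < 0) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                        /* "server": a trivial square service */
        close(req[1]);
        close(rep[0]);
        struct msg m;
        while (read(req[0], &m, sizeof m) == (ssize_t)sizeof m) {
            m.result = m.arg * m.arg;      /* handle the request */
            write(rep[1], &m, sizeof m);   /* send the reply */
        }
        _exit(0);
    }

    /* "client": the write plus the blocking read is the synchronous call */
    close(req[0]);
    close(rep[1]);
    struct msg m = { .op = 1, .arg = 7, .result = 0 };
    write(req[1], &m, sizeof m);
    read(rep[0], &m, sizeof m);            /* blocks until the server answers */
    printf("square(7) = %d\n", m.result);

    close(req[1]);                         /* EOF lets the server loop end */
    waitpid(pid, NULL, 0);
    return 0;
}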
The total complexity argument that Linus touts is, to use his own expression, bull. What matters security-wise is the complexity at any given point, which usually is the size of its context. In a muK design the contexts (or “access spaces”) are much smaller than in a monolithic system.
However, it is of course possible to design a highly modular monolithic system, too, but that doesn’t necessarily help much unless your code is completely bug-free.
Unless there’s a real-world implementation, all of this debate is little more than a bunch of (educational, I’m sure) hot air and comparative reproductive organ waving.
The fact is, most useful operating systems being written today use hybrid kernels. Why? Because that seems to be the most pragmatic approach on mainstream desktop and server hardware.
Will that change? Sure. But it hasn’t, at least today.
The micro vs. mono kernel debate is really useless. It’s just fun for those who chime in or read the debate.
Bottom line, both designs are proven successful.
Micro – QNX, Symbian, Minix,
Monolithic – Linux, BSDs, Solaris, AIX, VMS
Hybrid – AS400, Windows, OSX
Bottom line, both designs are proven successful.
Micro – QNX, Symbian, Minix,
Monolithic – Linux, BSDs, Solaris, AIX, VMS
Hybrid – AS400, Windows, OSX
[couldn’t resist] Both, as in all three? [/couldn’t resist]
Yes, such a debate is useless. It’s more useless in this case because, as with many topics, people freely discuss technical matters with little education on the subjects in question as a prerequisite. One will rally behind a $FAMOUSPERSON and throw feces at the other. One will talk about one ideal oversimplifying the matter, and the other will talk about another ideal oversimplifying the matter.
This might as well be an argument about $PROGRAMMING_LANGUAGE and Java. Personally I kind of feel for QNX, because it didn’t do anything to get tossed around in a revitalized flamewar. It’s executing here and there, minding its own business, and suddenly it’s a key witness in the trial of TEH CENTURY.
No one has yet commented on FreeBSD. I’m curious – is it a microkernel or monolithic? I seem to recall reading long ago that it was a microkernel, but I could be wrong.
OK, I found the answer to my own question. FreeBSD is monolithic:
http://lists.freebsd.org/pipermail/freebsd-arch/2005-February/00349…
The fact that this issue is even being debated is a sign of the immaturity of computing. Computer science will not come of age until and unless a single software model is universally adopted. We are doing it wrong, and we’ve been doing it wrong ever since Lady Ada wrote the first table of instructions (algorithm) for Babbage’s hand-cranked analytical engine, a century and a half ago! Linus and everybody else are simply out to lunch. Soon, computing will change in a fundamental way and the old programmers will be left at the curb.
Software model? Do you mean architecture, process model?
Could you possibly clarify what you mean?
Software model? Do you mean architecture, process model?
Could you possibly clarify what you mean?
I mean that most people do not know the true nature of computing and I fault our infatuation with Alan Turing for this sad state of affairs. A computer is not a machine for performing sequential calculations. That’s just one of the benefits. A computer is a behavioral machine, that is, an automaton that detects changes in its environment and effects changes in it. As such, it belongs in the same class of machines as biological nervous systems and integrated circuits. A basic universal behaving machine consists, on the one hand, of a couple of elementary behaving entities (a sensor and an effector) or actors and, on the other, of an environment (a variable). Furthermore, in order for a program to act on and react to its environment, sensors and effectors must be able to communicate with each other.
The point is that, even though communication is an essential part of the nature of computing, this is not readily apparent from examining a UTM. Indeed, there are no signaling entities, no signals and no signal pathways on a Turing tape or in computer memory. The reason is that, unlike hardware objects which are directly observable, software entities are virtual and must be logically inferred.
The above argument was partially lifted from the site below. This is a shameless plug, I know. But until we come to understand the true nature of computing and change our current ways of doing things, the world will continue to pay a heavy price for buggy systems and low productivity.
Why Software Is Bad and What We Can Do to Fix It:
http://www.rebelscience.org/Cosas/Reliability.htm
What you seem to be describing is an event-driven system. Correct me if I am wrong. Such systems are already in use today. It is a system of semi-independent objects that can emit and react to signals in an asynchronous manner. An object emits a signal, and then other objects in the system respond appropriately.
Well, almost every application with a graphical user interface works this way. How well developers exploit such systems is a discussion for another day. However, I try to exploit such systems when designing GUIs, and they too aren’t fault-tolerant or bug-free. The biggest problem in my experience stems from managing state information for each object. You can introduce all sorts of benign bugs if you don’t do it properly during error recovery.
I like event-driven systems a lot, but they too introduce their own share of problems (managing and keeping track of state changes per object).
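For what it’s worth, here is a minimal sketch of the signal/observer wiring being described, in plain C with function pointers. It is purely illustrative – not GObject or any real toolkit API: objects connect handlers to a signal, and emitting the signal just walks the handler list.

/* Minimal sketch of a signal/observer mechanism (illustrative only). */
#include <stdio.h>

#define MAX_HANDLERS 8

typedef void (*handler_fn)(void *user_data, int event_value);

struct signal {
    handler_fn handlers[MAX_HANDLERS];
    void      *user_data[MAX_HANDLERS];
    int        count;
};

/* Connect a callback to the signal. */
static void signal_connect(struct signal *s, handler_fn fn, void *data) {
    if (s->count < MAX_HANDLERS) {
        s->handlers[s->count]  = fn;
        s->user_data[s->count] = data;
        s->count++;
    }
}

/* Emit the signal: every connected handler reacts to the event. */
static void signal_emit(struct signal *s, int event_value) {
    for (int i = 0; i < s->count; i++)
        s->handlers[i](s->user_data[i], event_value);
}

static void on_temperature(void *user_data, int value) {
    printf("%s observed temperature %d\n", (const char *)user_data, value);
}

int main(void) {
    struct signal temperature_changed = { .count = 0 };

    /* Two independent "objects" subscribe to the same event. */
    signal_connect(&temperature_changed, on_temperature, "logger");
    signal_connect(&temperature_changed, on_temperature, "display");

    /* A change in the environment is an event that triggers reactions. */
    signal_emit(&temperature_changed, 42);
    return 0;
}

The state-management pain mentioned above shows up as soon as handlers start mutating shared state in response to events, which is exactly where the benign bugs creep in.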
What you seem to be describing is an event-driven system. Correct me if I am wrong. Such systems are already in use today. It is a system of semi-independent objects that can emit and react to signals in an asynchronous manner. An object emits a signal, and then other objects in the system respond appropriately.
Not exactly. What I’m describing is a non-algorithmic, signal-based, synchronous software model where every change is an event that may or may not trigger one or more synchronous events. The synchronous part is essential. There are several synchronous reactive programming languages out there (e.g., Esterel, Signal, etc…) but they don’t go far enough, i.e., down to the CPU instruction level.
What I’m describing is a non-algorithmic
If the system is non-algorithmic, how are calculations performed?
signal-based, synchronous software model where every change is an event that may or may not trigger one or more synchronous events.
Yep, this is exactly how “proper” GUI programming is done. On Linux, GObject provides a signal-based framework that can give you this behavior in applications. The thing I like about such systems is their propensity for allowing concurrent programming, and also for permitting rich behavior, dynamism and intelligence in applications. Not enough developers take advantage of it, though, from what I’ve seen. I’d wager almost every GUI/gaming toolkit provides a signal-based system.
I’ve only just started reading up on COSA, but the more I read it the more I think of it as an event driven framework. The only part that confuses me is the non-algorithmic aspect of the system.
The fact that this issue is even being debated is a sign of the immaturity of computing.
No. It is a sign of the failure of academic computing.
Debates like this come up periodically because people never seem to remember the outcome of the last time.
Something about failing to learn and repeating comes to mind here.
You’re mixing up design methodologies with theories of computing. And neither one needs to be unified for computer science to be considered a mature field. There are different theoretical models of computing (eg: lambda calculus and Turing machines), just as there are different theoretical models of physics (string theory, the standard model). There are also different design methodologies, just as there are different design methodologies in “real” sciences (eg: stable versus unstable aircraft).
That rebelscience link is really kooky.
Well, Thom, that is not the correct solution. If there is a bug in a driver, fix it (not you, but the developers). Why should I have a slower system because of badly coded drivers? If they are crash-prone, the problem is not in the kernel design, but in the driver itself. QNX may be a super-stable OS, but there are specific uses for it, like you said: the ISS, power plants and the like.
But for grandma’s PC, a practical solution that works well is better. If the driver fails, then it has to be fixed. I mean, come on, how many crashes can you expect from a driver? I’ve only had one crash, from the reiserfs driver, because of a corrupted disk. The screen went all black, and a panic message appeared. I was able to fix the problem by hand and I’m still using the same disk. How could a driver automatically solve those complex problems for me and continue to work without fairly complex logic? I prefer a crashy driver (that is fixable) over a smart one that can make me coffee, adding more code (and probably more bugs).
QNX is secure enough to power parts of the International Spacestation, a lot of medical equipment, and the Space Shuttle’s robotic arm.
To be honest, I don’t see that as evidence of security, although I’m prepared to believe that the security of these OSes is good nonetheless…
That equipment is all air-gapped from the internet, and only accessed by trusted users, so security is a non-issue AFAICS, relative to predictability and reliability which obviously must be very good.
Hybrid kernels are clearly the best. You get the best of both types of kernels. I mean really this is so silly.
Why not have the best features of both monolithic and micro (speed and reliability)?
Only in the liberal open source hippie world would anyone even care. There are a lot of other things in life that are more important than silly computers guys.
Get away from the computer and live in the real world. Pretend it’s your last month of life and that you can do what you can to do good for others and enjoy life outside in the real world and not in a lousy computer monitor.
In my opinion, the philosophy of the monolithic kernel is simply compatible with that of the micro-kernel. In a monolithic kernel, nothing stops you from moving as much of a driver to userspace as possible, as long as you can make it work technically. (That is, leave as small a stub as possible in kernel space.)
But in a real micro-kernel, by definition you have no means to do the reverse.
Because I distrust any one-true-way, I think it is apparent which one is better.
I think once DRM and TC (Trusted Computing) become endorsed by restrictive laws, then for mainstream purposes you’re going to have to use the kernel you are told to use.
As the monolithic Linux kernel is an open and liberal system encouraging honesty and fairness then it is unlikely that this will be endorsed, as third parties have no control over it and therefore it cannot contain Digital Restrictions Management or be Trusted + Controlled by third parties.
http://lobby4linux.com/WordPress/?p=94
DRM : Digital Restrictions Management, restricts what you are allowed to do with your own property (sometimes renamed to the more sublime Digital Rights Management).
Trusted Computing : Trusted and Controlled not by yourself, but by third parties.
Cathedral and the bizarre of the linux world?
No. The cathedral relates to one group developing all the parts of the system. The bizarre is when different groups develop the different parts and then someone puts them together to create the whole system.
Cathedral and the bizarre of the linux world?
I’m serious with this question, here, but did you mean “bizarre” (as in strange and beyond belief) as a joke, or did you mean “bazaar” (as in a semi-chaotic market)?
I am not sure whether your intent was comical or not…
I’m serious with this question, here, but did you mean “bizarre” (as in strange and beyond belief) as a joke, or did you mean “bazaar” (as in a semi-chaotic market)?
I am not sure whether your intent was comical or not…
I think he is referring to the Linux/OSS book The Cathedral and The Bazaar by Eric S. Raymond:
http://www.amazon.com/gp/product/0596001088/qid=1147443946/sr=2-1/r…
Has this guy heard of VxWorks? It’s modular, but not based around a microkernel design. With the exception of some recently supported targets, everything even runs within the same address space.
Driving rovers around on Mars, I’d say it’s a reliable OS.
http://fred.cambridge.ma.us/c.o.r.flame/msg00025.html is still worth reading too.
http://fred.cambridge.ma.us/c.o.r.flame/msg00025.html is still worth reading too.
Almost 14 years to the day later.
I enjoyed participating in that thread, and I wonder how many of us are still in the OS game now.
Well, Plan 9 is still alive and kicking – at its own pace.
Still – as 14 years ago – it provides a much simpler, yet general approach, obviating special cases like FreeBSD jails, SELinux, ad hoc kernel interfaces/syscalls/device config tools, networked sound/gfx/printer daemons, service distribution, and many, many other things that fatten other kernels or require special userspace daemons because they aren’t unifying kernel/service/filesystem interfaces.
“any crash of a device driver will even make a microkernel OS useless.”
This is absolutely FALSE!!!
In a microkernel, if there is a bug in a device driver:
1) if that device is being used in the current work, then there is a chance that restarting the driver makes everything usable again, while a monolithic kernel will go down with no chance of recovery;
2) if that device is not being used in the current work, then the current work is 100% unaffected by the bug in that driver, and the kernel will try to restart the driver, while a monolithic kernel will go down even though that device is not being used.
In both situations the microkernel BEATS the monolithic kernel.
For example, if a floppy driver has a bug, it will bring the whole of Linux down, while on a microkernel the system stays perfectly usable and the floppy driver gets automatically restarted. Even if it still causes problems after being restarted, it won’t affect the current normal work or the kernel until the floppy is actually used. There the microkernel stands!!!
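To make the restart idea concrete, here is a toy sketch of a “reincarnation server” style supervisor in C. It is purely illustrative – not MINIX 3’s actual implementation, and “./floppy_driver” is a made-up placeholder: the driver runs as an ordinary process, and whenever it dies the supervisor simply respawns it while the rest of the system keeps running.

/* Toy sketch of a driver supervisor: restart the "driver" process when it dies. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static pid_t start_driver(const char *path) {
    pid_t pid = fork();
    if (pid == 0) {
        execl(path, path, (char *)NULL);   /* run the driver binary */
        _exit(127);                        /* exec failed */
    }
    return pid;
}

int main(int argc, char **argv) {
    /* "./floppy_driver" is a hypothetical placeholder binary, not a real driver */
    const char *driver = (argc > 1) ? argv[1] : "./floppy_driver";

    for (;;) {
        pid_t pid = start_driver(driver);
        if (pid < 0) { perror("fork"); return 1; }

        int status;
        waitpid(pid, &status, 0);          /* wait for the driver to exit or crash */

        if (WIFSIGNALED(status))
            fprintf(stderr, "driver crashed (signal %d), restarting\n",
                    WTERMSIG(status));
        else
            fprintf(stderr, "driver exited (status %d), restarting\n",
                    WEXITSTATUS(status));

        sleep(1);                          /* simple back-off before respawn */
    }
}

MINIX 3’s reincarnation server does roughly this idea for its drivers and system servers, with dependency tracking and restart policy on top.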
NASA even uses Windows, which Linux guys argue is not secure. So NASA using Linux doesn’t mean that Linux is highly reliable and secure; the only reason is that Linux is free from license problems because of the GPL, and it’s popular.
Everyone should start developing on a microkernel, whether it’s MINIX or L4.sec or Coyotos.
A microkernel can be successful only if it has a huge developer base like Linux. If all the Linux developers turned to the microkernel side, then everyone would realise the power of a microkernel OS.
BS. Again, it is impossible to prove that your code is correct (doesn’t have bugs).
Even when your code is so small that you can show empirically that it is safe, you still have to call it through a function, and there can lie lots of bugs (concurrency and the like).
The calling function will be running in user space, so bugs in the calling function won’t affect the kernel.
Yes, you have to have a good, provably correct algorithm before making it fast. You just forgot the part about “making it fast”.
Depending only on the hardware to become faster means that your kernel will stay in a niche at best, and can never compare to Linux.
I am not saying that only the hardware controls the speed. I just said that the performance lost due to the microkernel approach, since it’s safer, is not that important these days or in the days to come.
Incorrect. BSD was there and free already. People were instead waiting for a free OS that commercial entities could not steal from them, because stealing the kernel would also mean stealing their work. So I strongly believe that yes, the GPL license made a big push, but so did the fact that the kernel was monolithic.
BSD was available, but the BSD source code was the subject of a lawsuit filed by AT&T.
Everyone knows it’s all because of the GPL. The only reason everyone started adding code is that GNU/Linux was a completely free OS which had the complete GNU utilities, and it was usable. The monolithic kernel approach made good programmers lazy and made them switch from HURD to the Linux kernel. I can say that a test version of GNU/HURD was not released before a test version of GNU/Linux was released.
I disagree, just look at the HURD. It does not attract the load of people you talk about. Even though it’s GPL too.
HURD was GPL too, but at that time it was developed by a small group. Code from people outside that small group was not allowed to be added to the HURD project, because the kernel design was still being finalized.
Insulting great minds like those working on the Linux kernel (among them A. Cox) does not put me in your camp.
You make it sound like Linux isn’t developed by great people.
I accept that they are good programmers. Linus made them lazy by starting a fun monolithic kernel.
This is clearly shown by the fact that Linus himself said, in his first newsgroup message about the release of Linux, that HURD would be ready the next year and that until then everyone could play with his kernel. He never planned to create a very good, futuristic kernel; he just created an easy, playable kernel, and because of the GPL everyone was allowed to add code to the Linux kernel – unlike HURD, which at that time didn’t allow anyone to add code to the main project – so everyone added code to the Linux kernel. As it slowly gained popularity, Linus CUNNINGLY changed his tune when he saw many programmers turn their faces to the easy monolithic kernel. He then started to act as if he had designed the Linux kernel to be highly reliable.
BS. Nothing guarantees that an OS or kernel is 100% perfect, portable and secure. You live in a dream world.
And management from Linus was very important in making Linux what it is now. Tanenbaum would have bowed when IBM and others wanted to put their things in the Linux kernel, or would have bowed before proprietary drivers. I say this because of how he licensed Minix.
All Linus wants now is that no programmers should turn to the microkernel side, as he fears that microkernels might change the future – and they will.
BS. RMS has the HURD, and it has already been redesigned at least once. You talk as if all this were possible just like that. You’re dreaming, wake up!
Nowadays systems programmers and researchers have become very few; that’s the big problem. All good programmers, please turn to the microkernel side and start growing your creativity.
Well, Linus only wanted to make an OS for _himself_ that “just works”. In the beginning it was not meant to run on thousands of architectures.
It was his own personal choice and his hobby project, nothing more.
So if you want to advertise microkernels, stop debating with Linus and create a new, well-designed (e.g. with proper APIs, lol) microkernel-based operating system and promote that. Start a project and invite smart people to join…
Well, I guess there is one basic problem with this heated debate. Microkernel advocates tend to assume that uKs are “component-structured systems” and monolithic kernels are not. That is a false argument – monolithic kernels are as “component-structured” as microkernels.
Just a side note – even a statically linked userland program can be “component-structured”, whereas a binary using a lot of separate .so libraries can be an unstructured mess.
From a software maintenance point of view, what really matters is how the sources look, not how the composition of “components” is implemented.
In the end, the main difference between a real uK and a monolithic kernel is whether the VM and the process scheduler share a single address space with the “hardware abstraction layer”, or are isolated.
I do not see ANY advantage in separating the VM and the scheduler.
I found Shapiro’s comments and Tanenbaum’s article interesting.
I think Linus should be careful – he’s really rather inexperienced. Sure, he has deep experience of *his* system, but he has relatively narrow experience of computing in general (hence historically bizarre attitudes to DBMS systems, raw IO and AIO, and the OOM issue), and rather limited experience of operating systems.
Tanenbaum and Shapiro *are* widely experienced and well informed, and I think off-handedly writing off such experience is bizarre, particularly when they are talking about operating system research topics. After all, if Linus wants to use deployment numbers as an indicator of universal truth, then he’s not in an ideal situation himself.
One point I would note is that in a microkernel design such as Tanenbaum describes, with device drivers isolated and only specific communications allowed, many of the objections that are levelled at BLOB drivers are rendered a lot less powerful. If the 10% performance hit from such isolation means a practical way to manage such drivers, then I’m all for it.
Well, Linus still managed to make a successful OS, while Shapiro tried to make EROS and finally abandoned it to start another one, Coyotos – so be careful when talking about experience…
Well Linus still managed to make a successful OS
Nit: Linus copied a successful OS.
In that sense Windows is copied from VMS, OS X from Unix, etc. Almost all modern OSes are copied under your definition.
Linux did a hell of a lot more than just copy an OS, and the fact that corporations, governments and individuals are using it testifies to that. After all, they could be using BSD instead, since it was first.
> Linux did a hell of a lot more than just copy an OS, and the fact
> that corporations, governments and individuals are using testifies
> to that. Afterall, they could be using BSD instead since it was
> first.
The adoption of Linux over BSD was based on political issues in the
early development of *BSD. If you look at a timeline, you will see
that the 386 BSD was the contender against Linux at the time, (I have
heard that the project was moving very slowly due to a lack of
consensus on technical issues, and many of the developers wanted it to
stay as a research operating system) and that FreeBSD and NetBSD were
not available until 1993. Additionally, adoption of all BSD derived
operating systems, (and the respective communities of 386 BSD,
FreeBSD, and NetBSD) were devastated by the USL lawsuit. By contrast,
the SCO lawsuit occurred after the Linux community was extremely large,
and had political support from companies. So it’s not surprising that
the issue was quickly resolved. The situations in my view were quite
different.
I’m not saying that Linux is bad, but I do not think technical issues
were the reasons why the BSD’s did not become the main open source
*nix, but rather poor timing and the USL lawsuit.
Sources:
http://en.wikipedia.org/wiki/Berkeley_Software_Distribution
for the timeline.
http://en.wikipedia.org/wiki/386BSD
Linux did a hell of a lot more than just copy an OS, and the fact that corporations, governments and individuals are using testifies to that. Afterall, they could be using BSD instead since it was first.
It did one thing better than BSD: advertising. I can remember, at the Seattle OSDI, trying to explain to McKusick why so many people preferred Linux when, at that time, BSD was far superior technically.
Er, no, I lie. It did two things better than BSD: it copied, rather than innovating. GNU was always meant to be a copy of Unix, and the only place the FSF tried to innovate was the Hurd…
Surely you cannot argue that Tanenbaum is inexperienced?
I’d not be too concerned about researchers deciding they’ve gone up a blind alley either – they are, after all, trying out unusual approaches, while Linux is more ‘me too’.
Most interesting I’d think will be Dragonfly, which has some of the microkernel-esque message passing but in the same address space, at least currently.
I see it as the usual clash between academic purity and correctness on one side and real-world pragmatism on the other.
Microkernels try to have nice, simple designs. Linux and the BSDs get stuff done, for a lot of people, on a daily basis, even though they do not have a state-of-the-art design.
It’s similar with databases, or software development lifecycles. In DB class, they teach you to normalize your data. Then you start working in a company, build a warehouse, and the first thing you do to increase performance is de-normalize the data. In SW dev class, they teach you about development lifecycles: requirements, specifications, tests, reviews, etc. Then you join a software company and notice that first-to-market and market share are much more important.
The core of this micro vs. monolithic debate seems to be, to me at least, different goals and viewpoints. Both sides should try to comprehend this and then play along nicely again.
“So, the trouble is that with muK people seem to be thinking it will be acceptable to have buggy user-space kernel apps: This won’t be acceptable. ”
I don’t agree with that! I want my drivers and kernel modules to all be perfect! I like uKernels because they are (or SHOULD be) simple and clean, with a well-defined method for obtaining basic system resources so that “add-ons” can be comfortably coded.
I want ULTRA-stability and ULTRA-security.
Richard Stallman is the man!
“The performance loss [of microkernels] is (guess) less than a factor of two. If you worry about factors of two, I humbly suggest you write all your code in assembler henceforth.”
—
Andrew Tanenbaum (1992)
If robustness, security, and abstraction are the primary concerns,
then wouldn’t it be more beneficial to push for operating systems
being written in garbage collected languages than to stress a specific
kernel organization methodology? Of course, garbage collection has
issues all its own, but it would lower the bar for developers to get
involved in operating system development, and it would be easier to
implement features more quickly, without memory leaks. This does not
mean that existing operating systems lack these qualities, but that
developers of new operating systems should consider this approach.