Linked by Thom Holwerda on Mon 21st Nov 2011 11:25 UTC, submitted by moondevil
You all know MINIX - a microkernel operating system project led by Andrew Tanenbaum. The French Linux magazine LinuxFr.org has an interview with Andrew Tanenbaum about MINIX' current state and future. There's some interesting stuff in there.
reliability argument
by orsg on Mon 21st Nov 2011 11:50 UTC
orsg
Member since:
2011-02-09

I know microkernels definitely are the nicer architecture, but it's horrifying to read people's attempts to find a practical advantage for the cleaner structure.

Reliability is the one named most often. Something like "we can replace everything during runtime, so our computer never has to be rebooted, which is why it can run forever"... So what? Can MINIX deal with burning CPUs or RAM banks? Every sane person who has to deliver extraordinary uptime will go for a distributed system, where nodes can go up and down dynamically without impacting availability of the whole cluster. And once you have that ability, it doesn't matter at all whether you have to reboot one node for an upgrade or not. In such environments failing nodes are not the exception but the rule.

Reply Score: 4

RE: reliability argument
by reez on Mon 21st Nov 2011 12:16 UTC in reply to "reliability argument"
reez Member since:
2006-06-28

Hmm, maybe you think too much about servers in certain environments. I am definitely not deep into this topic, but what about, for example, embedded systems that need rapid updates and can't simply or cheaply be taken offline?

For example everything that is space-based, but also robots/drones or some bigger infrastructure (be it for telecommunication or measuring <something>) where you don't want to physically visit (or even restart) everything when you need to update. I don't really think MINIX targets the server market, and with Linux, BSD and to a certain degree Solaris and even Windows there are more than enough options available. However, they all develop in a certain general-purpose direction that may fit most situations, but certainly not all of them. It can be a huge relief to find something that "just fits" a certain situation, and in some cases that may be MINIX.

In some situations lots of backup systems can be too costly.

In other words I am a huge fan of diversity. ;)

Edited 2011-11-21 12:18 UTC

Reply Score: 5

RE[2]: reliability argument
by Bill Shooter of Bul on Mon 21st Nov 2011 16:43 UTC in reply to "RE: reliability argument"
Bill Shooter of Bul Member since:
2006-07-14

You can do online kernel updates without requiring a microkernel architecture. However, it's obviously more difficult.


http://www.ksplice.com/

Reply Score: 3

RE[2]: reliability argument
by phoenix on Mon 21st Nov 2011 18:03 UTC in reply to "RE: reliability argument"
phoenix Member since:
2005-07-11

The only market MINIX targets is education. Its sole purpose in life is to make teaching OS internals, micro-kernel internals, and similar topics easier. It's small, easy to understand, and teachable. Nothing more.

There's virtually no software available for it.

Reply Score: 2

RE[3]: reliability argument
by cmchittom on Mon 21st Nov 2011 18:29 UTC in reply to "RE[2]: reliability argument"
cmchittom Member since:
2011-03-18

The only market MINIX targets is education. Its sole purpose in life is to make teaching OS internals, micro-kernel internals, and similar topics easier. It's small, easy to understand, and teachable. Nothing more.


Somebody obviously hasn't looked at the MINIX web page[1]. Your contention was true for version 2 (and presumably version 1), but:

MINIX 3 is initially targeted at the following areas:

<ul><li>Applications where very high reliability is required</li>
<li>Single-chip, small-RAM, low-power, $100 laptops for Third-World children</li>
<li>Embedded systems (e.g., cameras, DVD recorders, cell phones)</li>
<li>Applications where the GPL is too restrictive (MINIX 3 uses a BSD-type license)</li>
<li>Education (e.g., operating systems courses at universities)</li></ul>


And as for where you say

There's virtually no software available for it.


Except that it's POSIX compliant, so well-written Linux/BSD software should (theoretically) be just a compile away. (I'm guessing that Your Mileage May Vary, though.) In particular, the site lists Emacs, which is certainly 75% of what I need. ;)

Don't get me wrong, I won't be switching to MINIX anytime soon. But the reasons you brought up aren't valid ones for not switching.

[1] http://www.minix3.org

Edited 2011-11-21 18:29 UTC

Reply Score: 3

RE[4]: reliability argument
by jessesmith on Mon 21st Nov 2011 22:59 UTC in reply to "RE[3]: reliability argument"
jessesmith Member since:
2010-03-11

In _theory_ MINIX should be able to run the same software as Linux or BSD if the user is willing to compile. In practice that's far from the truth. A lot of software, even trivial software, won't compile and run "as is" on MINIX. A while back I tried to port some small apps from Linux to MINIX. Eventually I got them to compile, but they wouldn't run properly. A lot of little things are different enough to make porting a hassle.

MINIX is an interesting little system, but it doesn't really offer anything over Linux or FreeBSD, except as a learning tool.

Reply Score: 4

v MicroKernel's
by hackus on Mon 21st Nov 2011 12:01 UTC
RE: MicroKernel's
by Thom_Holwerda on Mon 21st Nov 2011 12:08 UTC in reply to "MicroKernel's"
Thom_Holwerda Member since:
2005-06-29

So in my opinion, if the research community really thinks MicroKernel's are better, there would emerge a consensus on how to do it.


As opposed to all the consensus on how to design a monolithic kernel...?

Reply Score: 4

RE[2]: MicroKernel's
by Valhalla on Mon 21st Nov 2011 19:00 UTC in reply to "RE: MicroKernel's"
Valhalla Member since:
2006-01-24

Microkernels offer the possibility of system stability should one of their components fail. The cost is performance. Despite Thom's claims that the performance degradation is 'slight', anyone who knows how micro-kernels operate understands that there's nothing 'slight' about this loss of performance.

Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data.

As to whether or not this stability is worth the performance loss, it all depends on how important the stability is and, of course, how unstable the more performant non-micro-kernel designs are.

Now, obviously the market has shown that the non-micro-kernel based operating systems are stable enough that people would rather have the performance. There are certainly cases where extreme stability is of utmost importance, and in those areas micro-kernels certainly have a lot to offer, but for general OS demands it's obviously not worth the loss in performance, as the demand for micro-kernels is very low.

Now, from a purely architectural standpoint I find micro-kernels more elegant; from a practical standpoint I prefer the performance, since my non-micro-kernel operating system isn't prone to crashing. And it seems the vast majority of the computing world agrees.

Reply Score: 4

RE[3]: MicroKernel's
by Neolander on Mon 21st Nov 2011 21:47 UTC in reply to "RE[2]: MicroKernel's"
Neolander Member since:
2010-03-08

Who said that microkernel-based OSs cannot use shared memory?

As long as shared memory blocks are handled with an appropriate amount of care (it is untrusted data from the outside world, it should be used in a thread-safe way, etc.), and as long as shared blocks can survive the crash of a subset of their owners without being lost to the other owners, I have no problem with it myself.

Edited 2011-11-21 21:47 UTC

Reply Score: 1

RE[3]: MicroKernel's
by Alfman on Mon 21st Nov 2011 22:05 UTC in reply to "RE[2]: MicroKernel's"
Alfman Member since:
2011-01-28

Valhalla,


"Microkernels offer the possibility of system stability should one of their components fail. The cost is performance."

"Having to communicate through messaging is MUCH slower than communicating through shared memory. Passing the actual data is MUCH slower than passing a pointer (address) to that data."

It depends on the IPC mechanisms used. Writing into pipes is slow, but who says we couldn't use shared memory to do IPC, allocated specifically for that purpose?


Also, in one of Neolander's OS articles, we talked about using a managed language to provide microkernel semantics within a shared kernel address space. The language would enforce isolation, but objects could be explicitly transferred between modules without incurring any of the traditional context-switching costs.


So I do think microkernels have a chance to be competitive on performance, but I doubt they can make any inroads into the already crowded market since the established macrokernels are good enough.

Reply Score: 3

RE: MicroKernel's
by B. Janssen on Mon 21st Nov 2011 12:21 UTC in reply to "MicroKernel's"
B. Janssen Member since:
2006-10-11

It is important, because what they don't tell you in a lot of these articles, is that without an agreement of how to do MicroKernel's, hardware manufacturers like Intel, won't invest the billions in hardware to speed them up.

Which is why MicroKernels can't hold a stick to Monolithic ones at the moment.


That's all well until you consider that XNU (Mac OS X) and NTOSKRNL (among others, Windows 7) are both NOT monolithic kernels. They are not microkernels either; they are what some call macrokernels. But by your logic that would just mean that both monolithic and microkernels should get the shaft. They don't, because the hardware does not care. Micro, macro, mono -- it's all the same to your garden-variety AMD64 CPU, and you know why.

Reply Score: 3

RE[2]: MicroKernel's
by smashIt on Mon 21st Nov 2011 14:35 UTC in reply to "RE: MicroKernel's"
smashIt Member since:
2005-07-06

That's all well until you consider that XNU (Mac OS X) and NTOSKRNL (among others, Windows 7) are both NOT monolithic kernels.


You have to be more precise with the NT kernel:
in the beginning it was not a real microkernel, but pretty close to one;
after NT4 they went more monolithic;
and since Vista they have been moving back toward the micro side.

Today Win 7 even survives a crash of the graphics driver.

Edited 2011-11-21 14:35 UTC

Reply Score: 4

RE[3]: MicroKernel's
by lucas_maximus on Mon 21st Nov 2011 17:39 UTC in reply to "RE[2]: MicroKernel's"
lucas_maximus Member since:
2009-08-18

Tbh, it survives most of the time. I had Google Maps crash an Intel display driver yesterday... first time I have seen a graphics driver crash take down Win 7.

For some reason the laptop had a Windows Vista driver on a Win 7 machine... updating seemed to fix it.

Edited 2011-11-21 17:50 UTC

Reply Score: 2

RE[4]: MicroKernel's
by JAlexoid on Mon 21st Nov 2011 23:56 UTC in reply to "RE[3]: MicroKernel's"
JAlexoid Member since:
2009-05-19

Tbh, it survives most of the time. I had Google Maps crash an Intel display driver yesterday... first time I have seen a graphics driver crash take down Win 7.

For some reason the laptop had a Windows Vista driver on a Win 7 machine... updating seemed to fix it.

Lucky you, I get those crashes twice per day.

Reply Score: 3

RE[5]: MicroKernel's
by lucas_maximus on Tue 22nd Nov 2011 13:10 UTC in reply to "RE[4]: MicroKernel's"
lucas_maximus Member since:
2009-08-18

Get better hardware.

Edited 2011-11-22 13:17 UTC

Reply Score: 1

RE[6]: MicroKernel's
by JAlexoid on Tue 22nd Nov 2011 23:48 UTC in reply to "RE[5]: MicroKernel's"
JAlexoid Member since:
2009-05-19

Yeah... WindowsXP works like a charm. Hardware it is then...

Reply Score: 3

RE[7]: MicroKernel's
by lucas_maximus on Wed 23rd Nov 2011 08:17 UTC in reply to "RE[6]: MicroKernel's"
lucas_maximus Member since:
2009-08-18

Yeah... WindowsXP works like a charm. Hardware it is then...


That is because Windows XP isn't using your card for acceleration the way the DWM display manager does.

Anyway, I am pretty convinced you made this up.

Reply Score: 1

RE[7]: MicroKernel's - GPU benchmarking?
by jabbotts on Wed 23rd Nov 2011 13:06 UTC in reply to "RE[6]: MicroKernel's"
jabbotts Member since:
2007-09-06

If you figure it's graphics-card related, you could try a good GPU benchmarking utility and see whether heavy load or use of the full range of functions triggers a crash. You might also confirm whether the GPU manufacturer has a solid Win7 driver. If Win7 is that crashy when using more of the GPU than WinXP's 2D needs, then it could very well be the manufacturer's driver.

Not saying I don't have my own win7 issues but they are not related to crashy hardware.

Reply Score: 2

RE[3]: MicroKernel's
by galvanash on Mon 21st Nov 2011 19:11 UTC in reply to "RE[2]: MicroKernel's"
galvanash Member since:
2006-01-25

I've posted this numerous times before on this board, but here it goes again...

1. According to Dave Cutler (NT Chief Architect), quoted in numerous interviews, NT is not, was not, and was never intended to be a microkernel. If anything it was based loosely on VMS, which was certainly not a microkernel. That label got thrown about by marketing people for totally invalid reasons (the quotes are hard to find because they were in print journals, but I have seen at least 2 myself).

2. Tanenbaum himself has stated unequivocally that NT was never a microkernel: http://www.cs.vu.nl/~ast/brown/followup/

"Microsoft claimed that Windows NT 3.51 was a microkernel. It wasn't. It wasn't even close. Even they dropped the claim with NT 4.0."

3. By the commonly accepted definition of a microkernel, it simply doesn't come close and never did. The VM subsystem, the file systems, and numerous other subsystems are kernel mode, and always were kernel mode. They do not run in userland, never did run in userland, and were never intended to run in userland. It was in no significant way different from Linux or any other monolithic kernel from the point of view of memory separation.

4. In 3.51, the VDM (video display manager) DID run in userland, along with its drivers. This was done to protect the kernel from driver faults. In practice this had 2 problems. First, it was slow. Second, more often than not it didn't work: if a fault put the VDM in a state where it could not be restarted, the whole system had to be rebooted. They reversed this in 4.0 and moved the VDM back to the kernel. Regardless, this does not make it a microkernel - they simply chose to run this one subsystem this way. Moving it back to kernel mode required pretty massive changes - if it had been designed as a microkernel it would have been simple...

I post this because Microsoft marketing was so successful at calling NT something it was not that even 16 years later this misinformation still manages to propagate. There is nothing wrong with NT - it is a well-designed monolithic kernel. But it is not a microkernel and never was.

Edited 2011-11-21 19:14 UTC

Reply Score: 9

RE[4]: MicroKernel's
by DeepThought on Mon 21st Nov 2011 21:06 UTC in reply to "RE[3]: MicroKernel's"
DeepThought Member since:
2010-07-17

To your 3rd point: I never saw a definition that says all "modules" attached to a micro-kernel have to run in user mode.

Reply Score: 1

RE[5]: MicroKernel's
by Thom_Holwerda on Mon 21st Nov 2011 21:21 UTC in reply to "RE[4]: MicroKernel's"
Thom_Holwerda Member since:
2005-06-29

To your 3rd point: I never saw a definition that says all "modules" attached to a micro-kernel have to run in user mode.


This is a common misconception. People think the definition of a microkernel hinges on "everything must run in userland". This is not true. You can have a microkernel plus ALL of its modules running in kernelspace, and it'd still be a microkernel.

Reply Score: 1

RE[6]: MicroKernel's
by DeepThought on Mon 21st Nov 2011 21:28 UTC in reply to "RE[5]: MicroKernel's"
DeepThought Member since:
2010-07-17

Exactly my point :-)

Reply Score: 1

RE[6]: MicroKernel's
by galvanash on Mon 21st Nov 2011 21:37 UTC in reply to "RE[5]: MicroKernel's"
galvanash Member since:
2006-01-25

This is a common misconception. People think the definition of a microkernel hinges on "everything must run in userland".


Everything must be capable of running in userland... Not the same thing.

If you don't have code isolation and memory protection (when possible) you do not have a microkernel. If code in your file system can step directly on kernel memory then what is the point?

Reply Score: 2

RE[7]: MicroKernel's
by Thom_Holwerda on Mon 21st Nov 2011 21:39 UTC in reply to "RE[6]: MicroKernel's"
Thom_Holwerda Member since:
2005-06-29

This is a common misconception. People think the definition of a microkernel hinges on "everything must run in userland".


Everything must be capable of running in userland... Not the same thing.

Exactly.

Reply Score: 1

RE[7]: MicroKernel's
by DeepThought on Mon 21st Nov 2011 21:45 UTC in reply to "RE[6]: MicroKernel's"
DeepThought Member since:
2010-07-17

You can have memory protection even if you run in supervisor mode. Only supervisor code is "capable" of changing the MMU/MPU to enhance its rights.
But if the supervisor code is proven (big word, I know) to be correct (either mathematically or by design/review: check IEC 61508), then there is no problem.

But for sure, the more software is in userland, the easier it is to protect the kernel and other parts of the system.

Reply Score: 1

RE[5]: MicroKernel's
by galvanash on Mon 21st Nov 2011 21:34 UTC in reply to "RE[4]: MicroKernel's"
galvanash Member since:
2006-01-25

To your 3rd point: I never saw a definition that says, all "modules" attached to a micro-kernel have to run in user mode.


You are confusing the issue. In simple layman's terms, a microkernel is a kernel which implements only the minimum functionality needed to build an OS on top of it. Generally, this includes low-level memory management and IPC. All of the other parts (device drivers, file systems, network stacks, etc.) are implemented so that they communicate with the microkernel and each other through IPC (or a functionally equivalent abstraction).

The point is that the other parts do NOT interact with the microkernel directly - they do so through some form of IPC - separation of concerns and all that...

Technically you do not have to run these other parts in usermode - it may in fact be desirable to run them in kernel mode. But it should be possible to run them in usermode with very little or no change - that is kind of the entire point.

So no, all modules do not have to run in usermode. But if your kernel runs virtually _everything_ in a shared address space with function calls intertwining between all the various subsystems and no protections between them then you do not have anything close to a microkernel.

Reply Score: 3

RE[6]: MicroKernel's
by DeepThought on Mon 21st Nov 2011 21:51 UTC in reply to "RE[5]: MicroKernel's"
DeepThought Member since:
2010-07-17

Just to make it clear:

supervisor mode != shared memory

I just designed a system where processes run in supervisor mode and have _no_ way to access other processes' memory, or even the OS's memory.

A customer of ours even increased separation such that some (supervisor) code cannot see any memory or code other than its own.

Reply Score: 1

RE: MicroKernel's
by jack_perry on Mon 21st Nov 2011 18:44 UTC in reply to "MicroKernel's"
jack_perry Member since:
2005-07-06

Really? QNX is crap?

Reply Score: 2

Wow
by peteo on Mon 21st Nov 2011 12:32 UTC
peteo
Member since:
2011-10-05

Wow, Tanenbaum is pretty arrogant actually. Linux not a success because less than 5% of this web site's visitors use Linux?

Wow.

Check out the server market, Andy.

Edited 2011-11-21 12:32 UTC

Reply Score: 11

RE: Wow
by bouhko on Mon 21st Nov 2011 12:59 UTC in reply to "Wow"
bouhko Member since:
2010-06-24

And the smartphones =)

Reply Score: 2

RE: Wow
by Sodki on Mon 21st Nov 2011 14:44 UTC in reply to "Wow"
Sodki Member since:
2005-11-10

Wow, Tanenbaum is pretty arrogant actually.

Actually, Tanenbaum is a really nice guy and very approachable. His talks are very interesting.

Reply Score: 8

RE: Wow
by mantrik00 on Mon 21st Nov 2011 17:53 UTC in reply to "Wow"
mantrik00 Member since:
2011-07-06

Add to it the Android devices that are nothing but derivatives of Linux.

Reply Score: 1

RE: Wow
by ebasconp on Mon 21st Nov 2011 19:36 UTC in reply to "Wow"
ebasconp Member since:
2006-05-09

If you lined up Andrew, Linus Torvalds, and all the arrogant guys in the world, Linus would win by a lot!!

Reply Score: 2

RE: Wow
by melkor on Thu 24th Nov 2011 03:51 UTC in reply to "Wow"
melkor Member since:
2006-12-16

You took the words right out of my mouth. He is sounding more and more like a sourpuss to this reader, sour that his baby, or the horse he bet on, didn't finish first. Wah! I want my BSD! Wah! I want my BSD! Time to put the baby to bed.

Dave

Reply Score: 2

RE: Wow
by Hussein on Sat 26th Nov 2011 01:20 UTC in reply to "Wow"
Hussein Member since:
2008-11-22

That's irrelevant to the end user.

Reply Score: 1

Comment by peteo
by peteo on Mon 21st Nov 2011 12:35 UTC
peteo
Member since:
2011-10-05

But as we sadly know, the best doesn't always win. Ah, it rarely does.

The best isn't microkernels, but OSes running managed code (single-address-space OSes): all the benefits of microkernels, without any of the many downsides (intra-module complexity being #1).

Not that it is likely to "win" in the short run, but it's clearly the way of the future.

Edited 2011-11-21 12:36 UTC

Reply Score: 1

RE: Comment by peteo
by kragil on Mon 21st Nov 2011 13:20 UTC in reply to "Comment by peteo"
kragil Member since:
2006-01-04

BS, the perfect system would only use verified code.
But as with microkernels, "it just doesn't work in the real world"(tm) for most use cases.

And Tanenbaum is just a bitter old man who is full of it. Dog-slow MINIX on embedded systems... yeah, right.
Head over to LWN to read the other side of the story ( http://lwn.net/Articles/467852/#Comments )

Reply Score: 7

v RE[2]: Comment by peteo
by peteo on Mon 21st Nov 2011 17:15 UTC in reply to "RE: Comment by peteo"
RE[3]: Comment by peteo
by kragil on Mon 21st Nov 2011 17:32 UTC in reply to "RE[2]: Comment by peteo"
kragil Member since:
2006-01-04

If you want to see a moron look into a mirror.
http://en.wikipedia.org/wiki/Formal_verification
http://en.wikipedia.org/wiki/Managed_code
Maybe you will grok the difference but I doubt it.

Reply Score: 5

RE[2]: Comment by peteo
by ebasconp on Mon 21st Nov 2011 20:56 UTC in reply to "RE: Comment by peteo"
ebasconp Member since:
2006-05-09

Please respect the old guys; you are walking on the roads they built for us!

About microkernels: XNU (the Mac OS X microkernel base) shows they are completely viable; QNX is also a viable option.

Reply Score: 3

RE[3]: Comment by peteo
by kragil on Mon 21st Nov 2011 23:35 UTC in reply to "RE[2]: Comment by peteo"
kragil Member since:
2006-01-04

XNU is not a microkernel, and a million posts by AFs will not change that. Sure, some parts of XNU were based on Mach (http://en.wikipedia.org/wiki/Mach_kernel) long ago, but combined with all the FreeBSD stuff, Apple ended up with something that is definitely not a microkernel. It is even more a monolith than it is a hybrid. The difference between kFreeBSD and XNU is not that great.

Reply Score: 3

RE: Comment by peteo
by Neolander on Mon 21st Nov 2011 17:48 UTC in reply to "Comment by peteo"
Neolander Member since:
2010-03-08

Sure, managed OSs which run in a single address space are great, until the day your interpreter of choice starts to exhibit a bug that gives full system access to a chunk of arbitrary code.

Consider the main sources of vulnerabilities in the desktop world, and you will find the JRE, Adobe Reader, Flash Player, and Internet Explorer near the top of the list. All of these are interpreters, dealing with some form of managed code (Java, PDF, SWF, HTML, CSS, and JavaScript in these examples).

Interpreters are great for portability across multiple architectures, but I would be much more hesitant to consider their alleged security benefits well-established.

Reply Score: 4

RE[2]: Comment by peteo
by Alfman on Mon 21st Nov 2011 21:36 UTC in reply to "RE: Comment by peteo"
Alfman Member since:
2011-01-28

Neolander,

"Consider the main sources of vulnerabilities in the desktop world, and you will find the JRE, Adobe Reader, Flash Player, and Internet Explorer near the top of the list. All of these are interpreters, dealing with some form of managed code (Java, PDF, SWF, HTML, CSS, and JavaScript in these examples)."

Well, to be fair, these are all internet-facing technologies which have been tasked with running arbitrary untrusted code. Non-network-facing tools such as GCC, bison, libtool, etc. could also have vulnerabilities (such as stack/heap overflows), but these are far less consequential because those tools aren't automatically run from the internet.

An apples-to-apples comparison would have web pages serve up C++ code to be compiled with g++ and then executed. In that light the security of the JRE, JS, and Flash all come out far ahead of GCC, because GCC has no defensive mechanisms at all.


I think highly optimized managed languages would do very well in an OS. Even if there are some exploits caused by running untrusted code, it's not like a responsible admin should go around injecting untrusted code into their kernel.

There are other reasons a managed kernel would be nice, I know we've talked about it before.

Reply Score: 2

Some introduction to MINIX
by edogawaconan on Mon 21st Nov 2011 13:02 UTC
edogawaconan
Member since:
2006-10-10

Or something. I'll just leave this here. From EuroBSDcon 2011.

http://tar-jx.bz/EuroBSDcon/minix3.html

Reply Score: 2

XNU != microkernel
by tidux on Mon 21st Nov 2011 14:39 UTC
tidux
Member since:
2011-08-13

It's Mach with a bunch of other stuff running in kernelspace, which means you get the downsides of the Mach architecture and the failure-proneness of a monolithic kernel. Add in the fact that Mac drivers are often an afterthought (in some cases even more so than Linux drivers!), and you have a recipe for kernel panics.

Reply Score: 4

RE: XNU != microkernel
by frderi on Mon 21st Nov 2011 19:31 UTC in reply to "XNU != microkernel"
frderi Member since:
2011-06-17

How do you mean "afterthought"? I/O Kit has been around for longer than Mac OS X itself. It provides common code for device drivers, provides power management, dynamic loading, ...

Reply Score: 0

RE[2]: XNU != microkernel
by tidux on Tue 22nd Nov 2011 23:31 UTC in reply to "RE: XNU != microkernel"
tidux Member since:
2011-08-13

I mean that the OS X driver is often an afterthought compared to the Windows driver (or Linux driver, if we're lucky) for a given peripheral. Are you being deliberately obtuse?

Reply Score: 2

On line patching
by JAlexoid on Mon 21st Nov 2011 15:06 UTC
JAlexoid
Member since:
2009-05-19

"There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can't do this."

Doesn't Linux have on-line patching via Ksplice? So the question isn't about can't, it's about not in the main development plans.

Reply Score: 4

Best is pretty subjective here
by renox on Mon 21st Nov 2011 16:29 UTC
renox
Member since:
2005-07-06

I find it amusing that the poster would have such an unbalanced opinion: Linus claims that microkernels add complexity, so they wouldn't be the best here. Of course, as the author of a monolithic kernel he could be biased, but given the not-so-successful history of micro-kernels, he may also be right...

Reply Score: 1

DeepThought Member since:
2010-07-17

*hehe* The best technique doesn't always win the race.
Writing code for a microkernel with a clear message-based interface is, for most programmers, a very different paradigm from what they are used to.
So you get more guys working the old way, but that does not prove it is the better way.
BTW: Most embedded RTOSes could be seen as micro-kernels, and there are a lot around. Far more than Linux installations!

Edited 2011-11-21 21:16 UTC

Reply Score: 1

allanregistos Member since:
2011-02-10

*hehe* The best technique doesn't always win the race.
Writing code for a microkernel with a clear message-based interface is, for most programmers, a very different paradigm from what they are used to.
So you get more guys working the old way, but that does not prove it is the better way.
BTW: Most embedded RTOSes could be seen as micro-kernels, and there are a lot around. Far more than Linux installations!

Examples?

Reply Score: 1

DeepThought Member since:
2010-07-17

"*hehe* The best technique doesn't always win the race.
Writing code for a microkernel with a clear message-based interface is, for most programmers, a very different paradigm from what they are used to.
So you get more guys working the old way, but that does not prove it is the better way.
BTW: Most embedded RTOSes could be seen as micro-kernels, and there are a lot around. Far more than Linux installations!

Examples?
"

For what now?
* "best technique" -- classic example: Betamax <-> VHS
* embedded RTOS -- well-known µkernel: QNX (Neutrino as kernel); others like OSE (RTOS + middleware) or SCIOPTA (RTOS + middleware) can IMHO also be seen as OSes with a µkernel.

All of those (can) have memory protection and use message passing as IPC.

Reply Score: 1

Comment by ajtgarber
by ajtgarber on Mon 21st Nov 2011 20:08 UTC
ajtgarber
Member since:
2011-08-17

Something has always bothered me when people say that modern hardware has essentially taken away the performance hit; the idea is to get all you can out of the system. Sometimes I do agree that you need to take a hit to get a better system, but it bothers me that "modern hardware" is being used as an excuse. Anyway, I've never used a microkernel system before; definitely something I should look into. One of the things I'm worried about is how to interact with one, but I'll figure that out sometime soon.

Reply Score: 1

theosib
Member since:
2006-03-02

The microkernel proponents want to argue that on modern systems, the overhead of message passing isn't very much (because CPUs are fast now), and moreover, people have gotten cleverer with the design of message passing interfaces so as to make the relative overhead smaller as well.

If so, why do we keep seeing poor performance numbers for microkernels?

One possibility is that the message-passing overhead is higher than they think.

But I think a bigger factor has to do with optimizations elsewhere in the kernel. Linux has so many people working on it, thinking up smarter ways to optimize every little thing, that it's kicking the crap out of less-supported OSes in areas having nothing to do with communication. Linux has really clever process and I/O schedulers.

FreeBSD also has a lot of developers, and as such, they too have optimized the heck out of things, and this is why it and Linux are in the same league.

But something like Minix is a toy project. It's something written by academics as a teaching tool, and as a result, it lacks many of the optimizations that would obfuscate what they're trying to teach. Thus, when you do comparative benchmarks, it sucks. But this has nothing to do with it being a microkernel, and they're not trying to make a fast OS. They're trying to make something whose design is simple and transparent.

Comparisons between microkernels and monolithic kernels are all much too abstract, and when you do benchmarks, it's not fair, because you're comparing too many things not related to this particular architectural choice. I ASSUME that, if all other things were equal, message passing would add enough overhead compared to function calls that we would notice it in benchmarks. But that is just a guess, and not a very well-informed one.

Really, the argument here isn't microkernel vs. monolithic. That's a red herring. The debate stems from a more deeply rooted philosophical difference between academics and industry engineers. Engineers are willing to do things that work, even if they're ugly (to a purist of some sort), which academics won't touch because it's not how they think people should be trained. That isn't to say that Linux has a lot of hacks (although I'm sure it has some), but there are cases where the KISS principle is violated for pragmatic reasons, while the academics want to start from an elegant theory and produce an implementation that maps 1-to-1 onto it.

I've worked as an engineer for a long time, and I'm also working on a Ph.D., and the motivating philosophies are night-and-day different.

Reply Score: 6

Neolander Member since:
2010-03-08

Well, a problem with performance discussion is the multitude of performance metrics.

As an example, in my WIP OS project, I would not claim to beat mainstream OSs on pure number-crunching tasks. If that happened, it would be an accident. But I bet that I can beat any of them on reactivity, foreground vs. background prioritization, and glitch-free processing of I/O with real-time constraints...

Which is important? It depends on the use case. If you want to build a render farm or a supercomputer, then Linux or something even more throughput-optimized like BareMetalOS would be best. But if you want to install something on your desktop/tablet for personal use, what I want to prove is that there are different needs which may require different approaches.

Recently, Mozilla has been bragging about being back in the JS performance race. But they quickly realized that the reason people say Firefox is slow is its UI response times, and now they are working on those.

Reply Score: 1

Mimicry is the key :)
by dionicio on Tue 22nd Nov 2011 00:31 UTC
dionicio
Member since:
2006-07-12

If the MINIX people are successful
in their mimicry
of the 48-core Intel SCC chip,
then we will all be very happy
to have
a new kid on the block.

Micro-kernels wedding multi-core
is a natural fit.

Quite amazing work, Andrew;
you will need a bigger team.

:) ;) ;)

Reply Score: 1

Linux is practical
by zimbatm on Tue 22nd Nov 2011 04:24 UTC
zimbatm
Member since:
2005-08-22

There is more than one reason why Linux is successful, but one of them is that it is practical. The microkernel design took much longer to crystallize to the point where it was both free of race conditions and efficient. Linux got implemented much faster and gained component separation later, where it matters most: on the driver side. By the way, Linux has supported replacing the kernel on the fly for many years; it's called kexec.

Reply Score: 3

RE: Linux is practical
by Alfman on Tue 22nd Nov 2011 06:46 UTC in reply to "Linux is practical"
Alfman Member since:
2011-01-28

zimbatm,

"Microkernel design took much longer to crystallize so that it wouldn't have race conditions and be efficient."

I think you are right that early in a kernel's development, a macrokernel takes less work. As it gets more and more complex, though, a microkernel should theoretically pull ahead by being easier to manage.

Microkernels are a natural fit for contract-based programming, where independent developers can work on different components without stepping on each other. This is absolutely a shortcoming of Linux today, where each kernel release breaks things for out-of-tree developers, and modules have to be routinely recompiled in lockstep or they'll break.

"By the way, Linux supports replacing the kernel on the fly since many years, and it's called kexec."

I don't believe this is what was meant by not rebooting. What was meant was updating the kernel in place without losing state, such that applications won't notice the upgrade. So, for instance, all the running applications and all their in-kernel handles and sockets need to be saved and restored right back where they left off after being patched. Supposedly Ksplice does this.

Reply Score: 1

RE[2]: Linux is practical
by zimbatm on Tue 22nd Nov 2011 17:55 UTC in reply to "RE: Linux is practical"
zimbatm Member since:
2005-08-22

Well said.

It's exactly what I meant. It's feasible to build an efficient and robust microkernel, but contracts are hard and should be put in the right places so as not to impact performance too much.

Another aspect is that personal computers didn't have hot-swappable components (and mostly still don't, except for SATA and USB). Once the bugs are ironed out of the drivers, there is little use for compartmentalization if you need to reboot your computer anyway. Moreover, if the CPU, RAM, bus, or disk fails, there is little you can do.

In the end I believe that, micro or macro, all practical kernels (as in, not for research) tend to go in the same direction, even if they didn't start at the same point. Darwin, for example, has a microkernel (Mach 3) base but got augmented with some BSD code. Linux adds compartmentalization where needed.

That said, I'm not an expert so what I'm saying might be bullshit ;)

Reply Score: 1

Andrew Tanenbaum
by tuma324 on Tue 22nd Nov 2011 05:47 UTC
tuma324
Member since:
2010-04-09

I read this article and he sounds jealous of Linux's success.

The BSD lawsuits had nothing to do with Linux's success. That's just an excuse for saying "BSD didn't succeed."

Linux succeeded on its own merits, and that success doesn't have anything to do with any lawsuits.

Andrew Tanenbaum is just a jealous man.

Reply Score: 4

RE: Andrew Tanenbaum
by FreeGamer on Thu 24th Nov 2011 17:48 UTC in reply to "Andrew Tanenbaum"
FreeGamer Member since:
2007-04-13

I beg to differ.

One of the reasons Linux took off is that it was commercially friendly. With lawsuits hanging over the *BSDs, businesses became wary of them.

That, and companies could hire people to work on Linux, and for the most part there was little or no barrier to getting their work included.

Linux development is so rapid because an army of people get paid to work on it. There's nothing like a wage to help accelerate the amount of work one can do on a project.

Reply Score: 2

My system is better than yours...
by Yehoodi on Tue 22nd Nov 2011 08:50 UTC
Alfman Member since:
2011-01-28

Yehoodi,

"Too much fuss for stuff people don't care about..."

Nobody said normal people care about it, but many of us are here on OSNews because *we* do.

I like the idea of an OS that blurs the distinction between kernel code and user code, which is kind of what microkernels do - in theory there's no need for userspace and kernel space development to be foreign to one another.

For example, I like FUSE file systems under Linux, and I like file-system kernel modules under Linux, but I find it rather unfortunate that the code needs to be implemented twice to do the exact same thing from user space or kernel space.

Reply Score: 3

Yehoodi Member since:
2011-11-22

I know what you mean; I love computer science too, and that is why I am here, reading this page regularly.

But it was Tanenbaum in that article who talked about success stories, giving all sorts of OS usage ratios just to justify his own point of view. As far as I know, successful software is software that is widely used; otherwise I would just call it a nice proof of concept but a practical failure.

Either way this is my own opinion and, of course, yours may differ...

Reply Score: 1

24/7/365
by YALoki on Wed 23rd Nov 2011 01:40 UTC
YALoki
Member since:
2008-08-13

Tanenbaum is quoted as saying:
"There are a lot of applications in the world that would love to run 24/7/365 and never go down, not even to upgrade to a new version of the operating system. Certainly Windows and Linux can't do this."

I say that anybody who says 24/7/365 is innumerate.
24 hours a day
7 days a week
365 weeks a WHAT?

Oh shit, I know... a 0.7 decade!

Reply Score: 2

Tanenbaum again is wrong
by allanregistos on Wed 23rd Nov 2011 02:41 UTC
allanregistos
Member since:
2011-02-10

"The single biggest issue with microkernels - slight performance hits - has pretty much been negated with today's hardware, but you get so much more in return: clean design, rock-solid stability, and incredible recovery.

But as we sadly know, the best doesn't always win. Ah, it rarely does."


Thom, I need evidence: give me an example of a pure microkernel OS (not a hybrid; one per Tanenbaum's design) that is in use in production systems.

If you can't provide that, then Linus' stance on microkernels holds: "Good on paper, rarely usable in practice." We have evidence for this: just download Minix, install it anywhere you like, and tell us about the usability experience.

I will bookmark this date, and then wait five years or more to see if Minix becomes the next big thing in smart devices.

Reply Score: 1

RE: Tanenbaum again is wrong
by Alfman on Wed 23rd Nov 2011 06:08 UTC in reply to "Tanenbaum again is wrong"
Alfman Member since:
2011-01-28

allanregistos,

"If you can't provide that, then Linus' stance on microkernels holds: 'Good on paper, rarely usable in practice.' We have evidence for this: just download Minix, install it anywhere you like, and tell us about the usability experience."

Linus may or may not be right, but it is a fallacy to suggest that just because microkernels have a small market share, they are unusable.

The biggest reason independent operating systems out of academia don't have much to offer in general usability is that they don't receive billions of dollars of investment every single year. It's somewhat of a catch-22, but it really doesn't mean the technological underpinnings are bad; some of them may be genius.


Now I can't deny that Tanenbaum appears to be extremely jealous, but I do think he is correct when he said that non-technical attributes have far more to do with a project's success than technical merit.

(For the record, I don't know anything about Minix in particular).

Reply Score: 2

RE[2]: Tanenbaum again is wrong
by allanregistos on Thu 24th Nov 2011 07:50 UTC in reply to "RE: Tanenbaum again is wrong"
allanregistos Member since:
2011-02-10

"Now I can't deny that Tanenbaum appears to be extremely jealous, but I do think he is correct when he said that non-technical attributes have far more to do with a project's success than technical merit."

This might be true with respect to Windows vs. Unix on servers: the success of any OS deployed in production may owe as much to non-technical attributes as to technical superiority. But for kernel design, I think many factors come into play. Since I am not an expert in any of this, this is just my opinion.

Yes, Linus could be wrong. But philosophically, I find Linus' stance to be more acceptable than the professor's.

The minix3 site makes a confusing statement:
"Ports to ARM and PowerPC are underway. Various programs and device drivers are being ported, and so on."

Meanwhile, there is a lot of work for developers at http://wiki.minix3.org/en/MinixWishlist, which is arguably more important than porting the kernel to different architectures. I might be missing something here.

Reply Score: 1

RE[2]: Tanenbaum again is wrong
by allanregistos on Thu 24th Nov 2011 08:28 UTC in reply to "RE: Tanenbaum again is wrong"
allanregistos Member since:
2011-02-10

Alfman:

I consider myself an inexperienced desktop developer.
I am also an audio/multimedia user and use applications such as Ardour and JACK.
If you are a microkernel expert, or if anyone else reading this is, I have a question.
Can a microkernel-based OS such as Minix3 scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with the -rt patches?

I believe this is where the microkernel's future lies. Regardless of the efficiency, stability, and security of a microkernel system, if it isn't useful to a desktop developer doing his work, to an Ardour/JACK user, or to any other end user, it will be nothing but a toy.

Reply Score: 1

RE[3]: Tanenbaum again is wrong
by Alfman on Thu 24th Nov 2011 18:31 UTC in reply to "RE[2]: Tanenbaum again is wrong"
Alfman Member since:
2011-01-28

allanregistos,

"Can a microkernel-based OS such as Minix3 scale to the real-time demands of audio apps, similar to what we find in the Linux kernel with the -rt patches?"

I am afraid it is out of my domain.

I know that PulseAudio recently underwent a shift away from using sound-card interrupts toward higher-resolution timing sources like the APIC clock. This inevitably caused numerous problems on many systems, but nevertheless the goal was to get lower latencies by having the system write directly into memory that is read simultaneously, a moment later, by the sound card.

I don't see why any of this couldn't also be done with a microkernel driver. In fact, I think the audio mixing for PulseAudio under Linux today already occurs in a user-space process using "zero-copy" memory mapping. I've never looked at it in any detail, though.
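The zero-copy idea itself is easy to sketch: producer and consumer share the same pages, so samples are never copied through a pipe or socket. The Unix-only Python toy below is an illustration of the mechanism, not of PulseAudio's actual implementation; the pipe carries only a "data ready" signal, never the payload:

```python
import mmap
import os
import struct

# A toy stand-in for "zero-copy": producer and consumer share one set of
# pages, so the audio samples are never copied between the two processes.
BUF_SIZE = 4096
buf = mmap.mmap(-1, BUF_SIZE)           # anonymous shared mapping

r, w = os.pipe()                        # used only to signal "data ready"
pid = os.fork()
if pid == 0:                            # child: the "mixer" writes samples
    os.close(r)
    samples = struct.pack("<4h", 100, -200, 300, -400)
    buf.seek(0)
    buf.write(samples)                  # written directly into shared pages
    os.write(w, b"!")                   # tell the parent the buffer is full
    os._exit(0)

os.close(w)
os.read(r, 1)                           # wait for the signal byte
os.waitpid(pid, 0)
buf.seek(0)
mixed = struct.unpack("<4h", buf.read(8))
print(mixed)                            # prints (100, -200, 300, -400)
```

A real audio path would add a ring buffer and timing, but the key property is the same: the consumer reads the very bytes the producer wrote, with no copy in between.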

Reply Score: 2

allanregistos Member since:
2011-02-10

"I don't see why any of this couldn't also be done with a microkernel driver. In fact, I think the audio mixing for PulseAudio under Linux today already occurs in a user-space process using 'zero-copy' memory mapping."


That is enough for me, Alfman. I believe that the current monolithic structure of OS kernels will be modified in the future to scale to new innovations in hardware architecture. Thank you for the insights into PulseAudio; I am not able to respond on the technical side of it.

As an end user and an OS hobbyist, I think I will need information in the future about which OS is best for my desktop needs. I think today's operating systems (except for the Mac) are too focused on servers, with the desktop as an afterthought. The fact that the Linux kernel needs the -rt patches proves that.

Reply Score: 1

RE: Tanenbaum again is wrong
by Neolander on Wed 23rd Nov 2011 08:43 UTC in reply to "Tanenbaum again is wrong"
Neolander Member since:
2010-03-08

QNX? Symbian?

Tanenbaum has a longer list on his website, although it takes some tricky moves to reach it: http://www.cs.vu.nl/~ast/reliable-os/ (section "Are Microkernels for Real?")

Edited 2011-11-23 09:01 UTC

Reply Score: 1

RE[2]: Tanenbaum again is wrong
by Alfman on Wed 23rd Nov 2011 12:14 UTC in reply to "RE: Tanenbaum again is wrong"
Alfman Member since:
2011-01-28

Neolander,

That's an excellent link.

I'm not entirely in agreement with everything he says, but he makes some strong points.

I disagree with him quite strongly that microkernel IPC should be limited to byte/block streams. I'd much prefer working with objects directly (i.e., transferred atomically). Object serialization over pipes is inefficient and often difficult, particularly when the encapsulated structures need to be reassembled from reads of unknown length. I find it ironic that he views IPC pipes as the equivalent of OOP; sure, they hide structure, but they also hide a real interface.

I know Tanenbaum was merely responding to Linus' remark about how microkernels make it extremely difficult to manipulate structures across kernel borders. In a proper OOP design, one shouldn't be manipulating structures directly; arguably, Linux components wouldn't break as often if they didn't.

There are good arguments for either approach. But I do think microkernels have more merit as systems become more and more complex.
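The "reads of unknown length" problem described above is usually solved with length-prefixed framing: every message carries its own size, so the receiver knows exactly how many bytes to accumulate before decoding. A minimal sketch (the helper names are invented, and a BytesIO stands in for a real pipe or socket):

```python
import io
import json
import struct

def send_obj(stream, obj):
    # Frame = 4-byte little-endian length, then a JSON payload.
    payload = json.dumps(obj).encode()
    stream.write(struct.pack("<I", len(payload)) + payload)

def recv_exact(stream, n):
    # Loop until exactly n bytes arrive; a single read may return fewer.
    data = b""
    while len(data) < n:
        chunk = stream.read(n - len(data))
        if not chunk:
            raise EOFError("peer closed mid-message")
        data += chunk
    return data

def recv_obj(stream):
    (length,) = struct.unpack("<I", recv_exact(stream, 4))
    return json.loads(recv_exact(stream, length))

pipe = io.BytesIO()                     # stands in for a real pipe/socket
send_obj(pipe, {"op": "open", "path": "/tmp/x"})
send_obj(pipe, {"op": "close", "fd": 3})
pipe.seek(0)
first = recv_obj(pipe)
second = recv_obj(pipe)
print(first)                            # prints {'op': 'open', 'path': '/tmp/x'}
print(second)                           # prints {'op': 'close', 'fd': 3}
```

This is the serialization cost being argued about: every object crossing the boundary is encoded, copied, and decoded, whereas an atomic object-transfer IPC would hand over the structure in one step.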

Reply Score: 2

RE[3]: Tanenbaum again is wrong
by Neolander on Wed 23rd Nov 2011 17:58 UTC in reply to "RE[2]: Tanenbaum again is wrong"
Neolander Member since:
2010-03-08

I also take this paper with a significant grain of salt, but for different reasons. While I agree with the need for standard interfaces, I do not agree with the pure OOP vision that data structures cannot constitute an interface and that their inner details should always be hidden away like atomic-weapon plans. In some situations, a good data structure is better than a thousand accessors.

I feel the same with respect to shared memory. Okay, it's easy to shoot yourself in the foot if you use it improperly, but it is also by far the fastest way to pass large amounts of data to an IPC buddy. And if you make sure that only one process manipulates the "shared" data at a given time, for example by temporarily marking it read-only in the caller process, it is also perfectly safe.
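The single-writer handoff described here can be sketched even without mprotect-style page protection (which standard Python does not portably expose): an ownership byte in the shared mapping records which side may touch the data, and a pipe serves only as a doorbell. A Unix-only toy model, with all layout choices invented for illustration:

```python
import mmap
import os
import struct

SIZE = 1 << 16
shm = mmap.mmap(-1, SIZE)               # anonymous shared mapping
shm[0:1] = b"P"                         # byte 0: current owner (P = parent)

r, w = os.pipe()                        # doorbell only; no payload copied
pid = os.fork()
if pid == 0:                            # child: waits for ownership
    os.close(w)
    os.read(r, 1)                       # block until the doorbell rings
    assert shm[0:1] == b"C"             # we own the buffer now
    (n,) = struct.unpack("<I", shm[1:5])
    checksum = sum(shm[5:5 + n]) & 0xFFFFFFFF
    shm[1:5] = struct.pack("<I", checksum)  # reply written in place
    os._exit(0)

os.close(r)
payload = bytes(range(256)) * 100       # 25600 bytes, written in place
shm[1:5] = struct.pack("<I", len(payload))
shm[5:5 + len(payload)] = payload
shm[0:1] = b"C"                         # hand ownership to the child...
os.write(w, b"!")                       # ...and ring the doorbell
os.waitpid(pid, 0)                      # child exited, so we own it again
result = struct.unpack("<I", shm[1:5])[0]
print(result)                           # prints 3264000
```

The large payload never crosses a pipe; only the one-byte doorbell does. A real system would enforce the ownership rule with page protections rather than trust, which is exactly the safety refinement suggested above.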

Edited 2011-11-23 17:59 UTC

Reply Score: 1

RE: Tanenbaum again is wrong
by allanregistos on Thu 24th Nov 2011 08:00 UTC in reply to "Tanenbaum again is wrong"
allanregistos Member since:
2011-02-10

"But as we sadly know, the best doesn't always win. Ah, it rarely does."

"I will bookmark this date, and then wait five years or more to see if Minix becomes the next big thing in smart devices."


I concede that I might be wrong here.
I am interested in testing Minix as an OS hobbyist (I am not an OS developer or a user of any low-level language).

Reply Score: 1