Linked by Thom Holwerda on Wed 22nd Jan 2014 13:47 UTC, submitted by fran
OSNews, Generic OSes

The Muen Separation Kernel is the world's first Open Source microkernel that has been formally proven to contain no runtime errors at the source code level. It is developed in Switzerland by the Institute for Internet Technologies and Applications (ITA) at the University of Applied Sciences Rapperswil (HSR). Muen was designed specifically to meet the challenging requirements of high-assurance systems on the Intel x86/64 platform. To ensure Muen is suitable for highly critical systems and advanced national security platforms, HSR closely cooperates with the high-security specialist secunet Security Networks AG in Germany.

The webpage contains instructions for building the kernel yourself, for installing it in a virtual machine, and for running it on real hardware.

Thread beginning with comment 581495
RE[5]: The SWISS
by hackus on Thu 23rd Jan 2014 19:09 UTC in reply to "RE[4]: The SWISS"

In order for microkernels to work well, lots of stuff has to be built into hardware that is not needed for a monolithic kernel.

Partition state, context switching, and message passing could be done in hardware, for example, to make microkernels much more efficient.

That IS, if researchers could ever agree on exactly how a microkernel should work and which parts should have hardware built for them. THEN Intel, AMD, ARM, etc. could do a cost analysis and determine if a market exists for them.
(HINT: One doesn't exist. :-) )

So why hasn't a Microkernel evolved to the state of LINUX?

One problem is that researchers cannot even agree on a final design plan for a Microkernel. I would also point out that this research has been a technological cul-de-sac since the '80s.

For the sake of argument, and to save all of those PhDs from irrelevance :-), say microkernel design was finished and a plan was presented to Intel, AMD, and ARM engineers for large-scale deployment of such hardware.

The PLAN would require large portions of silicon to be added to today's machines, which do not have what I would call excessive NANNY STATE kernels running on them (i.e., MONOLITHIC kernels).

Microkernels (and the ideas around them) would STILL run slower than MONOLITHIC kernels and consume more power for the same work than a monolithic kernel requires of computer hardware.

In short, we would have a collapse of the computer industry, everyone would go broke, and we would have to build thousands of new Nuclear power plants to make it all work.

Finally, they would all melt down and well, that would be that.

:-)

-Hackus

PS: For the PhDs sending me hate mail (undoubtedly using a monolithic kernel to send it), please remember I do not control your department funding, I do not write your NSF grant papers, and I am sure your research will turn out something...

One of these days. (Wink Wink Wink).


RE[6]: The SWISS
by ingraham on Fri 24th Jan 2014 01:51 in reply to "RE[5]: The SWISS"

In order for microkernels to work well, lots of stuff has to be built into hardware that is not needed for a monolithic kernel.


There is absolutely no basis for this statement.

So why hasn't a Microkernel evolved to the state of LINUX?


I would argue that QNX has, in fact, evolved to the state of Linux. Maybe even better. But they're closed source. Of course, the kernel design is not the only reason Linux has been successful. Consider that *BSD also has a monolithic kernel, yet has not seen anywhere near the level of uptake of Linux-based OSes.

Microkernels (and the ideas around them) would STILL run slower than MONOLITHIC kernels and consume more power for the same work than a monolithic kernel requires of computer hardware.


Again, no actual basis in fact. Yes, IPC introduces some overhead that can impact speed (and therefore performance per watt). How large that impact is depends on many factors. Monolithic kernels have their own share of issues to resolve. There are trade-offs in any design decision.

-James


RE[6]: The SWISS
by DeepThought on Fri 24th Jan 2014 06:23 in reply to "RE[5]: The SWISS"

In order for microkernels to work well, lots of stuff has to be built into hardware that is not needed for a monolithic kernel.


A simple cache is enough, at least in the environments I know.
My company's RTOS, together with its TCP/IP stack, could be seen as an MP-µKernel, and it outperforms Linux on a 1.5 GHz PowerPC.
Of course, this is no general proof, only personal experience. But it shows that even the "old" Power architecture is good enough for a µKernel.

Partition state, context switching, and message passing could be done in hardware, for example, to make microkernels much more efficient.


Sure it could. The same goes for the context switch when entering a monolithic (btw., ...lithic => from the stone age :-) ) kernel.


One problem is that researchers cannot even agree on a final design plan for a Microkernel.


Has this been done for Linux or BSD?
IMHO, the main motive for Linux in the beginning was to have an alternative to M$.
Now it is so widely spread that big companies see their benefit from it, either because it is "free" or because they have the sources at hand.

For a "new" µKernel that wants to be a replacement for Linux on mainframes or even on the desktop, there is no longer a herd of idealistic programmers.

But if you turn away from the mainframe and the desktop towards embedded, there are a lot of microkernels.
Some, like PikeOS (which has roots in L4), started as hypervisors. Some, like QNX (or rather, Neutrino), have been there for years.
Both use message passing as IPC.


RE[7]: The SWISS
by zima on Tue 28th Jan 2014 23:07 in reply to "RE[6]: The SWISS"

I would upvote you if not for that immature "M$"...
