Linked by Norman Feske on Wed 31st Aug 2016 22:35 UTC
OSNews, Generic OSes

With the just-released version 16.08, Genode makes the entirety of the framework's drivers, protocol stacks, and libraries available on the seL4 kernel. Thereby, the vision of a real general-purpose OS built upon a formally verified kernel suddenly becomes a tangible mission. Further highlights of the new version are the use of the framework to run VirtualBox 4 on the Muen separation kernel, an experimental version of VirtualBox 5 on top of the NOVA kernel, the added support for virtual networking and Tor, extensive Zynq board support, and new tools for statistical profiling.

The seL4 kernel is widely regarded as the world's most advanced open-source microkernel - not by technical merits alone but by the fact that the kernel is accompanied by formal proofs of its correctness. However, to achieve this high level of assurance, the kernel's responsibilities had to be reduced to an extreme that goes even beyond traditional microkernels. In particular, the kernel leaves the problem of managing kernel memory to be solved at the user level. The problem still exists but it isn't considered the kernel's problem anymore. Consequently, this kernel design makes the creation of a scalable user land extremely challenging. For this reason, most use cases of seL4 remain solely static in nature, or combine static components with virtualization. The real potential of seL4 to scale towards dynamic systems has remained untapped so far. Here is where Genode comes into play. Genode is designed as a dynamic user land for microkernels, which addresses the management of memory at the user level via a unique resource-trading concept. It turns out that this concept is a suitable answer to the kernel-memory-management problem posed by seL4. By completing the implementation of the framework's base mechanisms for this kernel, literally hundreds of existing Genode components become readily available to the seL4 community.
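
The resource-trading idea can be illustrated with a small toy model (Python; all class and method names here are invented for illustration, not the actual Genode API): a parent component holds an explicit RAM quota and donates portions of it to its children, so every allocation is accounted to some component and exhaustion is always a local, attributable event rather than a global one.

```python
# Toy model of Genode-style RAM quota trading.
# Illustrative only: names are invented, not the real Genode API.

class Component:
    def __init__(self, name, quota=0):
        self.name = name
        self.quota = quota      # bytes of RAM this component may use
        self.used = 0

    def transfer(self, child, amount):
        """Donate part of our unused quota to a child component."""
        if amount > self.quota - self.used:
            raise RuntimeError(f"{self.name}: insufficient quota to donate")
        self.quota -= amount
        child.quota += amount

    def alloc(self, amount):
        """Allocate from our own quota; failure stays local to us."""
        if self.used + amount > self.quota:
            raise MemoryError(f"{self.name}: quota exhausted")
        self.used += amount

parent = Component("init", quota=1024)
child = Component("driver")
parent.transfer(child, 256)
child.alloc(200)        # fine: within the donated quota
try:
    child.alloc(100)    # would exceed the child's quota
except MemoryError as e:
    print(e)            # only the child is affected, not the system
```

The point of the sketch is the accounting discipline: no component can consume memory it was not explicitly granted, which is exactly the property needed to tame seL4's user-level kernel-memory management.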

The Muen separation kernel is another take on the use of formal methods for assuring the absence of bugs in an OS kernel. In contrast to seL4, Muen applies different technologies (Ada/SPARK) and addresses static partitioned systems. A natural use case is the co-hosting of virtual machines. In a multi-level scenario, each virtual machine hosts a guest OS for editing documents at one security level. The separation kernel enforces the information-flow policy between the virtual machines. In such scenarios, the predominant guest OS is MS Windows. Consequently, Muen had to support the virtualization of such commodity OSes. Genode already solved this problem for another microkernel by making VirtualBox available on top of NOVA. So the idea was born to leverage Genode's version of VirtualBox on top of Muen - essentially using Genode as a runtime environment for VirtualBox. As crazy as it sounds - it works! The release documentation has a dedicated section that tells the full story. Speaking of VirtualBox, the ability to run VirtualBox directly on a microkernel is certainly a key feature of Genode. With Genode 16.08, a first version of VirtualBox 5 becomes available on the NOVA kernel.

The anecdotes above highlight the benefits of Genode's cross-kernel portability. The new version pushes this idea even further by attaining binary compatibility across all the supported kernels for a given CPU architecture. In fact, compiled once, an ELF binary of a regular component can be natively executed on kernels as different as seL4 and Linux as long as the component does not rely on a special feature of a particular kernel.

At a higher level, the current release extends the framework's library of ready-to-use building blocks in several areas. Most prominently, there are new network-related components for routing traffic, using Tor, and for distributing Genode over the network. Other added components are concerned with improving the use of Genode as a general-purpose OS, or with aiding the optimization of components by means of statistical profiling. Version 16.08 is further complemented with added board support for devices based on Xilinx Zynq, including drivers for GPIO, video DMA, SD cards, and I2C.

These and many more topics are covered in detail by the release documentation of version 16.08.

Wow
by hardcode57 on Wed 31st Aug 2016 23:54 UTC
hardcode57
Member since:
2014-06-02

That is the clearest exposition of a difficult subject I have ever read.

Reply Score: 1

Double Wow!
by Pro-Competition on Thu 1st Sep 2016 03:33 UTC in reply to "Wow"
Pro-Competition Member since:
2007-08-20

I was also going to title my comment "Wow", mainly because of how fast this project progresses!

But after reading the summary, I think it deserves another "Wow", because the downloadable "example scenarios" for "Genode/seL4" and "Genode/NOVA" will make it much easier for more casual followers to play around with the system. IMHO, that is the sort of thing that can really boost enthusiasm from the potential user base.

Since I already run all of my "workstations" in virtual machines, the progress with VirtualBox hosting is also really exciting. This could quickly become the cornerstone of creating a more secure daily computing environment. (How often do you get to say that?)

Of course, the expanded hardware support is always welcome too. (Someday I may get one of those Zynq boards - whether or not it has any real use, the whole FPGA / RISC-V capability gets my geeky juices flowing in a way that most things don't these days.)

Now, to try out those scenarios...

As always, keep up the great work, Genode team!

Reply Score: 2

RE: Double Wow!
by nfeske on Fri 2nd Sep 2016 09:55 UTC in reply to "Double Wow!"
nfeske Member since:
2009-05-27

Thanks for the nice feedback! No plan to slow down... :-)

Reply Score: 1

RE[2]: Double Wow!
by tidux on Fri 2nd Sep 2016 15:59 UTC in reply to "RE: Double Wow!"
tidux Member since:
2011-08-13

16.08 looks like a fantastic release! There are a few things I'm interested in that didn't make the release notes.

1. I know you guys were looking into Nix as a package manager earlier, rather than needing to bundle everything into OS components at compile time. Was that pushed off until after the cross-kernel ABI/API stabilization, or is it still in the works?

2. Will noux ever get true threads or libpthread? (Either is fine, I just want the ability to use threaded software.) A GNU userland with full (or nearly full) POSIX compatibility and a package manager would be a HUGE boost to the usability of Genode and provide a bunch of "ready to run" software. As sort of a subquestion of this, are there plans for header files and C libraries to offer a compile-time compatibility layer for noux audio applications that would normally talk to ALSA, OSS, or sndio?

3. How's the complete "Turmvilla scenario" of Genode as a day to day OS coming along in personal testing?

Reply Score: 2

RE[3]: Double Wow!
by nfeske on Fri 2nd Sep 2016 17:30 UTC in reply to "RE[2]: Double Wow!"
nfeske Member since:
2009-05-27

I consider package management one of the most important topics right now. The cross-kernel binary compatibility was very much motivated by it. We are exploring various ways forward and Nix is a big influence. The intermediate state is not release-notes-worthy yet, so we have kept our lips sealed.

Noux will certainly further improve but pthread support is not a priority right now. For applications that do not rely on fork, Noux is not needed. Such applications can just use our libc and pthread libraries.

As sort of a subquestion of this, are there plans for header files and C libraries to offer a compile time compatibility layer for noux audio applications that would normally talk to ALSA, OSS, or sndio?


No concrete plans. It would be intriguing to apply Genode's VFS plugin concept for this purpose. Maybe you'd like to start experimenting in this direction? ;-)

3. How's the complete "Turmvilla scenario" of Genode as a day to day OS coming along in personal testing?


I have been using Turmvilla exclusively for about a year. My actual work environment is (still) a Linux VM running on top of Turmvilla. To cultivate the use of Genode-native applications, proper package management is somehow a precondition. Right now, installing/updating/modifying Turmvilla is too labor-intensive to recommend it to a broader user base.

Reply Score: 1

Comment by ddc_
by ddc_ on Thu 1st Sep 2016 12:56 UTC
ddc_
Member since:
2006-12-05

However, to achieve this high level of assurance, the kernel's responsibilities had to be reduced to an extreme that goes even beyond traditional microkernels. In particular, the kernel leaves the problem of managing kernel memory to be solved at the user level.

So in other words, formal verification of the kernel guarantees neither stability nor security. In one of the most praised microkernels. And microkernel people still don't get why mainstream operating systems are not based on microkernels...

Reply Score: 3

RE: Comment by ddc_
by Vanders on Thu 1st Sep 2016 13:02 UTC in reply to "Comment by ddc_"
Vanders Member since:
2005-07-06

Yes. If you leave out the hard parts, it becomes easy. We're back into the weirdverse where purity is more important than being practical or useful: see also pure functional languages.

Reply Score: 2

RE[2]: Comment by ddc_
by avgalen on Thu 1st Sep 2016 14:28 UTC in reply to "RE: Comment by ddc_"
avgalen Member since:
2010-09-23

You want your base as pure as possible and the hacks to be at the very last level. The incredible layering and swappability of system-level components that is shown here is only possible because of the purity of their base (and the knowledge and hard work of these individuals).

This project is surely in the "weirdverse", but that is where we should expect "the next BIG thing" to come from. The "normalverse" is where "the useful iteration of the current things" come from.

Formal verifiability isn't the important thing here. Having an incredibly flexible and well-working base is! None of this makes it better or guaranteed to succeed, but it gives them the best chances that "no money" can buy ;)

Reply Score: 2

RE[3]: Comment by ddc_
by Pro-Competition on Thu 1st Sep 2016 16:08 UTC in reply to "RE[2]: Comment by ddc_"
Pro-Competition Member since:
2007-08-20

You want your base as pure as possible and the hacks to be at the very last level. The incredible layering and swappability of system-level components that is shown here is only possible because of the purity of their base (and the knowledge and hard work of these individuals)


Amen! And they have gone through quite a few iterations - testing, refining and redesigning based on what they learn in real testing. This is no "ivory tower" project.


This project is surely in the "weirdverse", but that is where we should expect "the next BIG thing" to come from. The "normalverse" is where "the useful iteration of the current things" come from.


Yes, it has always worked that way, and it still does. The status quo can get very depressing without some hope on the horizon for something better (in this case, enhanced security, better reliability, more flexible virtualization, etc.).


Formal verifiability isn't the important thing here. Having an incredibly flexible and well-working base is! None of this makes it better or guaranteed to succeed, but it gives them the best chances that "no money" can buy ;)


Exactly. Genode runs on many kernels, from their own experimental minimalist "base-hw" (which is now cross-platform!), to microkernels like NOVA or seL4, to Linux. The fact that they can now run the same binaries across all of them is a testament to how skilled and adaptable the team are!

Everyone, try out the "example scenario" ISO images. It doesn't get any easier than this - I just put it in the virtual CD drive of one of my "play" VMs. In 5 minutes, you'll have Genode on seL4 and/or NOVA running on your system. Come on, people - this is real "OS news" ! ;^)

Reply Score: 2

RE[2]: Comment by ddc_
by Megol on Thu 1st Sep 2016 18:20 UTC in reply to "RE: Comment by ddc_"
Megol Member since:
2011-04-11

Yes. If you leave out the hard parts, it becomes easy. We're back into the weirdverse where purity is more important than being practical or useful: see also pure functional languages.


Nope. If you'd actually understood anything about this topic you'd also understand that securely pushing responsibility to userspace is essential for most kinds of microkernels. And that isn't a problem.

Reply Score: 1

RE[3]: Comment by ddc_
by Vanders on Thu 1st Sep 2016 19:40 UTC in reply to "RE[2]: Comment by ddc_"
Vanders Member since:
2005-07-06

Nope. If you'd actually understood anything about this topic you'd also understand that securely pushing responsibility to userspace is essential for most kinds of microkernels. And that isn't a problem.

Nice ad-hominem.

No sensible microkernel pushes kernel memory management into user space just so that it can achieve something simple enough to formally prove. There's a reason the very article you're posting in contains the text
Consequently, this kernel design makes the creation of a scalable user land extremely challenging. For this reason, most use cases of seL4 remain solely static in nature

Reply Score: 2

RE[4]: Comment by ddc_
by Alfman on Thu 1st Sep 2016 20:16 UTC in reply to "RE[3]: Comment by ddc_"
Alfman Member since:
2011-01-28

Vanders,

No sensible microkernel pushes kernel memory management into user space just so that it can achieve something simple enough to formally prove. There's a reason the very article you're posting in contains the text
Consequently, this kernel design makes the creation of a scalable user land extremely challenging. For this reason, most use cases of seL4 remain solely static in nature


It also seems very odd to me for kernel memory management to be implemented outside of the kernel. I would like to hear a justification for doing that.

Just taking a guess: maybe it has to do with the difficulty of providing guarantees under low-memory conditions?

This makes the formal verification feel a bit contrived. Of course, it's not bad in and of itself, but I have to wonder what was compromised in order to design it this way. Anyway, it's a very interesting topic; it'd be nice if someone more familiar with the kernel were here to discuss it.

Edited 2016-09-01 20:23 UTC

Reply Score: 2

RE[5]: Comment by ddc_
by nfeske on Fri 2nd Sep 2016 10:43 UTC in reply to "RE[4]: Comment by ddc_"
nfeske Member since:
2009-05-27

Intuitively it sounds odd but it is actually perfectly reasonable. Let me try to explain:

Each kernel needs some memory for internal data structures, e.g., for remembering the states of threads and address spaces, or for the page tables supplied to the MMU. Where does this memory come from?

The traditional answer is that the kernel maintains a private memory pool to allocate such objects from. Usually, a fixed portion of the available physical memory is reserved for the internal memory pool at boot time. All remaining memory is handed over to the user land. This raises two questions:

1. What happens in the catastrophic event that the kernel memory exhausts? Most microkernels will simply panic! In contrast to a monolithic kernel that can somehow intelligently respond to memory pressure by swapping memory to disk, flushing the block cache, or sacrificing a process, a microkernel has no reasonable way to respond to such a condition. It has no notion of a block cache, no disk driver, and doesn't have any interpretation of the role of processes on top. For this reason, most microkernels (such as traditional L4) just stop the machine.

2. Who can trigger the consumption of kernel memory? The frightening answer is: any user-mode program running on top of the kernel! For example, on traditional L4 kernels, any normal program can install new memory mappings from its own memory into its virtual address space. Inside the kernel, this operation may trigger the allocation of a new page table, and the need to store metadata about the mapping. In L4, this metadata is kept in the so-called "mapping database". For each memory mapping, it keeps a record of how the mapping was established in order to revoke mappings if their origin disappears.

Isn't this scary? Unfortunately, there is no easy solution for this problem. Hence, the academic research community swept it under the carpet for more than a decade. Even NOVA, as a very recent member of the L4 family, left this problem unaddressed. Other kernels provided band-aid solutions in the form of kernel-resource quotas, which work reasonably well in practice but don't scale. The seL4 kernel is the first kernel that at least allows the userland to fix the problem instead of letting it suffer from the kernel's limitation. Sure, it does not magically solve the problem but it enables Genode to solve it.
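
The contrast between the two designs can be sketched as a toy model (Python; all names here are invented for illustration, not any real kernel's API): with a fixed shared kernel pool, one greedy client takes down the whole kernel, whereas with per-client budgets the failure is confined to the misbehaving client.

```python
# Toy contrast of kernel-memory policies. Illustrative only:
# class and method names are invented, not any real kernel API.

class SharedPool:
    """Fixed kernel-private pool: exhaustion stops the whole machine."""
    def __init__(self, size):
        self.free = size

    def alloc_kernel_object(self, client, size):
        if size > self.free:
            raise SystemExit("kernel panic: kernel memory exhausted")
        self.free -= size

class QuotaPool:
    """Per-client budgets: exhaustion is a local error for one client."""
    def __init__(self, quotas):
        self.quotas = dict(quotas)

    def alloc_kernel_object(self, client, size):
        if size > self.quotas[client]:
            raise MemoryError(f"{client}: kernel-memory quota exceeded")
        self.quotas[client] -= size

shared = SharedPool(64)
try:
    shared.alloc_kernel_object("greedy", 100)  # one client over-asks...
except SystemExit as e:
    print(e)                                   # ...and the whole "kernel" stops

pool = QuotaPool({"greedy": 64, "polite": 64})
try:
    pool.alloc_kernel_object("greedy", 100)    # the greedy client is refused...
except MemoryError as e:
    print(e)
pool.alloc_kernel_object("polite", 32)         # ...while others keep running
```

seL4 goes one step further than the quota sketch: it hands the untyped memory itself to the user land, which then decides from whose budget each kernel object is paid.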

Therefore, even though seL4's concept to manage kernel memory in user space is inconvenient compared to the more abstract interfaces of prior kernels, I regard it as its killer feature compared to all other kernels I have encountered in the past.

Is the seL4 way the only way? Certainly not. E.g., Genode's custom base-hw kernel successfully addresses the problem in a very different way. But in contrast to seL4, which is agnostic with respect to the user land on top, base-hw's design is inherently tied to Genode.

Reply Score: 1

RE[6]: Comment by ddc_
by Alfman on Fri 2nd Sep 2016 14:11 UTC in reply to "RE[5]: Comment by ddc_"
Alfman Member since:
2011-01-28

nfeske,

First, thanks for replying!

Each kernel needs some memory for internal data structures, e.g., for remembering the states of threads and address spaces, or for the page tables supplied to the MMU. Where does this memory come from?

The traditional answer is that the kernel maintains a private memory pool to allocate such objects from. Usually, a fixed portion the available physical memory is reserved for the internal memory pool at boot time. All remaining memory is handed over to the user land.


I question this. I see no reason the kernel and userspace can't simply share the same memory pool. To the best of my knowledge, this is what operating systems generally do at the page level (with possible quotas, dma reservations, and whatnot).

1. What happens in the catastrophic event that the kernel memory exhausts? Most microkernels will simply panic! In contrast to a monolithic kernel that can somehow intelligently respond to memory pressure by swapping memory to disk, flushing the block cache, or sacrificing a process, a microkernel has no reasonable way to respond to such a condition. It has no notion of a block cache, no disk driver, and doesn't have any interpretation of the role of processes on top. For this reason, most microkernels (such as traditional L4) just stop the machine.


I agree, this is difficult to solve cleanly. I had a similar discussion with neolander, when he was around. One of the things we talked about is how an OOM killer could be safely implemented in his microkernel at the time. (Here is a link for anyone curious, but I don't expect you to read through it.)
https://theosperiment.wordpress.com/2012/11/02/out-of-memory/#commen...


2. Who can trigger the consumption of kernel memory? The frightening answer is: any user-mode program running on top of the kernel! For example, on traditional L4 kernels, any normal program can install new memory mappings from its own memory into its virtual address space. Inside the kernel, this operation may trigger the allocation of a new page table, and the need to store metadata about the mapping. In L4, this meta data is called "mapping database". For each memory mapping, its keeps a record how the memory mapping was established in order to revoke mappings if their origin disappears.


I agree with this too. Any syscall, or even a page fault or other interrupt, can require an allocation. However, I'm not seeing why this is a problem beyond the out-of-memory condition in #1.


Hence, the academic research community swapped it under the carpet for more than a decade. Even NOVA as a very recent member of the L4 family left this problem un-addressed. Other kernels provided band-aid solutions in the form of kernel-resource quotas, which work reasonably well in practice but don't scale. The seL4 kernel is the first kernel that at least allows the userland to fix the problem instead of letting it suffer from the kernel's limitation. Sure, it does not magically solve the problem but it enables Genode to solve it.


Here's the bit I'm stuck on: to me it doesn't seem like the problem is solved, it's just moved into a new scope. Am I right about this, or am I missing something?


Therefore, even though seL4's concept to manage kernel memory in user space is inconvenient compared to the more abstract interfaces of prior kernels, I regard it as its killer feature compared to all other kernels I have encountered in the past.


It certainly takes the microkernel concept to the extreme. How does it perform?

Reply Score: 2

RE[7]: Comment by ddc_
by nfeske on Fri 2nd Sep 2016 15:10 UTC in reply to "RE[6]: Comment by ddc_"
nfeske Member since:
2009-05-27

Thanks for the link to your previous discussion!

I question this. I see no reason the kernel and userspace can't simply share the same memory pool. To the best of my knowledge, this is what operating systems generally do at the page level (with possible quotas, dma reservations, and whatnot).


Sorry, I should have been clearer that I was referring specifically to L4 kernels. You are right that other microkernels indeed provide memory objects at the kernel interface and can thereby use a single memory pool. The out-of-kernel-memory problem exists regardless, doesn't it?

Here's the bit I'm stuck on: to me it doesn't seem like the problem is solved, it's just moved into a new scope. Am I right about this, or am I missing something?


seL4 pushes the problem from the kernel to the user land. Here, it is actually solved by Genode!

How does it perform?


I am unable to answer this question because it is overly generic. It ultimately depends on the workload.

There are plenty of opportunities for performance improvements in several areas, in the scopes of both the kernel and Genode. It is too early to draw any conclusions at this point. Anyway, you can give it a spin right now. ;-)

Reply Score: 1

RE[4]: Comment by ddc_
by Megol on Fri 2nd Sep 2016 13:18 UTC in reply to "RE[3]: Comment by ddc_"
Megol Member since:
2011-04-11

"Nope. If you'd actually understood anything about this topic you'd also understand that securely pushing responsibility to userspace is essential for most kinds of microkernels. And that isn't a problem.

Nice ad-hominem."

It isn't and that you think it is only reinforces my point.

"No sensible microkernel pushes kernel memory management into user space just so that it can achieve something simple enough to formally prove. There's a reason the very article you're posting in contains the text
Consequently, this kernel design makes the creation of a scalable user land extremely challenging. For this reason, most use cases of seL4 remain solely static in nature"

Yes, it contains that text. So what? It isn't relevant to the point I am discussing, but I'll chalk this up to not understanding the basics rather than an attempt at moving the goalposts.

Exactly why would moving kernel memory management to user space not be sensible? Compare it to moving device drivers that the kernel depends on to work (like timers) into userspace.

L4 kernels are designed to avoid any policy, instead pushing the responsibility of creating and enforcing policy onto the user-space parts of the operating system. Instead, the kernel provides the tools the rest of the operating system needs to be able to function reliably. While seL4 has many differences from other L4-based kernels, this is still true.

If anything, one could accuse the seL4 team of being disingenuous about the verification, as other required parts of an OS aren't verified. But that would be wrong too, as they never claimed to have verified a whole OS.

Reply Score: 1

RE[5]: Comment by ddc_
by Vanders on Fri 2nd Sep 2016 17:49 UTC in reply to "RE[4]: Comment by ddc_"
Vanders Member since:
2005-07-06

It isn't and that you think it is only reinforces my point.

It is and your reply entirely reinforces that it is.

Exactly why would moving kernel memory management to user space not be sensible?

See all those posts you skipped past from Alfman & nfeske? Go read them; they're discussing it in detail.

Reply Score: 2

RE: Comment by ddc_
by Alfman on Thu 1st Sep 2016 14:05 UTC in reply to "Comment by ddc_"
Alfman Member since:
2011-01-28

ddc_,

So in other words, formal verification of the kernel guarantees neither stability nor security. In one of the most prased microkernels. And microkernel people still don't get why mainstream operating systems are not based on microkernels...


Even if you dismiss the benefits of formal verification for being too limited, that isn't actually a reason to dismiss microkernels in general. While there's an ongoing debate about monolithic kernels versus microkernels, that's not really what this is about. We should be asking whether a microkernel that is formally verified in a contrived way is better than one that isn't formally verified at all.

Edited 2016-09-01 14:06 UTC

Reply Score: 2

RE[2]: Comment by ddc_
by ddc_ on Fri 2nd Sep 2016 10:38 UTC in reply to "RE: Comment by ddc_"
ddc_ Member since:
2006-12-05

No, I don't dismiss microkernels in general. What I dismiss is formal verification that influences design choices to the point where critical components are pushed out of the kernel for no better reason than facilitating the verification.

Edited 2016-09-02 10:40 UTC

Reply Score: 2

RE[3]: Comment by ddc_
by Alfman on Fri 2nd Sep 2016 13:09 UTC in reply to "RE[2]: Comment by ddc_"
Alfman Member since:
2011-01-28

ddc_,

No, I don't dismiss microkernels in general. What I dismiss is formal verification that influences design choices to the point where critical components are pushed out of the kernel for no better reason than facilitating the verification.


This is not at all how I read your original post, but thanks for clarifying.

Reply Score: 2

RE: Comment by ddc_
by Megol on Thu 1st Sep 2016 18:16 UTC in reply to "Comment by ddc_"
Megol Member since:
2011-04-11

"However, to achieve this high level of assurance, the kernel's responsibilities had to be reduced to an extreme that goes even beyond traditional microkernels. In particular, the kernel leaves the problem of managing kernel memory to be solved at the user level.

So in other words, formal verification of the kernel guarantees neither stability nor security. In one of the most prased microkernels. And microkernel people still don't get why mainstream operating systems are not based on microkernels...
"

Ludicrous! By having a verified trusted base, security is guaranteed as much as is theoretically possible.

Stability can't be guaranteed, nor can security. Even if one builds the most secure building with 10 m hardened-steel walls laminated with diamond (read: physically impossible to breach) for one's bank, stability can't be guaranteed - if the manager hires monkeys, there will be no (bank) business being done. Security can't be guaranteed either, as the manager could leave all doors open, including the vault.

Both security and stability are system-wide concepts; however, to achieve them at all one needs a trusted security base that is as close to fault-free as possible.

Reply Score: 1

RE[2]: Comment by ddc_
by ddc_ on Fri 2nd Sep 2016 10:23 UTC in reply to "RE: Comment by ddc_"
ddc_ Member since:
2006-12-05

Ludicrous! By having a verified trusted base, security is guaranteed as much as is theoretically possible.

True, but the benefit only extends as far as the trusted base does. If kernel memory management is outside the trusted base, every facility relying on it is no longer trusted. This design choice reduced the trusted base to something of no practical significance.

Edited 2016-09-02 10:24 UTC

Reply Score: 2

Same comment as always
by avgalen on Thu 1st Sep 2016 14:18 UTC
avgalen
Member since:
2010-09-23

These guys are amazing!

The way they work, the knowledge they have, the solutions they choose. It always makes sense from a technological point of view AND they can explain AND they can document.

I have no idea why these guys aren't "absorbed" by the major players to work on "the future"

(and I am saddened that the only response I got from a coworker I tried to discuss this with was "but can I play Pokemon Go on it?")

Reply Score: 3

RE: Same comment as always
by cbass on Fri 2nd Sep 2016 16:26 UTC in reply to "Same comment as always"
cbass Member since:
2013-08-28

This is the best comment I have ever read about Genode. So thank you!

Reply Score: 1