Linked by Thom Holwerda on Wed 6th Feb 2013 12:29 UTC, submitted by Anonymous
Gnome "Some GNOME developers are planning to implement an app format that allows developers to provide their Linux programs in distribution-independent files that can be installed as easily as smartphone apps. A sandbox model is supposed to isolate the apps from each other, and from the rest of the system, in a way that goes further than the isolation in current Linux distributions. Various developers worked to conceptualise such "Linux apps" at the GNOME Developer Experience Hackfest, which was held in the run-up to FOSDEM 2013 in Brussels. At the hackfest, the GNOME developers also declared JavaScript as the de-facto standard for GNOME programming." Right, because they haven't alienated enough of their users.
Thread beginning with comment 551766
RE[5]: Good in principal
by Neolander on Thu 7th Feb 2013 06:06 UTC in reply to "RE[4]: Good in principal"

While I love the idea of a Microkernel (and trust me, I do), you can accomplish sandboxing without it.

I'm admittedly not well-versed enough on the subject, though.

You can sandbox user-space apps without one, but the goal of microkernels is also to sandbox device drivers and the OS itself, so as to reduce the trusted computing base for a given task.

Considering how easily a buggy kernel-space driver can crash or lock up the whole OS, or worse (I should know, I periodically experience that with GPU drivers and a USB piezo motor controller which we use at work), that's a Good Thing.

And this is why almost all OSs that started out fully monolithic are now trying to push functionality back into user space. See WDDM drivers on Windows or FUSE on Linux as examples. I doubt they will ever manage a full transition, though; there's just too much work to do on a full-grown OS.

EDIT: As a bonus, microkernels also provide "natural" design guidelines for user app sandboxing. You can start by having security permissions that follow system component boundaries, such as "can use system components X, Y and Z", which are essentially free to implement. Then you iterate from there towards finer-grained and coarser-grained mechanisms where needed. With monolithic architectures, on the other hand, you generally have to design and implement your sandboxing rules from scratch.

Edited 2013-02-07 06:24 UTC

Reply Parent Score: 4

RE[6]: Good in principal
by moondevil on Thu 7th Feb 2013 07:55 in reply to "RE[5]: Good in principal"

Yes, most mobile OSs are approaching microkernel architectures, at least at the user-space level.

Mac OS X's sandboxing with XPC communication is also quite similar to how microkernels work.

QNX and Symbian have shown that it is possible to achieve high throughput when communicating between tasks by moving pointer/handle ownership instead of copying data.

Another approach is Microsoft's Drawbridge research project, which runs the operating system as a library on top of a hypervisor. It comes from the same research group that created Singularity.

Reply Parent Score: 4

RE[7]: Good in principal
by Nelson on Thu 7th Feb 2013 21:31 in reply to "RE[6]: Good in principal"


Do either of you know whether the drawbacks (alleged or otherwise) of microkernels on x86 also apply on ARM? I know things have different associated costs across CPUs, which is why I ask.

Also, were the potential performance drawbacks on x86 ever translated into real-life performance issues (enough to outweigh the stability gains)? Can they be quantified in any way?

I'm quite intrigued by the idea for the reasons you both state.

I believe I read that Singularity ran even user processes in kernel mode because of its verified compiler. That also seems like a good (if more dramatic) idea, but it's in the same vein.

Reply Parent Score: 2