“Some GNOME developers are planning to implement an app format that allows developers to provide their Linux programs in distribution-independent files that can be installed as easily as smartphone apps. A sandbox model is supposed to isolate the apps from each other, and from the rest of the system, in a way that goes further than the isolation in current Linux distributions. Various developers worked to conceptualise such “Linux apps” at the GNOME Developer Experience Hackfest, which was held in the run-up to FOSDEM 2013 in Brussels. At the hackfest, the GNOME developers also declared JavaScript as the de-facto standard for GNOME programming.” Right, because they haven’t alienated enough of their users.
enough said…
Nothing will prevent you from installing debs or whatever packages you want the old way though.
Do the GNOME developers use Linux on the desktop, or are they actually using Macs and trying to develop for tablets and phones? Because they seem to be unaware that the problems they’re trying to solve have already been solved (like with chroot and dpkg), which would be understandable if they weren’t using Linux.
Or perhaps they just want to be distro neutral instead of being tied to a distro-specific package system like dpkg? You would have known that if you had read the article.
That was my first thought when they made 3D acceleration a requirement and hid the power-off button so people would use suspend exclusively.
Maybe they launch Linux in a VM from time to time.
Sandboxing apps is a novel idea. I just don’t think they’ll pull it off correctly.
I’d have more faith in their ideas were they not trying to use JavaScript for apps. The amount of concentrated stupid in that idea is appalling.
I wouldn’t say novel but I would say nice to have. Honestly, I hope they use Google’s Native Client as the basis for it.
That way, they get a tested codebase for the runtime and compiler toolchain, both capable of working with any language that will compile under NaCl’s patched GCC, and they only need to focus their security work on the stuff they modify (e.g. the non-Google APIs they expose to apps to allow a native experience).
Of course, given their focus on JavaScript, I suspect they’ll consider sandboxing the native code overkill and will instead redo all the security work of squashing bugs in the JS runtime’s sandbox.
Micro-kernel operating system?
Probably overkill.
Why?
Symbian and QNX are quite fast.
While I love the idea of a Microkernel (and trust me, I do), you can accomplish sandboxing without it.
I’m admittedly not well versed enough on the subject, though.
You can sandbox user space apps without them, but the goal of microkernels is to also sandbox device drivers and the OS itself, so as to reduce the trusted computing base for a given task.
Considering how easily a buggy kernel-space driver can crash/lockup the whole OS or worse (I should know, I periodically experience that with GPU drivers and a USB piezo motor controller which we use at work), that’s a Good Thing.
And this is why almost all OSs which initially went all monolithic are trying to push stuff back to user space nowadays. See WDDM drivers on Windows or FUSE on Linux as an example. I doubt that they will ever manage a full transition though, there’s just too much work to do on a full-grown OS.
EDIT: As a bonus, microkernels also provide “natural” design guidelines for user app sandboxing. You can start by having security permissions that follow system component boundaries, such as “can use system components X, Y and Z”, which are essentially free to implement. Then you iterate from there towards finer-grained and coarser-grained mechanisms where needed. With monolithic architectures, on the other hand, you are generally forced to derive and implement your sandboxing rules from scratch.
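To make that concrete, here is a toy Python sketch of component-boundary permissions; the service names and manifest format are invented for the example, not anything GNOME has proposed:

ALL_SERVICES = {"network", "filesystem", "audio", "camera"}

class Broker:
    def __init__(self, manifest):
        # A manifest could declare something like {"app": "photo-viewer", "uses": ["filesystem"]}.
        self.allowed = set(manifest["uses"]) & ALL_SERVICES

    def request(self, app, service):
        # Grant a handle only for services inside the app's declared boundary.
        if service not in self.allowed:
            raise PermissionError(f"{app} may not use {service}")
        return f"handle-to-{service}"

broker = Broker({"app": "photo-viewer", "uses": ["filesystem"]})
print(broker.request("photo-viewer", "filesystem"))  # granted
# broker.request("photo-viewer", "network")          # would raise PermissionError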
Yes, most mobile OSes are approaching micro-kernel architectures, at least at the user-space level.
Mac OS X’s sandboxing with XPC communication is also quite similar to how micro-kernels work.
QNX and Symbian have shown that it is possible to achieve high throughput when communicating between tasks by moving pointer/handle ownership instead of copying data.
Another approach would be Microsoft’s research into using the operating system as a library on top of a hypervisor, Drawbridge. From the same research group that created Singularity.
http://research.microsoft.com/en-us/projects/drawbridge/default.asp…
Hi,
Do either of you know whether the drawbacks (alleged or otherwise) of microkernels on x86 also apply on ARM? I know things have different associated costs across CPUs, which is why I ask.
Also, were the potential performance drawbacks on x86 ever translated into real-life performance issues (enough to outweigh the stability gains), and can they be quantified in any way?
I’m quite intrigued by the idea for the reasons you both state.
I believe I read that Singularity ran even user processes in kernel mode because of its verified compiler. That also seems like a good (more dramatic) idea, but it’s in the same vein.
Come on, all the “cool kids” use JavaScript. GNOME is always hot for the latest fad (remember Corba?).
Don’t worry, MSFT has shiny object syndrome too. This fad needs to go. JavaScript is the exact opposite of what we should be aiming for.
sorry.. what?
I think Google Translator can help if you’re struggling with English; I don’t understand how else you could question that statement.
It’s about time. Apt-get is fine for open source projects, but for the Linux family of OS’s (and not just one distro) to gain serious traction, you really need a convenient binary distribution system that does not depend on a given distro’s choice of libraries and/or package managers.
Sandboxing is also one of the reasons I prefer Google Chrome beta over Mozilla Firefox for web browsing, especially with a grsecurity-hardened kernel that imposes further restrictions on chroot jails (sandboxes), preferably in addition to AppArmor-enforced policies. What’s overkill? Being able to stay productive without having to worry about which sites to visit, or with which options enabled, is a plus.
What you say is right, no doubt about that.
On the other hand, I think people are making systems way too complex for added security, while just using a bit of common sense when using a computer would already go a very long way.
Any decent OS already provides separation between users and separation between processes. That is one of the core jobs of your OS. Nowadays, people are tempted to run every service on that OS in a separate VM inside another OS. And the other thing is the sandboxing. Of course, if you run something in a sandbox, then that application cannot access your homedir, and you have to explicitly give it permission. What do you think John Doe does when he gets a popup “app x needs access to your homedir”? In my experience a lot of people will click on *anything* if they get the “promise” that they can download something for free.
So, is it safer? Yes… but in my opinion, it’s also a bit of overkill. With some common sense, you would already go a long way.
“Right, because they haven’t alienated enough of their users.”
Look, the mac boy attacks the Linux desktop, what a surprise.
This Linux boy will tell you the same thing. Gnome has done a horrible job in the user/dev relations area since Gnome 3 came out. When you tell your users and devs that what they want doesn’t mean crap, they simply go somewhere else. Heck, Gnome spawned not one but two full-fledged desktop forks, Mate and Cinnamon. In my opinion, it’s pretty damn ballsy to claim to be an open source project and yet go out of your way to prevent people from making changes to their systems. I was a long-time Gnome user, but if I want a panel on the bottom and no bar on top, then by god I will, and anyone who says otherwise can suck it.
And you base that opinion on what facts?
Oh really? Then how come they’ve designed the extensions system that, among other things, allows you to add a panel? Even hosting it on gnome.org (https://extensions.gnome.org/extension/3/bottom-panel)? And even planning to release a number of officially supported extensions that will enable a classic desktop (http://lwn.net/Articles/526082/)? I understand if the gnome devs ignore posts such as yours, though.
You don’t count Unity?
“Right, because they haven’t alienated enough of their users.”
WTF?!? Could you please explain this comment. I would love to understand the thought process that leads a sane person to this conclusion.
It’s been a well known problem with Linux desktops that there’s no separation between system components and user applications.
“GNOME developers are planning to implement an app format that allows developers to provide their Linux programs in distribution-independent files that can be installed as easily as smartphone apps.”
I wonder what this will do that http://0install.net doesn’t? Anyone know if they rejected it for some reason?
Portals sound interesting:
“When opening a file, the main system’s open file dialog would act as the portal provider; the app will only be given access to the selected file once the user has pressed the “open” button.”
In 0install, this is called a power-box. There’s a demo here:
http://0install.net/ebox.html
Hopefully there can be some sharing of ideas/code between these projects…
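For anyone curious what a portal/power-box could look like in practice, here is a rough Python sketch; the chooser function and the fd hand-off helper are hypothetical stand-ins, not code from either project:

import os
import socket

def choose_file_somehow():
    # Stand-in for the system open-file dialog; a real portal would show the
    # desktop's own chooser in a separate, trusted process.
    return input("Path to open: ").strip()

def portal_open_file():
    # The broker opens the file itself; the sandboxed app never sees the rest
    # of the filesystem, it only receives this one descriptor.
    path = choose_file_somehow()
    return os.open(path, os.O_RDONLY)

def hand_fd_to_app(sock, fd):
    # On Linux the descriptor would travel over a Unix socket via SCM_RIGHTS;
    # Python 3.9+ wraps that as socket.send_fds().
    socket.send_fds(sock, [b"file"], [fd])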
Yeah, I was thinking the same (actually, I think that every time someone comes up with yet another contained distribution method, but I digress…). 0install is rather neat and is by now pretty mature and stable.
NIH maybe?
0install does not use the cgroups feature of the Linux kernel; the new system does, and that makes for some major differences in implementation.
Cgroups combined with filesystem namespaces allow /opt/bundle to contain each individual application’s files: every application bundle sees its own /opt/bundle directory with its own contents. So packaged applications can use static paths to their resources and libraries, whereas 0install applications are forced to use dynamic paths.
Cgroups also allow live tracking of whether an application is running and absolute termination of a program and every program it started. They also provide resource access limits and can even hide the other running processes on the system from the application, all with quite minimal overhead.
And yes, the fact that applications running under this package management cannot see each other is again different from 0install.
0install has higher overhead than using the kernel’s built-in cgroups feature.
Yes, have a good read of http://people.gnome.org/~alexl/glick2/, Soulbender. It does quite a few things better than 0install in its design.
The thing is, this GNOME option is never going to be portable to platforms outside of Linux.
The cgroup/filesystem-alteration features did not exist when 0install was invented.
You might call this a tech-updated 0install that has the option of integrating better and supports building most existing Linux applications without requiring source code alteration. Again, a lot of applications require quite major alterations to work with 0install because of its dynamic-prefix requirement.
In fact, the wrapping that Glick2 does could in theory allow you to cgroup a completely different distribution and install its applications there.
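Just to illustrate the tracking/termination part, a rough Python sketch using the cgroup-v1 freezer (needs root; the /sys/fs/cgroup/freezer path and the group name are assumptions, and this is not the GNOME code):

import os
import signal
import subprocess

CGROUP = "/sys/fs/cgroup/freezer/demo-app"  # hypothetical group name

def launch_in_cgroup(cmd):
    os.makedirs(CGROUP, exist_ok=True)
    proc = subprocess.Popen(cmd)
    # Move the new process into the group; anything it forks stays in it too.
    with open(os.path.join(CGROUP, "tasks"), "w") as f:
        f.write(str(proc.pid))
    return proc

def kill_group():
    # Freeze first so nothing can fork while we are killing.
    with open(os.path.join(CGROUP, "freezer.state"), "w") as f:
        f.write("FROZEN")
    with open(os.path.join(CGROUP, "tasks")) as f:
        pids = [int(line) for line in f if line.strip()]
    for pid in pids:
        os.kill(pid, signal.SIGKILL)
    with open(os.path.join(CGROUP, "freezer.state"), "w") as f:
        f.write("THAWED")  # pending SIGKILLs are delivered once thawed

if __name__ == "__main__":
    p = launch_in_cgroup(["sleep", "60"])
    kill_group()
    p.wait()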
Thanks for the comments! I think these might fit together quite nicely though…
0install’s job is to download the various packages needed to run an application, check digital signatures, handle dependencies, upgrades, roll-back, etc. The output is the set of downloaded packages and how to link them together. Normally, that’s by setting environment variables, because that works everywhere. For example, a Python library might ask for its directory to be added to the application’s PYTHONPATH.
But 0install can also output other kinds of bindings, such as the “overlay” binding (which says that a package should appear at a particular point in the file-system, such as /opt).
If you have a cgroups-based sandbox that can support this, then you can just connect them together with something like this:
0install download http://example.com/app.xml --xml | cgroups-sandbox
(that selects and downloads all packages needed to run this app, and then outputs the required bindings as XML, ready for use by other tools)
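As a rough sketch of what the consuming side of that pipe might do (the element and attribute names below are illustrative, not 0install’s real selections schema):

import os
import sys
import subprocess
import xml.etree.ElementTree as ET

def apply_bindings(xml_text):
    env = dict(os.environ)
    overlays = []
    root = ET.fromstring(xml_text)
    for b in root.iter("environment"):  # e.g. prepend a directory to PYTHONPATH
        name, value = b.get("name"), b.get("insert")
        env[name] = value + os.pathsep + env.get(name, "")
    for b in root.iter("overlay"):      # e.g. make a package appear under /opt
        overlays.append((b.get("src"), b.get("mount-point")))
    return env, overlays

if __name__ == "__main__":
    env, overlays = apply_bindings(sys.stdin.read())
    # A real sandbox would now set up the overlay mounts in a private
    # namespace; this sketch only applies the environment bindings.
    subprocess.run(["/bin/sh", "-c", "echo $PYTHONPATH"], env=env)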
The EBox demo I linked above, for example, does something like this at the level of the programming language, making the required modules appear in the application’s namespace.
Of course, a system based on cgroups won’t work on the BSDs, on Windows, on OS X, etc. I’d imagine that most GNOME applications would want to support at least one of those. So if hard-coded paths are unavoidable, you’d need to implement an alternative to the cgroups thing for each platform (but GNOME will need to do that anyway).
That’s pretty neat (for Linux users) but I don’t see why this can’t be implemented as an optional feature of 0install.
So basically, they will create some kind of zip that bundles the executable (for one architecture, or multiple architectures?) together with dependent libraries and data. And then they’ll provide some kind of container to execute those files. The container will then check the requirements and set up a sandbox.
And, this is just my guess, they will probably make those “apps” available in some kind of gnome app store and the user will be able to download them and install them in the user’s home directory after download.
If that is the case, it wouldn’t really have any impact on traditional linux package managers.
Or am I thinking too much here?
Don’t really know what to think of it, but if that’s their idea, it doesn’t sound too bad. In that case, I would hope they don’t make it too gnome-specific (but that would be wishful thinking). On the other hand, I’ve stopped using Gnome and gone back to a more barebones Linux desktop.
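Purely as a guess at what such a bundle could look like, a minimal Python sketch (the manifest fields, paths and file names are entirely made up):

import json
import os
import subprocess
import zipfile

def install_bundle(bundle_path, apps_dir=os.path.expanduser("~/.local/apps")):
    # Unpack the archive into the user's home directory and read its manifest.
    with zipfile.ZipFile(bundle_path) as z:
        manifest = json.loads(z.read("manifest.json"))
        target = os.path.join(apps_dir, manifest["name"])
        z.extractall(target)
    return target, manifest

def run_bundle(target, manifest):
    # A real runtime would check manifest["arch"] and set up the sandbox here;
    # this just points the loader at the bundled libraries and runs the binary.
    env = dict(os.environ, LD_LIBRARY_PATH=os.path.join(target, "lib"))
    subprocess.run([os.path.join(target, manifest["exec"])], env=env)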
This is perhaps the only good idea I’ve seen come out of GNOME 3. Hey wait a minute, didn’t I propose that GNOME do exactly that a few years ago here on OS News?
I can only hope that other desktops take to this wonderful idea. Finally, we can begin to move away from “The Unix Way” and really start pushing things forward. Now if they would only do something about that bug with case sensitivity on the filesystem…