The key difference between regular Ubuntu and Ubuntu Core is the underlying architecture of the system. Traditional Linux distributions rely mostly on conventional package systems (deb, in Ubuntu’s case), while Ubuntu Core relies almost entirely on Canonical’s relatively new snap package format. Ubuntu Core also gets a full 10 years of support from Canonical rather than the five years traditional Ubuntu LTS releases get. But it’s a bit more difficult to get started with, since you need an Ubuntu SSO account to even log in to a new Ubuntu Core installation in the first place.
It looks like SSO is not a hard requirement:
https://blog.plip.com/2018/10/09/bootstrap-ssh-on-ubuntu-core-with-out-ubuntu-sso-credentials/
Overall the distribution seems to be geared toward Internet-connected devices (IoT), which you would *not* log in to. So there is an automated install and upgrade mechanism, and nearly everything is a read-only snap (unless you configure otherwise).
I think this can be compared to a more modern version of Puppet.
An all-snap distro … who thought that would be a good idea? Especially on IoT devices, performance is going to be atrocious.
Not to mention the security nightmare. Snaps are served from closed-source, Canonical-owned servers with no way to verify that the files you get are what Canonical intended. One bad actor gains access to their servers and it’s game over for every snap-enabled installation out there.
They offer custom “stores”:
https://ubuntu.com/core/smartstart/guide/app-store-commissioning
Basically you can upload the snaps yourself and authorize them with your digital signature. However, Canonical still does the hosting and authentication.
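The trust model described here (publisher-signed packages on a store the publisher doesn’t host) can be sketched roughly. This is a minimal Python sketch, not snapd’s actual format: the key and blob are hypothetical, and an HMAC stands in for the asymmetric signature a real store would use.

```python
import hashlib
import hmac

# Sketch of the trust model, NOT snapd's real assertion format:
# the publisher signs a digest of the snap, the store merely hosts
# the blob, and the device verifies against the publisher's key.
# HMAC with a pre-shared key stands in for a real asymmetric signature.
publisher_key = b"example-publisher-key"  # hypothetical key material

def sign(blob: bytes) -> bytes:
    digest = hashlib.sha256(blob).digest()
    return hmac.new(publisher_key, digest, hashlib.sha256).digest()

def verify(blob: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(blob), signature)

snap_blob = b"fake snap contents"
sig = sign(snap_blob)
print(verify(snap_blob, sig))          # True: blob untouched
print(verify(snap_blob + b"x", sig))   # False: blob modified in transit
```

The point of the sketch: the host (Canonical, in this case) can refuse to serve a snap, but it cannot silently alter one without the device’s verification failing, assuming the device got the publisher’s key through some channel other than the store.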
sukru,
This is exactly the sort of centralization of control that many of us are critical of on other platforms, be it Apple, Microsoft, Google, whoever. Technology should not become tethered to a single store or privileged vendor. Rather than restate it, I’m just going to quote someone else who’s already said it…
https://blog.linuxmint.com/?p=3766
I think the concerns over centralization and giving competitors control have merit.
That aside, I think sandboxing applications is good. We definitely need more effective ways for owners to control processes in Linux. However, I do question the snap implementation. Although snaps make it easy to just do away with dependencies altogether, this design is highly inefficient.
I find it awkward, unnecessary and undesirable to have a mount point for each application, and I despise the rapid pollution of /proc/mounts, although admittedly better tooling might improve that. The other gripe is the massive bloat. Ordinarily we’d justify the bloat of shared libraries because they are shared between applications, but with snap this is not the case: resource bloat gets multiplied by every application, which adds up quickly once you start using a lot of snaps. Not only do shared resources need to be loaded repeatedly, they consume more memory and cause more thrashing of the CPU cache at runtime. Consequently snap’s method of bundling will never be as efficient as traditional packages.
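The multiplication effect can be put in rough numbers. This is a toy calculation with assumed figures (a 90 MB runtime library, 12 installed snaps), not measurements:

```python
# Hypothetical numbers for illustration: a common runtime library
# that snap-style packaging bundles into every application.
lib_size_mb = 90   # assumed size of the shared runtime
n_apps = 12        # assumed number of installed apps using it

shared = lib_size_mb            # traditional packages: one copy on disk
bundled = lib_size_mb * n_apps  # bundling: one copy per application

print(bundled - shared)  # 990 MB of pure duplication in this toy case
```

The same multiplier applies at runtime: each app maps its own copy, so the pages can’t be shared between processes the way a single system-wide library’s would be.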
So, given the choice I’d stick to traditional packages, and Flatpak seems a bit better for third-party software.
In response to the Fedora Spotify question: https://flathub.org/apps/details/com.spotify.Client
The community has done a great job of repackaging Snap apps into Flatpaks.
The problem is library versioning, and how most OSes don’t have a way to deal with it. The traditional way was to bundle everything together in its own directory tree. Linux gets around it by having most software go through a central distribution system, but this limits the libraries to the lowest common denominator version.
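That lowest-common-denominator effect can be sketched. A toy model, with entirely hypothetical apps and version ranges, where the distro must ship one library version that satisfies everyone:

```python
# Each app declares the range of library versions it supports
# (hypothetical data); a central distro must pick ONE version
# that falls inside every range at once.
apps = {
    "editor":  (2, 5),   # works with lib versions 2 through 5
    "browser": (4, 7),
    "player":  (3, 6),
}

lo = max(low for low, _ in apps.values())    # newest minimum anyone requires
hi = min(high for _, high in apps.values())  # oldest maximum anyone tolerates
compatible = list(range(lo, hi + 1))         # versions every app accepts

print(compatible)  # [4, 5] — the distro is stuck inside this window
```

Every app added to the archive can only shrink that window, which is why central repositories end up freezing libraries at whatever version breaks the fewest packages.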
Nix and the OSTree stuff are the latest attempts to fix this without lots of application bloat.
https://ostreedev.github.io/ostree/
I want to start ranting about how bad containers are on Linux, and how they should be more like Jails. However, I’m going to go write some code and enjoy my Friday.
Alfman,
Once again it is convenience vs openness, with a touch of revenue seeking.
I actually find using containers and read-only mounts for IoT a nice compromise. They waste space, but they make everything more secure. And for embedded devices there will most likely be only one or a few containers on each device, so not much waste after all.
On the other hand, there needs to be a central location to configure all those devices. Ideally it would be your own server, hosted privately or in a cloud. But Ubuntu does not work that way.
For the enterprise they offer their own SSO system, and basically say: if you want user management, you have to go through us and pay a per-user fee. It is the same in their other products; for example, MAAS will allow you to control physical racks of servers as if they were virtual machine hosts, but you cannot delegate responsibilities to local accounts, only Ubuntu SSO ones.
Here, there is a choice:
– Fork Ubuntu’s code, and maintain your own server
– Pay Ubuntu for commercial subscription, and receive professional service
There is no third option. Well, ideally there would be an open source fork, but nobody takes that on. (Who would want to deploy a very large scale IoT automation infrastructure, yet not have the money to buy Ubuntu licenses?)