I’ve read this article several times now, and I’m still not entirely sure how to properly summarise the main points without leaving important details out. Boiled down to the bare essentials: which packages get updates on which Ubuntu release is a confusing mess that most normal users will never be able to untangle, potentially leaving them vulnerable to security flaws that have long been patched and are available on Ubuntu – just not on their specific Ubuntu version, for their specific customer type, or for the specific package in question.
So, in the case of McPhail here, they needed a patched version of tomcat9 for Ubuntu 22.04. This patched version was available to Ubuntu 18.04 users because not only is 18.04 an LTS release – meaning five years of support – Canonical also offers a commercial Extended Security Maintenance (ESM) subscription for 18.04, so if you’re paying for that, you get the patched tomcat9. On Ubuntu 20.04, another LTS release, the patched tomcat9 is available to everyone, but on the version McPhail is running, the newer LTS release 22.04, it’s only available to Ubuntu Pro subscribers (24.04 is not affected, so it’s not relevant to this discussion). Intuitively, this doesn’t make any sense.
The main cause of the weird discrepancy between 20.04 and 22.04 is that Canonical’s LTS support only covers the packages in main (about 10% of the total number of packages), whereas tomcat9 lives in universe (the other 90%). LTS packages in universe are only supported on a “best effort” basis, and one of the ways a patched universe package can be made available to non-paying LTS users is if it is inherited from Debian, which happens to be the case for tomcat9 in 20.04, while in 22.04, the fix is considered part of an Ubuntu Pro subscription.
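The main/universe split is visible in a package’s apt metadata: the Section: field carries a component prefix for everything outside main. Here’s a minimal sketch (in Python, using illustrative Section: values rather than live apt-cache output) of how that field maps to a component:

```python
# Sketch: classify a package's archive component from the "Section:" field
# in its apt metadata (as printed by e.g. `apt-cache show <pkg>`).
# In Ubuntu's archive, packages in main use a bare section ("java", "utils"),
# while other components are prefixed ("universe/java", "multiverse/video").
# The sample values below are illustrative, not live apt output.

def component_of(section: str) -> str:
    """Return the archive component implied by a Section: value."""
    prefix, _, _ = section.partition("/")
    return prefix if prefix in {"universe", "multiverse", "restricted"} else "main"

samples = {
    "tomcat9": "universe/java",  # universe: LTS coverage is "best effort"
    "openssl": "utils",          # main: covered by standard LTS support
}

for pkg, section in samples.items():
    print(pkg, "->", component_of(section))
```

So a user can at least check which side of the 10%/90% line a given package falls on before relying on LTS coverage for it.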
So, there’s a fixed package, but 22.04 LTS users, who may expect LTS to truly mean LTS, don’t get the patched version that exists and is ready to go without issues. Wild.
This is incredibly confusing, and would make me run for the Debian hills before my next reboot. I understand maintaining packages is a difficult, thankless task, but the nebulousness here is entirely of Canonical’s own making, and it’s without a doubt leaving vulnerable users who fully expect to be safe and all patched up because they’re using an LTS release.
I’m unsure what running to Debian would solve. If you install packages from outside Debian’s supported repos, they aren’t covered by its LTS either – e.g. the Debian Unstable branch.
Ubuntu Pro is free for desktop machines (up to a certain scale) and offers a wider range of support than the base LTS, including universe.
This feels equivalent to saying “if I refuse to turn on Windows Update, Microsoft doesn’t give me updates. How could Microsoft leave me vulnerable?? And why don’t they give me updates after support ends??” Feels like clickbait.
This aspect of centralized repos is difficult to solve, even for Debian. Backporting fixes onto older, unsupported code is obviously very labor intensive. Moreover, it can be hard to synchronize all projects on the same dependency versions. You may want to run LTS for stability, but if you need a newer version of some program, that can be hard to accomplish.
While acknowledging there are cons too, I do see this as a huge advantage for flatpaks: install whatever versions you want/need, independently of anything else. Furthermore, as independent packages, flatpaks are much easier to support. Many people are happy with the repos, and I say use whatever works for you, but in the long term I think flatpaks will win out, because the central repo model is just so much work (not for users, but for distros).
Well, the goal with using older versions is to maintain consistency and compatibility, so that your existing configuration keeps working without changes and no unexpected new functionality is introduced. If you update to the latest version, you lose this and may need to update your configuration to match.
Distributions often provide backports allowing some newer packages to run on an otherwise stable base.
The problem with containers is that they bundle together all the dependent libraries, usually in quite an opaque fashion. Even when your host system has up-to-date versions of various libraries, the container may bundle an out-of-date, vulnerable one, and it can be difficult to keep track of. This also encourages developers to tie things to particular versions of libs, rather than updating to newer versions or having a robust build system that can handle newer versions.
bert64,
Yes, I understand and see the appeal of LTS. But the problem is when a distro commits to LTS support, meanwhile upstream developers are working on the latest versions, fixing bugs and vulnerabilities on different code. The distro now has to take responsibility for maintenance and security patches for the LTS code. This is a ton of work and I don’t know how much falls through the cracks. It does seem like a huge duplication of work and I’m not sure how sustainable it is given the many thousands of packages involved, hence the push for containers.
The main issue I’ve experienced (on many occasions) is that you want to use LTS for stability, except for one package where a newer version is required for some reason. For example, when a newer version of nginx had support for websockets, which I needed, but support was absent in the LTS repos. This is a problem with the repo model and a huge advantage for self-contained packages (AppImage/Flatpak/Snap).
Not often enough. What I’d like to see is a system that lets you select a default baseline (stable/testing/etc) and override that setting for any arbitrary package as needed. This would be a killer feature, but of course it’s really hard to do because of dependencies. Flatpak can get us there, but at the cost of inefficiency, by running several different versions of the same dependencies.
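For what it’s worth, apt pinning gets Debian part of the way to that “baseline plus per-package overrides” model. A sketch of a preferences file, assuming bookworm-backports is already enabled in sources.list and nginx is the (illustrative) package being overridden:

```
# /etc/apt/preferences.d/nginx-backports  (illustrative example)
# Track stable for everything, but prefer nginx from bookworm-backports.
# Backports default to priority 100; raising nginx to 500 lets the newer
# backports version win over the stable one.
Package: nginx*
Pin: release a=bookworm-backports
Pin-Priority: 500
```

With that in place, a plain `apt install nginx` (or a one-off `apt install -t bookworm-backports nginx` without the pin) pulls the backported version while everything else tracks stable. The catch is exactly the one mentioned above: the overridden package still has to resolve its dependencies against the stable base.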
I agree. This is genuinely hard to solve, especially given that not everyone has the same needs.
How does this flow downstream?
I.e., how well (or badly) do distros based on Ubuntu (such as Mint, Pop!_OS, KDE neon, etc.) handle those updates?
Mote,
Downstream distros like Mint use the exact same packages and I think they even point to the same repos.
https://forums.linuxmint.com/viewtopic.php?p=2279570#p2279570
https://easylinuxtipsproject.blogspot.com/p/sources.html
Of course they have their own local additions and possibly custom modifications, but I expect the vast majority of packages are totally identical.
It’s kind of Ubuntu’s value add. They provide those updates to paying customers (even if the licence is free at smaller scale).
So far they haven’t enforced it like Red Hat have, but it’s their strategy to stop clones from competing directly without the associated costs.
So right now, Mint provides the same updates because it’s able to use the same repos, but if Canonical’s business sales start being affected or undercut (e.g. by a Rocky/Alma equivalent), then they can enforce it quite easily.
Remember, though: desktop doesn’t make Canonical money; server support does.
In addition to the .deb packaging format, Debian should support the Snap/Flatpak/AppImage packaging formats. That would help centralise FOSS packaging efforts and vastly reduce the maintenance work needed to support the GNU/Linux ecosystem.
You’d be surprised how many packages in Debian Stable don’t have security patches applied; Ubuntu at least covers those pretty quickly. But you’re right, there’s a false sense of security in running Ubuntu. I’ve been looking at openSUSE more and more over the last couple of years, and I like what I see.
Sodki,
Out of curiosity, are you aware of any data on this?
If you check Debian’s Security Bug Tracker (https://security-tracker.debian.org/tracker/), especially the view for Stable, you’ll see that there are a lot of unfixed vulnerable packages in there. I’ve also noticed there’s a good chance a given CVE does not apply to Unstable or Testing, either because it has been patched there or because it simply doesn’t apply to newer versions. As a concerned sysadmin, I simply can’t trust Debian to keep Stable secure, which is a shame. My GitLab CI pipelines that build and validate container images will just fail if I base my images on Debian Stable; it’s as simple as that.
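The tracker also publishes its data in a machine-readable JSON export (https://security-tracker.debian.org/tracker/data/json), so this kind of audit can be scripted. A minimal Python sketch over hand-written sample data shaped like that export – the package name and CVE ids here are made up, and the exact field layout should be checked against the live feed:

```python
# Sketch: tally open vs. resolved issues per release from data shaped like
# the Debian security tracker's JSON export.
# The sample below is hand-written and illustrative; real CVE ids, statuses,
# and the exact schema should be taken from the live export.
from collections import Counter

sample = {
    "examplepkg": {  # hypothetical package name
        "CVE-2024-0001": {"releases": {"bookworm": {"status": "open"},
                                       "sid": {"status": "resolved"}}},
        "CVE-2024-0002": {"releases": {"bookworm": {"status": "resolved"},
                                       "sid": {"status": "resolved"}}},
    },
}

def tally(data: dict, release: str) -> Counter:
    """Count issue statuses for one release across all packages and CVEs."""
    counts = Counter()
    for cves in data.values():
        for info in cves.values():
            status = info.get("releases", {}).get(release, {}).get("status")
            if status:
                counts[status] += 1
    return counts

print(tally(sample, "bookworm"))
print(tally(sample, "sid"))
```

Pointing something like this at the real export (per release, per package) is how you’d build the kind of spreadsheet-style comparison between distributions that gets mentioned further down the thread.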
Unlike Debian, Ubuntu and other distributions with an eye on the enterprise, like SUSE and its derivatives, will make an effort to keep their security trackers clean, for obvious reasons. But, just like Thom mentioned, “Canonical’s LTS support only covers the packages in main”, so you do get a false sense of security.
I find that a combination of openSUSE (both Leap and Tumbleweed) and Alpine Linux hits the sweet spot all around.
(Just to clarify, I’m very open regarding GNU/Linux distributions: Gentoo is my favourite one, Debian is my favourite project, my main workstations run Fedora, I’ve run Ubuntu in one form or another since 4.10, love me some Alpine Linux for containers, and for the past couple of years I have been experimenting with, using and appreciating openSUSE in increasingly different scenarios.)
Sodki,
You really came through with that link, thank you.
I did hand-check a few of the CVE entries listed for Debian, and to be fair they also say “needs triage” on Ubuntu. It’s just a few random samples, in no way a comprehensive comparison; we might need a spreadsheet for that.
https://ubuntu.com/security/CVE-2024-2199
https://ubuntu.com/security/CVE-2024-3657
https://ubuntu.com/security/CVE-2023-52168