Graphics drivers in Flatpak have been a bit of a pain point. The drivers have to be built against the runtime to work in the runtime. This usually isn’t much of an issue but it breaks down in two cases:
[…]
- If the driver depends on a specific kernel version
- If the runtime is end-of-life (EOL)
How we deal with this is rather primitive: keep updating apps, don’t depend on EOL runtimes. This is in general a good strategy. An EOL runtime also doesn’t receive security updates, so users should not use them. Users will be users though, and if they have a goal which involves running an app which uses an EOL runtime, that’s what they will do. From a software archival perspective, it is also desirable to keep things working, even if they should be strongly discouraged.
In all those cases, the user most likely still has a working graphics driver, just not in the flatpak runtime, but on the host system. So one naturally asks oneself: why not just use that driver?
↫ Sebastian Wick
The solution the Flatpak team is looking into is to use virtualisation for the graphics driver, as the absolute last-resort option to keep things working when nothing else will. It’s a complex and interesting solution to a complex and interesting problem.

It’s just so ironic. It sounds like Flatpak wants to go in the direction of the distro’s native apt/rpm updates. But the problem there has always been dependency hell. If you merely install software from repos you might never even notice that there are sharks in the water, but as a developer compiling software I absolutely hate dependency hell. I’ve been “sideloading” a lot more AppImage/Flatpak packages recently to avoid such dependency issues. Sometimes this comes with very substantial bloat, but given that my systems are overprovisioned on RAM, I still find it worth it over dealing with dependency issues.
I don’t typically install software that depends on specific kernel drivers, so I’m not sure if I’d benefit from the virtualization being proposed here, but it’s an interesting idea. It’s kind of like distributing software as virtual machines. All of these package managers have found different solutions to the problem of library dependencies, and all have trade-offs. I wish we had smarter, more fundamental solutions for these problems. Perhaps at the language level we could rethink/redesign C interfaces in a way that could solve the problems that downstream users & package managers are experiencing without having to make as many compromises.
A source code repository like git contains all revisions of a dependency, and in principle all the information about what changed is there. I’m thinking off the cuff here, but hypothetically it could be possible to use this information to create a new, higher-level shared-object linking mechanism with more advanced capabilities when it comes to solving DLL hell.
This is a very common problem with Linux kernel modules too (due to the unstable ABI). Perhaps we’ve been solving these problems the wrong way, and we could evolve a higher abstraction beyond standard C interfaces to help solve them. Naturally this would be controversial, but if it could solve longstanding problems for which we don’t have a great solution, then it might be a worthwhile development to pursue.
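To make that concrete, here’s a minimal sketch of the kind of shim such a mechanism might derive from a repository’s history. Everything here is invented for illustration (“frob” is not a real library); the point is just that when a commit merely appends a parameter, the bridging wrapper is mechanical:

```c
/* Hypothetical example: suppose a library's history shows the prototype
 *     int frob_open(const char *path);
 * changing to
 *     int frob_open(const char *path, int flags);
 * A tool that can read that diff could emit a thin adapter so callers of
 * the old interface keep working against the new library. */

#define FROB_COMPAT_DEFAULT_FLAGS 0  /* assumed-safe default for the new argument */

/* current interface, as shipped by the newer library version */
int frob_open(const char *path, int flags);

/* generated adapter exposing the old single-argument call form;
 * old callers would be routed here (via a compat object, symbol
 * versioning, or a generated header) instead of the new symbol */
static inline int frob_open_compat(const char *path)
{
    return frob_open(path, FROB_COMPAT_DEFAULT_FLAGS);
}
```

The interesting question is how far beyond this trivial “parameter added with an obvious default” case such generation could actually go.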
IMHO, dependency hell is a solved problem, but everybody ignores the solution. GoboLinux is the best known of the distros that implement that solution (versioned directories for all libraries/programs, and every program has links to the libraries it uses in its own program directory). Elegant and very pleasant to work with.
I stopped using GoboLinux as my primary system long ago, but stealing packages from Debian or Arch was easy (although I sometimes resorted to changing paths in the binaries using sed, and to creating links with shortened names on disk when the path embedded in the binary was too short to fit an equal-length replacement…).
The big problem with that approach is that you have to patch vulnerabilities in all the versions of the libraries you have installed, and some of them can be quite old.
Antartica_,
I am such a fan of GoboLinux! I tried some of the same ideas in my own distro even before Gobo, but I couldn’t keep up with the workload of patching software to use my distro’s conventions. When I learned of GoboLinux I was thrilled to see others were just as interested as I was in these types of improvements. You are right, it is unfortunate that mainstream Linux distros haven’t taken the cue to do something similar, because what they have now is kind of ugly.
I really do like GoboLinux and it’s an insightful reference. I can see how that makes it easier to manage many versions of dependencies, and that has its benefits, although I was thinking of a solution where we could have better tooling to actually bridge the gaps between versions rather than having to rely on a multitude of versions in the first place.
Keeping several versions of library dependencies around for every package technically works and is more or less what Flatpak does (albeit in a different way than GoboLinux would), but the con is that having all these versions balloons storage and runtime memory requirements. And in my experience this overhead can be very significant compared to the distro’s native repos. The problem is that it takes a ton of work and coordination to maintain a distro and get to the point where software dependencies can be shared. So I’m trying to think of ways we might solve dependency hell using more sophisticated software interface mechanisms, and to do so in a way that does not create a large burden for developers. This can’t be done without more information about library interfaces, but that information is technically available in git repositories, so the idea is that maybe automated tools could detect those changes and use them to automatically bridge the gap between library versions. I realize this is an unconventional approach, but if it could be perfected it could be extremely valuable in solving the problems downstream maintainers are having.
The only library I know of that has managed to do what you suggest is libSDL. You can use newer libraries with older binaries, and you have sdl12-compat and sdl2-compat to run binaries built against libSDL-1.2/libSDL2 on top of the newer SDL releases (yes, I know that the emulation is not perfect, but it is more than what is available for most libraries).
Antartica_,
I’m not really referring to completely different APIs but rather trivial changes within one API that cause library dependencies to break.
Some libraries are better than others depending on their own policies. Many projects don’t put in the effort to maintain compatibility and I don’t think they should have to. However if the mapping could be automated I think that would be extremely valuable in allowing more software to share libraries that would otherwise require multiple versions.
Keep thinking and you’ll invent a versioned IPC protocol.
Serafean,
That’s an interesting comparison. Perhaps IPC could work, but I was thinking more of an in-process solution like function wrappers. Ideally these would be zero-cost, but that would require help from the language & compiler to apply the changes inline. That might be technically doable, but it seems far less likely to see widespread acceptance if it required non-standard compilers. So low-cost interface wrappers might be more achievable.
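For what it’s worth, GNU toolchains already offer one low-cost building block along these lines: symbol versioning, where the library keeps a thin wrapper bound to the old symbol version while new programs link against the new one. A rough sketch below; the function and version-node names are made up, but the .symver directives and the linker version script are the real mechanism:

```c
/* libfrob/compat.c -- sketch of GNU symbol versioning (the technique glibc
 * uses); "frob" and the LIBFROB_* version nodes are invented for illustration. */

/* current implementation, which newly linked programs will use */
int frob_impl_v2(const char *path, int flags)
{
    (void)path; (void)flags;
    /* ... actual work ... */
    return 0;
}

/* thin wrapper kept around for binaries built against the old ABI */
int frob_impl_v1(const char *path)
{
    return frob_impl_v2(path, 0);
}

/* Bind both implementations to the exported name "frob" at different
 * versions; "@@" marks the default version used when linking new code. */
__asm__(".symver frob_impl_v1, frob@LIBFROB_1.0");
__asm__(".symver frob_impl_v2, frob@@LIBFROB_2.0");

/* The LIBFROB_1.0 / LIBFROB_2.0 version nodes themselves are declared in a
 * linker version script passed via -Wl,--version-script=libfrob.map. */
```

The cost for old binaries is one extra call through the wrapper, which is about as close to “low cost” as it gets without compiler help.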
@Alfman
I do not run into these problems nearly as much on my distro. The distro itself may be a factor.
That said, my approach to this problem is Distrobox. I often install a Distrobox just to do software dev for a single project. I am able to tailor the environment to that project’s needs. When the Distrobox is not running, it is not polluting anything. When I am done (if ever), the Distrobox can be blown away with no local noise.
Of course, you can still get dependency hell inside the container. Using a Distrobox distro image that has up-to-date packages solves most of that. Arch Linux makes a great Distrobox to dev in. Almost everything you can think of will come from the repos or AUR with zero dependency hell. You can also use Distrobox to match where your app will be deployed. I have a RHEL Distrobox where I can do dev that is destined to run on RHEL (it is even fully legal to use their official container images for commercial work).
LeFantome,
Are you talking about the Flatpak code-amplification problem or dependency hell? Dependency hell isn’t a problem if you stick to your repos, and many users do, so they don’t experience it. But the more software you try installing from 3rd-party sources, the more likely these problems are to crop up.
In my case it’s usually when I need to install software that’s not in my distro, or else I need a different version. An example I run into is Libav/FFmpeg breaking their API, so software/code that used to work is no longer compatible.
I love it when I can just grab the dependency from the distro and it just works. But some projects don’t have stable APIs and don’t coordinate breakages with distros. Maybe they use bleeding-edge libraries with 0% chance of being in the distro, or the opposite: they may update a few times a year and be behind the distro. Either way it’s a problem. In FFmpeg’s case they love to add new parameters to existing function prototypes. No single version of the library is simultaneously compatible with all the software. 🙁
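The usual way projects cope with that today is compile-time guards keyed on FFmpeg’s version macros, mapping an old call onto the new one. A small sketch; the cutoff version below is from memory, so treat the numbers as approximate:

```c
#include <libavcodec/avcodec.h>   /* also pulls in the version macros */

/* avcodec_alloc_frame()/avcodec_free_frame() were superseded by
 * av_frame_alloc()/av_frame_free(); the boundary used here (55.28.1) is
 * approximate and should be checked against the FFmpeg changelog. */
#if LIBAVCODEC_VERSION_INT < AV_VERSION_INT(55, 28, 1)
#define av_frame_alloc  avcodec_alloc_frame
#define av_frame_free   avcodec_free_frame
#endif

static AVFrame *make_frame(void)
{
    return av_frame_alloc();   /* same call regardless of library vintage */
}
```

Every project carrying blocks like this by hand is exactly the kind of busywork that makes me think generated wrappers would pay off.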
Repo maintainers have to do work to ensure the software they distribute builds against the dependencies they provide. But that doesn’t help sideloaded software. I think it would be interesting to try solving this with auto-generated function wrappers inferred from source code commits. This could be easy for some trivial cases, but it makes me wonder what proportion of breakages could be rectified by small, trivial changes versus more major incompatibilities.