Linux Archive

Using (only) a Linux terminal for my personal computing in 2024

A month and a bit ago, I wondered if I could cope with a terminal-only computer. The only way to really find out was to give it a go. My goal was to see what it was like to use a terminal-only computer for my personal computing for two weeks, and more if I fancied it. ↫ Neil’s blog
I tried to do this too, once. Once. Doing everything from the terminal just isn’t viable for me, mostly because I didn’t grow up with it. Our family’s first computer ran MS-DOS (with a Windows 3.1 installation we never used), and I’m pretty sure the experience of using MS-DOS as my first CLI ruined me for life. My mental model for computing didn’t start forming properly until Windows 95 came out, and as such, computing is inherently graphical for me. No matter how many amazing CLI and TUI applications are out there – and there are many, many amazing ones – my brain just isn’t compatible with them.
There are a few tasks I prefer doing with the command line, like updating my computers or editing system files using Nano, but for everything else I’m just faster and more comfortable with a graphical user interface. This comes down to not knowing most commands by heart, and often not even knowing the options and flags for the most basic of commands, which means that even very basic operations – ones people comfortable with the command line perform without thinking – take me ages. I’m glad any modern Linux distribution – I use Fedora KDE on all my computers – offers both paths for almost anything you could do on your computer, and unless I specifically opt to do so, I literally – literally literally – never have to touch the command line.

What’s in a Steam Deck kernel anyway?

Valve, entirely going against the popular definition of Vendor, is still actively working on improving and maintaining the kernel for their Steam Deck hardware. Let’s see what they’re up to in this 6.8 cycle. ↫ Samuel Dionne-Riel
Just a quick look at what, exactly, Valve does with the Steam Deck Linux kernel – nothing more, nothing less. It’s nice to have simple, straightforward posts sometimes.

Linux to lose support for Apple and IBM’s failed PowerPC Common Hardware Reference Platform

Ah, the Common Hardware Reference Platform, IBM’s and Apple’s ill-fated attempt at taking on the PC market with a reference PowerPC platform anybody could build and expand upon while remaining (mostly) compatible with one another. Sadly, like so many other things Apple was trying to do before Steve Jobs returned, it never took off, and even Apple itself never implemented CHRP in any meaningful way. Only a few random IBM and Motorola computers ever fully implemented it; Apple didn’t get any further than basic CHRP support in Mac OS 8, and some PowerPC Macs were based on CHRP without actually being compatible with it.
We’re roughly three decades down the line now, and pretty much everyone except weird nerds like us has forgotten CHRP was ever even a thing, but Linux has continued to support it all this time. Now, though, this support too is coming to an end, as Michael Ellerman has informed the Linux kernel community that they’re thinking of getting rid of it. Only a very small number of machines are covered by CHRP support in Linux: the IBM B50, bplan/Genesi’s Pegasos/Pegasos2 boards, the Total Impact briQ, and maybe some Motorola machines, and that’s it. Ellerman notes that these machines seem to have zero active users, and anyone wanting to bring CHRP support back can always go back in the git history.
CHRP is one of the many, many footnotes in computing history, and with so few machines out there that supported it, and so few machines Linux’ CHRP support could even be used for, it makes perfect sense to remove this from the kernel, while obviously keeping it in git’s history in case anyone wants to work with it on their hardware in the future. Still, it’s always fun to see references to such old, obscure hardware and platforms in 2024, even if it’s technically sad news.

Linux 6.12 released

Version 6.12 of the Linux kernel has been released. The main feature is the merging of the real-time PREEMPT_RT patch set, most likely the conclusion of one of the longest-running merge sagas in Linux’ history. This means that Linux now fully supports both soft and hard real-time capabilities natively, which is a major step forward for the platform, especially when looking at embedded development. It’s no longer necessary to pull in real-time support from outside the kernel. Linux 6.12 also brings a huge number of improvements to graphics drivers, for both Intel and AMD graphics cards. With 6.12, Linux now supports the Intel Xe2 integrated GPU as well as Intel’s upcoming discrete “Battlemage” GPUs by default, and it contains more AMD RDNA4 support for those upcoming GPUs. DRM panic messages in 6.12 will show a QR code you can scan for more information, a feature written in Rust, and initial support for the Raspberry Pi 5 finally hit mainline too. Of course, there’s a lot more in here, like the usual LoongArch and ARM improvements, new drivers, and so on. If you’re a regular Linux user, you’ll see 6.12 make it to your distribution within a few weeks or months.

Improving Steam Client stability on Linux: setenv and multithreaded environments

Speaking of Steam, the Linux version of Valve’s gaming platform has just received a pretty substantial set of fixes for crashes, and Timothee “TTimo” Besset, who works for Valve on Linux support, has published a blog post with more details about what kind of crashes they’ve been fixing.
The Steam client update on November 5th mentions “Fixed some miscellaneous common crashes.” in the Linux notes, which I wanted to give a bit of background on. There’s more than one fix that made it in under the somewhat generic header, but the one change that made the most significant impact to Steam client stability on Linux has been a revamping of how we are approaching the setenv and getenv functions. One of my colleagues rightly dubbed setenv “the worst Linux API”. It’s such a simple, common API, available on all platforms that it was a little difficult to convince ourselves just how bad it is. I highly encourage anyone who writes software that will run on Linux at some point to read through “RachelByTheBay”‘s very engaging post on the subject. ↫ Timothee “TTimo” Besset
This indeed seems to be a specifically Linux problem, and due to the variability of Linux systems – different distributions, extensive user customisation, and so on – the debugging information was more difficult to parse than on Windows and macOS. After a lot of work grouping the debug information to try and make sense of it all, it turned out that the two functions in question were causing crashes in threads other than the ones calling them. Valve resorted to several solutions, from reducing the reliance on setenv and refactoring it with execvpe, to reducing the reliance on getenv through caching, to introducing “an ‘environment manager’ that pre-allocates large enough value buffers at startup for fixed environment variable names, before any threading has started”. It was especially this last one that had a major impact on reducing the number of crashes with Steam on Linux.
Besset does note that these functions are still used far too often, but that at this point it’s out of Valve’s control, because that usage comes from the operating system’s own libraries – X11, xcb, D-Bus, and so on. Besset also mentions that it would be much better if this issue could be addressed in glibc, and in the comments, a user by the name of Adhemerval reports that this is indeed something the glibc team is working on.

Torvalds thinks “AI” is 90% marketing, and Google claims 25% of its code is “AI”-generated

Torvalds said that the current state of AI technology is 90 percent marketing and 10 percent factual reality. The developer, who won Finland’s Millennium Technology Prize for the creation of the Linux kernel, was interviewed during the Open Source Summit held in Vienna, where he had the chance to talk about both the open-source world and the latest technology trends. ↫ Alfonso Maruccia at Techspot
Well, he’s not wrong. “AI” definitely feels like a bubble at the moment, and while there will probably eventually be useful implementations people might actually want to use to produce quality content, most “AI” features today produce a stream of obviously fake diarrhea full of malformed hands, lies, and misinformation. Maybe we’ll eventually work out these serious kinks, but for now, it’s mostly just a gimmick providing us with an endless source of memes. Which is fun, but not exactly what we’re being sold, and not something worth destroying the planet even faster for. Meanwhile, Google is going utterly bananas with its use of “AI” inside the company, with Sundar Pichai claiming 25% of code inside Google is now “AI”-generated.
We’re also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster. ↫ Sundar Pichai
So much here feels wrong. First, who wants to bet those engineers care a whole lot less about the generated code than they do about code they write themselves? Second, who wants to bet that the generated code is largely undocumented? Third, who wants to bet what the additional costs will be a few years from now, when the next batch of engineers tries to make sense of that undocumented generated code? Sure, Google might save a bit on engineers’ salaries now, but how much extra will it have to spend to unspaghettify that diarrhea code in the future?
It will be very interesting to keep an eye on this, and check back in, say, five years, and hear from the Google engineers of the future how much of their time is spent fixing undocumented “AI”-generated code. I can’t wait.

A deep dive into Linux’s new mseal syscall

If you love exploit mitigations, you may have heard of a new system call named mseal landing into the Linux kernel’s 6.10 release, providing a protection called “memory sealing.” Beyond notes from the authors, very little information about this mitigation exists. In this blog post, we’ll explain what this syscall is, including how it’s different from prior memory protection schemes and how it works in the kernel to protect virtual memory. We’ll also describe the particular exploit scenarios that mseal helps stop in Linux userspace, such as stopping malicious permissions tampering and preventing memory unmapping attacks. ↫ Alan Cao
The goal of mseal is to, well, literally seal a part of memory and protect its contents from being tampered with. It makes regions of memory immutable, so that while a program is running, its memory mappings cannot be modified by malicious actors. This article goes into great detail about this new feature, explains how it works, and what it means for security in the Linux kernel. Excellent light reading for the weekend.

Arch Linux and Valve deepen ties with direct collaboration

When Valve took its second major crack at making Steam machines happen, in the form of the Steam Deck, one of the big surprises was the company’s choice to base the Linux operating system the Steam Deck uses on Arch Linux, instead of the Debian base it was using before. It seems this choice is not only benefiting Valve, but also Arch.
We are excited to announce that Arch Linux is entering into a direct collaboration with Valve. Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers. ↫ Levente Polyak
This is great news for Arch, but of course, also for Linux in general. The work distributions do to improve their user experience tends to be picked up by other distributions, and it’s clear that Valve’s contributions have been vast. With these collaborations, Valve is also showing it’s in it for the long term, and not just interested in taking from the community, but also in giving, which is good news for the large number of people now using Linux for gaming. The Arch team highlights that these projects will follow the regular administrative and decision-making processes within the distribution, so we’re not looking at parallel efforts forced upon everyone else without a say.

Slowly booting full Linux on the Intel 4004 for fun, art, and absolutely no profit

Can you run Linux on the Intel 4004, the first commercially produced microprocessor, released to the world in 1971? Well, Dmitry Grinberg, the genius engineer who got Linux to run on all kinds of incredibly underpowered hardware, sought to answer this very important question. In short, yes, you can run Linux on the 4004, but much as with other extremely limited and barebones chips, you have to get… Creative. Very creative.
Of course, Linux cannot and will not boot on a 4004 directly. There is no C compiler targeting the 4004, nor could one be created due to the limitations of the architecture. The amount of ROM and RAM that is addressable is also simply too low. So, same as before, I would have to resort to emulation. My initial goal was to fit into 4KB of code, as that is what an unmodified unassisted 4004 can address. 4KB of code is not much at all to emulate a complete system. After studying the options, it became clear that MIPS R3000 would be the winner here. Every other architecture I considered would be harder to emulate in some way. Some architectures had arbitrarily-shifted operands all the time (ARM), some have shitty addressing modes necessitating that they would be slow (RISCV), some would need more than 4KB to even decode instructions (x86), and some were just too complex to emulate in so little space (PPC). … so … MIPS again… OK! ↫ Dmitry Grinberg
This is just one very small aspect of this massive undertaking, and the article and videos accompanying his success are incredibly detailed and definitely not for the faint of heart. The amount of skill, knowledge, creativity, and persistence on display here is stunning, and many of us can only dream of being able to do stuff like this. I absolutely love it. Of course, the Linux kernel had to be slimmed down considerably, as a lot of what’s currently in the kernel is of absolutely no use on such an old system. Boot time is still measured in days, but it helped a lot.
Grinberg also turned the whole setup into what is effectively an art piece you can hang on the wall, where you can have it run and, well, do things – not much, of course, but he did include a small program that draws the Mandelbrot set on the VFD and serial port, which is a neat trick. He plans on offering the whole thing as a kit, but a lot of it depends on getting enough of the old chips to offer a complete, ready-to-assemble kit in the first place.

Linux 6.11 released

Linus Torvalds just tagged the Linux 6.11 kernel as stable. There are many changes and new features in Linux 6.11 including numerous AMD CPU and GPU improvements, preparations for upcoming Intel platforms, initial block atomic write support for NVMe and SCSI drives, the DRM Panic infrastructure can now display a monochrome logo if desired, easier support for building Pacman kernel packages for Arch Linux, DeviceTree files for initial Snapdragon X1 laptops, and much more. ↫ Michael Larabel
Especially the Snapdragon stuff interests me, as I really want to move to ARM for my laptop needs at some point, and I’m obviously not going to be using Windows or macOS. I hope the bringup for the Snapdragon laptop chips is smooth sailing from here and picks up pace, because I’d hate for Linux to miss out on this transition. Qualcomm talked a big game about supporting Linux properly, but it feels like they’re – what a surprise – not backing those words up with actions so far.

Linux’s bedtime routine

How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated. ↫ Jacob Adams
This is a much deeper dive than I expected, and it blows my mind just how complex sleeping, hibernating, and waking a computer really is. Instinctively you know this, but seeing it spelled out like this really drives the point home – and this only covers going into hibernation. It also highlights how hard it must be for the developers involved to keep this working at all, especially on the wide variety of machines and hardware combinations Linux runs on. It wasn’t too long ago that pretty much the only platform where sleeping and waking worked reliably was Mac OS X, with its small, controlled hardware selection, so it’s kind of remarkable this works at all on Linux now. I haven’t had to worry about sleeping and waking with Linux for quite a while now, and it’s one of those things that “just works” so I never have to think about it. This definitely wasn’t always the case, though, and on both Linux and Windows I would just turn the whole feature off since it rarely worked reliably, especially on desktops. I’m sure it still breaks for people, but for me, it’s been rock solid, and reading through the linked article, I’m even more amazed about this than I already was.

Porting systemd to musl libc-powered Linux

A. Wilcox, the original creator of Adélie Linux, has ported systemd to musl, the glibc alternative.
I have completed an initial new port of systemd to musl. This patch set does not share much in common with the existing OpenEmbedded patchset. I wanted to make a fully updated patch series targeting more current releases of systemd and musl, taking advantage of the latest features and updates in both. I also took a focus on writing patches that could be sent for consideration of inclusion upstream. The final result is a system that appears to be surprisingly reliable considering the newness of the port, and very fast to boot. ↫ A. Wilcox
I absolutely adore Adélie Linux as a project, even if I don’t run it myself, since they have a very practical approach to software. Systemd is popular for a reason – it’s fast and capable – and it only makes sense for Adélie to offer it as a potential option, even when using musl. Choice is a core value of the open source and Linux world, and that includes the choice to use systemd, even for a distribution that has traditionally used something else. The port is already quite capable: Wilcox managed to replace OpenRC with systemd in-place on her system, and it booted up just fine – in about a third of the time OpenRC took, no less. It’s not ready for prime time yet, though, and most services are not yet packaged for systemd, an effort for which Adélie Linux intends to rely on upstream and cooperation with systemd experts from Gentoo and Fedora. They’re also working together with systemd, musl, and others to make any switching a user might want to do as easy as possible. A beta or anything like that is still a ways off, but it’s an impressive amount of progress already.

Linux scores a surprising gaming victory against Windows 11

The conversation around gaming on Linux has changed significantly during the last several years. It’s a success story engineered by passionate developers working on the Linux kernel and the open-source graphics stack (and certainly bolstered by the Steam Deck). Many of them are employed by Valve and Red Hat. Many are enthusiasts volunteering their time to ensure Linux gaming continues to improve. Don’t worry, this isn’t going to be a history lesson, but it’s an appropriate way to introduce yet another performance victory Linux is claiming over Windows. I recently spent some time with the Framework 13 laptop, evaluating it with the new Intel Core Ultra 7 processor and the AMD Ryzen 7 7480U. It felt like the perfect opportunity to test how a handful of games ran on Windows 11 and Fedora 40. I was genuinely surprised by the results! ↫ Jason Evangelho
I’m not surprised by these results. At all. I’ve been running exclusively Linux on my gaming PC for years now, and gaming has pretty much been a solved issue on Linux for a while now. I used to check ProtonDB before buying games on Steam without a native Linux version, but I haven’t done that in a long time, since stuff usually just works. In quite a few cases, we’ve even seen Windows games perform better on Linux through Proton than they do on Windows. An example that still makes me chuckle is that when Elden Ring was just released, it had consistent stutter issues on Windows that didn’t exist on Linux, because Valve’s Proton did a better job at caching textures. And now that the Steam Deck has been out for a while, people just expect Linux support from developers, and if it’s not there on launch, Steam reviews will reflect that. It’s been years since I bought a game that I’ve had to refund on Steam because it didn’t work properly on Linux.
The one exception remains games that employ Windows rootkits for their anticheat functionality, such as League of Legends, which recently stopped working on Linux because the company behind the game added a rootkit to their anticheat tool. Those are definitely an exception, though, and honestly, you shouldn’t be running a rootkit on your computer anyway, Windows or not. For my League of Legends needs, I just grabbed some random spare parts and built a dedicated, throwaway Windows box that obviously has zero of my data on it, and pretty much just runs that one stupid game I’ve sadly been playing for like 14 years. We all have our guilty pleasures. Don’t kink-shame. Anyway, if only a few years ago you had told me or anyone else that gaming on Linux would be a non-story, a solved problem, and that most PC games just work on Linux without any issues, you’d be laughed out of the room. Times sure have changed due to the dedication and hard work of both the community and various companies like Valve.

So you want to build an embedded Linux system?

This article is targeted at embedded engineers who are familiar with microcontrollers but not with microprocessors or Linux, so I wanted to put together something with a quick primer on why you’d want to run embedded Linux, a broad overview of what’s involved in designing around application processors, and then a dive into some specific parts you should check out — and others you should avoid — for entry-level embedded Linux systems. ↫ Jay Carlson
Quite the detailed guide about embedded Linux.

Override xdg-open behavior with xdg-override

Most application on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. ↫ xdg-override GitHub page
I’ve loved this project ever since I came across it a few days ago. Not because I need it – I really don’t – but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser – which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but use Chrome for all your work-related things. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved on Linux. Applications on Linux tend to use freedesktop.org’s xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko’s issue, changing the variable $XDG_CONFIG_HOME just for Slack, to point xdg-open at a different configuration file, doesn’t work, because the setting would be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn’t work either, of course, since that would affect all other applications, too. So, what’s the actual solution?
We’d like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical. However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we’ll “inject” into Slack by adding it to PATH. ↫ Dmytro Kostiuchenko
xdg-override takes this idea and runs with it:
It is based on the idea described above, but the script won’t generate proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it’ll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. ↫ Dmytro Kostiuchenko
I don’t fully understand how it works, but I get the overall gist of what it’s doing. I think it’s quite clever, and it solves a very specific issue in a non-destructive way. While it’s not something most people will ever need, it feels like the kind of tool that, if you do need it, will quickly become a default part of your toolbox or workflow.

Serpent OS prealpha0 released

Serpent OS, a new Linux distribution with a completely custom package management system written in Rust, has released its very, very rough pre-alpha. They’ve been working on this for four years, and they’re making some interesting choices regarding packaging that I really like, at least on paper.
This will of course appear to be a very rough (crap) prealpha ISO. Underneath the surface it is using the moss package manager, our very own package management solution written in Rust. Quite simply, every single transaction in moss generates a new filesystem tree (/usr) in a staging area as a full, stateless, OS transaction. When the package installs succeed, any transaction triggers are run in a private namespace (container) before finally activating the new /usr tree. Through our enforced stateless design, usr-merge, etc, we can atomically update the running OS with a single renameat2 call. As a neat aside, all OS content is deduplicated, meaning your last system transaction is still available on disk allowing offline rollbacks. ↫ Ikey Doherty
Since this is only a very rough pre-alpha release, I don’t have much more to say at this point, but I do think it’s interesting enough to let y’all know about it. Even if you’re not the kind of person to dive into pre-alphas, I think you should keep an eye on Serpent OS, because I have a feeling they’re on to something valuable here.

NVIDIA transitions fully towards open-source GPU Linux kernel modules

It’s a bit of a Linux news day today – it happens – but this one is good news we can all be happy about. After earning a bad reputation for mishandling its Linux graphics drivers for years, almost decades, NVIDIA has been turning the ship around these past two years, and today they made a major announcement: from here on out, the open source NVIDIA kernel modules will be the default for all recent NVIDIA cards.
We’re now at a point where transitioning fully to the open-source GPU kernel modules is the right move, and we’re making that change in the upcoming R560 driver release. ↫ Rob Armstrong, Kevin Mittman and Fred Oh
There are some caveats regarding which generations, exactly, should be using the open source modules for optimal performance. For NVIDIA’s most cutting-edge generations, Grace Hopper and Blackwell, you actually must use the open source modules, since the proprietary ones are not even supported. For GPUs from the Turing, Ampere, Ada Lovelace, or Hopper architectures, NVIDIA recommends the open source modules, but the proprietary ones are compatible as well. Anything older than that is restricted to the proprietary modules, as those cards are not supported by the open source modules. This is a huge milestone, and NVIDIA becoming a better team player in the Linux world is a big deal for those of us with NVIDIA GPUs – it’s already paying dividends in vastly improved Wayland support, which up until very recently was a huge problem. Do note, though, that this only covers the kernel module; the userspace parts of the NVIDIA driver are still closed-source, and there’s no indication that’s going to change.

Linux patch to disable Snapdragon X Elite GPU by default

Not too long ago it seemed like Linux support for the new ARM laptops running the Snapdragon X Pro and Elite processors was going to be pretty good – Qualcomm seemed to really be stepping up its game, and detailed in a blog post exactly what it was doing to make Linux a first-tier operating system on its new, fancy laptop chips. Now that the devices are in people’s hands, though, it seems all is not so rosy in this new Qualcomm garden. A recent Linux kernel DeviceTree patch outright disables the GPU on the Snapdragon X Elite, and the issue is, as usual, vendor nonsense: the GPU needs something called a ZAP shader to be useful.
The ZAP shader is needed as by default the GPU will power on in a specialized “secure” mode and needs to be zapped out of it. With OEM key signing of the GPU ZAP shader it sounds like the Snapdragon X laptop GPU support will be even messier than typically encountered for laptop graphics. ↫ Michael Larabel
This is exactly the kind of nonsense you don’t want to be dealing with, whether you’re a user, developer, or OEM, so I hope this gets sorted out sooner rather than later. Qualcomm’s commitments and blog posts about ensuring Linux is a first-tier platform are meaningless if the company can’t even get the GPU to work properly. These enablement problems should’ve been handled well before the devices entered circulation, so this is very disheartening to see. So, for now, hold off on X Elite laptops if you’re a Linux user.

Unified kernel image

UKIs can run on UEFI systems and simplify the distribution of small kernel images. For example, they simplify network booting with iPXE. UKIs make rootfs and kernels composable, making it possible to derive a rootfs for multiple kernel versions with one file for each pair. A Unified Kernel Image (UKI) is a combination of a UEFI boot stub program, a Linux kernel image, an initramfs, and further resources in a single UEFI PE file (device tree, cpu µcode, splash screen, secure boot sig/key, …). This file can either be directly invoked by the UEFI firmware or through a boot loader. ↫ Hugues
If you’re still a bit unfamiliar with unified kernel images, this post contains a ton of detailed practical information. Unified kernel images might become a staple for forward-looking Linux distributions, and I know for a fact that my distribution of choice, Fedora, has been working on them for a while now. The goal is to eventually simplify the boot process as a whole, and make better, more optimal use of the advanced capabilities UEFI gives us over the old, limited, 1980s BIOS model. Like I said a few posts ago, I really don’t want to be using traditional bootloaders anymore. UEFI is explicitly designed to just boot operating systems on its own, and modern PCs just don’t need bootloaders anymore. They’re points of failure users shouldn’t have to deal with in 2024, and I’m glad to see the Linux world seriously moving towards eliminating the need for them.

No more boot loader: please use the kernel instead

Most people are familiar with GRUB, a powerful, flexible, fully-featured bootloader that is used on multiple architectures (x86_64, aarch64, ppc64le OpenFirmware). Although GRUB is quite versatile and capable, its features create complexity that is difficult to maintain, and that both duplicate and lag behind the Linux kernel while also creating numerous security holes. On the other hand, the Linux kernel, which has a large developer base, benefits from fast feature development, quick responses to vulnerabilities and greater overall scrutiny. We (Red Hat boot loader engineering) will present our solution to this problem, which is to use the Linux kernel as its own bootloader. Loaded by the EFI stub on UEFI, and packed into a unified kernel image (UKI), the kernel, initramfs, and kernel command line, contain everything they need to reach the final boot target. All necessary drivers, filesystem support, and networking are already built in and code duplication is avoided. ↫ Marta Lewandowska
I’m not a fan of GRUB. It’s too much of a single point of failure, and since I’m not going to be dual-booting anything anyway, I’d much rather use something less complex. Systemd-boot is an option, but switching over from GRUB to systemd-boot, while possible on my distribution of choice, Fedora, is not officially supported, and there’s no guarantee it will keep working from one release to the next. The proposed solution here seems like another option, and it may even be a better one – I’ll leave that to the experts to discuss. It seems to me that the ideal we should be striving for is to have booting the operating system become the sole responsibility of the UEFI firmware, which usually already contains the ability to load any operating system that supports UEFI without explicitly installing a bootloader. It’d be great if you could set your UEFI firmware to just always show its boot menu, instead of hiding it behind a function key or whatever.
We made UEFI more capable precisely to address the various problems and limitations inherent in BIOS. Why are we still forcing UEFI to pretend it has those same limitations?