A short while ago, we talked about the hellish hiring process at a Silicon Valley startup, and today we’ve got another one. Apparently, it’s an open secret that the hiring process at Canonical is a complete dumpster fire. I left Google in April 2024, and have thus been casually looking for a new job during 2024. A good friend of mine is currently working at Canonical, and he told me that it’s quite a nice company with a great working environment. Unfortunately, the internet is full of people who had a poor experience: Glassdoor shows that only 15% had a positive interview experience, famous internet denizens like sara rambled on the topic, reddit, hackernews, indeed and blind all say it’s terrible, … but the idea of being decently paid to do security work on a popular Linux distribution was really appealing to me. ↫ Julien Voisin What follows is Byzantine and ridiculous, and all ultimately unnecessary since it turns out Mark Shuttleworth interviews applicants at the end of this horrid process and yays or nays people on vibes alone. You have to read it to believe it. One interesting note that I do appreciate is that Voisin used his rights under the GDPR to force Canonical to hand over the feedback about his application, since the GDPR considers it personal information. Delicious.
At the Linux Application Summit (LAS) in April, Sebastian Wick said that, by many metrics, Flatpak is doing great. The Flatpak application-packaging format is popular with upstream developers, and with many users. More and more applications are being published in the Flathub application store, and the format is even being adopted by Linux distributions like Fedora. However, he worried that work on the Flatpak project itself had stagnated, and that there were too few developers able to review and merge code beyond basic maintenance. ↫ Joe Brockmeier at LWN After reading this article and the long list of problems the Flatpak project is facing, I can’t really agree that “Flatpak is doing great”. Apparently, Flatpak is in maintenance mode: major problems remain untouched because nobody is working on the big-ticket items anymore. That seems like a serious predicament for a project still facing a myriad of major issues. For instance, Flatpak still uses PulseAudio instead of PipeWire, which means that if a Flatpak application needs permission to play audio, it also automatically gets permission to use the microphone. NVIDIA drivers also pose a big problem, network namespacing in Flatpak is “kind of ugly”, you can’t specify backwards-compatible permissions, and there are tons more problems. There are plenty of ideas and proposed solutions, but nobody to implement them, leaving Flatpak stagnant. Now that Flatpak has been adopted by quite a few popular desktop Linux distributions, it doesn’t seem particularly great that it’s having such trouble finding enough manpower to keep improving. There’s a clear push, especially among developers of end-user focused applications, for everyone to use Flatpak, but is that push really a wise idea if the project has stagnated? Go into any thread where people discuss the use of Flatpaks, and there are bound to be people experiencing problems, inevitably followed by suggestions to use third-party tools to break the already rather porous sandbox. Flatpak feels like a project that’s far from done or feature-complete, causing normal, everyday users to experience countless problems and issues. Reading straight from the horse’s mouth that the project has stagnated and isn’t being actively developed anymore is incredibly worrying.
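To make the audio problem concrete: in a Flatpak manifest, audio access is a single coarse socket permission, with no split between playback and capture, and the only user-side remedy is revoking it wholesale. A quick illustration using real flatpak syntax but a made-up application ID:

```
# In a manifest's finish-args, audio is granted with one flag, and
# playback and microphone access come bundled together:
#   --socket=pulseaudio

# Users can revoke it per application, but only in its entirety
# (the application ID here is hypothetical):
flatpak override --user --nosocket=pulseaudio com.example.SomeApp
```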
And the “copilot” branding. A real copilot? That’s a peer. That’s a certified operator who can fly the bird if you pass out from bad taco bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he’s shouting over your shoulder, “Let me code that bit real quick, I saw it in a Slashdot comment!” At that point, you’re not working with a copilot. You’re playing Russian roulette with a loaded dependency graph. You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit. ↫ Jj at Blogmobly The world has no clue yet that we’re about to enter a period of incredible decline in software quality. “AI” is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.
Opera Mini was first released in 2005 as a web browser for mobile phones, with the ability to load full websites by sending most of the work to an external server. It was a massive hit, but it started to fade out of relevance once smartphones entered mainstream use. Opera Mini still exists today as a web browser for iPhone and Android—it’s now just a tweaked version of the regular Opera mobile browser, and you shouldn’t use Opera browsers. However, the original Java ME-based version is still functional, and you can even use it on modern computers. ↫ Corbin Davenport I remember using Opera Mini back in the day on my PocketPC and Palm devices. It wasn’t my main browser on those devices, but if some site I really needed was acting up, Opera Mini could be a lifesaver. As we all remember, though, the mobile web before the arrival of the iPhone was a trashfire. Interestingly enough, we’ve circled back to the mobile web being a trashfire, but at least we can block ads now to make it bearable. Since Opera Mini is just a Java application, the client part of the equation will probably remain executable for a long time, but once Opera decides to shut down the server side of things, it will stop being useful. Perhaps one day someone will reverse-engineer the protocol and APIs, paving the way for a custom server we can all run as part of the retrocomputing hobby. There’s always someone crazy and dedicated enough.
The next Apple operating systems will be identified by year, rather than with a version number, according to people with knowledge of the matter. That means the current iOS 18 will give way to “iOS 26,” said the people, who asked not to be identified because the plan is still private. Other updates will be known as iPadOS 26, macOS 26, watchOS 26, tvOS 26 and visionOS 26. Apple is making the change to bring consistency to its branding and move away from an approach that can be confusing to customers and developers. Today’s operating systems — including iOS 18, watchOS 12, macOS 15 and visionOS 2 — use different numbers because their initial versions didn’t debut at the same time. ↫ Mark Gurman at Bloomberg OK.
If you use Unix today, you can enjoy relatively long file names on more or less any filesystem that you care to name. But it wasn’t always this way. Research V7 had 14-byte filenames, and the System III/System V lineage continued this restriction until it merged with BSD Unix, which had significantly increased this limit as part of moving to a new filesystem (initially called the ‘Fast File System’, for good reasons). You might wonder where this unusual number came from, and for that matter, what the file name limit was on very early Unixes (it was 8 bytes, which surprised me; I vaguely assumed that it had been 14 from the start). ↫ Chris Siebenmann I love these historical explanations for seemingly arbitrary limitations.
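The punchline of Siebenmann’s answer, for the impatient: a V7 directory entry was a fixed 16 bytes, a two-byte inode number followed by the name, which leaves exactly 14 bytes for the filename and packs a tidy 32 entries into each 512-byte block. In C, the historical layout looked roughly like this (modulo the original’s typedefs):

```c
/* Essentially V7's sys/dir.h: a directory was just an array of
 * these fixed-size 16-byte records, 32 per 512-byte block. */
#define DIRSIZ 14

struct direct {
    unsigned short d_ino;          /* 2-byte inode number; 0 marks a free slot */
    char           d_name[DIRSIZ]; /* NUL-padded, but not NUL-terminated when
                                      the name is exactly 14 bytes long */
};
```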
One of the ways in which Windows (and macOS) trails behind the Linux and BSD world is the complete lack of centralised, standardised application management. Windows users still have to scour the web to download sketchy installers straight from the Windows 95 days, amassing a veritable collection of updaters in the process, which either continuously run in the background, or annoy you with update pop-ups when you launch an application. It’s an archaic nightmare users of supposedly modern computers should not have to be dealing with. Microsoft has tried to remedy this, but in true Microsoft fashion, it did so halfheartedly, for instance with the Windows Package Manager, better known as winget. Instead of building an actual package manager, Microsoft basically just created a glorified script that downloads the same installers you download manually, and runs them in unattended mode in the background – it’s a download manager masquerading as a proper application management framework. To complicate matters, winget is only available as a command-line tool, meaning 99% of Windows users won’t be using it. There’s no graphical frontend in Windows, and it’s not integrated into Windows Update, so even if you strictly use winget to install your applications – which will be hard, as only about 1,400 applications use it – you still don’t have a centralised place to upgrade your entire operating system and all of its applications. It’s a mess, and Microsoft intends to address it. Again. This time, they’re finally doing what should have been the goal from the start: allowing applications to be updated through Windows Update. Built on the Windows Update stack, the orchestration platform aims to provide developers and product teams building apps and management tools with an API for onboarding their update(s) that supports the needs of their installers. The orchestrator will coordinate across all onboarded products that are updated on Windows 11, in addition to Windows Update, to provide IT admins and users with a consistent management plane and experience, respectively. ↫ Angie Chen on the Windows IT Pro Blog Sounds good, but hold on a minute – “orchestration platform”? So this isn’t the existing winget, but integrated into Windows Update, where it should’ve been all along? No, what we’re looking at here is Microsoft’s competitor to Microsoft’s winget inside Microsoft’s Windows Update – oh, and there’s also the Windows Store. In other words, once this rolls out, it’ll be yet another way to manage applications, existing inside Windows Update, and alongside winget (and the Windows Store). The way it works is surprisingly similar to winget: application developers can register an update executable with the orchestrator, and the orchestrator will periodically run this update executable to check for updates. In other words, this looks a hell of a lot like a mere download manager for existing updaters. What it’s definitively not, however, is winget – so if you’re a Windows application developer, you now not only have to register your application to work with winget, but also register it with this new orchestrator to work with Windows Update. This thing is so incredibly Microsoft.
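For reference, winget’s workflow today is a handful of commands that fetch and silently run the same vendor installers you would otherwise download by hand (package IDs vary; the one below is illustrative):

```
winget search firefox                # search the community manifest repository
winget install --id Mozilla.Firefox # download and run the installer unattended
winget upgrade --all                 # re-run installers for anything outdated
```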
It’s been 9 years since we disrupted Genode’s API. Back then, we changed the execution model of components, consistently applied the dependency-injection pattern to shun global side effects, and largely removed C-isms like format strings and pointers. These changes ultimately paved the ground for sophisticated systems like Sculpt OS. Since then, we identified several potential areas for further safety improvements, unlocked by the evolution of the C++ core language and inspired by the popularization of sum types for error propagation by the Rust community. With the current release, we uplift the framework API to foster a programming style that leaves no possible error condition unconsidered, reaching for a new level of rock-solidness of the framework. Section The Great API hardening explains how we achieved that. The revisited framework API comes in tandem with a new tool chain based on GCC 14 and binutils 2.44. ↫ Genode OS Framework 25.05 release notes This new release also brings a lot of progress on the integration of the TCP/IP stacks ported from Linux and lwIP, improvements to the Intel and VESA drivers, better power management of their Intel GPU multiplexer, and more. They’ve also added support for touchscreen gestures and millisecond precision for file modification times, and improved support for the seL4 kernel. Many of these changes will find their way into the next Sculpt OS release, or, in some cases, have already been added.
An incredibly primitive operating system, with just two instructions: compile (1) and execute (0). It is heavily inspired by Frank Sergeant’s 3-Instruction Forth and is a strip-down exercise following up SectorForth, SectorLisp, SectorC (the C compiler used here) and milliForth. Here is the full OS code in 46 bytes of 8086 assembly opcodes. ↫ 10biForthOS sourcehut page Yes, the entire operating system easily fits right here, inside an OSNews quote block: 50b8 8e00 31d8 e8ff 0017 003c 0575 00ea 5000 3c00 7401 eb02 e8ee 0005 0588 eb47 b8e6 0200 d231 14cd e480 7580 c3f4 ↫ 10biForthOS sourcehut page How do you actually use this operating system? Once the operating system is loaded at boot, it listens on the serial port for instructions. You can then send the instruction 1 followed by a byte of an assembly opcode, which will be compiled into a fixed location in memory. The instruction 0 will then execute the program. There’s also a version with keyboard support, as well as a much bigger version compiled for x86-64. Something like this inevitably raises the question of what an operating system really is, and whether this extremely limited and minimalist thing can be considered one. I’m not going to go too deep into this existential discussion, mostly because I land firmly on the side that this is indeed just as much an operating system as, say, Windows or MorphOS. This bit of code, when booted, allows you to operate the system. It’s an operating system.
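Driving the thing from a host machine would look something like this: a minimal sketch assuming a POSIX system, a made-up serial device path, and an arbitrary example opcode, with serial port configuration (baud rate and so on) left out:

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical serial device; configure it (baud rate etc.)
     * with stty or termios before running this. */
    int fd = open("/dev/ttyUSB0", O_WRONLY);
    if (fd < 0)
        return 1;

    /* Instruction 1: compile the byte that follows into memory.
     * 0x90 is the x86 NOP opcode, used here purely as an example. */
    unsigned char compile_nop[] = { 0x01, 0x90 };
    write(fd, compile_nop, sizeof compile_nop);

    /* Instruction 0: execute whatever has been compiled so far. */
    unsigned char execute[] = { 0x00 };
    write(fd, execute, sizeof execute);

    close(fd);
    return 0;
}
```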
Microsoft’s Recall feature, which takes screenshots of the contents of your screen every few seconds, saves them, and then runs text and image recognition to extract information from them, has had a rocky start. Even now that it’s out there and Microsoft deems it ready for everyone to use, it has huge security and privacy gaps, and one of them is that applications that contain sensitive information, such as Signal’s Windows application, cannot ‘opt out’ of having their contents scraped. Signal was rather unhappy with this massive privacy risk, and decided to do something about it. It’s called screen security, and it’s Windows-only because it’s specifically designed to counter Windows Recall. If you attempt to take a screenshot of Signal Desktop when screen security is enabled, nothing will appear. This limitation can be frustrating, but it might look familiar to you if you’ve ever had the audacity to try and take a screenshot of a movie or TV show on Windows. According to Microsoft’s official developer documentation, setting the correct Digital Rights Management (DRM) flag on the application window will ensure that “content won’t show up in Recall or any other screenshot application.” So that’s exactly what Signal Desktop is now doing on Windows 11 by default. ↫ Joshua Lund on the Signal blog Microsoft cares more about enforcing the rights of massive corporations than it does about respecting the privacy of its users. As such, everything is in place in Windows to ensure neither you nor Recall can take screenshots of, I don’t know, the Bee Movie, but nothing has been put in place to protect your private and sensitive messages in a service like Signal. This really tells you all you need to know about who Microsoft truly cares about, and it sure as hell isn’t you, the user. What Signal is doing is absolutely brilliant. By turning Windows’ digital rights management features against Recall to protect the privacy of Signal users, Signal has made it impossible – or at least very hard – for Microsoft to address this. Of course, this also means that taking screenshots of the Signal application on Windows for legitimate purposes is more cumbersome now, but since you can temporarily turn screen security off, it’s not impossible. I almost want other Windows developers to employ this same trick, just to make Recall less valuable, but that’s probably not a great idea considering how much it would annoy users just trying to take legitimate screenshots. My uneducated guess is that this is exactly why Microsoft isn’t providing developers with the kind of fine-grained controls to let Recall know what it can and cannot take screenshots of: Microsoft must know Recall is a feature for shareholders, not for users, and that users would ask developers to opt out of any Recall snooping if such APIs were officially available. Microsoft wants to make it as hard as possible for applications to opt out of being sucked into the privacy black hole that is Recall, but in doing so, might be pushing developers to use DRM to achieve the same goal. Just delicious.
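Signal’s post doesn’t name the exact API, but the documented Win32 mechanism for keeping a window out of screen captures is SetWindowDisplayAffinity. A minimal sketch of the technique, with the caveat that whether Signal sets precisely this flag is my assumption:

```c
#include <windows.h>

/* Exclude a window from Recall and other capture APIs by setting its
 * display affinity (assumed to be the "DRM flag" Signal refers to). */
void protect_window(HWND hwnd)
{
    /* WDA_EXCLUDEFROMCAPTURE (Windows 10 2004+): the window renders
     * normally on screen but shows up blank in captured output. */
    if (!SetWindowDisplayAffinity(hwnd, WDA_EXCLUDEFROMCAPTURE)) {
        /* Older builds only support WDA_MONITOR, which likewise
         * blanks the window in screenshots. */
        SetWindowDisplayAffinity(hwnd, WDA_MONITOR);
    }
}
```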
Signal also signed off with a scathing indictment of “AI” as a whole. “Take a screenshot every few seconds” legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like “How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?” — but more sophisticated threats are on the horizon. The integration of AI agents with pervasive permissions, questionable security hygiene, and an insatiable hunger for data has the potential to break the blood-brain barrier between applications and operating systems. This poses a significant threat to Signal, and to every privacy-preserving application in general. ↫ Joshua Lund on the Signal blog Heed this warning.
Windows NT 4 doesn’t virtualise well. This guide shows how to do it with Proxmox with a minimal amount of pain. ↫ Chris Jones Nothing to add, other than I love the linked website’s design.
plwm is a highly customizable X11 dynamic tiling window manager written in Prolog. Main goals of the project are: high code & documentation quality; powerful yet easy customization; covering most common needs of tiling WM users; and to stay small, easy to use and hack on. ↫ plwm GitHub page Tiling window managers are a dime a dozen, but the ones using a unique or uncommon programming language do tend to stand out.
Highlights of Linux 6.15 include Rust support for hrtimer and ARMv7, a new setcpuid= boot parameter for x86 CPUs, support for sched_ext to count and report internal events, x86 Intel and AMD PMU enhancements, nested virtualization support for VGICv3 on ARM, and support for emulating FEAT_PMUv3 on Apple Silicon. ↫ Marius Nestor at 9To5Linux On top of these highlights, there’s also a ton of other changes, from the usual additions of new drivers, to better support for RISC-V, and so much more.
A Linux kernel driver that turns a rotary phone dial into an evdev input device. ↫ Stefan Wiehler The year of Linux on the desktop is finally here. Thanks to Oleksandr Natalenko for pointing this gem out.
The Amiga, a once-dominant force in the personal computer world, continues to hold a special place in the hearts of many. But with limited next-gen hardware available and dwindling AmigaOS4 support, the future of this beloved platform seemed uncertain. That is, until four passionate Dutch individuals, Dave, Harald, Paul, and Marco, decided to take matters into their own hands. Driven by a shared love for the Amiga and a desire to see it thrive, they embarked on an ambitious project: to create a new, low-cost next-gen Amiga mainboard. ↫ Mirari’s Our Story page Experience has taught me to be… careful with news of new hardware from the Amiga world, but for once I have strong reasons to believe this one is actually the real deal. The development story – from the initial KiCad renders to the first five, fully functional prototype boards – seems to be on track, software support for Amiga OS is in development, Linux is working great already, and as of today, MorphOS also boots on the board. It’s called the Mirari, and it’s very Dutch. So, what are we looking at here? The Mirari is a micro-ATX board, sporting either a PowerPC T10x2 processor (2-4 e5500 cores) up to 1.5GHz or a PowerPC T2081 processor (4 dual-threaded e6500 cores with AltiVec 2.0) up to 1.8GHz, both designed by NXP in The Netherlands. It supports DDR3 memory, PCIe 2.0 (3.0 for the 4x slot when using the T2081), SATA and NVMe, the usual array of USB 2.0 and 3.2 ports, audio jacks, Ethernet, and so on. No, this is not a massive powerhouse that can take on the latest x86 or ARM machines, but it’s more than enough to power Amiga OS 4 or MorphOS, and it aims to be actually affordable. Being at the prototype stage means they’re not for sale quite yet, but the fact that they have a 100% yield so far and are comfortable enough to send one of the prototypes to a MorphOS developer, who then got MorphOS booting rather quickly, is a good sign. I also like the focus on affordability, which is often a problem in the Amiga world. I hope they make it to production, because I want one real bad.
Who doesn’t love a bug bounty program? Fix some bugs, get some money – you scratch my back, I pay you for it. The CycloneDX Rust (Cargo) Plugin decided to run one, funded by the Bug Resilience Program run by the Sovereign Tech Fund. That is, until “AI” killed it. We received almost entirely AI slop reports that are irrelevant to our tool. It’s a library and most reporters didn’t even bother to read the rules or even look at what the intended purpose of the tool is/was. This caused a lot of extra work which is why we decided to abandon the program. Thanks AI. ↫ Lars Francke On a slightly related note, I had to search the web today because I’m having some issues getting OpenIndiana to boot properly on my mini PC. For whatever reason, starting LightDM fails when booting the live USB, and LightDM’s log gives some helpful error messages. So, I searched for "failed to get list of logind seats" openindiana, and Google’s automatic “AI Overview” ‘feature’, which takes up everything above the fold and is thus impossible to miss, confidently told me to check the status of the logind service… With systemctl. On OpenIndiana, an illumos distribution without a trace of systemd. We’ve automated stupidity.
We are today officially deprecating two installation methods and three legacy CPU architectures. We always strive to have Home Assistant run on almost anything, but sometimes we must make difficult decisions to keep the project moving forward. Though these changes will only affect a small percentage of Home Assistant users, we want to do everything in our power to make this easy for those who may need to migrate. ↫ Franck Nijhof on the Home Assistant blog Home Assistant is quite popular among the kind of people who read OSNews, and this news might actually hit our little demographic particularly hard. The legacy CPU architectures they’re removing support for won’t make much of a difference, as we’re talking 32bit x86 and 32bit ARM, although that last one does include versions 1 and 2 of the Raspberry Pi, which were quite popular at the time. Do check to make sure you’re not running your Home Assistant installation on one of those. The bigger hit is the deprecation of two installation methods: Home Assistant Core and Home Assistant’s Supervised installation method. With Core, you’re running Home Assistant in a Python environment, and with Supervised, you’re installing the various components that make up Home Assistant manually. Supervised is used to install Home Assistant on unsupported operating systems, like the various flavours of BSD. What this means is that if you are running Home Assistant on, say, OpenBSD, you’re going to have to migrate soon. Apparently, these installation methods are not used very often, and are difficult for Home Assistant to support. These changes do not mean you can no longer use these installation methods; it just means they are not supported, will be removed from the documentation, and new issues with these methods will not be accepted. Of course, anyone is free to take over hosting any documentation and guides, as Home Assistant is open source. Home Assistant generally wants you to use Home Assistant OS, which is basically a Linux distribution designed to run Home Assistant, either on real hardware (which is what I do, on an x86 thin client) or in a container.
Let’s check in on TrueNAS, who apparently employ “AI” to handle customer service tickets. Kyle Kingsbury had to have dealings with TrueNAS’ customer support, and it was a complete trashfire of irrelevance and obviously wrong answers, spiraling all the way into utter lies. The “AI” couldn’t generate its way out of a paper bag, and for a paying customer who is entitled to support, that’s not a great experience. Kingsbury concludes: I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives. Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers. ↫ Kyle Kingsbury This time, it’s just about an upgrade process for a NAS, and the worst possible outcome “AI”-generated bullshit could lead to is a few lost files. Potentially disastrous on a personal level for the customer involved, but not exactly a massive problem. However, once we’re talking support for medical devices, medication, dangerous power tools, and worse, this could – and trust me, will – lead to injury and death. TrueNAS, for its part, contacted Kingsbury after his blog post blew up, and assured him that “their support process does not normally incorporate LLMs”, and that they would investigate internally what, exactly, happened. I hope the popularity of Kingsbury’s post has jolted whoever is responsible for customer service at TrueNAS into realizing that farming out customer service to text generators is a surefire way to damage your reputation.
On numerous occasions, we’ve talked about the issue facing non-GNOME GTK desktops, like Xfce, MATE, and Cinnamon: the popularity of Libadwaita. With more and more application developers opting for GNOME’s Libadwaita because of the desktop environment’s popularity, many popular GTK applications now look like GNOME applications instead of GTK applications, and they just don’t mesh well with traditional GTK desktops. Since Libadwaita is not themeable, applications that use it can’t really be made to feel at home on non-GNOME GTK desktops, unless said desktops adopt the entire GNOME design language, handing over control over their GUI design to outsiders in the process. The developers of Libadwaita, as well as the people behind GNOME, have made it very clear they do not intend to make Libadwaita themeable, and they are well within their rights to make that decision. I think it’s a bad decision – theming is a crucial accessibility feature – but it’s their project, their code, and their time, and I fully respect their decision, since it’s really not up to GNOME to worry about the other GTK desktops. So, what are the developers of Xfce, MATE, and Cinnamon supposed to do? Well, how about taking matters into their own hands? Clement Lefebvre, the lead developer of Linux Mint and its Cinnamon desktop environment, has soft-forked Libadwaita to add theme support to the library. They’re calling it libAdapta. libAdapta is libAdwaita with theme support and a few extras. It provides the same features and the same look as libAdwaita by default. In desktop environments which provide theme selection, libAdapta apps follow the theme and use the proper window controls. ↫ libAdapta’s GitHub page The reason they consider libAdapta a “soft-fork” is that all it does is add theme support; they do not intend to deviate from Libadwaita in any other way, and will follow Libadwaita’s releases. It will use the current GTK3 theme, and will fall back to the default Libadwaita look and feel if the GTK3 theme in question doesn’t have a libadapta-1.0 directory. This seems like a transparent and smart way to handle it. I doubt it will be long before libAdapta becomes a default part of a lot of user instructions online, GTK theme developers will probably add support for it pretty quickly, and perhaps even a lot of non-GNOME GTK desktop environments will adopt it by default. It will make it a lot easier for, say, the developers of MATE to make use of the latest Libadwaita applications, without having to either accept a disjointed, inconsistent user experience, or adopt the GNOME design language hook, line, and sinker and lose all control over the user experience they wish to offer to their users. I’m glad this exists now, and hope it will prove to be popular. I appreciate the pragmatic approach taken here – a relatively simple fork that doesn’t burden upstream, without long feature request threads where everybody is shouting at each other that needlessly spill over onto Fedi. This is how open source is supposed to work.
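The mechanism, as described, sounds refreshingly simple for theme authors: ship an extra directory next to the existing GTK3 stylesheets. A hypothetical theme layout (the exact file names inside libadapta-1.0 are my assumption, not taken from the project’s documentation):

```
MyTheme/
├── gtk-3.0/
│   └── gtk.css            # the existing GTK3 stylesheet
└── libadapta-1.0/
    └── gtk.css            # picked up by libAdapta apps; themes without
                           # this directory fall back to the stock look
```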