No, Southwest Airlines is not still using Windows 3.1

A story that’s been persistently making the rounds since the CrowdStrike event is that while several airline companies were affected in one way or another, Southwest Airlines escaped the mayhem because they were still using Windows 3.1. It’s a great story that fits the current zeitgeist about technology and its role in society, underlining that what is claimed to be technological progress is nothing but trouble, and that it’s better to stick with the old. At the same time, anybody who dislikes Southwest Airlines can point and laugh at the bumbling idiots working there for still using Windows 3.1. It’s like a perfect storm of technology news clickbait and ragebait. Too bad the whole story is nonsense. But how could that be? It’s widely reported by reputable news websites all over the world, shared on social media like a strain of the common cold, and nobody seems to question it or doubt the veracity of the story. It seems that Southwest Airlines running on an operating system from 1992 is a perfectly believable story to just about everyone, so nobody is questioning it or wondering if it’s actually true. Well, I did, and no, it’s not true. Let’s start with the actual source of the claim that Southwest Airlines was unaffected by CrowdStrike because they’re still using Windows 3.11 for large parts of their primary systems. This claim is easily traced back to its origin – a tweet by someone called Artem Russakovskii, stating that “the reason Southwest is not affected is because they still run on Windows 3.1”. This tweet formed the basis for virtually all of the stories, but it contains no sources, no links, no background information, nothing. It was literally just this one line. It turned out to be a troll tweet. A reply to the tweet by Russakovskii a day later made that very clear: “To be clear, I was trolling last night, but it turned out to be true. Some Southwest systems apparently do run Windows 3.1. lol.” However, the article linked in that tweet doesn’t cite any sources either, so we’re right back where we started. After quite a bit of digging – that is, clicking a few links and like 3 minutes of searching online – following the various references and links back to their sources, I managed to find where all these stories actually come from, and arrive at the root claim that spawned all the others. It’s from an article by The Dallas Morning News, titled “What’s the problem with Southwest Airlines scheduling system?” At the end of last year, Southwest Airlines’ scheduling system had a major meltdown, leading to a lot of cancelled flights and stranded travelers just around the Christmas holidays. Of course, the media wanted to know what caused it, and that’s where this Dallas Morning News article comes from. In it, we find the paragraphs that started the story that Southwest Airlines is still using Windows 3.1 (and Windows 95!): Southwest uses internally built and maintained systems called SkySolver and Crew Web Access for pilots and flight attendants. They can sign on to those systems to pick flights and then make changes when flights are canceled or delayed or when there is an illness. “Southwest has generated systems internally themselves instead of using more standard programs that others have used,” Montgomery said. 
“Some systems even look historic like they were designed on Windows 95.” SkySolver and Crew Web Access are both available as mobile apps, but those systems often break down during even mild weather events, and employees end up making phone calls to Southwest’s crew scheduling help desk to find better routes. During periods of heavy operational trouble, the system gets bogged down with too much demand. ↫ Kyle Arnold at The Dallas Morning News That’s it. That’s where all these stories trace their origin to. These few paragraphs do not say that Southwest is still using ancient Windows versions; they just state that the systems Southwest developed internally, SkySolver and Crew Web Access, look “historic like they were designed on Windows 95”. The fact that they are also available as mobile applications should further make it clear that no, these applications are not running on Windows 3.1 or Windows 95. Southwest pilots and cabin crews are definitely not carrying around pocket laptops from the ’90s. These paragraphs were then misread, misunderstood, and mangled in a game of social media and bad reporting telephone, and here we are. The fact that nobody seems to have taken the time to click through a few links to find the supposed source of these claims, instead focusing on cashing in on the clicks and rage these stories would elicit, is a rather damning indictment of the state of online (tech) media. Many of the websites reporting on these stories are part of giant media conglomerates, have a massive number of paid staff, and they’re being outdone by a dude in the Arctic with a small Patreon, minimal journalism training, and some common sense. This story wasn’t hard to debunk – a few clicks and a few minutes of online searching is all it took. Ask yourself – why do these massive news websites not even perform the bare minimum?

A brief history of Dell UNIX

“Dell UNIX? I didn’t know there was such a thing.” A couple of weeks ago I had my new XO with me for breakfast at a nearby bakery café. Other patrons were drawn to seeing an XO for the first time, including a Linux person from Dell. I mentioned Dell UNIX and we talked a little about the people who had worked on Dell UNIX. He expressed surprise that mention of Dell UNIX evokes the above quote so often and pointed out that Emacs source still has #ifdef for Dell UNIX. Quick Googling doesn’t reveal useful history of Dell UNIX, so here’s my version, a summary of the three major development releases. ↫ Charles H. Sauer I sure had never heard of Dell UNIX, and despite the original version of the linked article being very, very old – 2008 – there are a few updates from 2020 and 2021 that add links to the files and instructions needed to install, set up, and run Dell UNIX in a virtual machine; 86Box or VirtualBox specifically. What was Dell UNIX? In the late ’80s, Dell started the Olympic project, an effort to create a completely new architecture spanning desktops, workstations, and servers, some of which would be using multiple processors. When searching for an operating system for this project, the only real option was UNIX, and as such, the Olympic team set out to develop a UNIX variant. The first version was based on System V Release 3.2, used Motif and the X Window System, came with a DOS virtual machine called Merge to run, well, DOS applications, and offered compatibility with Microsoft Xenix. It might seem strange to us today, but Microsoft’s Xenix was incredibly popular at the time, and compatibility with it was a big deal. The Olympic project turned out to be too ambitious on the hardware front so it got cancelled, but the Dell UNIX project continued to be developed. The next release, Dell System V Release 4, was a massive release, and included a full X Window System desktop environment called X.desktop, an office suite, e-mail software, and a lot more. It also contained something Windows wouldn’t be getting for quite a few years to come: automatic configuration of device drivers. This was apparently so successful that it reduced the number of support calls during the first 90 days of availability by 90% compared to the previous release. Dell SVR4 finally seemed like real UNIX on a PC. We were justifiably proud of the quality and comprehensiveness, especially considering that our team was so much smaller than those of our perceived competitors at ISC, SCO and Sun(!). The reviewers were impressed. Reportedly, Dell SVR4 was chosen by Intel as their reference implementation in their test labs, chosen by Oracle as their reference Intel UNIX implementation, and used by AT&T USL for in house projects requiring high reliability, in preference to their own ports of SVR4.0. (One count showed Dell had resolved about 1800 problems in the AT&T source.) I was astonished one morning in the winter of 1991-92 when Ed Zander, at the time president of SunSoft, and three other SunSoft executives arrived at my office, requesting Dell help with their plans to put Solaris on X86. ↫ Charles H. Sauer Sadly, this would also prove to be the last major release of Dell UNIX. After a few more point releases, the brass at Dell had realised that Dell UNIX, intended to sell Dell hardware, was mostly being sold to people running it on non-Dell hardware, and after a short internal struggle, the entire project was cancelled since it was costing them more than it was earning them. 
As I noted, the article contains the files and instructions needed to run Dell UNIX today, on a virtual machine. I’m definitely going to try that out once I have some time, if only to take a peek at that X.desktop, because that looks absolutely stunning for its time.

OpenBSD workstation for the people

This is an attempt at building an OpenBSD desktop that could be used by newcomers or by people that don’t care about tinkering with computers and just want a working daily driver for general tasks. Somebody will obviously need to know a bit of UNIX but we’ll try to limit it to the minimum. ↫ Joel Carnat An excellent, to-the-point, no-nonsense guide about turning a default OpenBSD installation into a desktop operating system running Xfce. You definitely don’t need intimate, arcane knowledge of OpenBSD to follow along with this one.

OpenBSD gets hardware accelerated video decoding/encoding

Only yesterday, I mentioned that one of the main reasons I decided to switch back to Fedora from OpenBSD was performance issues – and one of them was definitely the lack of hardware acceleration for video decoding/encoding. The lack of such technology means that decoding/encoding video is done using the processor, which is far less efficient than letting your GPU do it – which results in performance issues like stuttering and tearing, as well as a drastic reduction in battery life. Well, that’s changed now. Thanks to the work of, well, many, a major commit has added hardware accelerated video decoding/encoding to OpenBSD. Hardware accelerated video decode/encode (VA-API) support is beginning to land in #OpenBSD -current. libva has been integrated into xenocara with the Intel userland drivers in the ports tree. AMD requires Mesa support, hence the inclusion in base. A number of ports will be adjusted to enable VA-API support over time, as they are tested. ↫ Bryan Steele This is great news, and a major improvement for OpenBSD and the community. Apparently, performance in Firefox is excellent, and with simply watching video on YouTube being something a lot of people do with their computers – especially laptops – anyone using OpenBSD is going to benefit immensely from this work.

1989 networking: NetWare 386

NetWare 386, or 3.0, was a very limited release, with very few copies sold before it was superseded by newer versions. As such, it was considered lost to time, since it was only sold to large corporations – for a massive price tag of almost 8000 dollars – who obviously didn’t care about software preservation. There are no original disks left, but a recent “warez” release has made the software available once again. As always, pirates save the day.

Managing Classic Mac OS resources in ResEdit

The Macintosh was intended to be different in many ways. One of them was its file system, which was designed for each file to consist of two forks, one a regular data fork as in normal file systems, the other a structured database of resources, the resource fork. Resources came to be used to store a lot of standard structured data, such as the specifications for and contents of alerts and dialogs, menus, collections of text strings, keyboard definitions and layouts, icons, windows, fonts, and chunks of code to be used by apps. You could extend the types of resource supported by means of a template, itself stored as a resource, so developers could define new resource types appropriate to their own apps. ↫ Howard Oakley And using ResEdit, a tool developed by Apple, you could manipulate the various resources to your heart’s content. I never used the classic Mac OS when it was current, and only play with it as a retro platform every now and then, so I never used ResEdit when it was the cool thing to do. Looking back, though, and learning more about it, it seems like just another awesome capability that Apple lost along the way towards modern Apple. Perhaps I should load up ResEdit on one of my old Macs and see with my own eyes what I can do with it.
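To make the “structured database of resources” idea a bit more concrete, here’s a minimal sketch, in Python, that lists the resource types stored in a resource fork, following the classic resource fork layout documented in Inside Macintosh (a 16-byte header pointing at a data area and a resource map, with the map holding a type list and reference lists). It assumes you’re reading the fork either from a raw dump or, on modern macOS, through the special ..namedfork/rsrc path; the file name below is purely hypothetical.

import struct

def list_resource_types(fork_path):
    # Read the whole resource fork; on macOS, "SomeFile/..namedfork/rsrc"
    # exposes the fork of "SomeFile" as a regular readable path.
    with open(fork_path, "rb") as fh:
        fork = fh.read()

    # Resource header: offsets and lengths of the data area and the map
    # (four big-endian 32-bit values).
    data_off, map_off, data_len, map_len = struct.unpack_from(">IIII", fork, 0)

    # Resource map: 16 reserved bytes (header copy), 4 + 2 reserved bytes,
    # 2 attribute bytes, then 16-bit offsets (from the start of the map)
    # to the type list and the name list.
    type_list_off, name_list_off = struct.unpack_from(">HH", fork, map_off + 24)
    type_list = map_off + type_list_off

    # The type list starts with the number of types minus one.
    num_types = struct.unpack_from(">H", fork, type_list)[0] + 1

    types = []
    for i in range(num_types):
        # Each entry: 4-byte type code, resource count minus one, and the
        # offset to that type's reference list (unused here).
        code, count_minus_one, _ref_off = struct.unpack_from(
            ">4sHH", fork, type_list + 2 + i * 8
        )
        types.append((code.decode("mac_roman"), count_minus_one + 1))
    return types

# Hypothetical example: list the resource types of an old application's fork.
for code, count in list_resource_types("MyOldApp/..namedfork/rsrc"):
    print(f"{code!r}: {count} resource(s)")

Nothing here writes anything back, so it’s a safe way to peek at old files before opening them in ResEdit proper.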

Google URL Shortener links will no longer be available

In 2018, we announced the deprecation and transition of Google URL Shortener because of the changes we’ve seen in how people find content on the internet, and the number of new popular URL shortening services that emerged in that time. This meant that we no longer accepted new URLs to shorten but that we would continue serving existing URLs. Today, the time has come to turn off the serving portion of Google URL Shortener. Please read on below to understand more about how this will impact you if you’re using Google URL Shortener. ↫ Sumit Chandel and Eldhose Mathokkil Babu It should cost Google nothing to keep this running for as long as Google exists, and yet, this, too, has to be killed off and buried in the Google Graveyard. We’ll be running into non-resolving Google URL Shortener links for decades to come, on large, popular websites as well as on obscure forums and small websites. You’ll find a solution to some obscure problem a decade from now, but the links you need will be useless, and you’ll rightfully curse Google for being so utterly petty. Relying on anything Google that isn’t directly serving its main business – ads – is a recipe for disaster, and will cause headaches down the line. Things like Gmail, YouTube, and Android are most likely fine, but anything consumer-focused beyond those core products is really a lottery.
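If you have goo.gl links squirrelled away in old notes, bookmarks, or forum posts, now is the time to record where they actually point. As a rough sketch – assuming the service still answers with an ordinary HTTP redirect, and using a made-up short link – the Python standard library is enough to expand one:

import urllib.request

def expand(short_url, timeout=10.0):
    # urlopen follows HTTP redirects by default; geturl() returns the URL
    # we ended up at after all redirects were followed.
    with urllib.request.urlopen(short_url, timeout=timeout) as resp:
        return resp.geturl()

# Hypothetical short link; print the destination so it can be saved somewhere.
print(expand("https://goo.gl/example"))

Run over a list of links, that gives you a simple mapping you can stash away before the service goes dark for good.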

Why I like NetBSD, or why portability matters

All that to say, I find that NetBSDs philosophy aligns with mine. The OS is small and cozy, and compared to many minimal Linux distributions, I found it faster to setup. Supported hardware is automatically picked up, for my Thinkpad T480s almost everything (except the trackpad issue I solved above) worked out of the box, and it comes with a minimal window manager and display manager to get you started. It is simple and minimal but with sane defaults. It is a hackable system that teaches you a ton. What more could you want? ↫ Marc Coquand I spent quite some time using OpenBSD earlier this year, and I absolutely, positively loved it. I can’t quite put into words just how nice OpenBSD felt, how graspable the configuration files and commands were, how good and detailed the documentation, and how welcoming and warm the community was over on Mastodon, with even well-known OpenBSD developers taking time out of their day to help me out with dumb newbie questions. The only reason I eventually went back to Fedora on my workstation was performance. OpenBSD as a desktop operating system has some performance issues, from a slow file system to user interface stutter to problematic Firefox performance, that really started to grind my gears while trying to get work done. Some of these issues stem from OpenBSD not being primarily focused on desktop use, and some of them simply stem from lack of manpower or popularity. Regardless, nobody in the OpenBSD community was at all surprised or offended by me going back to Fedora. NetBSD seems to share a lot of the same qualities as OpenBSD, but, as the linked article notes, with a focus on different things. Like I said yesterday, I’m looking into building and testing a system entirely focused on tiled terminal emulators and TUI applications, and I’ve been pondering if OpenBSD or NetBSD would be a perfect starting point for that experiment.

Introduction to NanoBSD

This document provides information about the NanoBSD tools, which can be used to create FreeBSD system images for embedded applications, suitable for use on a USB key, memory card or other mass storage media. It can be used to build specialized install images, designed for easy installation and maintenance of systems commonly called “computer appliances”. Computer appliances have their hardware and software bundled in the product, which means all applications are pre-installed. The appliance is plugged into an existing network and can begin working (almost) immediately. ↫ FreeBSD documentation Some of the primary features of NanoBSD are exactly what you’d expect out of a tool like this, such as the system being entirely read-only at runtime, so you don’t have to worry about shutdowns or data loss, and of course, the entire creation process of NanoBSD images using a simple shell script with any arbitrary set of requirements. For the rest, it remains a FreeBSD system, so ports and packages work just as you’d expect, and assuming your specific settings for the NanoBSD image didn’t remove it, anything that works in FreeBSD, works in a NanoBSD image, too. The documentation is, as is often the case in the BSD world, excellent, and very easy to follow, even for someone not at all specialised in things like this. Reading through it, I’m pretty sure even I could create a customised NanoBSD image and run it, since it very much looks like you’re just creating a custom installation script, adding just the things you need. I don’t have a use for something like this, but I’m not sure how well-known NanoBSD is, and I feel like there’s definitely some among you who would appreciate this.

CrowdStrike issue is causing massive computer outages worldwide

Well, this sure is something to wake up to: a massive worldwide outage of computer systems due to a problem with CrowdStrike software. Payment systems, airlines, hospitals, governments, TV stations – pretty much anything or anyone using computers could be dealing with bluescreens, bootloops, and similar issues today. Open-heart surgeries had to be stopped mid-surgery, planes can’t take off, people can’t board trains, shoppers can’t pay for their groceries, and much, much more, all over the world. The problem is caused by CrowdStrike, a sort-of enterprise AV/monitoring software that uses a Windows NT kernel driver to monitor everything people do on corporate machines and logs it for… security purposes, I guess? I’ve never worked in a corporate setting so I have no experience with software like this. From what I hear, software like this is deeply loathed by workers the world over, as it gets in the way and slows systems down. And, as can happen with a kernel driver, a bug can cause massive worldwide outages like this one, which is costing people billions in damages and may even have killed people. There is a workaround, posted by CrowdStrike, which boils down to booting into safe mode or the Windows Recovery Environment and deleting the offending CrowdStrike driver file from the Windows drivers directory. This is a solution for individually fixing affected machines, but I’ve seen responses like "great, how do I apply this to 70k endpoints?", indicating that this may not be a practical solution for many affected customers. Then there’s the issue that this may require a BitLocker recovery key, which not everyone has on hand either. To add insult to injury, CrowdStrike’s advisory about the issue is locked behind a login wall. A shitshow all around. Do note that while the focus is on Windows, Linux machines can run CrowdStrike software too, and I’ve heard from Linux kernel engineers who happen to also administer large numbers of Linux servers that they’re seeing a huge spike in Linux kernel panics… caused by CrowdStrike, which is installed on a lot more Linux servers than you might think. So while Windows is currently the focus of the story, the problems are far more widespread than just Windows. I’m sure we’re going to see some major consequences here, and my – misplaced, I’m sure – hope is that this will make people think twice about one, using these invasive anti-worker monitoring tools, and two, employing kernel drivers for this nonsense.

NVIDIA transitions fully towards open-source GPU Linux kernel modules

It’s a bit of a Linux news day today – it happens – but this one is good news we can all be happy about. After earning a bad reputation for mishandling its Linux graphics drivers for years, almost decades, NVIDIA has been turning the ship around these past two years, and today they made a major announcement: from here on out, the open source NVIDIA kernel modules will be the default for all recent NVIDIA cards. We’re now at a point where transitioning fully to the open-source GPU kernel modules is the right move, and we’re making that change in the upcoming R560 driver release. ↫ Rob Armstrong, Kevin Mittman and Fred Oh There are some caveats regarding which generations, exactly, should be using the open source modules for optimal performance. For NVIDIA’s most cutting edge generations, Grace Hopper and Blackwell, you actually must use the open source modules, since the proprietary ones are not even supported. For GPUs from the Turing, Ampere, Ada Lovelace, or Hopper architectures, NVIDIA recommends the open source modules, but the proprietary ones are compatible as well. Anything older than that is restricted to the proprietary modules, as they’re not supported by the open source modules. This is a huge milestone, and NVIDIA becoming a better team player in the Linux world is a big deal for those of us with NVIDIA GPUs – it’s already paying dividends in vastly improved Wayland support, which up until very recently was a huge problem. Do note, though, that this only covers the kernel module; the userspace parts of the NVIDIA driver are still closed-source, and there’s no indication that’s going to change.

Linux patch to disable Snapdragon X Elite GPU by default

Not too long ago it seemed like Linux support for the new ARM laptops running the Snapdragon X Pro and Elite processors was going to be pretty good – Qualcomm seemed to really be stepping up its game, and detailed in a blog post exactly what they were doing to make Linux a first-tier operating system on their new, fancy laptop chips. Now that the devices are in people’s hands, though, it seems all is not so rosy in this new Qualcomm garden. A recent Linux kernel DeviceTree patch outright disables the GPU on the Snapdragon X Elite, and the issue is, as usual, vendor nonsense, as the GPU needs something called a ZAP shader to be useful. The ZAP shader is needed as by default the GPU will power on in a specialized “secure” mode and needs to be zapped out of it. With OEM key signing of the GPU ZAP shader it sounds like the Snapdragon X laptop GPU support will be even messier than typically encountered for laptop graphics. ↫ Michael Larabel This is exactly the kind of nonsense you don’t want to be dealing with, whether you’re a user, developer, or OEM, so I hope this gets sorted out sooner rather than later. Qualcomm’s commitments and blog posts about ensuring Linux is a first-tier platform are meaningless if the company can’t even get the GPU to work properly. These enablement problems should’ve been handled well before the devices entered circulation, so this is very disheartening to see. So, for now, hold off on X Elite laptops if you’re a Linux user.

Ly: a TUI display manager

Ly is a lightweight TUI (ncurses-like) display manager for Linux and BSD. ↫ Ly GitHub page That’s it. That’s the description. I’ve been wanting to take a stab at running a full CLI/TUI environment for a while, see just how far I can get in my computing life (excluding games) running nothing but a few tiled terminal emulators running various TUI apps for email, Mastodon, browsing, and so on. I’m not sure I’d be particularly happy with it – I’m a GUI user through and through – but lately I’ve seen quite a few really capable and just pleasantly usable TUI applications come by, and they’ve made me wonder. It’d make a great article too.

Unified kernel image

UKIs can run on UEFI systems and simplify the distribution of small kernel images. For example, they simplify network booting with iPXE. UKIs make rootfs and kernels composable, making it possible to derive a rootfs for multiple kernel versions with one file for each pair. A Unified Kernel Image (UKI) is a combination of a UEFI boot stub program, a Linux kernel image, an initramfs, and further resources in a single UEFI PE file (device tree, cpu µcode, splash screen, secure boot sig/key, …). This file can either be directly invoked by the UEFI firmware or through a boot loader. ↫ Hugues If you’re still a bit unfamiliar with unified kernel images, this post contains a ton of detailed practical information. Unified kernel images might become a staple for forward-looking Linux distributions, and I know for a fact that my distribution of choice, Fedora, has been working on it for a while now. The goal is to eventually simplify the boot process as a whole, and make better, more optimal use of the advanced capabilities UEFI gives us over the old, limited, 1980s BIOS model. Like I said a few posts ago, I really don’t want to be using traditional bootloaders anymore. UEFI is explicitly designed to just boot operating systems on its own, and modern PCs just don’t need bootloaders anymore. They’re points of failure users shouldn’t be dealing with anymore in 2024, and I’m glad to see the Linux world is seriously moving towards negating the need for their existence.
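If you want to see that “single UEFI PE file” structure for yourself, listing a UKI’s PE sections is a quick way to do it. Here’s a small sketch using the third-party pefile package; the image path is an assumption, so point it at wherever your distribution actually stores its UKIs.

import pefile  # third-party: pip install pefile

# Parse only the headers and section table; we don't need the payloads.
uki = pefile.PE("/boot/efi/EFI/Linux/linux.efi", fast_load=True)

for section in uki.sections:
    name = section.Name.rstrip(b"\x00").decode(errors="replace")
    print(f"{name:10s} {section.Misc_VirtualSize:>12} bytes")

# A UKI should show sections such as .linux, .initrd, .cmdline, and .osrel
# alongside the boot stub's own code and data sections.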

Safari already contains ad tracking technology, and they’re now adding it to Safari’s Private Browsing mode, too

We’ve been talking a lot about sleazy ways in which the online advertising industry is conspiring with browser makers – who also happen to be in the online advertising industry – to weaken privacy features so they can still track you and the ads they serve you, but with “privacy”. They’re trying really hard to make it seem as if they’re doing us a huge favour by making tracking slightly more private, and browser makers are falling over themselves to convince us that allowing some user and ad tracking is the only way to stop the kind of total everything, everywhere, all at once tracking we have now. We’ve got Google and Chrome pushing something called “Privacy Sandbox“, and we’ve got Mozilla and Facebook pushing something called “Privacy-Preserving Attribution“, both of which are designed to give the advertising industry slightly more private tracking in the desperate hope they won’t still be doing a lot more tracking on the side. Safari users, meanwhile, have been feeling pretty good about all of this in the knowledge Apple cares about privacy, so surely Safari won’t be doing any of this. You know where this is going, right? Today, the WebKit project published a lengthy blog post detailing all the various additional measures it’s taking to make its Private Browsing mode more, well, private, and a lot of them are great moves, very welcome, and ensure that private browsing on Safari is a little bit more private than it is on Chrome, as the blog post gleefully points out. However, not long into the blog post, the shoe drops. We also expanded Web AdAttributionKit (formerly Private Click Measurement) as a replacement for tracking parameters in URL to help developers understand the performance of their marketing campaigns even under Private Browsing. ↫ John Wilander, Charlie Wolfe, Matthew Finkel, Wenson Hsieh, and Keith Holleman A little further down, they go into more detail: Web AdAttributionKit (formerly Private Click Measurement) is a way for advertisers, websites, and apps to implement ad attribution and click measurement in a privacy-preserving way. You can read more about it here. Alongside the new suite of enhanced privacy protections in Private Browsing, Safari also brings a version of Web AdAttributionKit to Private Browsing. This allows click measurement and attribution to continue working in a privacy-preserving manner. ↫ John Wilander, Charlie Wolfe, Matthew Finkel, Wenson Hsieh, and Keith Holleman So not only does Safari already include the kind of tracking technology everyone is – rightfully – attacking Mozilla over for adding it to Firefox, Apple and the Safari team are actually taking it a step further and making this ad tracking technology available in private browsing mode. The technology is limited a bit more in Private Browsing mode, but its intent is preserved: to track you and the ads you see online. I would hazard a guess that when you enable a browser’s private browsing or incognito mode, you assume that means zero tracking. We already know that Chrome’s Incognito mode leaks data like a sieve with bullet holes in it, and now it seems Safari’s Private Browsing mode, too, is going to allow advertisers to track you and the ads you see – blog post full of fancy privacy features be damned. Do you know those “Around the web” chumboxes? Even if you’re unfamiliar with the term, you’ve most definitely seen these things all over the web, and really hate them. A major player in the chumbox business is a company called Taboola, a name that’s quite despised and reviled online. 
Popular Apple blogger John Gruber called Taboola a “slumlord” and the “lowest common denominator clickbait property“. Do you want to know which major technology company just signed a massive deal with Taboola? Ad tech giant Taboola has struck a deal with Apple to power native advertising within the Apple News and Apple Stocks apps, Taboola founder and CEO Adam Singolda told Axios. ↫ Sara Fischer at Axios Apple needs to find new markets to keep growing, and clearly, pestering its users with upsells and subscriptions to its services isn’t enough. The online advertising industry is massive – just look at Google’s and Facebook’s financial disclosures – and Apple seems to be interested in taking a bigger slice of that fat pie. And as Google and now Mozilla are finding out, a browser that blocks ads and ad tracking kind of gets in the way of that. Anyone who can make and sell plug-and-play Pi-Hole devices even normal people can use is going to make a killing.

I told you so: Mozilla working with Facebook to weaken Firefox’ privacy and anti-tracking features

I’ve long been warning about the dangers of relying on just one browser as the bulwark against the onslaught of Chrome, Chrome skins, and Safari. With Firefox’ user numbers rapidly declining, now stuck at a mere 2% or so – and even less on mobile – and regulatory pressure possibly ending the Google-Mozilla deal which makes up roughly 80% of Mozilla’s income, I’ve been warning that Mozilla will most likely have to start making Firefox worse to gain more temporary revenue. As the situation possibly grows even more dire, Firefox for Linux would be the first on the chopping block. I’ve received quite a bit of backlash over expressing these worries, but over the course of the last year or so we’ve been seeing my fears slowly become reality before our very eyes, culminating in Mozilla recently acquiring an online advertising analytics company. Over the last few days, things have become even worse: with the release of Firefox 128, the enshittification of Firefox has now well and truly begun. Less than a month after acquiring the AdTech company Anonym, Mozilla has added special software co-authored by Meta and built for the advertising industry directly to the latest release of Firefox, in an experimental trial you have to opt out of manually. This “Privacy-Preserving Attribution” (PPA) API adds another tool to the arsenal of tracking features that advertisers can use, which is thwarted by traditional content blocking extensions. ↫ Jonah Aragon If you have already upgraded to Firefox 128, you have automatically been opted into using this new API, and for now, you can still opt out by going to Settings > Privacy & Security > Website Advertising Preferences and removing the checkmark “Allow websites to perform privacy-preserving ad measurement”. You were opted in without your consent, without any widespread announcement, and if it wasn’t for so many Firefox users being on edge about Mozilla’s recent behaviour, it might not have been sniffed out this quickly. Over on GitHub, there’s a more in-depth description of this new API, and the first few words are something you never want to hear from an organisation that claims to fight tracking and protect your privacy: “Mozilla is working with Meta”. I’m not surprised by this at all – like I, perhaps gleefully, pointed out, I’ve been warning about this eventuality for a long time – but I’ve noted that on the wider internet, a lot of people were very much unpleasantly surprised, feeling almost betrayed by this, the latest in a series of dubious moves by Mozilla. It’s not even just the fact they’re “working with Meta”, which is entirely disqualifying in and of itself, but also the fact there’s zero transparency or accountability about this new API towards Firefox’ users. Sure, we’re all technologically inclined and follow technology news closely, but the vast majority of people don’t, and there are bound to be countless people who perhaps only recently moved to Firefox from Chrome for privacy reasons, only to be stabbed in the back by Mozilla partnering up with Facebook, of all companies, if they even find out about this at all. It’s right out of Facebook’s playbook to secretly experiment on users. This is what I wrote a year ago: I’m genuinely worried about the state of browsers on Linux, and the future of Firefox on Linux in particular. 
I think it’s highly irresponsible of the various prominent players in the desktop Linux community, from GNOME to KDE, from Ubuntu to Fedora, to seemingly have absolutely zero contingency plans for when Firefox enshittifies or dies, despite everything we know about the current state of the browser market, the state of Mozilla’s finances, and the future prospects of both. Desktop Linux has a Firefox problem, but nobody seems willing to acknowledge it. ↫ Thom Holwerda It seems my warnings are turning into reality one by one, and if, at this point, you’re still not worried about where you’re going to go after Firefox starts integrating even more Facebook technologies or Firefox for Linux gets ever more resources pulled away from it until it eventually gets cancelled, you’re blind.

The AMD Zen 5 microarchitecture: powering Ryzen AI 300 series for mobile and Ryzen 9000 for desktop

Built around the new Zen 5 CPU microarchitecture with some fundamental improvements to both graphics and AI performance, the Ryzen AI 300 series, code-named Strix Point, is set to deliver improvements in several areas. The Ryzen AI 300 series looks set to add another footnote in the march towards the AI PC with its mobile SoC featuring a new XDNA 2 NPU, from which AMD promises 50 TOPS of performance. AMD has also upgraded the integrated graphics with the RDNA 3.5, which is designed to replace the last generation of RDNA 3 mobile graphics, for better performance in games than we’ve seen before. Further to this, during AMD’s recent Tech Day last week, AMD disclosed some of the technical details regarding Zen 5, which also covers a number of key elements under the hood on both the Ryzen AI 300 and the Ryzen 9000 series. On paper, the Zen 5 architecture looks quite a big step up compared to Zen 4, with the key component driving Zen 5 forward through higher instructions per cycle than its predecessor, which is something AMD has managed to do consistently from Zen to Zen 2, Zen 3, Zen 4, and now Zen 5. ↫ Gavin Bonshor at AnandTech Not the review and deep analysis quite yet, but a first thorough look at what Zen 5 is going to bring us, straight from AnandTech.

Fusion OS: writing an OS in Nim

I decided to document my journey of writing an OS in Nim. Why Nim? It’s one of the few languages that allow low-level systems programming with deterministic memory management (garbage collector is optional) with destructors and move semantics. It’s also statically typed, which provides greater type safety. It also supports inline assembly, which is a must for OS development. Other options include C, C++, Rust, and Zig. They’re great languages, but I chose Nim for its simplicity, elegance, and performance. ↫ Fusion OS documentation website I love it when a hobby operating system project not only uses a less common programming language, but its author also documents the entire development process in great detail. It’s not a UNIX-like, and the goals include a single 64 bit address space, a capability-based security model, and a lot more. It’s targeting UEFI machines, and the code is, of course, open source and available on GitHub.

Google can totally explain why Chromium browsers quietly tell only its websites about your CPU, GPU usage

It’s time for Google being Google, this time by using an undocumented API to track resource usage when using Chrome. When visiting a *.google.com domain, the Google site can use the API to query the real-time CPU, GPU, and memory usage of your browser, as well as info about the processor you’re using, so that whatever service is being provided – such as video-conferencing with Google Meet – could, for instance, be optimized and tweaked so that it doesn’t overly tax your computer. The functionality is implemented as an API provided by an extension baked into Chromium – the browser brains primarily developed by Google and used in Chrome, Edge, Opera, Brave, and others. ↫ Brandon Vigliarolo at The Register The original goal of the API was to give Google’s various video chat services – I’ve lost count – the ability to optimise themselves based on the available system resources. Crucially, though, this API is only available to Google’s domains, and other, competing services cannot make use of it. This is in clear violation of the European Union’s Digital Markets Act, and with Chrome being by far the most popular browser in the world, and thus a clear gatekeeper, the European Commission really should have something to say about this. For its part, Google told The Register it claims to comply with the DMA, so we might see a change to this API soon. Aside from optimising video chat performance, the API, which is baked into a non-removable extension, also tracks performance issues and crashes and reports these back to Google. This second use, too, is at its core not a bad thing – especially if users are given the option to opt out of such crash analytics. Still, it seems odd to use an undocumented API for something like this, but I’m not a developer, so what do I know. Mind you, other Chromium-based browsers also report this data back to Google, which is wild when you think about it. Normally I would suggest people switch to Firefox, but I’ve got some choice words for Firefox and Mozilla, too, later today.