Cassette: a POSIX application framework featuring a retro-futurist GUI toolkit

Cassette is a GUI application framework written in C11, with a UI inspired by the cassette-futurism aesthetic. Built for modern POSIX systems, it’s made out of three libraries: CGUI, CCFG and COBJ. Cassette is free and open-source software, licensed under the LGPL-3.0.

↫ Cassette GitHub page

Upon first reading this description, you might wonder what a “cassette-futurism aesthetic” really is, but once you take a look at the screenshots of what Cassette can do, you immediately understand what it means. It’s still in the alpha stage and there’s a lot still to do, but what it offers now is already quite unique, a look I don’t think the major toolkits really cater to or could even pull off.

There’s an example application that’s focused on showing some system stats, and that’s exactly the kind of stuff this seems a great fit for: good-looking, small widget-like applications showing glanceable information.

UnixWare in 2025: still actively developed and maintained

It kind of flies under the radar, but aside from HP-UX, Solaris, and AIX, there’s another traditional classic UNIX still in active development today: UnixWare (and its sibling, OpenServer). UnixWare is now owned and developed by Xinuos, which acquired it and other related code and IP when the much-hated SCO crashed and burned roughly 15 years ago, and it’s been maintaining the operating system ever since. About a year ago, Xinuos released Update Pack 1 and Maintenance Pack 1 for UnixWare 7 Definitive 2018, followed by similar update packs for OpenServer 6 later in 2024.

These update packs bring a bunch of bugfixes and performance improvements, as well as a slew of updated open source components, like new versions of SAMBA, sendmail, GCC and tons of other GNU components, OpenSSH and OpenSSL, and so, so much more, enabling a relatively modern and up-to-date build and porting environment. They can be installed through the patchck update utility, and while the Maintenance Pack is free for existing registered users, the Update Pack requires a separate license. UnixWare, while fully capable as a classic UNIX for workstations, isn’t really aimed at individuals or hobbyists (sadly), and instead focuses on existing enterprise deployments, where such licensing costs are par for the course.

UnixWare runs on x86, and can be installed both on real hardware and in various virtualised environments. I contacted Xinuos a few days ago for a review license, and they supplied me with one so I can experiment with and write about UnixWare. I’ve currently got it installed in a KVM virtual machine on Linux, where it runs quite well, including the full X11R6 CDE desktop environment and graphical administration tools. Installing updates is a breeze thanks to patchck automating the process of finding, downloading, and installing the correct ones. I intend to ask Xinuos about an optimal configuration for running UnixWare on real hardware, too.

MaXX Interactive Desktop 2.2.0 released

Late last year, the MaXX Interactive Desktop, the Linux (and BSD) version of the IRIX desktop, sprang back to life with a new release and a detailed roadmap. Thanks to a unique licensing agreement with SGI, MaXX’s developer, Eric Masson, has been able to bring a lot of the SGI user experience over to Linux and BSD, and as promised, we have a new release: the final version of MaXX Interactive Desktop 2.2.0. It’s codenamed Octane, and anyone who knows their SGI history will chuckle at this and the other codenames MaXX uses.

Like last year’s alpha release, 2.2.0 brings an Exposé-like overview feature, initial freedesktop.org integration, tons of performance improvements and bug fixes, desktop notifications, and much more. For the next release, 2.3.0, they’re planning a new file manager, support for .desktop files, a ton of new preference panes, a quick search feature, and a whole bunch of lower-level work. With how serious the renewed development effort seems, I hope the project will some day consider building MaXX out into a full Linux distribution, to gain more control over the experience and ensure normal users don’t have to perform a manual installation.

Why Upstart from Ubuntu failed

Upstart was an event-based replacement for the traditional System V init (sysvinit) system on Ubuntu, introduced to bring a modern and more flexible way of handling system startup and service management. It emerged in the mid-2000s, during a period when sysvinit’s age and limitations were becoming more apparent, especially with regard to concurrency and dependency handling. Upstart was developed by Canonical, the company behind Ubuntu, with the aim of reducing boot times, improving reliability, and making the system initialization process more dynamic. Though at first it seemed likely to become a standard across many distributions, Upstart eventually lost mindshare to systemd and ceased to be Ubuntu’s default init system.

↫ André Machado

I think it’s safe to say systemd won the competition to become the definitive successor to sysvinit on Linux, but Canonical’s Upstart made a valiant effort, too. However, saddled with a troublesome contributor license agreement, it was doomed from the start, and it didn’t help that virtually every other major distribution eventually adopted systemd. These days, systemd is the Linux init system, and I personally quite like it (and the crowd turns violent). I find it easy to use and it’s never given me any issues, but I’m not a system administrator dealing with complex setups, so my experience with systemd is probably rather limited. It just does its thing in the background on my machines.

None of this means there aren’t any other init systems still being actively developed. There’s GNU Shepherd, which we talked about recently, runit, OpenRC, and many more. If you don’t like systemd, there are enough alternatives out there.

The dumb reason why flag emojis aren’t working on your site in Chrome on Windows

After doing more digging than I feel like I should have needed to, I found my answer: it appears that due to concerns about the fact that acknowledging the existence of certain countries can be perceived as a nominally political stance, Microsoft has opted to just avoid the issue altogether by not including country flag emojis in Windows’ system font.

Problem solved! Can you imagine if, *gasp*, your computer could render a Taiwanese or Palestinian flag? The horror!

↫ Ryan Geyer

Silicon Valley corporations are nothing if not massive cowards, and this is just another one of the many, many examples that underline this. Firefox solves this by including the flags on its own, but Google refuses to do the same with Chrome, because, you guessed it, Google is also a cowardly organisation. There are some ways around it, as the linked article details, but they’re all clumsy and cumbersome compared to Microsoft just not being a coward and including proper flag emoji, even if it offends some sensibilities in pro-China or western far-right circles.
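
For what it’s worth, there’s nothing exotic about the characters themselves. A country flag emoji is just two Regional Indicator Symbol code points, one for each letter of the country’s ISO 3166-1 code, and it’s entirely up to the installed fonts to turn that pair into a little flag. A quick sketch in JavaScript – the helper below is just my own illustration, not something from the linked article:

  // Each flag is two Regional Indicator Symbols, one per letter of the
  // ISO 3166-1 alpha-2 country code; U+1F1E6 corresponds to the letter A.
  function flagEmoji(countryCode) {
    return Array.from(countryCode.toUpperCase())
      .map((letter) => String.fromCodePoint(0x1F1E6 + letter.charCodeAt(0) - 0x41))
      .join('');
  }

  console.log(flagEmoji('TW'));  // the Taiwanese flag, where a glyph exists
  console.log(flagEmoji('PS'));  // the Palestinian flag, same story

On a platform whose fonts lack the glyphs – Chrome falling back to Windows’ system font, for example – that pair simply renders as the two plain letters, which is why flags tend to show up as “TW” or “PS” on Windows.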

Your best bet to avoid such corporate cowardice is to switch to better operating systems, like any desktop Linux distribution. Fedora KDE includes both the Taiwanese and Palestinian flags, because the KDE project isn’t made up of cowards, and I’m sure the same applies to any GNOME distribution. If your delicate snowflake sensibilities can’t handle a Palestinian or Taiwanese flag emoji, just don’t type them.

Bitter sidenote: it turns out WordPress, which OSNews uses, doesn’t like emoji either. Adding any emoji to this story, from basic ones to the Taiwanese or Palestinian flag, makes it impossible to save or publish the story. I have no idea if this is a WordPress issue or an issue on our end, since WordPress does claim to have emoji support.

TuxTape: a kernel livepatching solution

Geico, an American insurance company, is building a live-patching solution for the Linux kernel, called TuxTape.

TuxTape is an in-development kernel livepatching ecosystem that aims to aid in the production and distribution of kpatch patches to vendor-independent kernels. This is done by scraping the Linux CNA mailing list, prioritizing CVEs by severity, and determining applicability of the patches to the configured kernel(s). Applicability of patches is determined by profiling kernel builds to record which files are included in the build process and ignoring CVEs that do not affect files included in kernel builds deployed on the managed fleet.

↫ Presentation by Grayson Guarino and Chris Townsend

It seems to me something like live-patching the Linux kernel should be a standardised framework that’s part of the kernel itself, and not several random implementations by third parties, one of which is an insurance company. The kernel has shipped a basic live-patching core since version 4.0, released in 2015, but it’s quite limited and leaves most of the functionality to be implemented separately, through things like Red Hat’s kpatch and Oracle’s Ksplice.

Geico is going to release TuxTape as open source, and is encouraging others to adopt and use it. There are various other solutions out there offering similar functionality, so you’re not short on options, and I’m sure each has its advantages and disadvantages. I would still prefer functionality like this to be a standard feature of the kernel, not something tied to a specific vendor or implementation.

GTK announces X11 deprecation, new Android backend, and much more

Since a number of GTK developers came together at FOSDEM, the project figured now was as good a time as any to give an update on what’s coming in GTK. First, GTK is implementing some hard cut-offs for old platforms – Windows 10 and macOS 10.15 are now the oldest supported versions, which will make development quite a bit easier and will simplify several parts of the codebase. Windows 10 was released in 2015 and macOS 10.15 in 2019, which are fair cut-off points, in my book.

GTK 4.18 will also bring major accessibility improvements with the AccessKit backend, giving GTK accessibility features on Windows and macOS for the first time, which is great news. Another major new feature is the new Android backend, which, while not yet complete, will allow you to run GTK applications on Android. Do note that this is experimental, so don’t expect everything to work without any issues quite yet.

Lastly, the news that everyone was freaking out about over the weekend: the X11 backend has been deprecated, and will be removed in GTK 5. This freaked a lot of people out, but note that this doesn’t mean you magically can’t use GTK 4 applications on X11 anymore – it merely means that X11 support will be removed in GTK 5, which doesn’t even exist yet, and with GTK 4 being supported until GTK 6 is released, people using legacy windowing systems like Xorg will be fine for a long time to come.

As the GTK project notes on Fedi:

The X11 backend being deprecated mainly means that we’re not going to spend time implementing new features, like dmabuf, graphics offloading, or Vulkan support. X11 support will still exist until GTK4 is EOL, which will happen once GTK *6* is released. We’re talking about a 20 years horizon, at this point…

[…]

Of course, somebody could show up tomorrow, and implement everything that the Wayland backend does, but for X11. We can always undeprecate things. We are not holding our breath, though…

↫ The GTK project on Fedi

This is the right move, and I’m glad the GTK project is doing this, and is giving everyone ample time to prepare. A lot of people will still freak out, get mad, and scream bloody murder at certain individuals in the wider Linux community, and those people are, of course, free to start working on Xorg. Like the GTK developers, though, I’m not holding my breath, because despite years of excessive Wayland hate, not a single person has stood up to do the work required to keep Xorg going.

Run Linux inside a PDF file via a RISC-V emulator

You might expect PDF files to only be comprised of static documents, but surprisingly, the PDF file format supports Javascript with its own separate standard library. Modern browsers (Chromium, Firefox) implement this as part of their PDF engines. However, the APIs that are available in the browser are much more limited.

The full specification for the JS in PDFs was only ever implemented by Adobe Acrobat, and it contains some ridiculous things like the ability to do 3D rendering, make HTTP requests, and detect every monitor connected to the user’s system. However, on Chromium and other browsers, only a tiny subset of this API was ever implemented, due to obvious security concerns. With this, we can do whatever computation we want, just with some very limited IO.

↫ LinuxPDF GitHub page

I’m both impressed and concerned.
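
For the curious, this is roughly what a document-level script embedded in a PDF looks like, using the Acrobat JavaScript API; the “console” field name is a hypothetical form field I made up for illustration, and exactly which calls actually work depends on how small a subset your viewer implements:

  // Document-level PDF JavaScript (Acrobat JavaScript API).
  // "console" is a hypothetical text form field assumed to exist in this PDF.
  var out = this.getField("console");
  if (out) {
    out.value = "Hello from inside a PDF";
  }
  app.alert("PDF JavaScript is running");

Building a whole RISC-V emulator on top of IO this limited is exactly what makes the project so impressive.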

The GNU Guix System

GNU Guix is a package manager for GNU/Linux systems. It is designed to give users more control over their general-purpose and specialized computing environments, and make these easier to reproduce over time and deploy to one or many devices.

↫ GNU Guix website

Guix is basically GNU’s approach to a reproducible, functional package manager, very similar to Nix because, well, it’s based on Nix. GNU also has a Linux distribution built around Guix, the GNU Guix System, which is fully ‘libre’ as all things GNU are, and which also makes use of the GNU Shepherd init system. Both Shepherd and Guix make use of Guile. The last release of the GNU Guix System is a few years old already, but it’s a rolling release, so that’s not much of an issue. It uses the Linux-libre kernel, but support for GNU Hurd is also being worked on, for whatever that’s worth.

There’s also a third-party distribution built around the same projects, called rde. It focuses on being lightweight, working offline, and keeping distractions to a minimum. It’s probably not suitable for most normal users, but if you’re a power user looking for something a little bit different, this could be for you. While it’s in active development, its creators consider it usable and stable. I haven’t tried it yet, but I’m definitely intrigued by what it has to offer.

Nix sucks up a lot of the attention in this space, so it’s interesting to see some of the alternatives that aim for similar goals.

This Sculpt OS video walkthrough explains how to use Sculpt OS

We talk about the Genode project and Sculpt OS quite regularly on OSNews, but every time I’ve tried using Sculpt OS, I’ve always found it so different and so unique compared to everything else that I just couldn’t wrap my head around it. I assume this stems from nothing but my own shortcomings, because the Genode project often hammers on the fact that Sculpt OS is in daily-driver use by a lot of people within and without the project, so there must be something here just not clicking for me.

Well, it seems I’m actually not the only one with difficulties getting started with Sculpt OS’ unique structure and interface, because Norman Feske, co-founder of Genode Labs, has published a lengthy, detailed, but very interesting and easy to follow screencast explaining exactly how to use Sculpt OS and its unique features and characteristics.

Even though Sculpt OS has been in routine daily use for years now, many outside observers still tend to perceive it as fairly obscure because it does not follow the usual preconceptions of a consumer-oriented operating system. Extensive documentation exists, but it leaves a fairly technical impression at a cursory glance, which may scare some people away.

The screencast below aims at making the system a little bit more approachable. It walks you through the steps of downloading, installing, booting the system image, navigating the administrative user interface, and interactively extending and customizing the system. The tour is wrapped up with the steps for creating your personal sculpted OS on a bootable USB stick.

↫ Norman Feske

After watching this, I genuinely feel I have a much better grasp of how to use Sculpt OS and just how powerful it really is, and that it’s really not as difficult to use as it may look at first glance. The next time I set some time aside for Sculpt OS, I’ll have a much better idea of what to do and how to use it properly.

Building a (T1D) smartwatch from scratch

If you have type 1 diabetes, you need to keep track of and manage your blood glucose levels closely, because if these levels dip too low, things can quickly spiral into a medical emergency. Andrew Childs’ 9-year-old son has type 1 diabetes, and Childs wasn’t happy with any of the current offerings on the market for children to keep track of their blood glucose levels. Most people suggested an Apple Watch, but he found the Apple Watch “too much device” for a kid, something I personally agree with.

It ships with so many shiny features and apps and notifications. It’s beautifully crafted. It’s also way too distracting for a kid while they’re at school. Secondly, it doesn’t provide a good, reliable view of his CGM data. The Dexcom integration is often backgrounded, doesn’t show the chart, only the number and an arrow. People use hacks like creating calendar events just to see up-to-date data. And the iOS settings, Screen Time, and notification systems have ballooned into a giant ball of complexity. What we need is something simple.

↫ Andrew Childs

And so Childs set out to design and prototype a smartwatch just for his son to wear, trying to address the shortcomings of other offerings on the market along the way, and possibly even bring it to market for other people in similar situations. After six months, he managed to create several prototypes, with both the software and hardware designed from the ground up, that he and his son still wear to this day, to great satisfaction. Since Childs didn’t really know where to go from there, or how to turn what he had into an actual product people could buy, he decided to document his effort online.

In the process, he had to overcome a ton of hurdles, from iOS’ strict BLE limitations and difficult-to-reach soldering points that couldn’t be moved due to the small size of the PCB, to optimising battery life, dealing with glass manufacturing, and many other issues, big and small. Oh, and he’s a software engineer, not a hardware engineer, so he had to learn a lot of new skills along the way, from 3D modeling to PCB design. In the end, though, he’s now got a few devices that look quite professionally made, that are incredibly easy to repair, and that are focused solely on the things he and his son need.

This project has increased the quality of life for his son, and that’s genuinely all that really matters here.

Let’s Encrypt ends support for expiration notification emails

Since its inception, Let’s Encrypt has been sending expiration notification emails to subscribers that have provided an email address to us. We will be ending this service on June 4, 2025.

↫ Josh Aas on the Let’s Encrypt website

They’re ending the expiration notification service because it’s costly, adds a ton of complexity to their systems, and constitutes a privacy risk because of all the email addresses they have to keep on file just for this feature. Considering there are other services that offer this functionality, and the fact many people automate this already anyway, it makes sense to stop sending out emails.
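
If you’d rather roll your own reminder than rely on yet another third-party service, checking which certificate a host is currently serving takes only a few lines of Node.js. A rough sketch, with example.com standing in for your own domain:

  // Prints the expiry date of the certificate a host is currently serving.
  // 'example.com' is a placeholder – substitute your own domain.
  const tls = require('tls');

  const host = 'example.com';
  const socket = tls.connect({ host, port: 443, servername: host }, () => {
    const cert = socket.getPeerCertificate();
    console.log(`${host}: certificate valid until ${cert.valid_to}`);
    socket.end();
  });
  socket.on('error', (err) => console.error(`${host}: ${err.message}`));

Run something like that from cron, compare the date against today, and alert yourself however you prefer – that’s essentially the email Let’s Encrypt used to send, minus the privacy baggage.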

Anyway, just a heads-up.

The Heirloom Project

Update: there’s a fork called heirloom-ng that is actually still somewhat maintained and contains some more changes and modernisations compared to the old version.

The Heirloom Project provides traditional implementations of standard Unix utilities. In many cases, they have been derived from original Unix material released as Open Source by Caldera and Sun.

Interfaces follow traditional practice; they remain generally compatible with System V, although extensions that have become common use over the course of time are sometimes provided. Most utilities are also included in a variant that aims at POSIX conformance.

On the interior, technologies for the twenty-first century such as the UTF-8 character encoding or OpenType fonts are supported.

↫ The Heirloom Project website

I had never heard of this before, but I like the approach they’re taking. This isn’t just taking System V tools and making them work on a modern UNIX-like system as-is; they’re also improving them by adding support for modern technologies, without actually changing their classic nature and the way old-fashioned users expect them to work. Sadly, the project seems to be dead, as the code hasn’t been altered since 2008. Perhaps someone new is willing to take up this project?

As it currently stands, the tools are available for Linux, Solaris, Open UNIX, HP-UX, AIX, FreeBSD, NetBSD, and OpenBSD, but considering how long it’s been since the code was touched, I wonder if they still run on any of those systems today. They also come in various versions that comply with different variants of the POSIX standard.

Android 16’s Linux Terminal will soon let you run graphical apps, so of course we ran Doom

Regardless, the fact that Android’s Linux Terminal can run graphical apps like Doom now is good news. Hopefully we’ll be able to run more complex desktop-class Linux programs in the future. I tried running GIMP, for example, but it didn’t work. Eventually, Android should be able to run Linux apps as well as Chromebooks can, as I believe one of the goals of this project is to help the transition of Chrome OS to an Android base.

↫ Mishaal Rahman at Android Authority

It was of course inevitable that someone would run Doom on Android’s new Debian container, and it’s pretty cool to see it work without much issue already, even if the new terminal and container setup are still in such heavy development. Like many other people, I love the idea of my smartphone being both my, well, smartphone, and a full desktop PC once you connect it to a display and some input devices. As wireless technology keeps advancing, we soon might not even need to plug anything into the phone at all – just having it in our pocket would be good enough, which would be amazing.

That being said, I would want such functionality to come from a traditional Linux setup, not Android’s idea of a Linux setup. Running a Debian virtual machine on top of Android is probably preferable for a lot of people for a variety of reasons, but I’m a Linux user and want plain, regular Linux running directly on my smartphone, not some virtual machine on Android, which, while being a Linux distribution, is not the most pleasant variant of Linux to run and use.

Apple’s macOS UNIX certification is a lie

As an online discussion grows longer, the probability of someone mentioning that macOS is a UNIX approaches 1. In fact, it was only late last year that The Open Group announced that macOS 15.0 was, once again, certified as UNIX, continuing Apple’s long-standing tradition of certifying macOS releases as “real” UNIX®. What does any of this actually mean, though? Well, it turns out that if you dive into Apple’s conformance statements for macOS’ UNIX certification, it doesn’t really mean anything at all.

First and foremost, we have to understand what UNIX certification really means. In order to be allowed to use the UNIX trademark, your operating system needs to comply with the Single UNIX Specification (SUS), which specifies programming interfaces for C, a command-line shell, and user commands, more or less identical to POSIX, as well as the X/Open Curses specification. The latest version is SUS version 4, originally published in 2008, with amendments published in 2013 and 2016, which were rolled up into version 4 in 2018. Each version of the SUS, in turn, corresponds to a specific UNIX trademark. In table form:

Trademark    SUS version    SUS published in    SUS last amended in
UNIX® 93     n.a.           n.a.                n.a.
UNIX® 95     Version 1      1994                n.a.
UNIX® 98     Version 2      1997                n.a.
UNIX® 03     Version 3      2002                2004
UNIX® V7     Version 4      2008                2016 (2018 for roll-up)

When you read that macOS is a certified UNIX, which of these versions and trademarks do you assume macOS complies with? You’d assume Apple would just target the latest trademark and SUS version, right? That would allow macOS to carry the UNIX® V7 trademark, because it would conform to version 4 of the SUS, last amended in 2016. The real answer is that macOS 15.0 only conforms to version 3 of the SUS, which was last amended all the way back in the ancient times of 2004, and as such, macOS is only UNIX® 03 (on both Intel and ARM). However, you can argue this is just semantics, since it’s not like UNIX and POSIX are very inclined to change.

So now, like the UNIX nerd that you are, you want to see all this for yourself. You use macOS, safe in the knowledge that unlike those peasants using Linux or one of the BSDs, you’re using a real UNIX®. So you can just download all the test suites (if you can afford them, but that’s a whole different can of worms) and run them, replicating Apple’s compliance testing, seeing for yourself, on your own macOS 15 installation, that macOS 15 is a real UNIX®, right? Well, no, you can’t, because the macOS 15 configuration Apple certifies is not the macOS 15 that’s running on everyone’s supported Macs.

To gain its much-vaunted UNIX certification for macOS, Apple cheats. A lot.

The various documents Apple needs to submit to The Open Group as part of the UNIX certification process are freely available, and mostly it’s a lot of very technical questions about various very specific aspects of macOS’ UNIX and POSIX compliance few of us would be able to corroborate without extensive research and in-depth knowledge of macOS, UNIX, and POSIX. However, at the end of every one of these Conformance Statements, there’s a text field where the applicant can write down “additional, explanatory material that was provided by the vendor”, and it’s in these appendices where we can see just how much Apple has to cheat to ensure macOS passes the various UNIX® 03 certification tests.

In the first of these four documents, Internationalised System Calls and Libraries Extended V3, Apple’s “additional, explanatory material” reads as follows:

Question 27: By default, core file generation is not enabled. To enable core file generation, you can issue this command:

sudo launchctl limit core unlimited

Testing Environment Addendum: macOS version 15.0 Sequoia, like previous versions, includes an additional security mechanism known as System Integrity Protection (SIP). This security policy applies to every running process, including privileged code and code that runs out of the sandbox. The policy extends additional protections to components on disk and at run-time, only allowing system binaries to be modified by the system installer and software updates. Code injection and runtime attachments to system binaries are no longer permitted.

To run the VSX conformance test suite we first disable SIP as follows:

– Shut down the system.
– Press and hold the power button. Keep holding it while you see the Apple logo and the message “Continue holding for startup options”
– Release the power button when you see “Loading startup options”
– Choose “Options” and click “Continue”
– Select an administrator account and enter its password.
– From the Utilities menu in the Menu Bar, select Terminal.
– At the prompt, issue the following command: “csrutil disable”
– You should see a message that SIP is disabled. From the Apple menu, select “Restart”.

By default, macOS coalesces timeouts that are scheduled to occur within 5 seconds of each other. This can randomly cause some sleep calls to sleep for different times than requested (which affects tests of file access times) so we disable this coalescing when testing. To disable timeout coalescing issue this command:

sudo sysctl -w kern.timer.coalescing_enabled=0

By default there is no root user. We enable the root user for testing using the following series of steps:
– Launch the Directory Utility by pressing Command and Space, and then typing “Directory Utility”
– Click the Lock icon in Directory Utility and authenticate by entering an Administrator username and password.
– From the Menu Bar in Directory Utility:
– Choose Edit -> Enable Root User. Then enter a password for the root user, and confirm it.
– Note: If you choose, you can later Disable Root User via the same menu.

↫ Apple’s appendix to Internationalised System Calls and Libraries Extended V3

The second conformance statement, Commands and Utilities V4, has another appendix, and it’s a real doozy (the […] indicate repeat remarks from the previous appendix; I’ve removed them for brevity):

Testing Environment Addendum:

  1. […]
  2. By default, the APFS file system updates a file’s atime lazily. To run the Conformance Test Suites, or more generally to get UNIX Standard atime behavior, mount the test partitions (including /System/Volumes/Data) with the “strictatime” option: mount -o strictatime
  3. APFS file systems can be formatted as either case-sensitive or case-insensitive. Always format as case-sensitive for UNIX Conformant behavior.
  4. macOS has a file indexing service, Spotlight, that runs in the background and may affect file access times. For UNIX Conformance Testing we disable Spotlight. You can do that with this command: sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

    Spotlight can be re-enabled with:

    sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
  1. […]
  2. […]
  3. In macOS Sequoia the root volume is authenticated and immutable. Because of this, and because of the way that you have to configure uucp, you should take the following steps before using uucp (and we do these before running the uu* tests):
    • Copy the following binaries from /usr/bin to /usr/local/bin
      uucp
      uuname
      uustat
      uux
    • Copy the following binaries from /usr/sbin to /usr/local/bin:
      uucico
      uuxqt
    • In /usr/local/bin, turn on the setuid bit for these binaries:
      sudo chmod +s /usr/local/bin/uu*
      (This is the step that you cannot perform within /usr/bin or /usr/sbin)
    • Add /usr/local/bin to your PATH preceding /usr/bin and /usr/sbin
    • Enable the uucp service:
      sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.uucp.plist
↫ Apple’s appendix to Commands and Utilities V4

The third and fourth conformance statements have no appendix.

Interestingly enough, on top of the appendices, Apple also has four “Temporary Waivers”. These are waivers granted at the sole discretion of The Open Group for a “limited number of implementation errors” that are “demonstrated to be of a minor nature, with negligible impact on interoperability or portability”. These are valid for 12 months, after which the applicant must have removed the errors from its product. These waivers, and their resolutions, must be made public, but I think they’re only made public to registered, paying customers – so I can’t download them to take a look. I honestly doubt these are particularly interesting, but I figured I’d mention it anyway.

So, if you want your installation of macOS 15.0 to pass the UNIX® 03 certification test suites, you need to disable System Integrity Protection, enable the root account, enable core file generation, disable timeout coalescing, mount any APFS partitions with the strictatime option, format your APFS partitions case-sensitive (by default, APFS is case-insensitive, so you’ll need to reinstall), disable Spotlight, copy the binaries uucp, uuname, uustat, and uux from /usr/bin to /usr/local/bin and the binaries uucico and uuxqt from /usr/sbin to /usr/local/bin, set the setuid bit on all of these binaries, add /usr/local/bin to your PATH before /usr/bin and /usr/sbin, enable the uucp service, and handle the mystery issues listed in the four Temporary Waivers.

Then, and only then, is your macOS 15.0 actually UNIX® 03-certified.

This is batshit insane. I can guarantee you with 100% certainty that not a single macOS installation in the entire history of macOS – let alone when just counting macOS 15.0 – has implemented even half of these changes. I’m sure there is a small number of people who have System Integrity Protection disabled permanently, an even smaller number who have enabled the root account, and an even smaller number who have done both of those things – but that’s it. All the other changes are far too obscure and specific to be of any use to anyone.

For fairness’ sake, I also took a look at the Conformance Statements for some of the other UNIX-certified operating systems. The only operating system and version that is UNIX® V7-certified is IBM’s AIX 7.2 TL5 (or later), and it has just one note from IBM, containing a single change you need to apply to AIX 7.2 TL5 to pass the UNIX® V7 certification process:

Full response to Question 28: The AIX default socket listen queue length is 1024, the maximum is 32767, the value must be modified by using the “no -o somaxconn=5” command to set UNIX03 conforming length of 5.

↫ IBM’s appendix to Internationalised System Calls and Libraries Extended V4

Looking at one of the other UNIX® 03-certified operating systems, there’s HP-UX 11.31 for Itanium, which does have some remarks in its appendices, but they’re informative, and don’t specify any changes that need to be applied to HP-UX to make it pass UNIX® 03 certification testing. For Solaris, there’s a ton of remarks about the differences between Solaris for x86 and Solaris for SPARC, including differences between the 32bit and 64bit variants of those architectures, but that’s it for Solaris. AIX, HP-UX, and Solaris do not require any meaningful changes to pass UNIX certification testing.

I can only conclude that macOS 15.0’s UNIX® 03 certification is a lie. If you need to implement this many drastic changes to your operating system to make it pass the UNIX® 03 certification tests, you’re really not UNIX® 03-compliant. Let me be very clear that this is not some sort of “gotcha!”, scandal, or “-gate”; UNIX certification for macOS is not some diabolical marketing scheme devised by C-level executives at Apple, trying to lure unsuspecting customers into buying Macs because they’re UNIX-certified. I doubt Tim Cook even knows who on earth The Open Group are. The cold and harsh truth is that literally nobody but a few nerds like us cares about this, and even then the level of care we display is minute.

I do think, however, that this puts some serious question marks around just how valuable the UNIX trademark really is, and what it really means for an operating system to be UNIX-certified. If macOS can be a “real” UNIX when literally not a single macOS installation in the world can even pass the certification tests to begin with, what are we really doing here?

This makes one wonder why Apple is allowed to list this many onerous caveats and still be granted the right to use the UNIX® 03 trademark, and I honestly have no idea. The Open Group and its certifications do have an air of pay-to-play, but Apple is only a silver member, which costs a measly $22,000 per year – an absolute pittance for Apple. The costs for certification can add up to a bit more depending on which parts Apple uses, but at most it’ll be a few hundred thousand dollars per year, and more likely much less than that. All in all, a total pittance for Apple, and looking at the huge list of gold and silver members, as well as the massive names that are platinum members, losing Apple as a member would barely be a blip on The Open Group’s radar. The silver members alone generate several million dollars in revenue each year, so Apple’s contributions really don’t seem all that consequential.

I think the reality is a lot less exciting: deep inside Apple there are probably still a few hardcore UNIX people who do actually really care about this, and they clearly don’t mind spending some work time keeping the certification train going. While the certification document for ARM was written by a fairly new Apple software engineer in the CoreOS group, Mansi Agarwal (who joined Apple in June of 2023), the certification documents for macOS on Intel were written by Fred Zlotnick, who joined Apple in 2015, and has a long history working on UNIX products.

He worked at a company called Mindcraft from 1989 to 1995, which was an Accredited POSIX Testing Laboratory, then spent almost 15 years working for Sun Microsystems on the Solaris operating system, leading teams of dozens of kernel engineers. While at Sun, he worked on the core I/O subsystem of Solaris, the InfiniBand stack, things like the IP, TCP and UDP stacks, ZFS, and more. After a few short stints at other companies, including leading an Illumos kernel team at Nexenta, he ended up at Apple, where he would work until his retirement in 2023.

That’s some serious pedigree, and it’s not difficult to imagine people like that don’t mind breaking, twisting, turning, and mangling macOS to somehow still hammer it through a UNIX-shaped hole. The question is, though, for how long?

Linux 6.14 with Rust: “We are almost at the ‘write a real driver in Rust’ stage now”

With the Linux 6.13 kernel, Greg Kroah-Hartman described the level of Rust support as a “tipping point” for Rust drivers with more of the Rust infrastructure having been merged. Now for the Linux 6.14 kernel, Greg describes the state of the Rust driver possibilities as “almost at the ‘write a real driver in rust’ stage now, depending on what you want to do”.

↫ Michael Larabel

Excellent news, as there’s a lot of interest in Rust, and it seems that allowing developers to write drivers for Linux in Rust will mean at least some new and upcoming drivers come with fewer memory safety issues than non-Rust drivers. I’m also quite sure this will anger absolutely nobody.

OpenAI doesn’t like it when you use “their” generated slop without permission

OpenAI says it has found evidence that Chinese artificial intelligence start-up DeepSeek used the US company’s proprietary models to train its own open-source competitor, as concerns grow over a potential breach of intellectual property.

↫ Cristina Criddle and Eleanor Olcott for the FT

This is more ironic than writing a song called Ironic that lists situations that aren’t actually ironic. OpenAI claims it’s free to suck up whatever content and data it can find on the web without any form of permission or consent, but throws a temper tantrum when someone takes whatever it regurgitates for their own use without permission or consent?

Cry me a river.

Google Maps is run by cowards

Google, on its Google Maps naming policy, back in 2008:

By saying “common”, we mean to include names which are in widespread daily use, rather than giving immediate recognition to any arbitrary governmental re-naming. In other words, if a ruler announced that henceforth the Pacific Ocean would be named after her mother, we would not add that placemark unless and until the name came into common usage.

Google, today, in 2025:

Google has confirmed that Google Maps will soon rename the Gulf of Mexico and Denali mountain in Alaska as the “Gulf of America” and “Mount McKinley” in line with changes implemented by the Trump Administration, but users in the rest of the world may see two names for these locations.

Nothing is worth less than the word of a corporation.

Reviving a dead audio format: the return of ZZM

Long-time readers will know that my first video game love was the text-mode video game slash creation studio ZZT. One feature of this game is the ability to play simple music through the PC speaker, and back in the day, I remember that the format “ZZM” existed, so you could enjoy the square wave tunes outside of the games. But imagine my surprise in 2025 to find that, while the Museum of ZZT does have a ZZM Audio section, it recommends that nobody use the format anymore; because nobody’s made a player that doesn’t require MS-DOS. Let’s fix that by making a player with way higher system requirements, using everyone’s favorite coding environment: Javascript.

↫ Nicole Branagan

ZZM’s history is quite interesting, and Branagan’s journey to make this work without having to rely on DOS took a lot more effort than I expected. Very niche, for sure, but that’s kind of what we’re here for.
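
If you’re curious what the audio half of such a player involves, the Web Audio API’s square-wave oscillator already gets you most of the way to a PC speaker sound. The sketch below steps a single oscillator through a list of frequency/duration pairs – the note data is made up for illustration, and parsing ZZM’s actual note format into pairs like these is the real work Branagan’s player does:

  // Browser-only sketch; call it from a click handler, since browsers
  // only start audio after a user gesture. The note list is invented.
  function playTune(notes) {
    const ctx = new AudioContext();
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.type = 'square';            // PC-speaker-ish timbre
    gain.gain.value = 0.1;          // keep the volume sane
    osc.connect(gain).connect(ctx.destination);

    let t = ctx.currentTime;
    for (const [freq, dur] of notes) {   // [frequency in Hz, duration in seconds]
      osc.frequency.setValueAtTime(freq, t);
      t += dur;
    }
    osc.start();
    osc.stop(t);
  }

  // A made-up C–E–G–C arpeggio.
  playTune([[261.63, 0.2], [329.63, 0.2], [392.00, 0.2], [523.25, 0.4]]);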

The invalid 68030 instruction that accidentally allowed the Mac Classic II to successfully boot up

A bug in the ROM for the Macintosh Classic II was recently discovered that causes a crash when booting in 32-bit mode. Doug Brown discovered and documented the bug while playing with the MAME debugger. Why did it never show up before? It seems a quirk in Motorola’s 68030 CPU inadvertently fixes it when executing an illegal instruction that shouldn’t have been executed in the first place.

I was starting to believe something that sounded almost too crazy to be true: Apple had an out-of-bounds jump bug in the Classic II’s ROM that should have caused a Sad Mac during boot, but they had no idea the bug was there because the 68030 was accidentally fixing the value of A1 by executing an undocumented instruction. How could I prove that my theory was correct?

By buying a Classic II and hacking the ROM in order to see exactly what is happening on hardware, of course!

↫ Doug Brown

What follows is his process for investigating the ROM on emulated hardware, and then testing it on actual hardware.