The hearts of the Super Nintendo

Every computer has at least one heart which beats the cadence to all the other chips. The CloCK output pin is connected to a copper line which spreads to most components, into their CLK input pin. If you are mostly a software person like me, you may have never noticed it but all kinds of processors have a CLK input pin. From CPUs (Motorola 68000, Intel Pentium, MOS 6502), to custom graphic chips (Midway’s DMA2, Capcom CPS-A/CPS-B, Sega’s Genesis VDP) to audio chips (Yamaha 2151, OKI msm6295), they all have one. ↫ Fabien Sanglard I’ve watched enough Adrian Black that I already knew all of this, and I’m assuming so did many of you. But hey, I’ll never pass up the opportunity to link to the insides of the Super Nintendo.

Open source is about more than just code

As some of the dust around the xz backdoor is slowly starting to settle, we’ve been getting a pretty clear picture of what, exactly, happened, and it’s not pretty. This is a story of the sole maintainer of a crucial building block of the open source stack having mental health issues, which at least partly contributes to a lack of interest in maintaining xz. It seems a coordinated campaign – consensus seems to point to a state actor – is then started to infiltrate xz, with the goal of inserting a backdoor into the project.

Evan Boehs has done the legwork of diving into the mailing lists and commit logs of various projects and the people involved, and it almost reads like the nerd version of a spy novel. It involves seemingly fake users and accounts aggressively pressuring the original xz maintainer to add a second maintainer; a second maintainer who mysteriously seems to appear at around the same time, like a saviour. This second maintainer manages to gain the original maintainer’s trust, and within months, this mysterious newcomer more or less takes over as the new maintainer.

As the new maintainer, this person starts adding the malicious code in question. Sockpuppet accounts show up to add code to oss-fuzz to try and make sure the backdoor won’t be detected. Once all the code is in place for the backdoor to function, more fake accounts show up to push for the compromised versions of xz to be included in Debian, Red Hat, Ubuntu, and possibly others. Roughly at this point, the backdoor is discovered entirely by chance because Andres Freund noticed his SSH logins felt a fraction of a second slower, and he wanted to know why.

What seems to have happened here is that a bad actor – again, most likely a state actor – found and targeted a vulnerable maintainer and, through clever social engineering on both a personal and a project level, gained control over a crucial but unexciting building block of the open source stack. Once enough control and trust was gained, the bad actor added a backdoor to do… Well, something. It seems nobody really knows yet what the ultimate goal was, but we can all make some educated guesses and none of them are any good.

When we think of vulnerabilities in computer software, we tend to focus on bugs and mistakes that unintentionally create the conditions wherein someone with malicious intent can do, well, malicious things. We don’t often consider the possibility of maintainers being malicious, secretly adding backdoors for all kinds of nefarious purposes. The problem the xz backdoor highlights is that while we have quite a few ways to prevent, discover, mitigate, and fix unintentional security holes, we seem to have pretty much nothing in place to prevent intentional backdoors placed by trusted maintainers. And this is a real problem.

There are so many utterly crucial but deeply boring building blocks all over the open source stacks pretty much the entire computing world makes use of that it has become a meme, spearheaded by xkcd’s classic comic. The weakness in many of these types of projects is not the code, but the people maintaining that code, most likely through no fault of their own. There are so many things life can throw at you that would make you susceptible to social engineering – money problems, health problems, mental health issues, burnout, relationship problems, god knows what else – and the open source community has nothing in place to help maintainers of obscure but crucial pieces of infrastructure deal with problems like these.
That’s why I’m suggesting the idea of setting up a foundation – or whatever legal entity makes sense – that is dedicated to helping maintainers who face the kinds of problems the maintainer of xz did. A place where a maintainer who is dealing with problems outside of the code repository can go for help, advice, maybe even financial and health assistance if needed. Even if all this foundation offers someone is a person to talk to in confidence, it might mean the difference between burning out completely, or recovering at least enough to then possibly find other ways to improve one’s situation.

If someone is burnt out or has a mental health crisis, they could contact the foundation, tell their story, and say, hey, I need a few months to recover and deal with my problems, can we put out a call among already trusted members of the open source community to step in for me for a while? Keep the ship steady as she goes without rocking it until I get back or we find someone to take over permanently? This way, the wider community will also know the regular, trusted maintainer is stepping down for a while, and that any new commits should be treated with extra care, solving the problem of some unknown maintainer of an obscure but important package suffering in silence, the only hints found in a low-volume mailing list well after something goes wrong.

The financial responsibility for such a safety net should undoubtedly be borne by the long list of ultra-rich megacorporations who profit off the backs of these people toiling away in obscurity. The financial burden for something like this would be pocket change to the likes of Google, Apple, IBM, Microsoft, and so on, but could make a contribution to open source far greater than any code dump. Governments could probably be involved too, but that will most likely open up a whole can of worms, so I’m not sure if that would be a good idea.

I’m not proposing this be some sort of glorified ATM where people can go to get some free money whenever they feel like it. The goal should be to help people who form crucial cogs in the delicate machinery of computing to live healthy, sustainable lives so their code and contributions to the community don’t get compromised.

Servo: tables, WOFF2, and more

This month, after surpassing our legacy layout engine in the CSS test suites, we’re proud to share that Servo has surpassed legacy in the whole suite of Web Platform Tests as well! ↫ Servo blog Another month, another detailed progress report from Servo, the Rust browser engine once started by Mozilla. There’s a lot of interesting reading here for web developers.

Redox: significant performance and correctness improvements to the kernel

This year, there have been numerous improvements both to the kernel’s correctness, as well as raw performance. The signal and TLB shootdown MRs have significantly improved kernel memory integrity and possibly eliminated many hard-to-debug and nontrivial heisenbugs. Nevertheless, there is still a lot of work to be done optimizing and fixing bugs in relibc, in order to improve compatibility with ported applications, and most importantly of all, getting closer to a self-hosted Redox. ↫ Jacob Lorentzon (4lDO2) I love how much of the focus for Redox seems to be on the lower levels of the operating system, because it’s something many projects tend to kind of forget to highlight, spending more time on new icons or whatever instead. These in-depth Redox articles are always informative, and have me very excited about Redox’ future. Obviously, Redox is on the list of operating systems I need to write a proper article about. I’m not sure if there’s enough for a full review or if it’ll be more of a short look – we’ll see when we get there.

Windows 11 may get a highly requested Start menu redesign, here is how to try it

In October 2023, we published a recap of the top 10 features Windows 11 users want for the redesigned Start menu. Number 6 was the ability to switch from list view to grid view in the “All Apps” list, which received over 1,500 upvotes in the Feedback Hub. Six months later, Microsoft finally appears to be ready to give users what they want. PhantomOfEarth, the ever-giving source of hidden stuff in Windows 11 preview builds, discovered that Windows 11 build 22635.3420 lets you change from list to grid view in the “All Apps” section. Like other unannounced features, this one requires a bit of tinkering using the ViVeTool app until Microsoft makes it official. ↫ Taras Buria I’m still baffled Microsoft consistently manages to mess up something as once-iconic and impactful as the Start menu. It seems like Microsoft just can’t leave it well enough alone, even though it kind of already nailed it in Windows 95 – just give us that, but with a modern search function, and we’re all going to be happy. That’s it. We don’t want or need more.

Maptwin: an 80s-era automotive navigation computer

A couple of years ago, I imported a Japanese-market 4×4 van into the US: a 1996 Mitsubishi Delica. Based on the maps I found in the seat pocket and other clues, it seems to have spent its life at some city dweller’s cabin in the mountains around Fukushima, and was only driven occasionally. Despite being over 25 years old, it only had 77,000 km on the odometer.

The van had some interesting old tech installed in it: what appears to be a radar detector labeled “Super Eagle ✔️30” and a Panasonic-brand electronic toll collection device that you can insert a smart card into. One particularly noteworthy accessory that was available in mid-90s Delicas was a built-in karaoke machine for the rear passengers. Sadly, mine didn’t have that feature. But the most interesting accessory installed in the van was the Avco Maptwin Inter, which I immediately identified as some kind of electronic navigation aid, about which there is very little information available on the English-language internet.

When I first saw the Maptwin, I had thought it might be some kind of proto-GPS that displayed latitude/longitude coordinates that you could look up on a paper map. Alas, it’s not that cool. It was not connected to any kind of antenna, and the electronics inside seem inadequate for the reception of a GPS signal. The Maptwin was, however, wired into an RPM counter that was attached between the transmission and the speedometer cable, presumably to deliver an extremely accurate and convenient display of how many kilometers have been traveled since the display was last reset.

What I’ve been able to learn is that the Maptwin is a computer that was mostly used for rally race navigation, a precursor to devices still available from manufacturers like Terra Trip. Now, the Mitsubishi Delica is about the best 4×4 minivan you can get, but it’s extremely slow and unwieldy at speed, so it would be pretty terrible for rally racing. My best guess is that the owner used this device as a navigation aid for overland exploration, as the name “Maptwin” implies, to augment the utility of a paper map. On the other hand, I found an article that indicates that some kinds of rallies were not high-speed affairs, but rather accuracy-based navigation puzzles of sorts, so who knows?

The Maptwin wasn’t working when I got the van, and I don’t know if it’s actually broken or just needs to be wired up correctly. If any OSNews readers have any additional information about any of the devices I’ve mentioned, please enlighten us in the comments. If anyone would like to try to get the Maptwin working and report back, please let me know.

NetBSD 10.0 released

NetBSD 10.0 has been released, and it brings a lot of improvements, new features, and fixes compared to the previous release, 9.3. First and foremost, there are massive performance improvements when it comes to compute and filesystem-bound applications on multicore and multiprocessor systems. NetBSD 10.0 also brings WireGuard support compatible with implementations on other systems, although this is still experimental. There’s also a lot of added support for various ARM SoCs and boards, including Apple’s M1 chip, and there’s new support for compat_linux on AArch64, for running Linux programs. Of course, there’s also a ton of new and updated drivers, notably the graphics drivers which are now synced to Linux 5.6, bringing a ton of improvements with them. This is just a small sliver of all the changes, so be sure to read the entire release announcement for everything else.

Ext2 filesystem driver now marked as deprecated

It’s the ext2 filesystem driver that will be marked as deprecated in the upcoming 6.9 Linux kernel. The main issue is that even if the filesystem is created with 256 byte inodes (mkfs.ext2 -I 256), the filesystem driver will stick to 32 bit dates. Because of this, the driver does not support inode timestamps beyond 03:14:07 UTC on 19 January 2038. ↫ Michael Opdenacker Kernel developer Ted Ts’o did state that if someone wants to add support for 64-bit dates to ext2, it shouldn’t be too hard. I doubt many people still use ext2, but if someone is willing to step up, the deprecation can be undone by adding this support.
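For the curious, the 2038 cutoff quoted above falls straight out of the arithmetic: a signed 32-bit count of seconds since the Unix epoch tops out at 2^31 - 1. A quick sketch – plain Python, nothing ext2-specific – shows where that exact date and time come from:

from datetime import datetime, timezone

# A signed 32-bit count of seconds since the Unix epoch maxes out at 2^31 - 1.
MAX_32BIT_TIMESTAMP = 2**31 - 1  # 2,147,483,647 seconds

rollover = datetime.fromtimestamp(MAX_32BIT_TIMESTAMP, tz=timezone.utc)
print(rollover)  # 2038-01-19 03:14:07+00:00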

Backdoor in upstream xz/liblzma leading to SSH server compromise

After observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors) I figured out the answer: The upstream xz repository and the xz tarballs have been backdoored. At first I thought this was a compromise of debian’s package, but it turns out to be upstream. ↫ Andres Freund I don’t normally report on security issues, but this is a big one not just because of the severity of the issue itself, but also because of its origins: it was created by and added to upstream xz/liblzma by a regular contributor of said project, and makes it possible to bypass SSH authentication. It was discovered more or less by accident by Andres Freund. I have not yet analyzed precisely what is being checked for in the injected code, to allow unauthorized access. Since this is running in a pre-authentication context, it seems likely to allow some form of access or other form of remote code execution. ↫ Andres Freund The exploit was only added to the release tarballs, and not present when taking the code off GitHub manually. Luckily for all of us, the exploit has only made its way to the bloodiest of bleeding-edge distributions, such as Fedora Rawhide 41 and Debian testing, unstable and experimental, and as such has not been widely spread just yet. Nobody seems to know quite yet what the ultimate intent of the exploit was. Of note: the person who added the compromising code was recently added as a Linux kernel maintainer.
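If you want a quick way to check whether a machine is running one of the affected releases – 5.6.0 and 5.6.1 are the upstream versions known to carry the backdoor, tracked as CVE-2024-3094 – a minimal sketch along these lines will do. It simply shells out to “xz --version”, so the output parsing here is an assumption that may need adjusting for your distribution:

import subprocess

# Upstream xz releases known to contain the backdoor (CVE-2024-3094).
COMPROMISED = {"5.6.0", "5.6.1"}

def check_xz() -> bool:
    # "xz --version" typically prints lines like "xz (XZ Utils) 5.4.5"
    # and "liblzma 5.4.5"; the version number is the last word on each line.
    output = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
    hits = [line for line in output.splitlines()
            if line.strip() and line.split()[-1] in COMPROMISED]
    for line in hits:
        print("WARNING: known-compromised version reported:", line)
    return bool(hits)

if __name__ == "__main__":
    if not check_xz():
        print("No known-compromised xz/liblzma version reported.")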

A used ThinkPad is a better deal than a new cheap laptop

Since the technology industry and associated media outlets tend to focus primarily on the latest and greatest technology and what’s right around the corner, it sometimes seems as if the only valid option when you need a new laptop, phone, desktop, or whatever is to spend top euro on the newest, most expensive incarnations of those. But what if you need, say, a new laptop, but you’re not swimming in excess disposable income? Or you just don’t want to spend 1000-2000 euro on a new laptop? The tech media tends to have an answer for this: buy something like a cheap Chromebook or an e-waste €350 Windows laptop and call it a day – you don’t deserve a nice experience. However, there’s a far better option than spending money on a shackled Chromebook or an underpowered bottom-of-the-barrel Windows laptop: buy used. Recently, I decided to buy a used laptop, and I set it up how I would set up any new laptop, to get an idea of what’s out there. Here’s how it went.

For this little experiment, I first had to settle on a brand, and to be brutally honest, that was an easy choice. ThinkPads seem to be universally regarded as excellent choices for a used laptop for a variety of reasons which I’ll get to later. After weighing some of the various models, options, and my budget, I decided to go for a Lenovo ThinkPad T450s for about €150, and about a week later, the device arrived at my local supermarket for pickup.

Before I settled on this specific ThinkPad, I had a few demands and requirements. First and foremost, since I don’t like large laptops, I didn’t want anything bigger than roughly 14″, and since I’m a bit of a pixel count snob, 1920×1080 was non-negotiable. Since I already have a Dell XPS 13 with an 8th Gen Core i7, I figured going 3-4 generations older would give me at least somewhat of a generational performance difference. An SSD was obviously a must, and as long as there were expansion options, RAM did not matter to me.

The T450s delivered on all of these. It’s got the 1920×1080 14″ IPS panel (there’s also a lower resolution panel, so be sure to check you’re getting the right one), a Core i5-5300U with 2 cores and 4 threads with a base frequency of 2.30GHz and a maximum boost frequency of 2.90GHz, Intel HD 5500 graphics, a 128GB SATA SSD, and 4GB of RAM. Since 4GB is a bit on the low side for me, I ordered an additional 8GB SO-DIMM right away for €35. This brought the total price for this machine to €185, which I considered acceptable. For that price, it also came with its Windows license, for whatever that’s worth.

I don’t want to turn this into a detailed review of a laptop from 2015, but let’s go over what it’s like to use this machine today. The display cover is made of carbon-reinforced plastic, and the rest is made of magnesium. You can clearly feel this laptop is of a slightly older vintage, as it feels a bit more dinky than I’m used to from my XPS 13 9370 and my tiny Chuwi MiniBook X (2023). It doesn’t feel crappy or cheap or anything – just not as solid as you might expect from a modern machine.

It’s got a whole load of ports to work with, though, which is refreshing compared to the trend of today. On the left side, there’s a smartcard slot, USB 3.0, mini DisplayPort, another USB 3.0, and the power connector. On the right side, there’s a headphone jack, an SD card slot, another USB 3.0 port, an Ethernet jack, and a VGA port. On the bottom of the laptop is a docking port to plug it into various docking stations with additional ports and connectors.
On the inside, there’s a free M.2 slot (a small 2242 one). First, I eradicated Windows from the SSD because while I’m okay with an outdated laptop, I’m not okay with an outdated operating system (subscribe to our Patreon to ensure more of these top-quality jokes). After messing around with various operating systems and distributions for a while, I got back to business and installed my distribution of choice, Fedora, but I did opt for the Xfce version instead of my usual KDE one just for variety’s sake.

ThinkPads tend to be well-supported by Linux, and the T450s is no exception. Everything I could test – save for the smartcard reader, since I don’t have a smartcard to test it with – works out of the box, and nothing required any manual configuration or tweaking to work properly. Everything from trackpad gestures to the little ThinkLight on the lid worked perfectly, without having to deal with hunting for drivers and that sort of nonsense Windows users have to deal with. This is normal for most laptops and Linux now, but it’s nice to see it applies to this model as well.

Using the T450s was… Uneventful. Applications open fast, there’s no stutter or lag, and despite having just 2 cores and 4 threads, and a very outdated integrated GPU, I didn’t really feel like I was missing out when browsing, doing some writing and translating (before I quit and made OSNews my sole job), watching video, those sorts of tasks. This isn’t a powerhouse laptop for video editing, gaming, or compiling code or whatever, but for everything else, it works great.

After I had set everything up the way I like, software-wise, I did do some work to make the machine a bit more pleasant to use. First and foremost, as with any laptop or PC that’s a little older, I removed the heatsink assembly, cleaned off the crusty old thermal paste, and added some new, fresh paste. I then dove into the fan management, and installed zcfan, a Linux fan control daemon for ThinkPads, using its default settings, and created a systemd service to start it at boot.

Monogon OS: a new kind of Linux operating system

Monogon OS is an open-source, secure, API-driven and minimal operating system unlike any other. It is based on Linux and Kubernetes, but with a clean userland rebuilt entirely from scratch. It is written in pure Go and eliminates decades worth of legacy code and unnecessary complexity. It runs on a fleet of bare metal or cloud machines and provides users with a hardened, production ready Kubernetes, without the overhead of traditional Linux distributions or configuration management systems. It does away with the scripting/YAML duct tape and configuration drift inherent to traditional deployments. Instead, it provides a stable API-driven platform free of vendor lock-in and with none of the drudgery. ↫ Monogon OS website This is not exactly in my wheelhouse, but I’m pretty sure some of you will be all over this concept.

Intel “Family 6” CPU era coming to an end soon

Since the mid-90’s with the P6 micro-architecture for the Pentium Pro as the sixth-generation x86 microarchitecture, Intel has relied on the “Family 6” CPU ID. From there Intel has just revved the Model number within Family 6 for each new microarchitecture/core. For example, Meteor Lake is Family 6 Model 170 and Emerald Rapids is Family 6 Model 207. This CPU ID identification is used within the Linux kernel and other operating systems for identifying CPU generations for correct handling, etc. But Intel Linux engineers today disclosed that Family 6 is coming to an end “soon-ish”. ↫ Michael Larabel They should revive the ix86 family name, and call the next generation i786. It sounds so much cooler, even if these names have become rather irrelevant.
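For context on how those numbers are encoded: CPUID leaf 1 returns family, model, and stepping packed into EAX, and for family 6 the “display” model is the extended model field shifted left four bits and OR’d with the base model – which is how Intel keeps minting new models without ever leaving family 6. Here’s a rough sketch of the decoding as documented in Intel’s SDM; the example EAX value is illustrative rather than read from real hardware:

def decode_cpuid_eax(eax: int) -> tuple[int, int, int]:
    # Decode CPUID leaf 1 EAX into (display family, display model, stepping),
    # following the field layout in Intel's Software Developer's Manual.
    stepping = eax & 0xF
    model = (eax >> 4) & 0xF
    family = (eax >> 8) & 0xF
    ext_model = (eax >> 16) & 0xF
    ext_family = (eax >> 20) & 0xFF

    display_family = family + ext_family if family == 0xF else family
    # The extended model bits are only folded in for family 6 and family 15.
    display_model = (ext_model << 4) | model if family in (0x6, 0xF) else model
    return display_family, display_model, stepping

# Illustrative EAX value for a family 6, model 170 (0xAA) part such as Meteor Lake.
print(decode_cpuid_eax(0x000A06A4))  # -> (6, 170, 4)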

Ubuntu will manually review Snap Store after crypto wallet scams

The Snap Store, where containerized Snap apps are distributed for Ubuntu’s Linux distribution, has been attacked for months by fake crypto wallet uploads that seek to steal users’ currencies. As a result, engineers at Ubuntu’s parent firm are now manually reviewing apps uploaded to the store before they are available. The move follows weeks of reporting by Alan Pope, a former Canonical/Ubuntu staffer on the Snapcraft team, who is still very active in the ecosystem. In February, Pope blogged about how one bitcoin investor lost nine bitcoins (about $490,000 at the time) by using an “Exodus Wallet” app from the Snap store. Exodus is a known cryptocurrency wallet, but this wallet was not from that entity. As detailed by one user wondering what happened on the Snapcraft forums, the wallet immediately transferred his entire balance to an unknown address after a 12-word recovery phrase was entered (which Exodus tells you on support pages never to do). ↫ Kevin Purdy at Ars Technica Cryptocurrency, or as I like to call it, MLMs for men, is a scammer’s goldmine. It’s a scam used to scam people. Add in a poorly maintained application store like Ubuntu’s Snap Store, and it’s a dangerous mix of incompetence and scammers. I honestly thought Canonical already nominally checked the Snap Store – as one of its redeeming features, perhaps its only redeeming feature – but it turns out anyone could just upload whatever they wanted and have it appear in the store application on every Ubuntu installation. Excellent.

The Apple Jonathan: a very 1980s concept computer that never shipped

In the middle of the 1980s, Apple found itself with several options regarding the future of its computing platforms. The Apple II was the company’s bread and butter. The Apple III was pitched as an evolution of that platform, but was clearly doomed due to hardware and software issues. The Lisa was expensive and not selling well, and while the Macintosh aimed to bring Lisa technology to the masses, sales were slow after its initial release. Those four machines are well known, but there was a fifth possibility in the mix, named the Jonathan. In his book Inventing the Future, John Buck writes about the concept, which was led by Apple engineer Jonathan Fitch starting in the fall of 1984. ↫ Stephen Hackett So apparently, the Jonathan was supposed to be a modular computer, with a backbone you could slot all kinds of upgrades in, from either Apple or third parties. These modules would add the hardware needed to run Mac OS, Apple II, UNIX, and DOS software, all on the same machine. This is an incredibly cool concept, but as we all know, it didn’t pan out. The reasons are simple: this is incredibly hard to make work, especially when it comes to the software glue that would have to make it all work seamlessly. On top of that, it just doesn’t sound very Apple-like to make a computer designed to run anything that isn’t from Apple itself. Remember, this is still the time of Steve Jobs, before he got kicked out of the company and founded NeXT instead. According to Stephen Hackett, the project never made it beyond the mockup phase, so we don’t have many details on how it was supposed to work. It does look stunning, though.

Proxmox gives VMware ESXi users a place to go after Broadcom kills free version

One alternative to ESXi for home users and small organizations is Proxmox Virtual Environment, a Debian-based Linux operating system that provides broadly similar functionality and has the benefit of still being an actively developed product. To help jilted ESXi users, the Proxmox team has just added a new “integrated import wizard” to Proxmox that supports importing of ESXi VMs, easing the pain of migrating between platforms. ↫ Andrew Cunningham at Ars Technica It’s of course entirely unsurprising other projects and companies were going to try and capitalise on Broadcom’s horrible management of its acquisition of VMware.

Copilot is finally gone from Windows Server 2025 and admins rejoice

After the Windows Server 2025’s launch, a Windows insider posted a screenshot on X showing Copilot running on Windows Server 2025, Build 26063.1. The admins discovered the feature in shock and wondered if it was a mistake from Microsoft’s part. A month later, the same Bob Pony broke the news that most admins wanted to see: Copilot is gone in Windows Server 2025’s Build 26085. ↫ Claudiu Andone This reminds me of Windows Server 2012, which was based on Windows 8 and launched with a Metro user interface.

How Apple plans to update new iPhones without opening them

Unboxing a new gadget is always a fun experience, but it’s usually marred somewhat by the setup process. Either your device has been in a box for months, or it’s just now launching and ships in the box with pre-release software. Either way, the first thing you have to do is connect to Wi-Fi and wait several minutes for an OS update to download and install. The issue is so common that going through a lengthy download is an expected part of buying anything that connects to the Internet. But what if you could update the device while it’s still in the box? That’s the latest plan cooked up by Apple, which is close to rolling out a system that will let Apple Stores wirelessly update new iPhones while they’re still in their boxes. The new system is called “Presto.” ↫ Ron Amadeo at Ars Technica That’s a lot of engineering for a small inconvenience. Just the way I like my engineering.

Oregon’s governor signs right-to-repair law that bans ‘parts pairing’

Oregon Governor Tina Kotek has now signed one of the strongest US right-to-repair bills into law after it passed the state legislature several weeks ago by an almost 3-to-1 margin. Oregon’s SB 1596 will take effect next year, and, like similar laws introduced in Minnesota and California, it requires device manufacturers to allow consumers and independent electronics businesses to purchase the necessary parts and equipment required to make their own device repairs. Oregon’s rules, however, are the first to ban “parts pairing” — a practice manufacturers use to prevent replacement components from working unless the company’s software approves them. These protections also prevent manufacturers from using parts pairing to reduce device functionality or performance or display any misleading warning messages about unofficial components installed within a device. Current devices are excluded from the ban, which only applies to gadgets manufactured after January 1st, 2025. ↫ Jess Weatherbed at The Verge Excellent news, and it wouldn’t be the first time that one US state’s strict (positive) laws end up benefiting all the other states since it’s easier for corporations to just develop to the strictest state’s standards and use that everywhere else (see California’s car safety and emissions regulations for instance). As a European, I hope this will make its way to the European Union, as well.

lEEt/OS: graphical shell and multitasking environment for DOS

lEEt/OS is a graphical shell and partially posix-compliant multitasking operating environment that runs on top of a DOS kernel. The latest version can be downloaded from this site. lEEt/OS is tested with FreeDOS 1.2 and ST-DOS, but it may also work with other DOS implementations. It can be compiled with Open Watcom compiler. 8086 binaries are also available from this site. ↫ lEEt/OS website I had never heard of lEEt/OS before, but it looks quite interesting – and the new ST-DOS kernel the developer is making further adds to its uniqueness. A very cool project I’m putting on my list of operating systems to write a short ‘first look’ article about for y’all.

ARM64EC (and ARM64X) explained

Probably the most confused looks I get from other developers when I discuss Windows and ARM64 is when I use the term “ARM64EC”. They ask: is it the same thing as ARM64? Is it a different instruction set than ARM64? How can you tell if an application is ARM64 or ARM64EC? This tutorial will answer those questions by de-mystifying and explaining the difference between what can be called “classic ARM64” as it existed since Windows 10, and this new “ARM64EC” which was introduced in Windows 11 in 2021. ↫ Darek Mihocka I’m not going to steal the article’s thunder, but the short of it is that the ‘EC’ stands for ‘Emulation Compatible’, meaning it can call unmodified x86-64 code. ARM64X, meanwhile, is an extended version of Windows PE that allows both ARM64 and emulated x86-64 code to coexist in the same binary (which is not the same as a fat binary, which is an either/or situation). There is a whole lot more to this subject – and I truly mean a lot, this is a monster of an in-depth article – so be sure to head on over and read it in full. You’ll be busy for a while.
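As a small illustration of why the distinction is so confusing in the first place: the machine field in a PE header is trivial to read, but – as I understand the article and Microsoft’s documentation – ARM64EC images report the x64 machine type there so that x64-only tooling accepts them, while classic ARM64 and ARM64X binaries report ARM64. Here’s a minimal sketch that reads the COFF machine field from a PE file; the function name and the small lookup table are mine for illustration, not part of any official API:

import struct

# A few common IMAGE_FILE_MACHINE_* values from the PE/COFF specification.
MACHINE_TYPES = {
    0x014C: "x86 (i386)",
    0x8664: "x86-64 (AMD64) - also what ARM64EC images report",
    0xAA64: "ARM64",
}

def pe_machine(path: str) -> str:
    # The DOS header stores the offset of the "PE\0\0" signature at 0x3C;
    # the 2-byte COFF Machine field follows immediately after that signature.
    with open(path, "rb") as f:
        f.seek(0x3C)
        pe_offset = struct.unpack("<I", f.read(4))[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\0\0":
            raise ValueError("not a PE image")
        machine = struct.unpack("<H", f.read(2))[0]
    return MACHINE_TYPES.get(machine, hex(machine))

# Example usage (the path is illustrative):
print(pe_machine("C:/Windows/System32/notepad.exe"))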