The Encore 91 computer system

Have you ever heard of the Encore 91 computer system, developed and built by Encore Computer Corporation? I stumbled upon the name of this system on the website for the Macintosh-like Virtual Window Manager (MLVWM), an old X11 window manager designed to copy some of the look and feel of the classic Mac OS, and wanted to know more about it. An old website from what appears to be a reseller of the Encore 91 still has a detailed description and sales pitch of the machine online, and it’s a great read.

The hardware architecture of the Encore 91 series is based on the Motorola 88100, a high-performance 25MHz RISC processor. A basic system is a highly integrated, fully symmetrical, single-board multiprocessor. The single board includes two or four 88100 processors with supporting cache memory, 16 megabytes of shared main memory, two synchronous SCSI ports, an Ethernet port, four asynchronous ports, real-time clocks, timers, interrupts, and a VME-64 bus interface. The VME-64 bus provides full compatibility with VME plus enhancements for greater throughput. Shared main memory may be expanded to 272 megabytes by adding up to four expansion cards. The expansion memory boards have the same high-speed access characteristics as local memory.


The Encore 91 ran a combination of AT&T’s System V.3.2 UNIX and Encore’s POSIX-compliant MicroMPX real-time kernel, and would be followed by machines with more powerful processors in the 88xxx series, as well as machines based on the Alpha architecture. The company also created and sold its own modified RISC architecture, RSX, for which there are still some details available online. Bits and bobs of the company were spun off and sold off, and I don’t think much of the original company is still around today.

Regardless, it’s an interesting system with a fascinating history, but we’ll most likely never get to see one in action – unless it turns up in some weird corner of the United States, where rare working examples of hardware like this invariably tend to end up.

Google’s Android developer registration requirement will kill F-Droid

The consequences of Google requiring developer certification to install Android applications, even outside of Google’s own Play Store, are starting to reverberate. F-Droid, probably the single most popular non-Google application repository for Android, has made it very clear that Google’s upcoming requirement is most likely going to mean the end of F-Droid.

If it were to be put into effect, the developer registration decree will end the F-Droid project and other free/open-source app distribution sources as we know them today, and the world will be deprived of the safety and security of the catalog of thousands of apps that can be trusted and verified by any and all. F-Droid’s myriad users will be left adrift, with no means to install — or even update their existing installed — applications.

↫ F-Droid’s blog post

A potential loss of F-Droid would be a huge blow to anyone trying to run Android without Google’s applications and frameworks installed on their device. It’s pretty clear that Google is doing whatever it can to utterly destroy the Android Open Source Project, something I’ve been arguing is what the rumours about Google killing AOSP really mean. Why kill AOSP, when you can just make it utterly unusable and completely barren?

Sadly, there isn’t much F-Droid can do. They’re proposing that regulators the world over look at Google’s plans, and hopefully come to the conclusion that they’re anti-competitive. The European Union in particular has useful tools in the Digital Markets Act, but in the end, those tools only matter if the will exists to use them.

It’s dark times for the smartphone world right now, especially if you care about consumer rights and open source. iOS has always been deeply anti-consumer, and while the European Union has managed to soften some of the rough edges, nothing much has changed there. Android, on the other hand, had a thriving open source, Google-free community, but decision by decision, Google is beating it into submission and killing it off. The Android of yesteryear doesn’t exist anymore, and it’s making people who used to work on Android back during the good old times extremely sad.

Jean-Baptiste Quéru, husband of OSNews’ amazing and legendary previous managing editor Eugenia Loli-Queru, worded it like this a few days ago:

All the tidbits of news about Android make me sad.

I used to be part of the Android team.

When I worked there, making the application ecosystem as open as the web was a goal. Releasing the Android source code as soon as something hit end-user devices was a goal. Being able to run your own build on actual consumer hardware was a goal.

For a while after I left, there continued to be some momentum behind what I had pushed for.

But, now, 12 years later, this seems to have all died.

I am sad…

↫ Jean-Baptiste Quéru

And so am I. Like any operating system, Android is far from perfect, but it was remarkable just how open it used to be. I guess good things just don’t survive once unbridled capitalism hits.

Unite: a decades-old QNX-inspired hobby operating system

Unite is an operating system in which everything is a process, including the things that you normally would expect to be part of the kernel. The hard disk driver is a user process, so is the file system running on top of it. The namespace manager is a user process. The whole thing (in theory, see below) supports network transparency from the ground up, you can use resources of other nodes in the network just as easily as you can use local resources, just prefix them with the node ID. In the late ’80s and early ’90s I had a lot of time on my hands. While living in the Netherlands I’d run into the QNX operating system that was sold locally through a distributor. The distributor’s brother had need of a 386 version of that OS but Quantum Software, the producers of QNX, didn’t want to release a 386 version. So I decided to write my own.

↫ Jacques Mattheij

What a great story.

Mattheij hasn’t done anything with, or even looked at, the code for this operating system he created in decades, but recently got the urge to fix it up and publish it online for all of us to see. Of course, resurrecting something this old and long untouched required some magic, and there are still a few things he simply can’t get to work properly. I like how the included copy of vi is broken and adds random bits of garbage to files, and things like the mouse driver don’t work because it requires a COM port and the COM ports don’t seem to work in an emulated environment.

Unite is modeled after QNX, so it uses a microkernel. It uses a stripped-down variant of the MINIX file system, supports only one user (though that user can run multiple sessions), and there’s a basic graphics mode with some goodies. Sadly, the graphics mode is problematic and requires some work to get going, and because you’ll need the COM ports to work to use it properly, it’s a bit useless anyway at the moment.

Regardless, it’s cool to see people going back to their old work and fixing it up to publish the code online.

Why was Windows 3.0’s WinHelp called an online help system when it ran offline?

Some time ago, I described Windows 3.0’s WinHelp as “a program for browsing online help files.” But Windows 3.0 predated the Internet, and these help files were available even if the computer was not connected to any other network. How can it be “online”?

↫ Raymond Chen at The Old New Thing

I doubt this will be a conceptual problem for many people reading OSNews, but I can definitely understand especially younger people finding this a curious way of looking at the word “online”. You’ll see the concept of “online help” in quite a few systems from the ’90s (and possibly earlier), so if you’re into retrocomputing you might’ve run into it as well.

Installing Linux on a PC-98 machine

What if you have a PC-98 machine, and you want to run Linux on it, as you do? I mean, CP/M, OS/2, or Windows (2000 and older) might not cut it for you, after all. Well, it turns out that yes, you can run Linux on PC-98 hardware, and thanks to a bunch of work by Nina Kalinina – yes, the same person from a few days ago – there’s now more information gathered in a single place to get you started.

Plamo Linux is one of the few Linux distributions to support PC-98 series. Plamo 3.x is the latest distribution that can be installed on PC-9801 and PC-9821 directly. Unfortunately, it is quite old, and is missing lots of useful stuff.

This repo is to share a-ha moments and binaries for Plamo on PC-98.

↫ Plamo98 goodies

The repository details “upgrading” – it’s a bit more involved than plain upgrading, but it’s not hard – Plamo Linux from 3.x to 4, which gives you access to a bunch of things you might want, like GCC 3.3 over 2.95, KDE 3.x, Python 2.3, and more. There are also custom BusyBox config files, a newer version of make, and a few other goodies and tools you might want to have. Once it’s all said and done, you can Linux like it’s 2003 on your PC-98.

The number of people to whom this is relevant must be extraordinarily small, but at some point, someone is going to want to do this, only to find this repository of existing work. We’ve all been there.

UNIX99: UNIX for the TI-99/4A

I’ve been working on developing an operating system for the TI-99 for the last 18 months or so. I didn’t intend this—my original plan was to develop enough of the standard C libraries to help with writing cartridge-based and EA5 programs. But that trek led me quickly towards developing an OS. As Unix is by far my preferred OS, this OS is an approximation. Developing an OS within the resources available, particularly the RAM, has been challenging, but also surprisingly doable.

↫ UNIX99 forum announcement post

We’re looking at a quite capable UNIX for the TI-99, with support for its sound, speech, sprites, and legacy 9918A display modes, GPU-accelerated scrolling, stdio (for text and binary files) and stdin/out/err support, a shell (of course), multiple user support, cooperative tasks support, and a ton more. And remember – all of this is running on a machine with a 16-bit processor running at 3MHz and a mere 16KB of RAM.

Absolutely wild.

Another win for the Digital Markets Act: Microsoft gives truly free access to additional year of Windows 10 updates to EU users

A few months ago, Microsoft finally blinked and provided a way for Windows 10 users to gain “free” access to the Windows 10 Extended Security Update program. For regular users to gain access to this program, their options are to either pay around $30, redeem 1,000 Microsoft Rewards points, or sign up for the Windows Backup application to synchronise their settings to Microsoft’s computers (the “cloud”). In other words, in order to get “free” access to extended security updates for Windows 10 after the 14 October end-of-support deadline, you have to start using OneDrive, and will have to start paying for additional storage since the base 5GB of OneDrive storage won’t be enough for backups.

And we all know OneDrive is hell.

Thanks to the European Union’s Digital Markets Act, though, Microsoft has dropped the OneDrive requirement for users within the European Economic Area (the EU plus Norway, Iceland, and Liechtenstein). Citing the DMA, consumer rights organisations in the EU complained that Microsoft’s OneDrive requirement was in breach of EU law, and Microsoft has now given in. Of course, dropping the OneDrive requirement only applies to consumers in the EU/EEA; users in places with much weaker consumer protection legislation, like the United States, will not benefit from this move.

Consumer rights organisations are lauding Microsoft’s move, but they’re not entirely satisfied just yet. The main point of contention is that the access to the Extended Security Update program is only valid for one year, which they consider too short. In a letter, Euroconsumers, one of the consumer rights organisations, details this issue.

At the same time, several points from our original letter remain relevant. The ESU program is limited to one year, leaving devices that remain fully functional exposed to risk after October 13, 2026. Such a short-term measure falls short of what consumers can reasonably expect for a product that remains widely used and does not align with the spirit of the Digital Content Directive (DCD), nor the EU’s broader sustainable goals. Unlike previous operating system upgrades, which did not typically require new hardware, the move to Windows 11 does. This creates a huge additional burden for consumers, with some estimates suggesting that over 850 million active devices still rely on Windows 10 and cannot be upgraded due to hardware requirements. By contrast, upgrades from Windows 7 or 8 to Windows 10 did not carry such limitations.

↫ Euroconsumers’ letter

According to the group, the problem is exacerbated by the fact that Microsoft is much more aggressive in phasing out support for Windows 10 than for previous versions of Windows. Windows 10 is being taken behind the shed four years after the launch of Windows 11, while Windows XP and Windows 7 enjoyed 7-8 years. With how many people are still using Windows 10, often with no way to upgrade other than buying new hardware, it’s odd that Microsoft is trying to kill it so quickly.

In any event, we can chalk this up as another win for consumers in the European Union, with the Digital Markets Act once again creating better outcomes than in other regions of the world.

NFS at 40: a treasure trove of documents and other material about Sun’s Network File System

The contributions of Sun Microsystems to the world of computing are legion – definitely more than its ignominious absorption into Oracle implies – and one of those is NFS, the Network File System. This month, NFS more or less turned 40 years old, and in honour of this milestone, Russel Berg, Russ Cox, Steve Kleiman, Bob Lyon, Tom Lyon, Joseph Moran, Brian Pawlowski, David Rosenthal, Kate Stout, and Geoff Arnold created a website to honour NFS.

This website gathers material related to the Sun Microsystems Network File System, a project that began in 1983 and remains a fundamental technology for today’s distributed computer systems.

[…]

The core of the collection is design documents, white papers, engineering specifications, conference and journal papers, and standards material. However it also covers marketing materials, trade press, advertising, books, “swag”, and personal ephemera. We’re always looking for new contributions.

↫ NFS at 40

There are so many amazing documents here, such as the collection of predecessors that served as inspiration for NFS, like the Cambridge File Server or the Xerox Alto’s Interim File System, but also tons of fun marketing material for things like NFS server accelerators and nerdy NFS buttons. Even if you’re not specifically interested in the history of NFS, there’s great joy in browsing these old documents and photos.

yt-dlp will soon require a full JS runtime to overcome YouTube’s JS challenges

If you download YouTube videos, there’s a real chance you’re using yt-dlp, the long-running and widely-used command-line program for downloading YouTube videos. Even if you’re not using it directly, many other tools for downloading YouTube videos are built on top of yt-dlp, and even some media players which offer YouTube playback use it in the background. Now, yt-dlp has always had a built-in basic JavaScript “interpreter”, but due to changes at YouTube, yt-dlp will soon require a proper JavaScript runtime in order to function.

Up until now, yt-dlp has been able to use its built-in JavaScript “interpreter” to solve the JavaScript challenges that are required for YouTube downloads. But due to recent changes on YouTube’s end, the built-in JS interpreter will soon be insufficient for this purpose. The changes are so drastic that yt-dlp will need to leverage a proper JavaScript runtime in order to solve the JS challenges.

↫ Yt-dlp’s announcement on GitHub

The yt-dlp team suggests using Deno, but compatibility with some alternatives has been added as well. The issue is that the “interpreter” yt-dlp already includes consists of a massive set of very complex regex patterns to solve JS challenges, and those are difficult to maintain and no longer sufficient, so a real runtime is necessary for YouTube downloads. Deno is advised because it’s entirely self-contained and sandboxed, and has no network or filesystem access of any kind. Deno also happens to be a single, portable executable.
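The difference matters because pattern-matching code is not the same as executing it. As a loose illustration – the JavaScript snippets and the `solve` transform below are invented for this sketch, not YouTube’s actual challenge code – a regex-based approach breaks the moment functionally identical code is written in a different shape:

```python
import re

# Two hypothetical challenge scripts with identical behaviour but
# different surface syntax.
challenge_v1 = "function solve(s){return s.split('').reverse().join('')}"
challenge_v2 = "var solve=function(s){return s.split('').reverse().join('')};"

# A regex-based "interpreter" recognises known code shapes instead of
# running the code (yt-dlp's real pattern set is vastly larger than this).
KNOWN_SHAPE = re.compile(
    r"function solve\(s\)\{return s\.split\(''\)\.reverse\(\)\.join\(''\)\}"
)

def regex_solve(js: str, token: str):
    """Return the solved token if the challenge matches a known shape."""
    if KNOWN_SHAPE.search(js):
        return token[::-1]  # mimic the recognised transformation in Python
    return None  # unknown shape: someone must hand-write a new pattern

print(regex_solve(challenge_v1, "abc123"))  # the shape matches: 321cba
print(regex_solve(challenge_v2, "abc123"))  # same semantics, but: None
```

A real runtime like Deno simply executes either variant correctly, which is exactly why yt-dlp is making the switch.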

As time progresses, it seems yt-dlp is slowly growing into a web browser just to be able to download YouTube videos. I wonder what kind of barriers YouTube will throw up next, and what possible solutions from yt-dlp might look like.

Legacy Update 1.12 released

If you’re still running old versions of Windows from Windows 2000 and up, either for retrocomputing purposes or because you need to keep an old piece of software running, you’ve most likely heard of Legacy Update. This tool allows you to keep Windows Update running on Windows versions no longer supported by the service, and has basically become a must-have for anyone still playing around with older Windows versions.

The project released a fairly major update today.

Legacy Update 1.12 features a significant rewrite of our ActiveX control, and a handful of other bug fixes.

The rewrite allows us to more easily work on the project, and ensures we can continue providing stable releases for the foreseeable future, despite Microsoft recently breaking the Windows XP-compatible compiler included with Visual Studio 2022.

↫ Legacy Update 1.12 release notes

The project switched away from compiling with Visual C++ 2008 (and 2010, and 2017, and 2022…), which Microsoft recently broke, and now uses an open-source MinGW/GCC toolchain. This has cut the size of the binary in half, which is impressive considering it was already smaller than 1MB. This new version also adds a three-minute timer before performing any required restarts, and considerably speeds up the installation of the slowest type of updates (.NET Framework).

Would you trust Google to remain committed to Android on laptops and desktops?

It’s no secret that Google wants to bring Android to laptops and desktops, and is even sacrificing Chrome OS to get there. It seems this effort is gaining some serious traction lately, as evidenced by a conversation between Rick Osterloh, Google’s SVP of platforms and devices, and Qualcomm’s CEO, Cristiano Amon, during Qualcomm’s Snapdragon Summit.

Google may have just dropped its clearest hint yet that Android will soon power more than phones and tablets. At today’s Snapdragon Summit kickoff, Qualcomm CEO Cristiano Amon and Google’s SVP of Devices and Services Rick Osterloh discussed a new joint project that will directly impact personal computing.

“In the past, we’ve always had very different systems between what we are building on PCs and what we are building on smartphones,” Osterloh said on stage. “We’ve embarked on a project to combine that. We are building together a common technical foundation for our products on PCs and desktop computing systems.”

↫ Adamya Sharma at Android Authority

Amon eventually exclaimed that he’s seen the prototype devices, and that “it is incredible”. He added that “it delivers on the vision of convergence of mobile and PC. I cannot wait to have one.” Now, marketing nonsense aside, this further confirms that soon, you’ll be able to buy laptops running Android, and possibly even desktop systems running Android. The real question, though, is – would you want to? What’s the gain of buying an Android laptop over a traditional Windows or macOS laptop?

Then there’s Google’s infamous fickle nature, launching and killing products seemingly at random, without any clear long-term plans and commitments. Would you buy an expensive laptop running Android, knowing full well Google might discontinue or lose interest in its attempt to bring Android to laptops, leaving you with an unsupported device? I’m sure schools that bought into Chromebooks will gradually move over to the new Android laptops as Chrome OS features are merged into Android, but what about everyone else?

I always welcome more players in the desktop space, and anything that can challenge Microsoft and Apple is welcome, but I’m just not sure if I have faith in Google sticking with it in the long run.

Benjamin Button reviews macOS

Apple’s first desktop operating system was Tahoe. Like any first version, it had a lot of issues. Users and critics flooded the web with negative reviews. While mostly stable under the hood, the outer shell — the visual user interface — was jarringly bad. Without much experience in desktop UX, Apple’s first OS looked like a Fisher-Price toy: heavily rounded corners, mismatched colors, inconsistent details and very low information density. Obviously, the tool was designed mostly for kids or perhaps light users or elderly people.

Credit where credit is due: Apple had listened to their users and the next version – macOS Sequoia — shipped with lots of fixes. Border radius was heavily reduced, transparent glass-like panels replaced by less transparent ones, buttons made more serious and less toyish. Most system icons made more serious, too, with focus on more detail. Overall, it seemed like the 2nd version was a giant leap from infancy to teenage years.

↫ Rakhim Davletkali

A top quality operating systems shitpost.

Exploring GrapheneOS’ secure allocator: hardened malloc

GrapheneOS is a security and privacy-focused mobile operating system based on a modified version of Android (AOSP). To enhance its protection, it integrates advanced security features, including its own memory allocator for libc: hardened malloc. Designed to be as robust as the operating system itself, this allocator specifically seeks to protect against memory corruption.

This technical article details the internal workings of hardened malloc and the protection mechanisms it implements to prevent common memory corruption vulnerabilities. It is intended for a technical audience, particularly security researchers or exploit developers, who wish to gain an in-depth understanding of this allocator’s internals.

↫ Nicolas Stefanski at Synacktiv

GrapheneOS is quite possibly the best way to keep your smartphone secure, and even law enforcement is not particularly amused that people are using it. If the choice is between security and convenience, GrapheneOS chooses security every time, and that’s the reason it’s favoured by many people who deeply care about (smartphone) security. The project’s social media accounts can be a bit… much at times, but their dedication to security is without question, and if you want a secure smartphone, there’s really nowhere else to turn – unless you opt to trust the black box security approach from Apple.

Sadly, GrapheneOS is effectively under attack not from criminals, but from Google itself. As Google tightens its grip on Android more and more, as we’ve been reporting on for years now, it will become ever harder for GrapheneOS to deliver the kind of security and fast update they’ve been able to deliver. I don’t know just how consequential Google’s increasing pressure is for GrapheneOS, but I doubt it’s making the lives of its developers any easier.

It’s self-defeating, too; GrapheneOS has a long history of basically serving as a test bed for highly advanced security features Google later implements for Android in general. A great example is the Memory Tagging Extension, a feature implemented by ARM in hardware, which GrapheneOS implements much more widely and extensively than Google does. This way, GrapheneOS users have basically been serving as testers to see if applications and other components experience any issues when using the feature, paving the way for Google to eventually, hopefully, follow in GrapheneOS’ footsteps.
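As a rough mental model of what memory tagging buys you – this is a toy simulation, not ARM’s hardware MTE or GrapheneOS’ hardened_malloc – the idea is that every allocation and every pointer carry a small tag, and the hardware faults when they disagree:

```python
import random

class TaggedHeap:
    """Toy model of MTE-style tagging: each allocation gets a small random
    tag, the tag travels with the 'pointer', and every access checks that
    the pointer tag and the memory tag still match."""

    def __init__(self, size):
        self.memory = [0] * size
        self.tags = [0] * size   # real MTE tags 16-byte granules, not cells
        self.next_free = 0
        self.prev_tag = 0

    def malloc(self, n):
        # Exclude the previous allocation's tag so overflowing into an
        # adjacent allocation is always caught, a trick hardened allocators
        # use to make linear overflows deterministic faults.
        tag = random.choice([t for t in range(1, 16) if t != self.prev_tag])
        self.prev_tag = tag
        base = self.next_free
        self.next_free += n
        for i in range(base, base + n):
            self.tags[i] = tag
        return (tag, base)       # a "pointer" carrying its tag

    def load(self, ptr, offset):
        tag, base = ptr
        if self.tags[base + offset] != tag:
            raise MemoryError("tag mismatch: overflow or use-after-free")
        return self.memory[base + offset]

heap = TaggedHeap(64)
a = heap.malloc(8)
b = heap.malloc(8)
heap.load(a, 7)                  # in bounds: tags match, access succeeds
try:
    heap.load(a, 8)              # overflows into b's allocation
except MemoryError as e:
    print("caught:", e)
```

The appeal for an exploit-hostile OS is clear: entire bug classes turn from silent corruption into immediate, diagnosable crashes.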

Google benefits from GrapheneOS, and trying to restrict its ability to properly support devices and its access to updates is shortsighted.

DXGI debugging: Microsoft put me on a list

Why does Space Station 14 crash with ANGLE on ARM64? 6 hours later…

So. I’ve been continuing work on getting ARM64 builds out for Space Station 14. The thing I was working on yesterday were launcher builds, specifically a single download that supports both ARM64 and x64. I’d already gotten the game client itself running natively on ARM64, and it worked perfectly fine in my dev environment. I wrote all the new launcher code, am pretty sure I got it right. Zip it up, test it on ARM64, aaand…

The game client crashes on Windows ARM64. Both in my VM and on Julian’s real Snapdragon X laptop.

↫ PJB at A stream of consciousness

Debugging stories can be great fun to read, and this one is a prime example. Trust me, you’ll have no idea what the hell is going on here until you reach the very end, and it’s absolutely wild. Very few people are ever going to run into this exact same set of highly unlikely circumstances, but of course, with a platform as popular as Windows, someone was eventually bound to.

Sidenote: the game in question looks quite interesting.

Yes, Redox can run on some smartphones

I had the pleasure of going to RustConf 2025 in Seattle this year. During the conference, I met lots of new people, but in particular, I had the pleasure of spending a large portion of the conference hanging out with Jeremy Soller of Redox and System76. Eventually, we got chatting about EFI and bootloaders, and my contributions to PostmarketOS, and my experience booting EFI-based operating systems (Linux) on smartphones using U-Boot. Redox OS is also booted via EFI, and so the nerdsnipe began. Could I run Redox OS on my smartphone the same way I could run PostmarketOS Linux?

Spoilers, yes.

↫ Paul Sajna

The hoops required to get this to work are, unsurprisingly, quite daunting, but it turns out it’s entirely possible to run the ARM build of Redox on a Qualcomm-based smartphone. The big caveat here is that there’s not much you can actually do with it, because among the various missing drivers is the one for touch input, so once you arrive at Redox’ login screen, you can’t go any further.

Still, it’s quite impressive, and highlights both the amazing work done on the PostmarketOS/Linux side, as well as the Redox side.

MV 950 Toy: an emulator of the Metrovick 950, the first commercial transistor computer

After researching the first commercial transistor computer, the British Metrovick 950, Nina Kalinina wrote an emulator, a simple assembler, and some additional “toys” (her word) so we can enjoy this machine today. First, what, exactly, is the Metrovick 950?

Metrovick 950, the first commercial transistor computer, is an early British computer, released in 1956. It is a direct descendant of the Manchester Baby (1948), the first electronic stored-program computer ever.

↫ Nina Kalinina

The Baby, formally known as Small-Scale Experimental Machine, was a foundation for the Manchester Mark I (1949). Mark I found commercial success as the Ferranti Mark I. A few years later, Manchester University built a variant of Mark I that used magnetic drum memory instead of Williams tubes and transistors instead of valves. This computer was called the Manchester Transistor Computer (1955). Engineers from Metropolitan-Vickers released a streamlined, somewhat simplified version of the Transistor Computer as Metrovick 950.

The emulator she developed is “only” compatible at the source-code level, and emulates “the CPU, a teleprinter with a paper tape punch/reader, a magnetic tape storage device, and a plotter”, at 200-300 operations per second. It’s complete enough that you can play Lunar Lander on it, because is a computer you can’t play games on really a computer?
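For anyone who has never met it, the Lunar Lander family of games boils down to a tiny physics loop. The constants below are arbitrary placeholders – I don’t know the parameters the Metrovick 950 version uses – but the turn logic is the classic one:

```python
# Minimal sketch of the classic Lunar Lander turn logic. Velocity is
# positive downward, in m/s; constants are illustrative only.
GRAVITY = 1.6    # lunar surface gravity, m/s^2
THRUST = 0.5     # deceleration per unit of fuel burned per second

def step(altitude, velocity, fuel, burn, dt=1.0):
    """Advance the lander one time step, returning the new state."""
    burn = min(burn, fuel)                       # can't burn fuel you lack
    velocity += (GRAVITY - burn * THRUST) * dt   # gravity vs. engine
    altitude -= velocity * dt
    return altitude, velocity, fuel - burn

# Free fall from 100m: velocity grows by 1.6 m/s every second.
alt, vel, fuel = 100.0, 0.0, 120.0
while alt > 0:
    alt, vel, fuel = step(alt, vel, fuel, burn=0)
print(f"impact at {vel:.1f} m/s")                # far too fast to survive
```

The whole game is choosing `burn` each turn so that altitude and velocity reach zero together before the fuel does.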

Nina didn’t just create this emulator and its related components, but also wrote a ton of documentation to help you understand the machine and to get started. There’s an introduction to programming and using the Metrovick 950 emulator, additional notes on programming the emulator, and much more. She also posted a long thread on Fedi with a ton more details and background information, which is a great read, as well.

This is amazing work, and interesting not just to programmers interested in ancient computers, but also to historians and people who really put the retro in retrocomputing.

Multikernel architecture proposed for Linux

A very exciting set of kernel patches have just been proposed for the Linux kernel, adding multikernel support to Linux.

This patch series introduces multikernel architecture support, enabling multiple independent kernel instances to coexist and communicate on a single physical machine. Each kernel instance can run on dedicated CPU cores while sharing the underlying hardware resources.

↫ Cong Wang on the LKML

The idea is that you can run multiple instances of the Linux kernel on different CPU cores using kexec, with a dedicated IPI framework taking care of communication between these kernels. The benefits for fault isolation and security are obvious, and it supposedly uses fewer resources than running virtual machines through KVM and similar technologies.
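As a userspace analogy only – the patch series defines an in-kernel IPI framework, not this API – the model of “independent instances sharing one machine, talking over an explicit channel” can be sketched like this:

```python
from multiprocessing import Pipe, Process

# Userspace analogy: two isolated "kernel instances" (here, OS processes)
# share one physical machine but no memory, and communicate only over an
# explicit message channel -- the role the IPI framework plays in the
# proposed multikernel patches.

def kernel_instance(name, conn):
    """A minimal 'kernel' that services one request and exits."""
    msg = conn.recv()
    conn.send(f"{name} acked: {msg}")
    conn.close()

if __name__ == "__main__":
    ours, theirs = Pipe()
    other = Process(target=kernel_instance, args=("kernel-B", theirs))
    other.start()
    ours.send("ping from kernel-A")
    print(ours.recv())           # kernel-B acked: ping from kernel-A
    other.join()
```

The isolation is the point: a fault in one instance doesn’t take down its neighbour, which is much the same argument made for the real patches.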

The main feature I’m interested in is that this would potentially allow for “kernel handover”, in which the system goes from using one kernel to the other. I wonder if this would make it possible to implement a system similar to what Android currently uses for updates, where new versions are installed alongside the one you’re running right now, with the system switching over to the new version upon reboot. If you could do something similar with this technology without even having to reboot, that would be quite amazing and a massive improvement to the update experience.

It’s obviously just a proposal for now, and there will be much, much discussion to follow I’m sure, but the possibilities are definitely exciting.

History of the GEM desktop environment

The 1980s saw a flurry of graphical user interfaces pop up, almost all of them in some way made by people who got to see the work done by Xerox. Today’s topic is no exception – GEM was developed by Lee Jay Lorenzen, who worked at Xerox and wished to create a cheaper, less resource-intensive alternative to the Xerox Star, which he got to do at DRI after leaving Xerox. His work was then shown off to Atari, who were interested in using it.

The entire situation was pretty hectic for a while: DRI’s graphics group worked on the PC version of GEM on MS-DOS; Atari developers were porting it to Apple Lisas running CP/M-68K; and Loveman was building GEMDOS. Against all odds, they succeeded. The operating system for Atari ST consisting of GEM running on top of GEMDOS was named TOS which simply meant “the operating system”, although many believed “T” actually stood for “Tramiel”.

Atari 520 ST, soon nicknamed “Jackintosh”, was introduced at the 1985 Consumer Electronics Show in Las Vegas and became an immediate hit. GEM ran smoothly on the powerful ST’s hardware, and there were no clones to worry about. Atari developed its branch of GEM independently of Digital Research until 1993, when the Atari ST line of computers was discontinued.

↫ Nemanja Trifunovic at Programming at the right level

Other than through articles like these and the occasional virtual machine, I have no experience with the various failed graphical user interfaces of the 1980s, since I was too young at the time. Even from the current day, though, it’s easy to see how all of them can be traced back directly to the work done at Xerox, and just how much we owe to the people working there at the time.

Now that the technology industry is as massive as it is, with the stakes being so high, it’s unlikely we’ll ever see a place like Xerox PARC ever again. Everything is secretive now, and if a line of research doesn’t obviously lead to massive short-term gains, it’s canned before it even starts. The golden age of wild, random computer research without a profit motive is clearly behind us, and that’s sad.

Dark patterns killed my wife’s Windows 11 installation

Last night, my wife looks up from her computer, troubled. She tells me she can’t log into her computer running Windows 11, as every time she enters the PIN code to her account, the login screen throws up a cryptic error: “Your credentials could not be verified”. She’s using the correct PIN code, so that surely isn’t it. We opt for the gold standard in troubleshooting and perform a quick reboot, but that doesn’t fix it. My initial instinct is that since she’s using an online account instead of a local one, perhaps Microsoft is having some server issues? A quick check online indicates that no, Microsoft’s servers seem to be running fine, and to be honest, I don’t even know if that would have an effect on logging into Windows in the first place.

The Windows 11 login screen does offer a link to click in case you’ve forgotten your PIN code. Despite the fact that the PIN code she’s entering is correct, we try to go through this process to see if it goes anywhere. This is where things really start to get weird. A few dialogs flash in and out of existence, until it’s showing us a dialog telling us to insert a security USB key of some sort, which we don’t have. Dismissing it gives us an option to try other login methods, including a basic password login. This, too, doesn’t work; just like with the PIN code, Windows 11 claims the accurate, correct password my wife is entering is invalid (just to be safe, we tested it by logging into her Microsoft account on her phone, which works just fine).

In the account selection menu in the bottom-left, an ominous new account mysteriously appears: WsiAccount.

The next option we try is to actually change the PIN code. This doesn’t work either. Windows wants us to use a second factor using my wife’s phone number, but this throws up another weird error, this time claiming the SMS service to send the code isn’t working. A quick check online once again confirms the service seems to be working just fine for everybody else. I’m starting to get really stumped and frustrated.

Of course, during all of this, we’re both searching the web to find anything that might help us figure out what’s going on. None of our searches bring up anything useful, and none of our findings seem to be related to or match up with the issue we’re having. While she’s looking at her phone and I’m browsing on my Fedora/KDE PC next to hers, she quickly mentions she’s getting a notification that OneDrive is full, which is odd, since she doesn’t use OneDrive for anything.

We take this up as a quick sidequest, and we check up on her OneDrive account on her phone. As OneDrive loads, our jaws drop in amazement: a big banner warning is telling her she’s using over 5500% of her 5GB free account. We look at each other and burst out laughing. We exchange some confused words, and then we realise what is going on: my wife just got a brand new Samsung Galaxy S25, and Samsung has some sort of deal with Microsoft to integrate its services into Samsung’s variant of Android. Perhaps during the process of transferring data and applications from her old to her new phone, OneDrive syncing got turned on? A quick trip to the Samsung Gallery application confirms our suspicions: the phone is synchronising over 280GB of photos and videos to OneDrive.

My wife was never asked for consent to turn this feature on, so it must’ve been turned on by default. We quickly turn it off, delete the 280GB of photos and videos from OneDrive, and move on to the real issue at hand.

Since nothing seems to work, and none of what we find online brings us any closer to what’s going on with her Windows 11 installation, we figure it’s time to bring out the big guns. For the sake of brevity, let’s run through the things we tried. Booting into safe mode doesn’t work; we get the same login problems. Trying to uninstall the latest updates, an option in WinRE, doesn’t work, and throws up an unspecified error. We try to use a restore point, but despite knowing with 100% certainty that the feature to periodically create restore points is enabled, the only available restore point is from 2022, and is located on a drive other than her root drive (or “C:\” in Windows parlance). Using the reset option in WinRE doesn’t work either, as it also throws up an error, this time about not having enough free space. I also walk through a few more complex suggestions, like a few manual registry hacks related to the original error using cmd.exe in WinRE. None of it yields any results.

It’s now approaching midnight, and we need to get up early to drop the kids off at preschool, so I tell my wife I’ll reinstall her copy of Windows 11 tomorrow. We’re out of ideas.

The next day, I decide to give it one last go before opting for the trouble of going through a reinstallation. The one idea I still have left is to enable the hidden administrator account in Windows 11, which gives you password-free access to what is basically Windows’ root account. It involves booting into WinRE, loading up cmd.exe, and replacing utilman.exe in system32 with cmd.exe:

move c:\windows\system32\utilman.exe c:\
copy c:\windows\system32\cmd.exe c:\windows\system32\utilman.exe

If you then proceed to boot into Windows 11 and click on the Accessibility icon in the bottom-right, it will open “utilman.exe”, but since that’s just cmd.exe with the utilman.exe name, you get a command prompt to work with, right on the login screen. From here, you can launch regedit, find the correct key, change a REG_BINARY, save, and reboot. At the login screen, you’ll see a new “administrator” account with full access to your computer.
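Incidentally, the registry edit isn’t the only route. A more commonly documented way to enable the hidden administrator account, assuming you already have that login-screen command prompt from the utilman.exe swap, is to activate it directly:

net user administrator /active:yes

After a reboot, the Administrator account should appear on the login screen. Once you’re done, it’s wise to deactivate it again (net user administrator /active:no) and move the original utilman.exe back into place.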

During the various reboots, I do some more web searching, and I stumble upon a post on /r/WindowsHelp from 7 months ago. The user William6212 isn’t having the exact same issues and error messages we’re seeing, but it’s close enough that it warrants a look at the replies. The top reply by user lemonsandlimes30 contains just two words:

storage full

↫ lemonsandlimes30, the real MVP

And all of a sudden all the pieces of the puzzle fall into place. I instantly figure out the course of events: my wife gets her new Galaxy S25, and transfers over the applications and data from her old phone. During this setup process, the option in the Samsung Gallery application to synchronise photos and videos to OneDrive is enabled without her consent and without informing her. The phone starts uploading the roughly 280GB of photos and videos from her phone to her 5GB OneDrive account, and she gets a warning notification that her OneDrive storage is a bit full.

And now her Windows 11 PC enters the scene. Despite me knowing with 100% certainty I deleted OneDrive completely off her Windows 11 PC, some recent update or whatever must’ve reinstalled it and enabled its synchronisation feature, which in turn, right as my wife’s new phone secretly started uploading her photos and videos to OneDrive, started downloading those same photos and videos to her Windows 11’s relatively small root drive. All 280GB of them.

Storage full.

The reboots done, the secret passwordless administrator account is indeed available on the login screen. I log in, wait for Windows 11’s stupid out-of-box-experience thing to run its course, immediately open Explorer, and there it is: her root drive is completely full, with a mere 25MB or so available. We go into her account’s folder, delete the OneDrive folder and its 280GB of photos and videos, and remove OneDrive from her computer once again. Hopefully this will do the trick.

It didn’t. We still can’t log in, as the original issue persists. I log back into the administrator account, open up compmgmt.msc, go to Users, and try to change my wife’s password. No luck – it’s an online account, and it turns out you can’t change the password of such an account using traditional user management tools; you have to log into your Microsoft account on the web, and change your password there. After we do this, we can finally log back into her Windows 11 account with the newly-set password.

We fixed it.

Darkest of patterns

My wife and I fell victim to a series of dark patterns that nearly rendered her Windows 11 installation unrecoverable. The first dark pattern is Samsung enabling the OneDrive synchronisation feature without my wife’s consent and without informing her. The second dark pattern is Microsoft reinstalling OneDrive onto my wife’s PC without my wife’s consent and without informing her. The third dark pattern is OneDrive secretly downloading 280GB of photos and videos without once checking whether her root drive could even store that much data. The fourth and final dark pattern runs through all of this like a red thread: Microsoft’s insistence on forcefully converting every local Windows 11 user account to an online Microsoft account.

This tragedy of dark patterns then neatly cascaded into a catastrophic comedy of bugs, where a full root drive apparently corrupts online Microsoft accounts on Windows 11 so hard they become essentially unrecoverable. There were no warnings and no informational popups. Ominous user accounts appeared on the login screen. Weird suggestions to use corporate-looking security USB keys popped up. Windows wrongfully told my wife the PIN code and password she entered were incorrect. The suggestion to change the password or PIN code broke completely. All the well-known rescue options any average user would turn to in WinRE threw up cryptic errors.

At this point, any reasonable person would assume their Windows 11 installation was unrecoverable, or worse, that some sort of malware had taken over their machine – ominous “WsiAccount” and demands for a security USB key and all. The only course of action most Windows users would take at this point is a full reinstallation. If it wasn’t for me having just enough knowledge to piece the puzzle together – thank you lemonsandlimes30 – we’d be doing a reinstallation today, possibly running into the issue again a few days or weeks later.

No sane person would go this deep to try and fix this problem.

This cost us hours and hours of our lives, causing especially my wife a significant amount of stress, during an already very difficult time in our lives (which I won’t get into). I’m seething with rage towards Microsoft and its utter incompetence and maliciousness. Let me, for once, not mince words here: Windows 11 is a travesty, a loose collection of dark patterns and incompetence, run by people who have zero interest in lovingly crafting an operating system they can be proud of. Windows has become a vessel for subscriptions and ads, and cannot reasonably be considered anything other than a massive pile of user-hostile dark patterns designed to extract data, ad time, and subscription money from its users.

If you can switch away and ditch Windows, you should. The ship is burning, and there’s nobody left to put out the fires.

Intel to build x86 CPUs with NVIDIA graphics, most likely spelling the end of ARC

Intel is in very dire straits, and as such, the company needs investments and partnerships more than anything. Today, NVIDIA and Intel announced just such a partnership, in which NVIDIA will invest $5 billion into the troubled chip giant, while the two companies will develop products that combine Intel’s x86 processors with NVIDIA’s GPUs.

For data centers, Intel will build NVIDIA-custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms and offer to the market.

For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.

↫ NVIDIA press release

My immediate reaction to this news was to worry about the future of Intel’s ARC graphics efforts. Just as the latest crop of ARC GPUs has received a ton of good press and positive feedback, with some of their cards becoming the go-to suggestion for a budget-friendly but almost on-par alternative to offerings from NVIDIA and AMD, it would be a huge blow to user choice and competition if Intel were to abandon the effort.

I think this news pretty much spells the end for the ARC graphics effort. Making dedicated GPUs able to compete with AMD and NVIDIA must come at a pretty big financial cost for Intel, and I wouldn’t be surprised if they’ve been itching to find an excuse to can the whole project. With NVIDIA GPUs fulfilling the role of more powerful integrated GPUs, all Intel really needs is a skeleton crew developing the basic integrated GPUs for cheaper and non-gaming oriented devices, which would be a lot cheaper to maintain.

For just $5 billion, NVIDIA most likely just eliminated a budding competitor in the GPU space. That’s cheap.