Microsoft finally allows you to name your own home folder during Windows setup

It’s only a small annoyance in the grand scheme of the utter idiocy that is modern Windows, but apparently it’s one enough people complained about that Microsoft is finally addressing. In all of its wisdom, Microsoft doesn’t allow you to set the name of your user’s home folder during the installation procedure of Windows 11. The folder’s name is automatically generated based on your Microsoft account’s username or email address, something I’ve personally really disliked since I have been using thomholwerda for as long as I can remember. Last year, they introduced an incredibly obtuse method of setting your own home folder name, but now the company is finally adding it as an optional step during the regular installation process. Expanding on our work which started rolling to Insiders last fall, you can now choose a custom name for your user folder on the Device Name page when going through Windows setup. This most recent update now makes it easier to choose a custom name. The naming option is available during setup only. If you skip this step, Windows will use the default folder name and continue setup as usual. ↫ Windows Insider Program Team This means you now have the option of defining your own home folder name, excluding CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, COM¹, COM², COM³, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9, LPT¹, LPT², and LPT³. It’s a very small change, and certainly not something that will turn Windows’ ship around, but at least it’s something that’s being done for users who actually care. It’s also such a small change, such a small addition, that one wonders why it’s taken them this long.
I’m assuming there’s already some incredibly complex and hacky way to change your automatically assigned home folder name by diving deep into the registry, converting your root drive back to FAT16, changing some values in a DLL file through a hex editor, and then converting back to NTFS, but this is clearly a much better way of handling it.
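The list of excluded names above isn’t arbitrary: these are the legacy DOS device names Windows has always treated as reserved, case-insensitively, and even when followed by an extension (CON.txt is just as forbidden as CON). As a rough sketch of the kind of check involved, using the exact names from the announcement (the function name and rule set are my own illustration, not Microsoft’s actual code):

```python
# Reserved Windows device names, as listed in the announcement above.
RESERVED = {
    "CON", "PRN", "AUX", "NUL",
    *(f"COM{i}" for i in "123456789\u00b9\u00b2\u00b3"),
    *(f"LPT{i}" for i in "123456789\u00b9\u00b2\u00b3"),
}

def is_valid_home_folder_name(name: str) -> bool:
    """Reject empty names and reserved device names.

    Windows matches device names case-insensitively, and a reserved
    name followed by an extension (e.g. "prn.txt") is also reserved,
    so only the part before the first dot matters here.
    """
    if not name:
        return False
    stem = name.split(".", 1)[0].upper()
    return stem not in RESERVED
```

Note that a name merely *containing* a reserved string is fine — “console” passes — which is why the stem check, not a substring check, is the right shape.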

CSMWrap: make UEFI-only systems boot BIOS-based operating systems

What if you have a very modern machine that is entirely UEFI-only, meaning it has no compatibility support module and thus no way of enabling a legacy BIOS mode? Well, install a CSM as an EFI application, of course! CSMWrap is an EFI application designed to be a drop-in solution to enable legacy BIOS booting on modern UEFI-only (class 3) systems. It achieves this by wrapping a Compatibility Support Module (CSM) build of the SeaBIOS project as an out-of-firmware EFI application, effectively creating a compatibility layer for traditional PC BIOS operation. ↫ CSMWrap’s GitHub page The need for this may not be immediately obvious, but here’s the problem: if you want to run an older operating system that absolutely requires a traditional BIOS on a modern machine that only has UEFI without any CSM options (a class 3 machine), you won’t be able to boot said operating system. CSMWrap is a possible solution, as it leverages innate EFI capabilities to run a CSM as an EFI application, thereby adding the CSM functionality back in. All you need to do is drop CSMWrap into /efi/boot on the same drive the operating system that needs BIOS to boot is on, and UEFI will list it as a bootable operating system. It does come with some limitations, however. For instance, one logical core of your processor will be taken up by CSMWrap and will be entirely unavailable to the booted BIOS-based operating system. In other words, you’re going to need a processor with more than one logical core (even a single-core machine with hyperthreading will work). It’s also suggested to add a legacy-capable video card if you’re using an operating system that doesn’t support VESA BIOS extensions (e.g. anything older than NT). This is an incredibly neat idea, and even comes with advantages over built-in CSMs, since many of those are untested and riddled with issues. CSMWrap uses SeaBIOS, which is properly tested and generally a much better BIOS than whatever native CSMs contain.
All in all, a great project.

Understanding SMF properties in Solaris-based operating systems

SMF is the illumos system for managing traditional Unix services (long-lived background processes, usually). It’s quite rich in order to correctly accommodate a lot of different use cases. But it sometimes exposes that complexity to users even when they’re trying to do something simple. In this post, I’ll walk through an example using a demo service and the svcprop(1) tool to show the details. ↫ Dave Pacheco Solaris’ service management facility, or SMF, is effectively Solaris’ systemd, and this article provides a deeper insight into one of its features: properties. While using SMF and its suite of tools and commands for basic tasks is rather elementary and easy to get into – even I can do it – once you start to dive deeper into what it can do, things get complex and capable very fast.
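For a feel of what the article is working with: svcprop(1) prints one property per line as “propertygroup/property type value”. A small sketch that turns that output into something a script can query (the sample text below is illustrative, not captured from a real illumos system):

```python
# Parse svcprop(1)-style output: "propertygroup/property type value",
# one property per line, whitespace-separated.

def parse_svcprop(output: str) -> dict:
    """Map 'pg/prop' -> (type, value) from svcprop-style output."""
    props = {}
    for line in output.strip().splitlines():
        name, ptype, value = line.split(" ", 2)
        props[name] = (ptype, value)
    return props

# Illustrative sample; real svcprop output escapes embedded spaces
# with backslashes, as in the start/exec line here.
sample = """\
general/enabled boolean true
start/exec astring /lib/svc/method/sshd\\ start
restarter/state astring online
"""
```

With this in hand, `parse_svcprop(sample)["restarter/state"]` gives `("astring", "online")` — the kind of lookup the article does interactively with `svcprop -p`.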

Chrome comes to Linux on ARM64

Google has announced that it will release Chrome for Linux on ARM64 in the second quarter of this year. Launching Chrome for ARM64 Linux devices allows more users to enjoy the seamless integration of Google’s most helpful services into their browser. This move addresses the growing demand for a browsing experience that combines the benefits of the open-source Chromium project with the Google ecosystem of apps and features. This release represents a significant undertaking to ensure that ARM64 Linux users receive the same secure, stable, and rich Chrome experience found on other platforms. ↫ The Chromium Blog While the idea of running Linux on Arm only to defile it with something as unpleasant as Chrome seems entirely foreign to me, most normal people do actually use Google’s browser. Having it available on Linux for Arm makes perfect sense, and might convince a few people to buy an Arm machine for Linux, assuming the platform can get its act together.

Just try Plan 9 already

I will not pass up an opportunity to make you talk about Plan 9, so let’s focus on Acme. Acme is remarkable for what it represents: a class of application that leverages a simple, text-based GUI to create a compelling model of interacting with all of the tools available in the Unix (or Plan 9) environment. Cox calls it an “integrating development environment,” distinguishing it from the more hermetic “integrated development environment” developers will be familiar with. The simplicity of its interface is important. It is what has allowed Acme to age gracefully over the past 30 or so years, without the constant churn of adding support for new languages, compilers, terminals, or color schemes. ↫ Daniel Moch While the article mentions you can use Acme on UNIX, to really appreciate it you have to use it on Plan 9, which today most likely means 9front. Now, I am not the kind of person who can live and breathe inside 9front – you need to be of a certain mindset to be able to do so – but even then I find that messing around with Plan 9 has given me a different outlook on UNIX. In fact, I think it has helped me understand UNIX and UNIX-like systems better and more thoroughly. If you’re not sure if Plan 9 is something that suits you, the only real way to find out is to just use it. Fire up a VM, read the excellent documentation at 9front, and just dive into it. Most of you will just end up confused and disoriented, but a small few of you will magically discover you possess the right mindset. Just do it.

Fedora struggles bringing its RISC-V variant online due to slow build times

Red Hat developer Marcin Juszkiewicz is working on the RISC-V port of Fedora Linux, and after a few months of working on it, published a blog post about just how incredibly slow RISC-V seems to be. This is a real problem, as in Fedora, build results are only released once all architectures have completed their builds. There is no point of going for inclusion with slow builders as this will make package maintainers complain. You see, in Fedora build results are released into repositories only when all architectures finish. And we had maintainers complaining about lack of speed of AArch64 builders in the past. Some developers may start excluding RISC-V architecture from their packages to not have to wait. And any future builders need to be rackable and manageable like any other boring server (put in a rack, connect cables, install, do not touch any more). Because no one will go into a data centre to manually reboot an SBC-based builder. Without systems fulfilling both requirements, we can not even plan for the RISC-V 64-bit architecture to became one of official, primary architectures in Fedora Linux. ↫ Marcin Juszkiewicz RISC-V really seems to have hit some sort of ceiling over the past few years, with performance improvements stalling and no real performance-oriented chips and boards becoming available. Everybody seems to want RISC-V to succeed and become an architecture that can stand its own against x86 and Arm, but the way things are going, that just doesn’t seem likely any time soon. There’s always some magical unicorn chip or board just around the corner, but when you actually turn that corner, it’s just another slow SBC only marginally faster than the previous one. Fedora is not the first distribution struggling with bringing RISC-V online. Chimera Linux faced a similar issue about a year ago, but managed to eventually get by because someone from the Adélie Linux team granted remote access to an unused Milk-V Pioneer, which proved enough for Chimera for now. 
My hope is still that eventually we’re going to see performant, capable RISC-V machines, because I would absolutely jump for joy if I could have a proper RISC-V workstation.

Amazon enters “find out” phase

Now let’s go live to Amazon for the latest updates about this developing story. Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools. The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT. Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.” ↫ Rafe Rosner-Uddin at Ars Technica Oh boy.

You’re supposed to replace the stock photos in new picture frames

Back in 2023, John Earnest created a fun drawing application called WigglyPaint. The thing that makes WigglyPaint unique is that it automatically applies what artists call the line boil effect to anything you draw, making it seem as if everything is wiggling (hence the name). Even if you’re not aware of the line boil effect, you’ve surely encountered it several times in your life. The tool may seem simple at first glance, but as Earnest details, he’s put quite a lot of thought into the little tool. WigglyPaint was well-received, but mostly remained a curiosity – that is, until artists in Asia picked up on it, and the popularity of WigglyPaint positively exploded from a few hundred into the millions. The problem, though, is that basically nobody is actually using WigglyPaint: they’re all using slopcoded copycats. The sites are slop; slapdash imitations pieced together with the help of so-called “Large Language Models” (LLMs). The closer you look at them, the stranger they appear, full of vague, repetitive claims, outright false information, and plenty of unattributed (stolen) art. This is what LLMs are best at: quickly fabricating plausible simulacra of real objects to mislead the unwary. It is no surprise that the same people who have total contempt for authorship find LLMs useful; every LLM and generative model today is constructed by consuming almost unimaginably massive quantities of human creative work- writing, drawings, code, music- and then regurgitating them piecemeal without attribution, just different enough to hide where it came from (usually). LLMs are sharp tools in the hands of plagiarists, con-men, spammers, and everyone who believes that creative expression is worthless. People who extract from the world instead of contributing to it. It is humiliating and infuriating to see my work stolen by slop enthusiasts, and worse, used to mislead artists into paying scammers for something that ought to be free. 
↫ John Earnest There’s a huge number of slopcoded WigglyPaint ripoffs out there, and it goes far beyond websites, too. People are putting slopcoded ripoffs in basic webviews, and uploading them en masse to the Play Store and App Store. None of these slopcoded ripoffs actually build upon WigglyPaint with new ideas or approaches: there’s no creativity or innovation; it’s just trash barfed up by glorified autocomplete built upon mass plagiarism and theft, “made” by bottom feeders who despise creativity, art, and originality. You know how when you go to IKEA or whatever other similar store to buy picture frames, they have these stock photos of random people in them? I wonder if “AI” enthusiasts understand you’re supposed to replace those with pictures that actually have meaning to you.

Redox bans code regurgitated by “AI”

Redox, the rapidly improving general purpose operating system written in Rust, has amended its contribution policy to explicitly ban code regurgitated by “AI”. Redox OS does not accept contributions generated by LLMs (Large Language Models), sometimes also referred to as “AI”. This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project. ↫ Redox’ contribution policy Excellent news.

FreeBSD 14.4 released

While FreeBSD 15.x may be getting all the attention, the FreeBSD 14.x branch continues to be updated for the more conservative users among us. FreeBSD 14.4 has been released today, bringing with it updated versions of OpenSSH and OpenZFS, while Bhyve virtual machines can now share files with their host over 9pfs – among other things, of course.

ArcaOS 5.1.2 released

While IBM’s OS/2 technically did die, its development was picked up again much later, first through eComStation, and later, after money issues at its parent company Mensys, through ArcaOS. eComStation development stalled because of the money issues and has been dead for years; ArcaOS picked up where it left off and has been making steady progress since its first release in 2017. Regardless, the developers behind both projects develop OS/2 under license from IBM, but it’s unclear just how much they can change or alter, and what the terms of the agreement are. Anyway, ArcaOS 5.1.2 has just been released, and it seems to be a rather minor release. It further refines ArcaOS’ support for UEFI and GPT-based disks, the tentpole feature of ArcaOS 5.1 which allows the operating system to be installed on much more modern systems without having to fiddle with BIOS compatibility modes. Looking at the list of changes, there’s the usual list of updated components from both Arca Noae and the wider OS/2 community. You’ll find the latest versions of the Panorama graphics drivers, ACPI, USB, and NVMe drivers, improved localisation, newer versions of the VNC server and viewer, and much more. If you have an active Support & Maintenance subscription for ArcaOS 5.1, this update is free, and it’s also available at discounted prices as upgrades for earlier versions. A brand new copy of ArcaOS 5.1.x will set you back $139, which isn’t cheap, but considering this price is probably a consequence of what must be some onerous licensing terms and other agreements with IBM, I doubt there’s much Arca Noae can do about it.

“AI” translations are ruining Wikipedia

Oh boy. Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI “hallucinations,” or errors, to the resulting article. ↫ Emanuel Maiberg at 404 Media There seems to be this pervasive conviction among Silicon Valley techbro types, and many programmers and developers in general, that translation and localisation are nothing more than basic find/replace tasks that you can automate away. At first, we just needed to make corpora of two different languages kiss and smooch, and surely that would automate translation and localisation away if the corpora were large enough. When this didn’t turn out to work very well, they figured that if we made the words in the corpora tumble down a few pachinko machines and then made them kiss and smooch, yes, then we’d surely have automated translation and localisation. Nothing could be further from the truth. As someone who has not only worked as a professional translator for over 15 years, but who also holds two university degrees in the subject, I keep reiterating that translation isn’t just a dumb substitution task; it’s a real craft, a real art, one you can have talent for, one you need to train for, and study for. You’d think anyone with sufficient knowledge in two languages can translate effectively between the two, but without a much deeper understanding of language in general and the languages involved in particular, as well as a deep understanding of the cultures in which the translation is going to be used, and a level of reading and text comprehension that goes well beyond that of most people, you’re going to deliver shit translations. Trust me, I’ve seen them. I’ve been paid good money to correct, fix, and mangle something usable out of other people’s translations. You wouldn’t believe the shit I’ve seen.
Translation involves the kinds of intricacies, nuances, and context “AI” isn’t just bad at, but simply cannot work with in any way, shape, or form. I’ve said it before, but it won’t be long before people start getting seriously injured – or worse – because of the cost-cutting in the translation industry, and the effects that’s going to have on, I don’t know, the instruction manuals for complex tools, or the leaflet in your grandmother’s medications. Because some dumbass bean counter kills the budget for proper, qualified, trained, and experienced translators, people are going to die.

“I don’t know what is Apple’s endgame for the Fn/Globe key, and I’m not sure Apple knows either”

Every modifier key starts simple and humble, with a specific task and a nice matching name. This never lasts. The tasks become larger and more convoluted, and the labels grow obsolete. Shift no longer shifts a carriage, Control doesn’t send control codes, Alt isn’t for alternate nerdy terminal functions. Fn is the newest popular modifier key, and it feels we’re speedrunning it through all the challenges without having learned any of the lessons. ↫ Marcin Wichary Grab a blanket, curl up on the couch with some coffee or tea, and enjoy.

Is Windows 11 Forcing Developers to Abandon Legacy Toolchains?

Windows 11 has never pretended to be a clean backward-compatibility story. Since its launch, Microsoft has systematically trimmed support for aging APIs, drivers, and runtimes — and developers maintaining legacy codebases are absorbing the consequences. The question worth asking in 2026 isn’t whether these deprecations cause friction. They clearly do. The real question is whether that friction is severe enough to push serious developers away from Windows entirely. The answer is complicated. Microsoft’s deprecation schedule is aggressive, but the ecosystem response has been pragmatic rather than revolutionary. Developers aren’t fleeing en masse — they’re adapting, often through containerization and virtualization workarounds that add complexity without solving root problems.

Where Regulated Software Sectors Feel It Most

Industries running long-lifecycle software — healthcare, manufacturing, financial services — feel Windows 11’s deprecations most acutely. These environments commonly depend on legacy Visual C++ redistributables, hardcoded system paths, and DLL dependencies that assume a Windows 7 or Windows 10 runtime environment. Refactoring isn’t a sprint; it’s a multi-year program. The October 2025 end-of-support deadline for Windows 10 accelerated these conversations significantly. Organizations that delayed migration decisions now face either extended security update costs or forced compatibility work — neither of which was budgeted under normal refresh cycles.
Which Legacy APIs Windows 11 Actually Breaks

According to Microsoft’s deprecated features documentation, APIs including NPLogonNotify and NPPasswordChangeNotify have had their password payload functionality disabled by default starting in Windows 11 version 24H2, with potential full removal signaled for future releases. This matters most for authentication middleware and enterprise SSO integrations built years ago with assumptions about credential pipeline access. The kernel-level changes arriving in April 2026 compound this. Microsoft is now blocking legacy cross-signed drivers by default — a policy shift affecting toolchain components that have operated uninterrupted for decades. Older trusted drivers retain compatibility for now, but the direction of travel is clear: unsigned or legacy-signed kernel code is getting progressively harder to run without explicit policy overrides.

How Developers Are Responding to Forced Migration

Containerization has become the dominant short-term response. Vendors like Numecent and Cloudhouse offer packaging solutions that isolate legacy runtimes — including 16-bit emulation and Windows XP compatibility modes — inside containers that run on Windows 11 without requiring refactoring. This buys time, but it doesn’t eliminate technical debt. As XMA’s migration analysis notes, while 99.7% of applications are compatible with Windows 11, the remaining 0.3% are disproportionately critical legacy systems that can block entire enterprise upgrade pipelines. For development teams maintaining those systems, workarounds like Azure Virtual Desktop or Windows 365 are Microsoft’s preferred answer — cloud-hosted compatibility rather than native resolution.

Does Linux Finally Win the Developer Desktop?

No direct evidence suggests Windows 11 is triggering a meaningful migration of developers to Linux or macOS as a primary environment. Microsoft’s own response to compatibility pressure consistently points back to Windows-native solutions.
Tools like UiPath Studio, for instance, still maintain Windows-Legacy .NET Framework 4.6.1 support — signaling that the ecosystem isn’t yet willing to cut that rope entirely. What’s actually shifting is the developer mental model around dependency management. The assumption that Windows will perpetually run anything from any era is visibly eroding. Developers building new toolchains today are making different architectural choices — favoring cross-platform runtimes, containerized builds, and abstracted driver interfaces precisely because Windows’ compatibility guarantees feel less permanent than they once did. Linux gains ground not through dramatic defection but through incremental preference shifts among developers who simply want fewer surprises.

MenuetOS 1.59.20 released

MenuetOS, the operating system written in x86-64 assembly, has released two new versions since we last talked about it roughly two months ago. In fact, I’m not actually sure it’s just two, or more, or fewer, since it seems sometimes releases disappear entirely from the changelog, making things a bit unclear. Anyway, since the last time we talked about MenuetOS, it got improvements to videocalling, networking, and HDA audio drivers, and a few other small tidbits.

5 Ways RFID Tracking Reduces Equipment Loss and Inventory Shrinkage

Equipment loss and inventory shrinkage create real challenges for businesses that depend on accurate asset visibility. Missing tools, misplaced materials, and incorrect stock counts slow operations and increase costs. Companies that manage large volumes of equipment require reliable systems that provide clarity and control across every stage of movement. Modern solutions now help organizations maintain accurate oversight without adding complexity. One such solution is RFID tracking, which enables businesses to monitor assets in real time and reduce manual errors. This article explains how this technology improves visibility, strengthens accountability, and helps organizations protect valuable resources while improving operational efficiency.

1. Improved Real-Time Visibility Across Locations

Clear visibility helps organizations reduce loss before it becomes a larger issue. RFID technology allows businesses to monitor equipment movement across warehouses, job sites, and storage areas without manual checks. As a result, teams quickly identify the location of tools and materials. When equipment moves between departments, the system records each transition automatically. This visibility helps managers track usage patterns and maintain accurate records. Employees access system data and locate items within seconds, which reduces downtime and supports smoother operations. Better awareness also helps teams respond quickly when items move outside designated zones.

2. Stronger Accountability Among Teams

Accountability improves when equipment usage becomes transparent. RFID-enabled systems create detailed records of when items move, who accessed them, and where they traveled. This clarity encourages responsible handling of shared assets and reduces confusion across teams. Employees follow structured processes that remove uncertainty about equipment ownership. When tools return to designated areas, the system automatically updates their status and improves record accuracy. Managers gain confidence in asset visibility, while organizations refine workflows based on usage patterns. Over time, this structured accountability strengthens operational discipline and reduces inventory shrinkage.

3. Faster Inventory Audits With Automated Data

Manual inventory audits require time, coordination, and consistent attention. RFID-based solutions simplify this process by collecting data automatically. Teams complete audits faster while maintaining accuracy and reducing operational disruptions. Automated audits also allow businesses to conduct more frequent checks. This consistent oversight helps prevent shrinkage and keeps asset records updated without increasing workload.

4. Prevention of Misplacement and Unauthorized Movement

Misplacement contributes significantly to equipment loss. RFID systems help prevent this issue by alerting teams when items move beyond approved zones. These alerts help organizations act quickly and maintain control over valuable assets. Location-based tracking also supports better organization within storage facilities. Equipment remains in assigned areas, and teams locate items quickly when needed. This structured layout improves operational efficiency and reduces time spent searching. Managers also track movement across departments, which strengthens oversight and supports consistent workflows.

5. Data Insights That Support Smarter Decisions

Accurate data helps organizations refine asset management strategies. RFID systems generate reports that highlight usage trends, movement patterns, and storage efficiency. These insights help businesses allocate resources effectively and improve asset utilization. With RFID tracking, organizations identify underused equipment and redistribute assets where needed. This improves utilization and reduces unnecessary purchases. Better forecasting also supports long-term planning and improves operational efficiency.
Data-driven insights strengthen decision-making and help businesses maintain accurate inventory control. Equipment loss and inventory shrinkage affect operational efficiency and financial performance. RFID-based solutions provide clear visibility, stronger accountability, and faster audits that support accurate asset management. Organizations that adopt structured tracking systems gain better control over equipment movement and inventory accuracy. With improved data insights and organized workflows, businesses create efficient environments that protect resources and support sustainable growth.

Haiku inches closer to next beta release

And when a Redox monthly progress report is here, Haiku’s monthly report is never far behind (or vice versa, depending on the month). Haiku’s February was definitely a busy month, but there are no major tentpole changes or new features, highlighting just how close Haiku is to a new regular beta release. The OpenBSD drivers have been synchronised with upstream to draw in some bugfixes, there’s a ton of smaller fixes to various applications like StyledEdit, Mail, and many more, as well as a surprisingly long list of various file system fixes, improving the drivers for file systems like NTFS, Btrfs, XFS, and others. There’s more, of course, so just like with Redox, head on over to pore over the list of smaller changes, fixes, and improvements. Just like last month, I’d like to mention once again that you really don’t need to wait for the beta release to try out Haiku. The operating system has been in a fairly stable and solid condition for a long time now, and whatever’s the latest nightly will generally work just fine, and can be updated without reinstallation.

Redox gets NodeJS, COSMIC’s compositor, and much more

February has been a busy month for Redox, the general purpose operating system written in Rust. For instance, the COSMIC compositor can now run on Redox as a winit window, the first step towards fully porting the compositor from COSMIC to Redox. Similarly, COSMIC Settings now also runs on Redox, albeit with only a very small number of available settings as Redox-specific settings panels haven’t been made yet. It’s clear the effort to get the new COSMIC desktop environment from System76 running on Redox is in full swing. Furthermore, Vulkan software can now run on Redox, thanks to enabling Lavapipe in Mesa3D. There’s also a ton of fixes related to the boot process, the reliability of multithreading has been improved, and there’s the usual long list of kernel, driver, and Relibc improvements as well. A major port comes in the form of NodeJS, which now runs on Redox, and helped in uncovering a number of bugs that needed to be fixed. Of course, there’s way more in this month’s progress report, so be sure to head on over and read the whole thing.

Hardware hotplug events on Linux, the gory details

One day, I suddenly wondered how to detect when a USB device is plugged or unplugged from a computer running Linux. For most users, this would be solved by relying on libusb. However, the use case I was investigating might not actually want to do so, and so this led me down a poorly-documented rabbit hole. ↫ ArcaneNibble (or R) And ArcaneNibble (or R) is taking you down with them.
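For context on the rabbit hole: the standard libusb-free mechanism on Linux is the kernel’s uevent broadcast, a netlink socket (protocol NETLINK_KOBJECT_UEVENT) on which the kernel sends a NUL-separated datagram per hotplug event: an “action@devpath” header followed by KEY=VALUE pairs. Whether the article ends up exactly here I’ll leave for you to discover, but a minimal sketch of the plumbing looks like this (the sample payload is illustrative):

```python
# Parse a kernel uevent datagram: "action@devpath" then KEY=VALUE
# fields, all separated by NUL bytes.

def parse_uevent(payload: bytes) -> dict:
    """Split a kernel uevent datagram into a {key: value} dict."""
    fields = payload.split(b"\0")
    action, _, devpath = fields[0].decode().partition("@")
    env = {"ACTION": action, "DEVPATH": devpath}
    for field in fields[1:]:
        if b"=" in field:
            key, _, value = field.decode().partition("=")
            env[key] = value
    return env

def monitor():
    """Print uevents as they arrive (Linux only; may need privileges)."""
    import socket
    NETLINK_KOBJECT_UEVENT = 15  # not exported by the socket module
    s = socket.socket(socket.AF_NETLINK, socket.SOCK_RAW,
                      NETLINK_KOBJECT_UEVENT)
    s.bind((0, 1))  # pid 0: kernel assigns; group 1: kernel broadcasts
    while True:
        print(parse_uevent(s.recv(4096)))

# Illustrative datagram, roughly what a USB plug event looks like:
sample = (b"add@/devices/pci0000:00/usb1/1-2\0"
          b"ACTION=add\0"
          b"DEVPATH=/devices/pci0000:00/usb1/1-2\0"
          b"SUBSYSTEM=usb\0")
```

Note that udev rebroadcasts its own processed events on a different multicast group with extra properties, which is part of why this corner of Linux is as poorly documented as the article suggests.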