Oracle Solaris 11.4 SRU90 released

Despite persistent rumors to the contrary, Oracle is still actively developing Solaris, and it’s been more active than ever lately. Yesterday, the company pushed out another release for customers with the proper support contracts: Oracle Solaris 11.4 SRU90. Aside from the various package updates to bring them up to speed with the latest releases, this new Solaris version also comes with a slew of improvements for ZFS.

ZFS changes in Oracle Solaris 11.4.90 include more flexibility in setting retention properties when receiving a new file system, and adding the ability for zfs scrub and resilver to run before all the blocks have been freed from previous zfs destroy operations. (This requires upgrading pools to the new zpool version 54.)
↫ Alan Coopersmith

You can now also set boot environments to never be destroyed by either manual or automatic means, and more work has been done to prevent a specific type of bug that would accidentally kill all running processes on the system.

It seems some programs mistakenly use -1 as a pid value in kill() calls. Now in 11.4.90, the kill system call was modified to not allow processes to use a pid of -1 unless they’d specifically set a process flag that they intend to kill all processes first, to help with programs that didn’t check for errors when finding the process id for the singular process they wanted to kill.
↫ Alan Coopersmith

There are many more changes and improvements, of course, and hopefully we’ll get to see these in the next CBE release as well, so those of us without expensive support contracts can benefit from them too.
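The failure mode Coopersmith describes is easy to reproduce: a pid-lookup helper returns -1 on failure, the result is passed to kill() unchecked, and POSIX kill() interprets a pid of -1 as "signal every process I'm allowed to signal". A minimal sketch of the defensive guard (the `safe_kill` wrapper below is my own illustration, not Solaris code):

```python
import os
import signal

def safe_kill(pid: int, sig: int) -> None:
    """Send `sig` to exactly one process, refusing broadcast sentinels.

    POSIX kill() gives special meaning to non-positive pids: -1 signals
    every process the caller has permission to signal, 0 signals the
    caller's own process group, and other negative values signal the
    process group |pid|. A lookup helper that returns -1 on "not found"
    therefore turns a missed error check into a system-wide broadcast.
    """
    if pid <= 0:
        raise ValueError(f"refusing non-positive pid {pid}: "
                         "would signal a process group or all processes")
    os.kill(pid, sig)

# Signal 0 performs the permission/existence check without delivering
# anything, so this is a safe self-test:
safe_kill(os.getpid(), 0)
```

Solaris 11.4.90 moves essentially this check into the kernel itself, behind an opt-in process flag.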

Blue-light filters are pure quackery

I was trading New Year’s resolutions with a circle of friends a few weeks ago, and someone mentioned a big one: sleeping better. I’m a visual neuroscientist by training, so whenever the topic pops up it inevitably leads to talking about the dreaded blue light from monitors, blue light filters, and whether they do anything. My short answer is no, blue light filters don’t work, but there are many more useful things that someone can do to control their light intake to improve their sleep—and minimize jet lag when they’re traveling. My longer answer is usually a half-hour rant about why they don’t work, covering everything from a tiny nucleus of cells above the optic chiasm, to people living in caves without direct access to sunlight, to neuropeptides, the different cones, how monitors work, gamma curves, what I learned running ismy.blue, corn bulbs, melatonin, finally sharing my Apple Watch & WHOOP stats. What follows is slightly more than you needed to know about blue light filters and more effective ways to control your circadian rhythm. Spoiler: the real lever is total luminance, not color.
↫ Patrick Mineault

And yet, despite a complete and utter lack of evidence that blue-light filters do anything at all, even the largest technology companies in the world peddle them without so much as blinking an eye. It’s pure quackery, and as always, we let them get away with it.

Windows 11 26H1 will be Snapdragon-specific

As if keeping track of whatever counts as a release schedule for Windows wasn’t complicated enough – don’t lie, you don’t know when that feature they announced is actually being released either – Microsoft is making everything even more complicated. Soon, Microsoft will be releasing Windows 11 26H1, but you most likely won’t be getting it, because it’s strictly limited to devices with Qualcomm’s new Snapdragon X2 Series processors. The only way to get this version of Windows is to go out and buy a device with a Snapdragon X2 Series processor; Windows 11 26H1 will not be made available to any other Windows 11 users, so nobody will be able to upgrade to it. Furthermore, users of Windows 11 26H1 will not be able to update to the feature update planned for late 2026 for Windows 11 24H2 and 25H2, the regular Windows versions. Instead, Microsoft promises there will be an upgrade path for 26H1 users in a “future” release of Windows. Why?

Devices running Windows 11, version 26H1 will not be able to update to the next annual feature update in the second half of 2026. This is because Windows 11, version 26H1 is based on a different Windows core than Windows 11, versions 24H2 and 25H2, and the upcoming feature update. These devices will have a path to update in a future Windows release.
↫ Aria at the Windows IT Pro Blog

The same thing happened when Qualcomm released its first round of Snapdragon processors for Windows, as Windows 11 24H2 was also tied to this specific platform. It seems Microsoft is forced to maintain entirely separate and partially incompatible codebases just to support Snapdragon processors, which must be a major pain in the ass to deal with. Considering Windows on ARM hasn’t exactly been a smashing success, one may wonder how long Microsoft will remain willing to make such exceptions for a singular chip.

Undo in Vi and its successors

So vi only has one level of undo, which is simply no longer fit for the times we live in, and also wholly unnecessary, given that even the least powerful devices that might need to run vi probably have more than enough resources for at least a few more levels of undo. What I didn’t know, however, is that vi’s limited undo behaviour is actually part of POSIX, and for full compliance, you’re going to need it. As Chris Siebenmann notes, vim and its derivatives ignore this POSIX requirement and implement multiple levels of undo in the obviously correct way. What about nvi, the default on the BSD variants? I didn’t know this, but it has a convoluted workaround to both maintain POSIX compatibility and offer multiple levels of undo, and it’s definitely something.

Nvi has opted to remain POSIX compliant and operate in the traditional vi way, while still supporting multi-level undo. To get multi-level undo in nvi, you extend the first ‘u’ with ‘.’ commands, so ‘u..’ undoes the most recent three changes. The ‘u’ command can be extended with ‘.’ in either of its modes (undo’ing or redo’ing), so ‘u..u..’ is a no-op. The ‘.’ operation doesn’t appear to take a count in nvi, so there is no way to do multiple undos (or redos) in one action; you have to step through them by hand. I’m not sure how nvi reacts if you want do things like move your cursor position during an undo or redo sequence (my limited testing suggests that it can perturb the sequence, so that ‘.’ now doesn’t continue undoing or redoing the way vim will continue if you use ‘u’ or Ctrl-r again).
↫ Chris Siebenmann

Siebenmann lists a few other implementations and how they handle undo, and it’s interesting to see how all of them try to solve the problem in slightly different ways.
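The two undo models are easiest to compare side by side. Here's a toy sketch (my own illustration, not any editor's actual code) of classic vi's POSIX toggle versus vim's history stack:

```python
class ViUndo:
    """Classic vi / POSIX: 'u' toggles between the last two states."""
    def __init__(self, state):
        self.state = state
        self.prev = None

    def change(self, new_state):
        self.prev, self.state = self.state, new_state

    def undo(self):
        # Undoing the undo redoes it: the two states simply swap,
        # so pressing 'u' twice is a no-op.
        if self.prev is not None:
            self.prev, self.state = self.state, self.prev


class VimUndo:
    """Vim and derivatives: every 'u' steps further back in history."""
    def __init__(self, state):
        self.history = [state]
        self.pos = 0

    def change(self, new_state):
        del self.history[self.pos + 1:]  # a new change discards redo states
        self.history.append(new_state)
        self.pos += 1

    def undo(self):
        if self.pos > 0:
            self.pos -= 1

    @property
    def state(self):
        return self.history[self.pos]
```

In the vi model, `u u` returns you to where you started; in the vim model, `u u` walks two changes back. Nvi's trick, as quoted above, is to keep the toggle semantics of `u` while letting `.` continue stepping in whichever direction the last `u` went.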

F9: an L4-style microkernel for ARM Cortex-M

F9 is an L4-inspired microkernel designed for ARM Cortex-M, targeting real-time embedded systems with hard determinism requirements. It implements the fundamental microkernel principles—address spaces, threads, and IPC, while adding advanced features from industrial RTOSes.
↫ F9 kernel GitHub page

For once, it’s not written in Rust. It comes with both an L4-style native API and a userspace POSIX API, and there’s a ton of documentation to get you started.

Windows 11’s new MIDI framework delivers MIDI 2.0

It’s been well over a year since Microsoft unveiled it was working on bringing MIDI 2.0 to Windows, and now it’s actually here, available for everyone.

We’ve been working on MIDI over the past several years, completely rewriting decades of MIDI 1.0 code on Windows to both support MIDI 2.0 and make MIDI 1.0 amazing. This new combined stack is called “Windows MIDI Services.” The Windows MIDI Services core components are built into Windows 11, rolling out through a phased enablement process now to in-support retail releases of Windows 11. This includes all the infrastructure needed to bring more features to existing MIDI 1.0 apps, and also support apps using MIDI 2.0 through our new Windows MIDI Services App SDK.
↫ Pete Brown and Gary Daniels at the Windows Blogs

This is the kind of work users of an operating system want to see. Improvements and new features like these have a meaningful, positive impact for people using MIDI, and will genuinely give them benefits they otherwise wouldn’t get. I won’t pretend to know much about the detailed features and improvements listed in Microsoft’s blog post, but I’m sure the musicians in the audience will be quite pleased. Whoever at Microsoft was responsible for pushing this through, managing this team, and of course the team members themselves should probably be overseeing more than just this. Less “AI” bullshit, more of this.
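A concrete taste of what changes under the hood: MIDI 2.0 transports messages as Universal MIDI Packets (UMP) rather than the old serial byte stream. A sketch of decoding a MIDI 1.0 channel-voice message carried in a 32-bit UMP word (field layout per the published UMP specification; the helper function is my own):

```python
def decode_midi1_ump(word: int):
    """Decode a 32-bit UMP word carrying a MIDI 1.0 channel-voice message.

    Layout (most significant bits first):
      bits 31-28: message type (0x2 = MIDI 1.0 channel voice)
      bits 27-24: UMP group
      bits 23-16: status byte (high nibble = opcode, low nibble = channel)
      bits 15-8 : data byte 1 (7-bit)
      bits  7-0 : data byte 2 (7-bit)
    """
    mtype = (word >> 28) & 0xF
    if mtype != 0x2:
        raise ValueError(f"not a MIDI 1.0 channel-voice packet: type {mtype:#x}")
    group = (word >> 24) & 0xF
    status = (word >> 16) & 0xFF
    opcode, channel = status & 0xF0, status & 0x0F
    data1 = (word >> 8) & 0x7F
    data2 = word & 0x7F
    names = {0x80: "note_off", 0x90: "note_on", 0xB0: "control_change"}
    return names.get(opcode, f"opcode_{opcode:#x}"), group, channel, data1, data2

# Note-on, group 0, channel 0, middle C (60), velocity 100:
print(decode_midi1_ump(0x20903C64))  # → ('note_on', 0, 0, 60, 100)
```

MIDI 2.0 channel-voice messages use a separate 64-bit packet type with 16-bit velocity and 32-bit controller resolution, which is the part musicians will actually feel.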

KDE Plasma 6.6 released

KDE Plasma 6.6 has been released, and brings with it a whole slew of new features. You can save any combination of themes as a global theme, and there’s a new feature allowing you to increase or decrease the contrast of frames and outlines. If your device has a camera, you can now scan Wi-Fi settings from QR codes, which is quite nice if you spend a lot of time on the road. There’s a new colour filter for people who are colour blind, allowing you to set the entire UI to grayscale, as well as a brand new virtual keyboard. Other new accessibility features include tracking the mouse cursor when using the zoom feature, a reduced motion setting, and more. Spectacle gets a text extraction feature and a feature to exclude windows from screen recordings. There’s also a new optional login manager, optimised for Wayland, a new first-run setup wizard, and much more. As always, Plasma 6.6 will find its way to your distribution’s repositories soon enough.

SvarDOS: an open-source DOS distribution

SvarDOS is an open-source project that is meant to integrate the best out of the currently available DOS tools, drivers and games. DOS development has been abandoned by commercial players a long time ago, mostly during early nineties. Nowadays it survives solely through the efforts of hobbyists and retro-enthusiasts, but this is a highly sparse and unorganized ecosystem. SvarDOS aims to collect available DOS software and make it easy to find and install applications using a network-enabled package manager (like apt-get, but for DOS and able to run on a 8086 PC).
↫ SvarDOS website

SvarDOS is built around a fork of the Enhanced DR-DOS kernel, which is available in a dedicated GitHub repository. The project’s base installation is extremely minimal, containing only the kernel, a command interpreter, and some basic system administration tools, and this basic installation is compatible down to the 8086. You are then free to add whatever packages you want, either from local storage or from the online repository using the included package manager. SvarDOS is a rolling release, and you can use the package manager to keep it updated. Aside from a set of regular installation images for a variety of floppy sizes, there’s also a dedicated “talking” build that uses the PROVOX screen reader and Braille ‘n Speak synthesizer at the COM1 port. It’s rare for a smaller project like this to have the resources to dedicate to accessibility, so this is a rather pleasant surprise.

Proper Linux on your wrist: AsteroidOS 2.0 released

It’s been a while since we’ve talked about AsteroidOS, the Linux distribution designed specifically to run on smartwatches, providing a smartwatch interface and applications built with Qt and QML. The project has just released version 2.0, and it comes with a ton of improvements.

AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on-Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the User Interface, and enhancements to our synchronization clients are just some highlights of what to expect.
↫ AsteroidOS 2.0 release announcement

I’m pleasantly surprised by how many watches are actually fully supported by AsteroidOS 2.0; especially watches from Fossil and Ticwatch are a safe buy if you want to run proper Linux on your wrist. There are also synchronisation applications for Android, desktop Linux, Sailfish OS, and UBports Ubuntu Touch. iOS is obviously missing from this list, but considering Apple’s stranglehold on iOS, that’s not unexpected. Then again, if you bought into the Apple ecosystem, you knew what you were getting into. As for the future of the project, they hope to add a web-based flashing tool and an application store, among other things. I’m definitely intrigued, and am now contemplating if I should get my hands on a (used) supported watch to try this out. Anything I can move to Linux is a win.

A deep dive into Apple’s .car file format

Every modern iOS, macOS, watchOS, and tvOS application uses Asset Catalogs to manage images, colors, icons, and other resources. When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. Despite being a fundamental part of every Apple app, there is little to none official documentation about this file format. In this post, I’ll walk through the process of reverse engineering the .car file format, explain its internal structures, and show how to parse these files programmatically. This knowledge could be useful for security research and building developer tools that does not rely on Xcode or Apple’s proprietary tools.
↫ ordinal0 at dbg.re

Not only did ordinal0 reverse-engineer the file format, they also developed their own custom parser and compiler for .car files that don’t require any of Apple’s tools.
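For a sense of what such a parser starts with: .car files are containers in Apple's BOM (Bill of Materials) format, which opens with a fixed big-endian header. The field names below follow public reverse-engineering write-ups like the one linked above, not official documentation, so treat the layout as an assumption:

```python
import struct

# magic, version, block count, block-index offset, block-index length
BOM_HEADER = struct.Struct(">8sIIII")

def parse_bom_header(data: bytes):
    """Parse the leading header of a BOM container (e.g. a compiled .car).

    A parser checks the magic first, then uses the index offset/length
    to locate the block table that the rest of the file hangs off of.
    """
    if len(data) < BOM_HEADER.size:
        raise ValueError("file too short to be a BOM container")
    magic, version, n_blocks, idx_off, idx_len = BOM_HEADER.unpack_from(data, 0)
    if magic != b"BOMStore":
        raise ValueError(f"bad magic {magic!r}, expected b'BOMStore'")
    return {"version": version, "blocks": n_blocks,
            "index_offset": idx_off, "index_length": idx_len}

# Synthetic header for illustration (not a real asset catalog):
fake = BOM_HEADER.pack(b"BOMStore", 1, 42, 512, 2048) + b"\x00" * 64
print(parse_bom_header(fake))
```

The interesting .car-specific structures (rendition data, facet keys) live inside the blocks this header indexes; the linked post walks through those in detail.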

dBASE on the Kaypro II

Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, “Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers.” Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratcliff’s dBASE II was an industry unto itself, not just for data-management, but for programmability, a legacy which lives on today as xBase. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs. This is my first time using both CP/M and dBASE. Let’s see what made this such a power couple.
↫ Christopher Drum

If you’ve ever wanted to run a company using CP/M – and who doesn’t – this article is as good a starting point as any.

Is the era of anonymous computing finally coming to an end?

For decades, the concept of the “personal computer” implied a machine that was largely self-contained, answering only to its owner. In the early days of general-purpose computing, data stayed local, and the operating system acted as a silent facilitator of tasks rather than an active participant in a global data economy. However, as we move deeper into 2026, the architectural reality of modern computing has shifted fundamentally. The device on your desk or in your pocket is no longer a solitary tool; it is a node in a vast, interconnected mesh where silence is increasingly treated as a malfunction.

Telemetry Saturation in Modern Commercial Operating Systems

The volume of diagnostic data leaving modern endpoints is staggering. What began as simple crash reporting—sending a stack trace when an application failed—has evolved into comprehensive behavioral analysis. Operating system vendors now argue that real-time telemetry is essential for maintaining security postures and optimizing performance in a fragmented hardware ecosystem. This data collection is rarely optional in the true sense; while toggle switches exist in settings menus, the underlying architectural dependencies often require data exchange for core system services to operate. This trend is driven by a booming market for network intelligence and analytics. For OS developers, this is the fuel for predictive maintenance and AI-driven feature sets. However, for the end-user, it represents a permanent tether to the vendor’s infrastructure. The result is an environment where “offline” computing is treated as a degraded state, and the OS constantly phones home to validate its own existence and the user’s actions.

Rising Demand for Minimal-Data Software Services

Despite the tightening grip of OS-level surveillance, or perhaps because of it, there is a growing counter-movement demanding software that respects digital silence. This demand creates a bifurcated market.
On one side, enterprise environments and general consumers accept deep integration and identity verification as the cost of convenience. On the other, a robust subculture of developers and power users is actively seeking out platforms and services that reject the “verify everything” philosophy. This friction drives users toward services that explicitly decouple activity from identity. For instance, technically savvy users exploring new no KYC casinos often do so not just for the entertainment value, but to support ecosystems that bypass invasive identity verification protocols. This behavior mirrors the adoption of non-systemd Linux distributions or privacy-focused mobile ROMs; it is a deliberate technical choice to minimize the attack surface of one’s personal identity. The persistence of these “grey” markets demonstrates that a significant portion of the user base views mandatory identification as a bug, not a feature, and is willing to migrate to alternative platforms to avoid it.

The Conflict Between TPM Requirements and Anonymity

The tension between security and privacy is most visible at the hardware level, specifically regarding Trusted Platform Modules (TPMs) and hardware-based attestation. Modern security architectures rely on the device proving its integrity to remote servers. This “remote attestation” allows a service to verify that a computer is running a signed, uncompromised kernel before granting access to resources. While this effectively mitigates rootkits and cheating in video games, it also creates a unique digital fingerprint for every machine, effectively eliminating the possibility of hardware anonymity. When an operating system requires a TPM handshake to boot or access specific services, the user’s physical hardware becomes inextricably linked to their digital identity.
In the United States, Windows maintained a dominant position with 32.95% of the operating system market share in August 2025, illustrating the massive reach of proprietary telemetry pipelines. With such market dominance, the standards set by these major vendors effectively become the laws of the internet. If the dominant OS architecture mandates hardware-backed identity, the ability to remain truly anonymous on the web degrades significantly, pushing privacy-conscious users toward increasingly niche hardware solutions.

Assessing the Viability of Privacy-First Computing

As we look toward the latter half of the decade, the viability of privacy-first computing faces significant economic and technical headwinds. The commercial incentives for data collection are simply too high for major vendors to ignore. Global operating systems market valuation is expected to climb from $48.5 billion in 2025 to $49.35 billion in 2026, driven largely by cloud integration and connected device ecosystems. This growth relies on the seamless integration of services, which in turn relies on knowing exactly who the user is and what they are doing at all times. However, the era of anonymous computing is not necessarily over; it is merely retreating into the realm of specialized knowledge. General-purpose anonymity—the kind that existed by default in the 1990s—is gone. In its place, we have a landscape where privacy is an active pursuit requiring specific hardware choices, such as RISC-V architectures or pre-ME (Management Engine) Intel chips, and open-source software stacks. For the skilled system administrator or developer, the tools to build a silent machine still exist, but maintaining that silence against the cacophony of the modern web requires constant vigilance and technical expertise.
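The "unique digital fingerprint" mentioned above comes from how attestation measurements accumulate. A TPM never overwrites a Platform Configuration Register directly; each new measurement is folded into the old value with a hash, so the final PCR value pins the exact boot sequence. A minimal sketch of the standard extend operation (simplified here to SHA-256 over raw bytes; a real TPM keeps an event log and multiple hash banks):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = SHA-256(old PCR || SHA-256(measurement)).

    Because hashing is one-way and order-sensitive, the final register
    value commits to the entire sequence of measured components.
    """
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

# The same components measured in a different order yield a different
# PCR value, which is why attestation pins the whole boot sequence:
alt = bytes(32)
for component in [b"kernel", b"bootloader", b"firmware"]:
    alt = pcr_extend(alt, component)
print(pcr != alt)  # → True
```

A remote verifier compares a signed quote of these register values against known-good measurements, which is exactly what makes the machine both verifiable and identifiable.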

The digital balance: navigating mental health in the age of technology

In the age of continuous connectivity, frantic software development, and constant notifications, our mental operating systems tend to run on automatic mode. Computer programmers, designers, and technology-loving minds are constantly tuning their machines; however, most people do not consider the health of the single most important processing unit: the brain. For those firmly rooted in digital culture, the burden of being productive, educated, and connected may build up gradually into a more significant phenomenon, namely digital fatigue. The pressures of modern technology can stretch your cognitive capacity to the maximum, whether you are working on open-source projects late at night or carrying heavy cognitive loads during long coding sessions. Locating a professional therapist in Austin is no longer a matter of crisis intervention. For a great number of workers in the modern hurly-burly digital world, mental health assistance is now a component of performance, survival, and sustainability within a demanding work setting.

1. Burnout and the Open-Source Cycle: A Developer’s Perspective

Technology has a way of glorifying innovation, teamwork, and output. But behind most big software breakthroughs is an ugly truth: burnout. Open-source developers and contributors commonly work long hours, often unpaid, to maintain critical infrastructure used by millions of people across the globe. Lately, the issue of maintainer fatigue has been framed in terms of the emotional and cognitive loads carried by contributors to complex systems.

The Silent Battery of Cognitive Overload

Clinically speaking, prolonged cognitive load has measurable effects on the brain. Under constant stress, the prefrontal cortex—the brain’s center for decision-making, problem-solving, and attention—can begin to break down.
Recognizing Indicators of Cognitive Strain

This usually produces symptoms developers recognize immediately. The symptoms are closely related to digital fatigue, an increasing burden on professionals whose careers depend on high standards of concentration. Preventing burnout does not just mean working fewer hours. It requires deliberate measures to control stress, preserve brain capacity, and impose boundaries in a world where everyone can work anywhere.

2. Debugging the Mind: Therapy as a System Update

Clean code and effective architecture are valued by many developers. In many respects, mental health care works the same way. Therapy may be viewed as a methodical approach to examining and streamlining thought patterns—in other words, debugging the programs that drive emotional reactions and behavior.

Cognitive Behavioral Therapy: Refactoring Your Thought Code

Cognitive Behavioral Therapy (CBT) is one of the most commonly used evidence-based practices. Technically speaking, CBT is refactoring bad code. Rather than replaying the same automatic responses in a loop, therapy helps one identify maladaptive loops of thought and substitute them with more adaptive, healthier ways of thinking.

EMDR: Processing Legacy Data and Stored Trauma

EMDR (Eye Movement Desensitization and Reprocessing) is another strong treatment approach applied in trauma-informed care. If CBT is rewriting code, EMDR is more like processing legacy data that has been poorly stored in the system. Unprocessed trauma affects how the brain reacts to stress; EMDR helps the brain reprocess these memories so they no longer cause excessive responses.

3. Actionable Plans for Continuous System Maintenance

Even the most efficient operating systems need periodic maintenance. The same is true of mental health.
The following are practical strategies for tech professionals who spend long hours with digital systems.

Practical Tech-Health Integration Tips

1. Apply the 50/10 Deep Work Rule

2. Guard Your Mental Horsepower

Mental capacity is limited, much like RAM. Try to minimize extraneous background processes.

3. Airplane Mode as a Mental Reset

Flights are not the only place for airplane mode. It can also serve as a psychological barrier. Switching off connectivity gives the brain time to recover from constant stimulation and helps avoid digital fatigue.

4. Reconnecting with the Physical Environment

The outdoor culture of Austin provides a strong contrast to workdays dominated by screens. Physical activity can help regulate stress hormones.

5. Creating a Screen-Life Balance Routine

The point of digital wellness is not to get rid of technology but to live in balance with it.

4. Enhancing Mental Health with Evidence-Based Professional Care

Personal habits can maintain balance; professional assistance can add insight and long-term resilience. Licensed therapists use evidence-based practices aimed at treating anxiety, depression, trauma, and stress-related conditions.

Specialized Areas of Trauma-Informed Counseling

A trauma-informed approach ensures that therapy accounts for the effects of the past on current emotional patterns.

5. Teletherapy and Modern Accessibility in Texas

Technology has also changed the way mental health services are delivered. Busy workers do not always have time to see providers face-to-face. Fortunately, mental health support has become more accessible in Texas thanks to teletherapy.

Advantages of Remote Clinical Sessions

6. Assembling a Sustainable Digital Health Habit

Mental health is not a one-time update, but a continuous system optimization. Establishing a sustainable digital wellness practice can draw on several levels of support.

Conclusion: The Final Hardware Upgrade

Technology evolves swiftly, but the human mind remains the most sophisticated device we depend on each day. Maintaining that system requires care, recognition, and, now and again, expert guidance. Digital fatigue, burnout, and cognitive overload are increasingly common among professionals who spend long hours interacting with technology. But these challenges are manageable when approached with the right tools, habits, and support systems. If the burden of the digital world is dragging down your processing speed, you can call a special

Why do I not use “AI” at OSNews?

In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:

Why do I care if you use AI?
↫ A comment posted on OSNews

A few days ago, Scott Shambaugh rejected a code change request submitted to the popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece targeting Shambaugh publicly for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind. The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously void of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here? RAM prices went up for this.

This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article. In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:

This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.
↫ Scott Shambaugh

A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said. Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
↫ Ken Fisher at Ars Technica

In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, and to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator into the research process, and you risk tainting the entire output of your writing.

That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.

Microsoft’s original Windows NT OS/2 design documents

Have you ever wanted to read the original design documents underlying the Windows NT operating system?

This binder contains the original design specifications for “NT OS/2,” an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft’s 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM’s OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT’s technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001.
↫ Object listing at the Smithsonian

The actual binder is housed in the Smithsonian, although it’s not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft’s terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O’Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more. A fantastic time capsule we should be thrilled to still have access to.

Exploring Linux on a LoongArch mini PC

There are two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the West, but might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time). Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported for a pretty standard desktop Linux experience. The chip’s performance is middling, at best. The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W. So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400). ↫ Wesley Moore I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.

A brief history of barbed wire fence telephone networks

If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada! ↫ Lori Emerson I had no idea this used to be a thing, but it makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most divisive inventions to communicate, and thus bring people closer together.

Haiku further improves its touchpad support

January was a busy month for Haiku, with their monthly report listing a metric ton of smaller fixes, changes, and improvements. Perusing the list, a few things stand out to me, most notably continued work on improving Haiku’s touchpad support. The remainder of samuelrp84’s patchset implementing new touchpad functionality was merged, including two-finger scrolling, edge motion, software button areas, and click finger support; and on the hardware side, driver support for Elantech “version 4” touchpads, with experimental code for versions 1, 2, and 3. (Version 2, at least, seems to be incomplete and had to be disabled for the time being.) ↫ Haiku’s January 2026 activity report On a related note, the still-disabled I2C-HID saw a number of fixes in January, and the rtl8125 driver has been synced up with OpenBSD. I also like the changes to kernel_version, which no longer returns some internal number like BeOS used to do, instead returning B_HAIKU_VERSION; the uname command was changed accordingly to use this new information. There are some small POSIX compliance fixes, a bunch of work was done on unit tests, and a ton more.

Microsoft Store gets another CLI tool

We often lament Microsoft’s terrible stewardship of its Windows operating system, but that doesn’t mean that they never do anything right. In a blog post detailing changes and improvements coming to the Microsoft Store, the company announced something Windows users might actually like? A new command-line interface for the Microsoft Store brings app discovery, installation and update management directly to your terminal. This enables developers and users with a new way to discover and install Store apps, without needing the GUI. The Store CLI is available only on devices where Microsoft Store is enabled. ↫ Giorgio Sardo at the Windows Blogs Of course, this new command-line frontend to the Microsoft Store comes with commands to install, update, and search for applications in the store, but sadly, it doesn’t seem to come with an actual TUI for browsing and discovery, which is a shame. I sometimes find it difficult to use dnf to find applications: it’s not always obvious which search terms to use, which exact spelling packagers chose, or which words they use in the description. If package managers had a TUI to enable browsing for applications instead of merely searching for them, using the command line to find and install applications would be much nicer. Arch has a third-party TUI called pacseek for its package manager, and it looks absolutely amazing. I’ve run into a rudimentary dnf TUI called dnfseek, but it’s definitely not as well-rounded as pacseek, and it hasn’t seen any development since its initial release. I couldn’t find anything for apt, but there’s always aptitude, which uses ncurses and thus fulfills a similar role.
To really differentiate this new Microsoft Store command-line tool from winget, the company could’ve built a proper TUI; instead, it seems to just be winget with nicer-formatted output, limited to the Microsoft Store. Nice, I guess.

The future for Tyr

The team behind Tyr started 2025 with little to show in our quest to produce a Rust GPU driver for Arm Mali hardware, and by the end of the year, we were able to play SuperTuxKart (a 3D open-source racing game) at the Linux Plumbers Conference (LPC). Our prototype was a joint effort between Arm, Collabora, and Google; it ran well for the duration of the event, and the performance was more than adequate for players. Thankfully, we picked up steam at precisely the right moment: Dave Airlie just announced in the Maintainers Summit that the DRM subsystem is only “about a year away” from disallowing new drivers written in C and requiring the use of Rust. Now it is time to lay out a possible roadmap for 2026 in order to upstream all of this work. ↫ Daniel Almeida at LWN.net A very detailed look at what the team behind Tyr is trying to achieve with their Rust GPU driver for Arm Mali chips.