So vi only has one level of undo, which is simply no longer fit for the times we live in, and also wholly unnecessary: even the least powerful devices that might need to run vi have more than enough resources for at least a few more levels of undo. What I didn’t know, however, is that vi’s limited undo behaviour is actually part of POSIX, and for full compliance, you’re going to need it. As Chris Siebenmann notes, vim and its derivatives ignore this POSIX requirement and implement multiple levels of undo in the obviously correct way. What about nvi, the default on the BSD variants? I didn’t know this, but it has a convoluted workaround to both maintain POSIX compatibility and offer multiple levels of undo, and it’s definitely something.

Nvi has opted to remain POSIX compliant and operate in the traditional vi way, while still supporting multi-level undo. To get multi-level undo in nvi, you extend the first ‘u’ with ‘.’ commands, so ‘u..’ undoes the most recent three changes. The ‘u’ command can be extended with ‘.’ in either of its modes (undo’ing or redo’ing), so ‘u..u..’ is a no-op. The ‘.’ operation doesn’t appear to take a count in nvi, so there is no way to do multiple undos (or redos) in one action; you have to step through them by hand. I’m not sure how nvi reacts if you want to do things like move your cursor position during an undo or redo sequence (my limited testing suggests that it can perturb the sequence, so that ‘.’ no longer continues undoing or redoing the way vim will continue if you use ‘u’ or Ctrl-r again).
↫ Chris Siebenmann

Siebenmann lists a few other implementations and how they handle undo, and it’s interesting to see how all of them try to solve the problem in slightly different ways.
F9 is an L4-inspired microkernel designed for ARM Cortex-M, targeting real-time embedded systems with hard determinism requirements. It implements the fundamental microkernel principles (address spaces, threads, and IPC) while adding advanced features from industrial RTOSes.
↫ F9 kernel GitHub page

For once, it’s not written in Rust, it comes with both an L4-style native API and a userspace POSIX API, and there’s a ton of documentation to get you started.
It’s been well over a year since Microsoft unveiled it was working on bringing MIDI 2.0 to Windows, and now it’s actually here, available for everyone.

We’ve been working on MIDI over the past several years, completely rewriting decades of MIDI 1.0 code on Windows to both support MIDI 2.0 and make MIDI 1.0 amazing. This new combined stack is called “Windows MIDI Services.” The Windows MIDI Services core components are built into Windows 11, rolling out through a phased enablement process now to in-support retail releases of Windows 11. This includes all the infrastructure needed to bring more features to existing MIDI 1.0 apps, and also support apps using MIDI 2.0 through our new Windows MIDI Services App SDK.
↫ Pete Brown and Gary Daniels at the Windows Blogs

This is the kind of work users of an operating system want to see. Improvements and new features like these have a meaningful, positive impact for people using MIDI, and will genuinely give them benefits they otherwise wouldn’t get. I won’t pretend to know much about the detailed features and improvements listed in Microsoft’s blog post, but I’m sure the musicians in the audience will be quite pleased. Whoever at Microsoft was responsible for pushing this through, managing this team, and of course the team members themselves should probably be overseeing more than just this. Less “AI” bullshit, more of this.
KDE Plasma 6.6 has been released, and brings with it a whole slew of new features. You can save any combination of themes as a global theme, and there’s a new feature allowing you to increase or decrease the contrast of frames and outlines. If your device has a camera, you can now scan Wi-Fi settings from QR codes, which is quite nice if you spend a lot of time on the road. There’s a new colour filter for people who are colour blind, allowing you to set the entire UI to grayscale, as well as a brand new virtual keyboard. Other new accessibility features include tracking the mouse cursor when using the zoom feature, a reduced motion setting, and more. Spectacle gets a text extraction feature and the ability to exclude windows from screen recordings. There’s also a new optional login manager optimised for Wayland, a new first-run setup wizard, and much more. As always, Plasma 6.6 will find its way to your distribution’s repositories soon enough.
SvarDOS is an open-source project that is meant to integrate the best of the currently available DOS tools, drivers, and games. DOS development was abandoned by commercial players a long time ago, mostly during the early nineties. Nowadays it survives solely through the efforts of hobbyists and retro-enthusiasts, but this is a highly sparse and unorganized ecosystem. SvarDOS aims to collect available DOS software and make it easy to find and install applications using a network-enabled package manager (like apt-get, but for DOS and able to run on an 8086 PC).
↫ SvarDOS website

SvarDOS is built around a fork of the Enhanced DR-DOS kernel, which is available in a dedicated GitHub repository. The project’s base installation is extremely minimal, containing only the kernel, a command interpreter, and some basic system administration tools, and this base installation is compatible down to the 8086. You are then free to add whatever packages you want, either from local storage or from the online repository using the included package manager. SvarDOS is a rolling release, and you can use the package manager to keep it updated.

Aside from a set of regular installation images for a variety of floppy sizes, there’s also a dedicated “talking” build that uses the PROVOX screen reader and a Braille ‘n Speak synthesizer at the COM1 port. It’s rare for a smaller project like this to have the resources to dedicate to accessibility, so this is a rather pleasant surprise.
It’s been a while since we’ve talked about AsteroidOS, the Linux distribution designed specifically to run on smartwatches, providing a smartwatch interface and applications built with Qt and QML. The project has just released version 2.0, and it comes with a ton of improvements.

AsteroidOS 2.0 has arrived, bringing major features and improvements gathered during its journey through community space. Always-on-Display, expanded support for more watches, new launcher styles, customizable quick settings, significant performance increases in parts of the User Interface, and enhancements to our synchronization clients are just some highlights of what to expect.
↫ AsteroidOS 2.0 release announcement

I’m pleasantly surprised by how many watches are actually fully supported by AsteroidOS 2.0; watches from Fossil and Ticwatch in particular are a safe buy if you want to run proper Linux on your wrist. There are also synchronisation applications for Android, desktop Linux, Sailfish OS, and UBports’ Ubuntu Touch. iOS is obviously missing from this list, but considering Apple’s stranglehold on iOS, that’s not unexpected. Then again, if you bought into the Apple ecosystem, you knew what you were getting into. As for the future of the project, they hope to add a web-based flashing tool and an application store, among other things. I’m definitely intrigued, and am now contemplating whether I should get my hands on a (used) supported watch to try this out. Anything I can move to Linux is a win.
Every modern iOS, macOS, watchOS, and tvOS application uses Asset Catalogs to manage images, colors, icons, and other resources. When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. Despite being a fundamental part of every Apple app, there is little to no official documentation about this file format. In this post, I’ll walk through the process of reverse engineering the .car file format, explain its internal structures, and show how to parse these files programmatically. This knowledge could be useful for security research and building developer tools that do not rely on Xcode or Apple’s proprietary tools.
↫ ordinal0 at dbg.re

Not only did ordinal0 reverse-engineer the file format, they also developed their own parser and compiler for .car files that don’t require any of Apple’s tools. For a rough idea of where such a parser starts, see the sketch below.
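Public reverse-engineering write-ups (including this one) describe .car files as BOM (“Bill of Materials”) stores that open with an eight-byte “BOMStore” magic followed by big-endian header fields. Below is a minimal, hypothetical sketch along those lines; the field names and offsets are assumptions taken from third-party research, not from any official Apple documentation.

```typescript
// Minimal sketch: peek at the header of a compiled Asset Catalog (.car).
// Layout per public reverse-engineering write-ups of the BOM container
// format; treat every field name and offset here as an assumption.
import { readFileSync } from "node:fs";

function inspectCar(path: string): void {
  const buf = readFileSync(path);
  const magic = buf.subarray(0, 8).toString("ascii");
  if (magic !== "BOMStore") {
    throw new Error(`not a BOM store, magic was ${JSON.stringify(magic)}`);
  }
  // BOM headers are big-endian, hence the readUInt32BE calls.
  const version = buf.readUInt32BE(8);
  const blockCount = buf.readUInt32BE(12); // number of non-null blocks
  const indexOffset = buf.readUInt32BE(16); // where the block table lives
  const indexLength = buf.readUInt32BE(20);
  console.log({ version, blockCount, indexOffset, indexLength });
}

inspectCar(process.argv[2] ?? "Assets.car");
```

A real parser then walks that block table to reach the named trees holding renditions, facet keys, and metadata, which is where the bulk of the reverse engineering in the linked post happens.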
Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, “Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers.” Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff’s dBASE II was an industry unto itself, not just for data-management, but for programmability, a legacy which lives on today as xBase. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs. This is my first time using both CP/M and dBASE. Let’s see what made this such a power couple.
↫ Christopher Drum

If you’ve ever wanted to run a company using CP/M – and who doesn’t – this article is as good a starting point as any.
For decades, the concept of the “personal computer” implied a machine that was largely self-contained, answering only to its owner. In the early days of general-purpose computing, data stayed local, and the operating system acted as a silent facilitator of tasks rather than an active participant in a global data economy. However, as we move deeper into 2026, the architectural reality of modern computing has shifted fundamentally. The device on your desk or in your pocket is no longer a solitary tool; it is a node in a vast, interconnected mesh where silence is increasingly treated as a malfunction.

Telemetry Saturation in Modern Commercial Operating Systems

The volume of diagnostic data leaving modern endpoints is staggering. What began as simple crash reporting—sending a stack trace when an application failed—has evolved into comprehensive behavioral analysis. Operating system vendors now argue that real-time telemetry is essential for maintaining security postures and optimizing performance in a fragmented hardware ecosystem. This data collection is rarely optional in the true sense; while toggle switches exist in settings menus, the underlying architectural dependencies often require data exchange for core system services to operate.

This trend is driven by a booming market for network intelligence and analytics. For OS developers, this is the fuel for predictive maintenance and AI-driven feature sets. However, for the end-user, it represents a permanent tether to the vendor’s infrastructure. The result is an environment where “offline” computing is treated as a degraded state, and the OS constantly phones home to validate its own existence and the user’s actions.

Rising Demand for Minimal-Data Software Services

Despite the tightening grip of OS-level surveillance, or perhaps because of it, there is a growing counter-movement demanding software that respects digital silence. This demand creates a bifurcated market. On one side, enterprise environments and general consumers accept deep integration and identity verification as the cost of convenience. On the other, a robust subculture of developers and power users is actively seeking out platforms and services that reject the “verify everything” philosophy.

This friction drives users toward services that explicitly decouple activity from identity. For instance, technically savvy users exploring new no KYC casinos often do so not just for the entertainment value, but to support ecosystems that bypass invasive identity verification protocols. This behavior mirrors the adoption of non-systemd Linux distributions or privacy-focused mobile ROMs; it is a deliberate technical choice to minimize the attack surface of one’s personal identity. The persistence of these “grey” markets demonstrates that a significant portion of the user base views mandatory identification as a bug, not a feature, and is willing to migrate to alternative platforms to avoid it.

The Conflict Between TPM Requirements and Anonymity

The tension between security and privacy is most visible at the hardware level, specifically regarding Trusted Platform Modules (TPM) and hardware-based attestation. Modern security architectures rely on the device proving its integrity to remote servers. This “Remote Attestation” allows a service to verify that a computer is running a signed, uncompromised kernel before granting access to resources.
While this effectively mitigates rootkits and cheating in video games, it also creates a unique digital fingerprint for every machine, effectively eliminating the possibility of hardware anonymity. When an operating system requires a TPM handshake to boot or access specific services, the user’s physical hardware becomes inextricably linked to their digital identity.

In the United States, Windows held 32.95% of the operating system market share in August 2025, illustrating the massive reach of proprietary telemetry pipelines. With such market reach, the standards set by these major vendors effectively become the laws of the internet. If the dominant OS architecture mandates hardware-backed identity, the ability to remain truly anonymous on the web degrades significantly, pushing privacy-conscious users toward increasingly niche hardware solutions.

Assessing the Viability of Privacy-First Computing

As we look toward the latter half of the decade, the viability of privacy-first computing faces significant economic and technical headwinds. The commercial incentives for data collection are simply too high for major vendors to ignore. The global operating systems market is expected to climb from a valuation of $48.5 billion in 2025 to $49.35 billion in 2026, driven largely by cloud integration and connected device ecosystems. This growth relies on the seamless integration of services, which in turn relies on knowing exactly who the user is and what they are doing at all times.

However, the era of anonymous computing is not necessarily over; it is merely retreating into the realm of specialized knowledge. General-purpose anonymity—the kind that existed by default in the 1990s—is gone. In its place, we have a landscape where privacy is an active pursuit requiring specific hardware choices, such as RISC-V architectures or pre-ME (Management Engine) Intel chips, and open-source software stacks. For the skilled system administrator or developer, the tools to build a silent machine still exist, but maintaining that silence against the cacophony of the modern web requires constant vigilance and technical expertise.
In the age of continuous connectivity, frantic software development, and constant notifications, our mental operating systems tend to run on automatic mode. Computer programmers, designers, and technology-loving minds are constantly tuning their machines; however, most people do not consider the health of the single most important processing unit: the brain. For those who are firmly rooted in digital culture, the pressure to be productive, educated, and connected can build up gradually into a more significant phenomenon, namely digital fatigue. The demands of modern technology can stretch your cognitive capacity to the maximum, whether you are working on open-source projects late at night or carrying heavy cognitive loads during long coding sessions. Locating a professional therapist in Austin is no longer a matter of crisis intervention. For a great number of workers in the modern, hurly-burly digital world, mental health assistance is now a component of performance, survival, and sustainability in a tense work setting.

1. Burnout and the Open-Source Cycle: A Developer’s Perspective

Technology has a way of glorifying innovation, teamwork, and output. But behind most big software breakthroughs is an ugly truth: burnout. Open-source developers and contributors commonly work long hours, often unpaid, to maintain some of the critical infrastructure used by millions of people across the globe. Lately, the issue of maintainer fatigue has been framed in terms of the emotional and cognitive loads placed on contributors to complex systems.

The Silent Battery of Cognitive Overload

Clinically speaking, prolonged cognitive load affects the brain. Under constant stress, the prefrontal cortex (the brain’s decision-making, problem-solving, and attention center) can begin to break down.

Recognizing Indicators of Cognitive Strain

This usually produces symptoms developers are immediately aware of. The symptoms are closely related to digital fatigue, a growing burden for professionals whose careers depend on sustained concentration. Preventing burnout does not just mean working fewer hours. It requires deliberate measures to control stress, preserve brain capacity, and impose boundaries in a world where everyone can work anywhere.

2. Debugging the Mind: Therapy as a System Update

Clean code and effective architecture are valued by many developers. In many respects, mental health care works the same way. Therapy may be viewed as a methodical approach to examining and streamlining thought patterns—in other words, debugging the programs that drive emotional reactions and behavior.

Cognitive Behavioral Therapy: Refactoring Your Thought Code

Cognitive Behavioral Therapy (CBT) is one of the most commonly used evidence-based practices. In technical terms, CBT is refactoring bad code. Rather than replaying the same automatic responses in a loop, therapy helps you identify maladaptive thought loops and replace them with healthier, more adaptive ways of thinking.

Functional Upgrades Through CBT

This can help individuals in a number of practical ways.

EMDR: Processing Legacy Data and Stored Trauma

EMDR (Eye Movement Desensitization and Reprocessing) is another strong treatment approach that has been applied in trauma-informed care. If CBT is rewriting code, EMDR is more like processing legacy data that was stored poorly in the system.
Trauma that has not been processed affects how the brain reacts to stress. EMDR helps the brain reprocess these memories so that they no longer trigger excessive responses.

3. Actionable Plans for Continuous System Maintenance

Even the most efficient operating systems need periodic maintenance. The same is true of mental health. The following are practical strategies for tech professionals who spend long hours with digital systems.

Practical Tech-Health Integration Tips

1. Apply the 50/10 Deep Work Rule

2. Strategies to Guard Your Mental Horsepower

Like RAM, mental capacity is limited. Try to minimize extraneous background processes.

3. Airplane Mode as a Mental Reset

Airplane mode is not just for flights; it can also serve as a psychological barrier. Switching off connectivity gives the brain time to recover from constant stimuli and helps stave off digital fatigue.

4. Reconnecting with the Physical Environment

The outdoor culture of Austin provides a strong contrast to workdays dominated by screens. Physical activity outdoors can help regulate stress hormones.

5. Creating a Screen-Life Balance Routine

The point of digital wellness is not to get rid of technology but to live in balance with it.

4. Enhancing Mental Health with Evidence-Based Professional Care

Personal habits can help maintain balance, but professional assistance can add insight and long-term resilience. The evidence-based practices licensed therapists use are aimed at treating anxiety, depression, trauma, and stress-related illnesses.

Specialized Areas of Trauma-Informed Counseling

A trauma-informed approach ensures that therapy accounts for the effects of past experiences on current emotional patterns.

5. Teletherapy and Modern Accessibility in Texas

Technology has also changed the way mental health services are delivered. Busy workers do not always have time to see providers face-to-face. Fortunately, mental health support has become more available in Texas thanks to teletherapy.

Advantages of Remote Clinical Sessions

Telehealth sessions let people fit professional care around their schedules.

6. Assembling a Sustainable Digital Health Habit

Mental health is not a single update but a continuous system optimization. Building a sustainable digital wellness practice can draw on several levels of support.

Conclusion: The Final Hardware Upgrade

Technology evolves swiftly, but the human mind remains the most sophisticated device we depend on each day. Maintaining that system requires care, recognition, and, now and again, expert guidance. Digital fatigue, burnout, and cognitive overload are increasingly common among professionals who spend long hours interacting with technology. But these challenges are manageable when approached with the right tools, habits, and support systems. If the burden of the digital world is dragging down your processing speed, you can call a special
In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:

Why do I care if you use AI?
↫ A comment posted on OSNews

A few days ago, Scott Shambaugh rejected a code change request submitted to the popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece publicly targeting Shambaugh for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind. The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously void of any real impact, because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here? RAM prices went up for this.

This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article. In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:

This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.
↫ Scott Shambaugh

A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.

On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said. Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
↫ Ken Fisher at Ars Technica

In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, or to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator into the research process, and you risk tainting the entire output of your writing. That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.
Have you ever wanted to read the original design documents underlying the Windows NT operating system?

This binder contains the original design specifications for “NT OS/2,” an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft’s 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM’s OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT’s technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001.
↫ Object listing at the Smithsonian

The actual binder is housed in the Smithsonian, although it’s not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft’s terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O’Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more. A fantastic time capsule we should be thrilled to still have access to.
There are two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the west, but might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time). Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and virtually all Chimera Linux packages seem to be available, for a pretty standard desktop Linux experience. Performance of this chip is rather mid, at best.

The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W. So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400).
↫ Wesley Moore

I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.
If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada!
↫ Lori Emerson

I had no idea this used to be a thing, but it makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most divisive inventions to communicate, and thus bring people closer together.
January was a busy month for Haiku, with the monthly report listing a metric ton of smaller fixes, changes, and improvements. Perusing the list, a few things stand out to me, most notably continued work on improving Haiku’s touchpad support.

The remainder of samuelrp84’s patchset implementing new touchpad functionality was merged, including two-finger scrolling, edge motion, software button areas, and click finger support; and on the hardware side, driver support for Elantech “version 4” touchpads, with experimental code for versions 1, 2, and 3. (Version 2, at least, seems to be incomplete and had to be disabled for the time being.)
↫ Haiku’s January 2026 activity report

On a related note, the still-disabled I2C-HID support saw a number of fixes in January, and the rtl8125 driver has been synced up with OpenBSD. I also like the changes to kernel_version, which no longer returns some internal number like BeOS used to do, instead returning B_HAIKU_VERSION; the uname command was changed accordingly to use this new information. There are some small POSIX compliance fixes, a bunch of work was done on unit tests, and a ton more.
We often lament Microsoft’s terrible stewardship of its Windows operating system, but that doesn’t mean the company never does anything right. In a blog post detailing changes and improvements coming to the Microsoft Store, the company announced something Windows users might actually like?

A new command-line interface for the Microsoft Store brings app discovery, installation, and update management directly to your terminal. This provides developers and users with a new way to discover and install Store apps, without needing the GUI. The Store CLI is available only on devices where the Microsoft Store is enabled.
↫ Giorgio Sardo at the Windows Blogs

Of course, this new command-line frontend to the Microsoft Store comes with commands to install, update, and search for applications in the store, but sadly, it doesn’t seem to come with an actual TUI for browsing and discovery, which is a shame. I sometimes find it difficult to use dnf to find applications, as it’s not always obvious which search terms to use, which exact spelling packagers are using, which words they use in the description, and so on. In other words, it may not always be clear if the search terms you’re using are the correct ones to find the application you need. If package managers had a TUI to enable browsing for applications instead of merely searching for them, the process of using the command line to find and install applications would be much nicer.

Arch has a third-party TUI called pacseek for its package manager, and it looks absolutely amazing. I’ve run into a rudimentary dnf TUI called dnfseek, but it’s definitely not as well-rounded as pacseek, and it also hasn’t seen any development since its initial release. I couldn’t find anything for apt, but there’s always aptitude, which uses ncurses and thus fulfills a similar role. To really differentiate this new Microsoft Store command-line tool from winget, the company could’ve built a proper TUI, but instead it seems to just be winget with more nicely formatted output, limited to just the Microsoft Store. Nice, I guess.
The team behind Tyr started 2025 with little to show in our quest to produce a Rust GPU driver for Arm Mali hardware, and by the end of the year, we were able to play SuperTuxKart (a 3D open-source racing game) at the Linux Plumbers Conference (LPC). Our prototype was a joint effort between Arm, Collabora, and Google; it ran well for the duration of the event, and the performance was more than adequate for players. Thankfully, we picked up steam at precisely the right moment: Dave Airlie just announced at the Maintainers Summit that the DRM subsystem is only “about a year away” from disallowing new drivers written in C and requiring the use of Rust. Now it is time to lay out a possible roadmap for 2026 in order to upstream all of this work.
↫ Daniel Almeida at LWN.net

A very detailed look at what the team behind Tyr is trying to achieve with their Rust GPU driver for Arm Mali chips.
For decades, the operating system kernel handled process scheduling, memory isolation, hardware abstraction, and resource allocation. Applications sat neatly on top, consuming services but rarely redefining them. That boundary is no longer as clear. Today’s browsers ship with their own process managers, JIT compilers, sandboxing layers, GPU pipelines, and even virtualized runtimes through WebAssembly. For many users, the browser is the main application environment. The question is no longer rhetorical: are browsers starting to behave like miniature operating systems?

Abstraction Layers and the Rise of WebAssembly

WebAssembly changed the browser from a document renderer into a portable execution environment. It allows near-native code to run inside a sandbox, abstracting away the underlying hardware and much of the host OS. In practice, this means the browser mediates between application logic and CPU, memory, and graphics resources.

That mediation is increasingly optimized for specific platforms. Oasis and Safari leverage platform-specific OS hooks to use 60% less RAM than Chrome when running on their native operating systems. Those gains do not come from web standards alone; they depend on tight integration with kernel services, graphics drivers, and memory subsystems.

As a result, the browser engine has become a portability layer comparable to what POSIX once offered. Developers target Chromium or WebKit, and the browser translates that intent into system calls, GPU queues, and thread pools. The abstraction is deep enough that many applications no longer need to care which OS sits beneath, as the sketch below illustrates.
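Here is roughly what targeting that portability layer looks like from the application side. This is a minimal sketch assuming a hypothetical module.wasm that exports an add function; the WebAssembly and fetch APIs are standard, everything else is illustrative:

```typescript
// Minimal sketch: the browser as execution environment. The engine
// streams, compiles, and instantiates a WebAssembly module, then the
// page calls into it. "module.wasm" and its "add" export are
// hypothetical placeholders, not a real artifact.
async function runModule(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("module.wasm"),
    {} // imports: whatever host services the embedder chooses to expose
  );
  // The exported function runs as near-native code, but every resource
  // it touches is mediated by the browser's sandbox.
  const add = instance.exports.add as (a: number, b: number) => number;
  console.log(add(2, 3));
}

runModule();
```

Note what is absent: no system calls, no platform libraries, no knowledge of which OS is underneath. The browser supplies all of that, which is exactly the POSIX-like role described above.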
Latency Management in Real-Time Web Applications

Real-time collaboration tools, cloud IDEs, and browser-based games have pushed latency management into the browser core. Task scheduling, priority hints, and background throttling now resemble lightweight kernel schedulers. When dozens of tabs compete for CPU time, the browser arbitrates.

Performance differences show how much this layer matters. Brave loads web pages 21% faster, uses 9% less CPU, and consumes 4% less battery on average compared to its main competitors due to native ad and tracker blocking. Those savings are essentially policy decisions about resource allocation, implemented above the kernel but below the application.

The same infrastructure powers high-demand workloads. Streaming platforms, complex dashboards, and even high-traffic environments such as online gambling platforms depend on predictable frame timing and low input latency inside the browser sandbox. For instance, a player placing a live bet or spinning a slot at online casinos cannot experience input delays or dropped frames without it affecting the interaction itself. The browser must process animations, user clicks, network responses, and security checks almost simultaneously, ensuring results appear instantly and consistently across thousands of simultaneous sessions.

This means the browser’s event loop and rendering pipeline function like a specialized runtime scheduler. They coordinate animation frames, WebSocket traffic, and UI updates so that gameplay remains smooth even when the page is performing constant background communication with remote servers. For OS enthusiasts, this is notable. The kernel still schedules threads, but the browser increasingly decides which threads exist, when they wake, and how aggressively they consume resources.

Memory Isolation Differences Between Tabs and Processes

Security concerns accelerated this architectural shift. Chromium’s Site Isolation model assigns separate OS processes to different sites, reducing cross-origin attacks. That approach mirrors traditional multi-process isolation strategies in Unix-like systems.

There is a cost. Chrome’s Site Isolation feature increases memory usage by an estimated 10–20% to enhance security through dedicated OS processes per website. The browser chooses stronger isolation boundaries and accepts higher RAM pressure, effectively trading kernel-level efficiency for application-level containment.

Tab isolation also obscures responsibility. The kernel sees multiple browser processes, but it is the browser that defines their lifecycles, privileges, and communication channels. Shared memory, IPC mechanisms, and sandbox rules are orchestrated by the engine, not directly by the OS administrator. For developers used to thinking in terms of system daemons and user processes, this inversion is striking. The browser becomes a supervisor, while the kernel enforces boundaries defined elsewhere.

The Diminishing Role of the Underlying Host OS

None of this means the host operating system is irrelevant. Linux still manages cgroups and namespaces. Windows enforces kernel patch protection and virtualization-based security. macOS controls entitlements and code signing. Yet from the application’s perspective, the browser often feels like the real platform.

Benchmarks in 2026 highlight how OS-specific optimizations now flow through browser engines rather than standalone apps. Safari’s tight macOS integration and Chrome’s GPU tuning on Windows show that performance differences emerge from how deeply browsers hook into kernel services. The OS provides primitives; the browser assembles the policy.

For administrators and hobbyists, this shifts where meaningful experimentation happens. Tuning the scheduler or switching filesystems still matters, but adjusting browser flags, sandbox modes, and rendering backends can produce equally visible results. Today’s browser has not replaced the kernel, yet it increasingly defines how users experience it, acting as a policy engine layered directly atop system calls.
With the original release of Windows 8, Microsoft also began requiring Secure Boot on certified hardware. That was almost 15 years ago, and the original 2011 Secure Boot certificates are about to expire. If these certificates are not replaced with new ones, Secure Boot will cease to function – your machine will still boot and operate, but the benefits of Secure Boot are mostly gone, and as newer vulnerabilities are discovered, systems without updated Secure Boot certificates will be increasingly exposed.

Microsoft has already been rolling out new certificates through Windows updates, but only for users of supported versions of Windows, which means Windows 11. If you’re using Windows 10 without the Extended Security Updates, you won’t be getting the new certificates through Windows Update. Even if you use Windows 11, you may need a UEFI update from your laptop or motherboard OEM, assuming they still support your device. For Linux users using Secure Boot, you’re probably covered by fwupd, which will deliver the new certificates through your system’s update program, like KDE’s Discover. Of course, you can also run fwupd manually in the terminal, if you’d like. For everyone else not using Secure Boot, none of this matters, and you’re going to be just fine.

I honestly doubt there will be much fallout from this updating process, but there are always bound to be a few people who fall between the cracks. All we can do is hope whoever is responsible for Secure Boot at Microsoft hasn’t started slopcoding yet.
What happens when you slopcode a bunch of bloat into your basic text editor? Well, you add a remote code execution vulnerability to notepad.exe.

Improper neutralization of special elements used in a command (‘command injection’) in Windows Notepad App allows an unauthorized attacker to execute code over a network. An attacker could trick a user into clicking a malicious link inside a Markdown file opened in Notepad, causing the application to launch unverified protocols that load and execute remote files.
↫ CVE-2026-20841

I don’t know how many more obvious examples one needs to understand that Microsoft simply does not care, in any way, shape, or form, about Windows. A lot of people seem very hesitant to accept it, but with even LinkedIn generating more revenue for Microsoft than Windows does, the writing is on the wall. Anyway, the fix has been released through the Microsoft Store.