Every modern iOS, macOS, watchOS, and tvOS application uses Asset Catalogs to manage images, colors, icons, and other resources. When you build an app with Xcode, your .xcassets folders are compiled into binary .car files that ship with your application. Despite being a fundamental part of every Apple app, there is little to no official documentation about this file format. In this post, I’ll walk through the process of reverse engineering the .car file format, explain its internal structures, and show how to parse these files programmatically. This knowledge could be useful for security research and for building developer tools that do not rely on Xcode or Apple’s proprietary tools. ↫ ordinal0 at dbg.re Not only did ordinal0 reverse-engineer the file format, they also developed their own parser and compiler for .car files that don’t require any of Apple’s tools.
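As a taste of what such a parser has to deal with: .car files are built on Apple’s undocumented BOM (Bill of Materials) container format. The sketch below parses the BOM header, assuming the layout documented by community reverse-engineering efforts (e.g. the bomutils project); the field names are the conventional ones used in those efforts, not Apple’s, and real files may deviate.

```python
# Hedged sketch: read the BOM container header that a .car file starts with.
# Layout per community reverse engineering; not an official specification.
import struct

def parse_bom_header(data: bytes) -> dict:
    # Every BOM-based file (including Assets.car) starts with this magic.
    if len(data) < 32 or data[:8] != b"BOMStore":
        raise ValueError("not a BOM/.car file")
    # All header integers are big-endian 32-bit values.
    version, n_blocks, idx_off, idx_len, vars_off, vars_len = \
        struct.unpack_from(">6I", data, 8)
    return {
        "version": version,
        "block_count": n_blocks,     # number of storage blocks in the file
        "index_offset": idx_off,     # where the block index table lives
        "index_length": idx_len,
        "vars_offset": vars_off,     # named "variables" pointing at blocks
        "vars_length": vars_len,
    }

# Synthetic header for demonstration only (not a real .car file):
fake = b"BOMStore" + struct.pack(">6I", 1, 42, 512, 2048, 64, 128)
print(parse_bom_header(fake)["block_count"])  # → 42
```

From the index and variables tables located by this header, the rest of the file (the CAR-specific trees holding renditions, facets, and metadata) can then be walked.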
Within the major operating system of its day, on popular hardware of its day, ran the utterly dominant relational database software of its day. PC Magazine, February 1984, said, “Independent industry watchers estimate that dBASE II enjoys 70 percent of the market for microcomputer database managers.” Similar to past subjects HyperCard and Scala Multimedia, Wayne Ratliff’s dBASE II was an industry unto itself, not just for data management, but for programmability, a legacy which lives on today as xBase. Written in assembly, dBASE II squeezed maximum performance out of minimal hardware specs. This is my first time using both CP/M and dBASE. Let’s see what made this such a power couple. ↫ Christopher Drum If you’ve ever wanted to run a company using CP/M – and who doesn’t – this article is as good a starting point as any.
For decades, the concept of the “personal computer” implied a machine that was largely self-contained, answering only to its owner. In the early days of general-purpose computing, data stayed local, and the operating system acted as a silent facilitator of tasks rather than an active participant in a global data economy. However, as we move deeper into 2026, the architectural reality of modern computing has shifted fundamentally. The device on your desk or in your pocket is no longer a solitary tool; it is a node in a vast, interconnected mesh where silence is increasingly treated as a malfunction.

Telemetry Saturation in Modern Commercial Operating Systems

The volume of diagnostic data leaving modern endpoints is staggering. What began as simple crash reporting—sending a stack trace when an application failed—has evolved into comprehensive behavioral analysis. Operating system vendors now argue that real-time telemetry is essential for maintaining security postures and optimizing performance in a fragmented hardware ecosystem. This data collection is rarely optional in the true sense; while toggle switches exist in settings menus, the underlying architectural dependencies often require data exchange for core system services to operate. This trend is driven by a booming market for network intelligence and analytics. For OS developers, this is the fuel for predictive maintenance and AI-driven feature sets. However, for the end-user, it represents a permanent tether to the vendor’s infrastructure. The result is an environment where “offline” computing is treated as a degraded state, and the OS constantly phones home to validate its own existence and the user’s actions.

Rising Demand for Minimal-Data Software Services

Despite the tightening grip of OS-level surveillance, or perhaps because of it, there is a growing counter-movement demanding software that respects digital silence. This demand creates a bifurcated market.
On one side, enterprise environments and general consumers accept deep integration and identity verification as the cost of convenience. On the other, a robust subculture of developers and power users is actively seeking out platforms and services that reject the “verify everything” philosophy. This friction drives users toward services that explicitly decouple activity from identity. For instance, technically savvy users exploring new no KYC casinos often do so not just for the entertainment value, but to support ecosystems that bypass invasive identity verification protocols. This behavior mirrors the adoption of non-systemd Linux distributions or privacy-focused mobile ROMs; it is a deliberate technical choice to minimize the attack surface of one’s personal identity. The persistence of these “grey” markets demonstrates that a significant portion of the user base views mandatory identification as a bug, not a feature, and is willing to migrate to alternative platforms to avoid it.

The Conflict Between TPM Requirements and Anonymity

The tension between security and privacy is most visible at the hardware level, specifically regarding Trusted Platform Modules (TPMs) and hardware-based attestation. Modern security architectures rely on the device proving its integrity to remote servers. This “Remote Attestation” allows a service to verify that a computer is running a signed, uncompromised kernel before granting access to resources. While this effectively mitigates rootkits and cheating in video games, it also creates a unique digital fingerprint for every machine, effectively eliminating the possibility of hardware anonymity. When an operating system requires a TPM handshake to boot or access specific services, the user’s physical hardware becomes inextricably linked to their digital identity.
In the United States, Windows maintained a dominant position with 32.95% of the operating system market share in August 2025, illustrating the massive reach of proprietary telemetry pipelines. With such market dominance, the standards set by these major vendors effectively become the laws of the internet. If the dominant OS architecture mandates hardware-backed identity, the ability to remain truly anonymous on the web degrades significantly, pushing privacy-conscious users toward increasingly niche hardware solutions.

Assessing the Viability of Privacy-First Computing

As we look toward the latter half of the decade, the viability of privacy-first computing faces significant economic and technical headwinds. The commercial incentives for data collection are simply too high for major vendors to ignore. Global operating systems market valuation is expected to climb from $48.5 billion in 2025 to $49.35 billion in 2026, driven largely by cloud integration and connected device ecosystems. This growth relies on the seamless integration of services, which in turn relies on knowing exactly who the user is and what they are doing at all times. However, the era of anonymous computing is not necessarily over; it is merely retreating into the realm of specialized knowledge. General-purpose anonymity—the kind that existed by default in the 1990s—is gone. In its place, we have a landscape where privacy is an active pursuit requiring specific hardware choices, such as RISC-V architectures or pre-ME (Management Engine) Intel chips, and open-source software stacks. For the skilled system administrator or developer, the tools to build a silent machine still exist, but maintaining that silence against the cacophony of the modern web requires constant vigilance and technical expertise.
In the age of continuous connectivity, frantic software development, and constant notifications, our mental operating systems tend to run on automatic mode. Computer programmers, designers, and technology-loving minds are constantly tuning their machines; however, few consider the health of the single most important processing unit: the brain. For those firmly rooted in digital culture, the pressure to stay productive, informed, and connected can gradually build into something more significant: digital fatigue. The pressures of modern technology can stretch your cognitive capacity to the maximum, whether you are maintaining open-source projects late at night or carrying heavy cognitive loads through long coding sessions. Seeing a professional therapist in Austin is no longer just a matter of crisis intervention. For many workers in today’s fast-moving digital world, mental health support is now part of performance, survival, and sustainability in a high-pressure work environment.

1. Burnout and the Open-Source Cycle: A Developer’s Perspective

Technology has a way of glorifying innovation, teamwork, and output. But behind most big software breakthroughs is an ugly truth: burnout. Open-source developers and contributors commonly work long hours, often unpaid, to maintain critical infrastructure used by millions of people across the globe. Lately, the issue of maintainer fatigue has been framed in terms of the emotional and cognitive loads carried by contributors to complex systems.

The Silent Battery of Cognitive Overload

Clinically speaking, the brain is affected by prolonged cognitive load. Under constant stress, this can degrade the prefrontal cortex—the brain’s decision-making, problem-solving, and attention center.
Recognizing Indicators of Cognitive Strain

This usually produces symptoms developers are immediately aware of:

The symptoms are closely related to digital fatigue, an increasing burden on professionals whose careers depend on high standards of concentration. Preventing burnout does not just mean working fewer hours. It requires deliberate measures to control stress, preserve brain capacity, and impose boundaries in a world where everyone can work anywhere.

2. Debugging the Mind: Therapy as a System Update

Clean code and effective architecture are valued by many developers. In many respects, mental health care works the same way. Therapy may be viewed as a methodical approach to examining and streamlining thought patterns—in other words, debugging the programs that drive emotional reactions and behavior.

Cognitive Behavioral Therapy: Refactoring Your Thought Code

Cognitive Behavioral Therapy (CBT) is one of the most widely used evidence-based practices. Technically speaking, CBT is refactoring bad code. Rather than running the same automatic responses in a loop, therapy helps one identify maladaptive loops of thought and substitute healthier, more adaptive ways of thinking.

Functional Upgrades Through CBT

This can help individuals:

EMDR: Processing Legacy Data and Stored Trauma

EMDR (Eye Movement Desensitization and Reprocessing) is another strong treatment approach applied in trauma-informed care. If CBT is rewriting code, EMDR is more like processing legacy data that was stored poorly in the system. Unprocessed trauma affects how the brain reacts to stress, and EMDR helps the brain reprocess these memories so they no longer trigger excessive responses.

3. Actionable Plans for Continuous System Maintenance

Even the most efficient operating systems need periodic maintenance. The same is true of mental health.
The following are practical strategies for tech professionals who spend long hours with digital systems.

Practical Tech-Health Integration Tips

1. Apply the 50/10 Deep Work Rule

2. Strategies to Guard Your Mental Horsepower

Mental capacity is limited, much like RAM. Try to minimize extraneous background processes by:

3. Airplane Mode as a Mental Reset

Flights are not the only place for airplane mode. It can also serve as a psychological boundary. Switching off connectivity gives the brain time to recover from constant stimuli and helps avoid digital fatigue.

4. Reconnecting with the Physical Environment

The outdoor culture of Austin provides a strong contrast to workdays dominated by screens. Stress hormones can be regulated by activities such as:

5. Creating a Screen-Life Balance Routine

The goal of digital wellness is not to get rid of technology but to live in balance with it. Consider:

4. Enhancing Mental Health with Evidence-Based Professional Care

Personal habits can maintain balance; professional assistance adds insight and long-term resilience. The evidence-based practices licensed therapists use are aimed at treating anxiety, depression, trauma, and stress-related illnesses.

Specialized Areas of Trauma-Informed Counseling

A trauma-informed approach ensures that therapy accounts for the effects of the past on current emotional patterns. Trauma-informed therapists usually specialize in:

5. Teletherapy and Modern Accessibility in Texas

Technology has also changed the way mental health services are delivered. Busy workers do not always have time to see clinicians face-to-face. Fortunately, mental health support has become more accessible in Texas thanks to teletherapy.

Advantages of Remote Clinical Sessions

Through telehealth sessions, people are able to:

6. Assembling a Sustainable Digital Health Habit

Mental health is not a one-time update but continuous system optimization. Establishing a sustainable digital wellness practice can draw on several levels of support:

Conclusion: The Final Hardware Upgrade

While technology evolves swiftly, the human mind remains the most sophisticated device we depend on each day. Maintaining that system requires care, recognition, and, now and again, professional guidance. Digital fatigue, burnout, and cognitive overload are increasingly common among professionals who spend long hours interacting with technology. But these challenges are manageable when approached with the right tools, habits, and support systems. If the burden of the digital world is dragging down your processing speed, you can call a special
In my fundraiser pitch published last Monday, one of the things I highlighted as a reason to contribute to OSNews and ensure its continued operation stated that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked: Why do I care if you use AI? ↫ A comment posted on OSNews A few days ago, Scott Shambaugh rejected a code change request submitted to the popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece publicly targeting Shambaugh for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind. The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously void of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here? RAM prices went up for this. This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker?
These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article. In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made-up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh: This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed. ↫ Scott Shambaugh A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events. On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said. Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here. ↫ Ken Fisher at Ars Technica In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, and to perform similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. 
Introduce a confabulator in the research process, and you risk tainting the entire output of your writing. That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.
Have you ever wanted to read the original design documents underlying the Windows NT operating system? This binder contains the original design specifications for “NT OS/2,” an operating system designed by Microsoft that developed into Windows NT. In the late 1980s, Microsoft’s 16-bit operating system, Windows, gained popularity, prompting IBM and Microsoft to end their OS/2 development partnership. Although Windows 3.0 proved to be successful, Microsoft wished to continue developing a 32-bit operating system completely unrelated to IBM’s OS/2 architecture. To head the redesign project, Microsoft hired David Cutler and others away from Digital Equipment Corporation (DEC). Unlike Windows 3.x and its successor, Windows 95, NT’s technology provided better network support, making it the preferred Windows environment for businesses. These two product lines continued development as separate entities until they were merged with the release of Windows XP in 2001. ↫ Object listing at the Smithsonian The actual binder is housed in the Smithsonian, although it’s not currently on display. Luckily for us, a collection of Word and PDF files encompassing the entire book is available online for your perusal. Reading these documents will allow you to peel back over three decades of Microsoft’s terrible stewardship of Windows NT layer by layer, eventually ending up at the original design and intent as laid out by Dave Cutler, Helen Custer, Daryl E. Havens, Jim Kelly, Edwin Hoogerbeets, Gary D. Kimura, Chuck Lenzmeier, Mark Lucovsky, Tom Miller, Michael J. O’Leary, Lou Perazzoli, Steven D. Rowe, David Treadwell, Steven R. Wood, and more. A fantastic time capsule we should be thrilled to still have access to.
There are the two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the west, but might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time). Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported for a pretty standard desktop Linux experience. Performance of this chip is rather mid, at best. The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W. So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400). ↫ Wesley Moore I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.
If you look at the table of contents for my book, Other Networks: A Radical Technology Sourcebook, you’ll see that entries on networks before/outside the internet are arranged first by underlying infrastructure and then chronologically. You’ll also notice that within the section on wired networks, there are two sub-sections: one for electrical wire and another for barbed wire. Even though the barbed wire section is quite short, it was one of the most fascinating to research and write about – mostly because the history of using barbed wire to communicate is surprisingly long and almost entirely undocumented, even though barbed wire fence phones in particular were an essential part of early- to mid-twentieth century rural life in many parts of the U.S. and Canada! ↫ Lori Emerson I had no idea this used to be a thing, but it obviously makes a ton of sense. If you can have a conversation by stringing a few tin cans together, you can obviously do something similar across metal barbed wire. There’s something poetic about using one of mankind’s most dividing inventions to communicate, and thus bring people closer together.
January was a busy month for Haiku, with their monthly report listing a metric ton of smaller fixes, changes, and improvements. Perusing the list, a few things stand out to me, most notably continued work on improving Haiku’s touchpad support. The remainder of samuelrp84’s patchset implementing new touchpad functionality was merged, including two-finger scrolling, edge motion, software button areas, and click finger support; and on the hardware side, driver support for Elantech “version 4” touchpads, with experimental code for versions 1, 2, and 3. (Version 2, at least, seems to be incomplete and had to be disabled for the time being.) ↫ Haiku’s January 2026 activity report On a related note, the still-disabled I2C-HID saw a number of fixes in January, and the rtl8125 driver has been synced up with OpenBSD. I also like the changes to kernel_version, which now no longer returns some internal number like BeOS used to do, instead returning B_HAIKU_VERSION; the uname command was changed accordingly to use this new information. There’s some small POSIX compliance fixes, a bunch of work was done on unit tests, and a ton more.
We often lament Microsoft’s terrible stewardship of its Windows operating system, but that doesn’t mean the company never does anything right. In a blog post detailing changes and improvements coming to the Microsoft Store, the company announced something Windows users might actually like? A new command-line interface for the Microsoft Store brings app discovery, installation and update management directly to your terminal. This provides developers and users with a new way to discover and install Store apps, without needing the GUI. The Store CLI is available only on devices where Microsoft Store is enabled. ↫ Giorgio Sardo at the Windows Blogs Of course, this new command-line frontend to the Microsoft Store comes with commands to install, update, and search for applications in the store, but sadly, it doesn’t seem to come with an actual TUI for browsing and discovery, which is a shame. I sometimes find it difficult to use dnf to find applications, as it’s not always obvious which search terms to use, which exact spelling packagers chose, or which words they used in the description. If package managers had a TUI that enabled browsing for applications instead of merely searching for them, the process of using the command line to find and install applications would be much nicer. Arch has a third-party TUI called pacseek for its package manager, and it looks absolutely amazing. I’ve run into a rudimentary dnf TUI called dnfseek, but it’s definitely not as well-rounded as pacseek, and it also hasn’t seen any development since its initial release. I couldn’t find anything for apt, but there’s always aptitude, which uses ncurses and thus fulfills a similar role.
To really differentiate this new Microsoft Store command-line tool from winget, the company could’ve built a proper TUI, but instead it seems to just be winget with nicer-formatted output, limited to the Microsoft Store. Nice, I guess.
The team behind Tyr started 2025 with little to show in our quest to produce a Rust GPU driver for Arm Mali hardware, and by the end of the year, we were able to play SuperTuxKart (a 3D open-source racing game) at the Linux Plumbers Conference (LPC). Our prototype was a joint effort between Arm, Collabora, and Google; it ran well for the duration of the event, and the performance was more than adequate for players. Thankfully, we picked up steam at precisely the right moment: Dave Airlie just announced in the Maintainers Summit that the DRM subsystem is only “about a year away” from disallowing new drivers written in C and requiring the use of Rust. Now it is time to lay out a possible roadmap for 2026 in order to upstream all of this work. ↫ Daniel Almeida at LWN.net A very detailed look at what the team behind Tyr is trying to achieve with their Rust GPU driver for Arm Mali chips.
For decades, the operating system kernel handled process scheduling, memory isolation, hardware abstraction, and resource allocation. Applications sat neatly on top, consuming services but rarely redefining them. That boundary is no longer as clear. Today’s browsers ship with their own process managers, JIT compilers, sandboxing layers, GPU pipelines, and even virtualized runtimes through WebAssembly. For many users, the browser is the main application environment. The question is no longer rhetorical: are browsers starting to behave like miniature operating systems?

Abstraction Layers And The Rise Of WebAssembly

WebAssembly changed the browser from a document renderer into a portable execution environment. It allows near-native code to run inside a sandbox, abstracting away the underlying hardware and much of the host OS. In practice, this means the browser mediates between application logic and CPU, memory, and graphics resources. That mediation is increasingly optimized for specific platforms. Oasis and Safari leverage platform-specific OS hooks to use 60% less RAM than Chrome when running on their native operating systems. Those gains do not come from web standards alone; they depend on tight integration with kernel services, graphics drivers, and memory subsystems. As a result, the browser engine has become a portability layer comparable to what POSIX once offered. Developers target Chromium or WebKit, and the browser translates that intent into system calls, GPU queues, and thread pools. The abstraction is deep enough that many applications no longer need to care which OS sits beneath.

Latency Management In Real-Time Web Applications

Real-time collaboration tools, cloud IDEs, and browser-based games have pushed latency management into the browser core. Task scheduling, priority hints, and background throttling now resemble lightweight kernel schedulers. When dozens of tabs compete for CPU time, the browser arbitrates.
Performance differences show how much this layer matters. Brave loads web pages 21% faster, uses 9% less CPU, and consumes 4% less battery on average than its main competitors due to native ad and tracker blocking. Those savings are essentially policy decisions about resource allocation, implemented above the kernel but below the application. The same infrastructure powers high-demand workloads. Streaming platforms, complex dashboards, and even high-traffic environments such as online gambling platforms depend on predictable frame timing and low input latency inside the browser sandbox. For instance, a player placing a live bet or spinning a slot at online casinos cannot experience input delays or dropped frames without it affecting the interaction itself. The browser must process animations, user clicks, network responses, and security checks almost simultaneously, ensuring results appear instantly and consistently across thousands of simultaneous sessions. This means the browser’s event loop and rendering pipeline function like a specialized runtime scheduler. They coordinate animation frames, WebSocket traffic, and UI updates so that gameplay remains smooth even when the page is performing constant background communication with remote servers. For OS enthusiasts, this is notable. The kernel still schedules threads, but the browser increasingly decides which threads exist, when they wake, and how aggressively they consume resources.

Memory Isolation Differences Between Tabs And Processes

Security concerns accelerated this architectural shift. Chromium’s Site Isolation model assigns separate OS processes to different sites, reducing cross-origin attacks. That approach mirrors traditional multi-process isolation strategies in Unix-like systems. There is a cost. Chrome’s “Site Isolation” feature increases memory usage by an estimated 10–20% to enhance security through dedicated OS processes per website.
The browser chooses stronger isolation boundaries and accepts higher RAM pressure, effectively trading kernel-level efficiency for application-level containment. Tab isolation also obscures responsibility. The kernel sees multiple browser processes, but it is the browser that defines their lifecycles, privileges, and communication channels. Shared memory, IPC mechanisms, and sandbox rules are orchestrated by the engine, not directly by the OS administrator. For developers used to thinking in terms of system daemons and user processes, this inversion is striking. The browser becomes a supervisor, while the kernel enforces boundaries defined elsewhere.

The Diminishing Role Of The Underlying Host OS

None of this means the host operating system is irrelevant. Linux still manages cgroups and namespaces. Windows enforces kernel patch protection and virtualization-based security. macOS controls entitlements and code signing. Yet from the application’s perspective, the browser often feels like the real platform. Benchmarks in 2026 highlight how OS-specific optimizations now flow through browser engines rather than standalone apps. Safari’s tight macOS integration and Chrome’s GPU tuning on Windows show that performance differences emerge from how deeply browsers hook into kernel services. The OS provides primitives; the browser assembles the policy. For administrators and hobbyists, this shifts where meaningful experimentation happens. Tuning the scheduler or switching filesystems still matters, but adjusting browser flags, sandbox modes, and rendering backends can produce equally visible results. Today’s browser has not replaced the kernel, yet it increasingly defines how users experience it, acting as a policy engine layered directly atop system calls.
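The process-per-site isolation discussed above can be illustrated with a toy sketch. This is hypothetical code, not how Chromium actually spawns renderers: each “site” gets its own OS process with a private address space, so data held by one renderer never exists in another’s memory.

```python
# Toy model of site isolation: one OS process per "site".
# Illustration of the principle only, not Chromium's implementation.
import subprocess
import sys

def render_site(site: str, secret: str) -> str:
    """Run a throwaway 'renderer' process that only ever sees its own data."""
    child_src = (
        "import os, sys\n"
        "site, secret = sys.argv[1], sys.argv[2]\n"
        "print(f'pid={os.getpid()} site={site} secret={secret}')"
    )
    result = subprocess.run(
        [sys.executable, "-c", child_src, site, secret],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

a = render_site("bank.example", "session-token-A")
b = render_site("ads.example", "session-token-B")
print(a)
print(b)
# The two renderers report different PIDs: separate address spaces, so a
# memory-disclosure bug in one cannot leak the other's secret.
```

Real site isolation layers shared-memory IPC, sandbox policies, and lifecycle management on top of this, which is exactly the supervisory role the article describes the browser taking on.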
With the original release of Windows 8 in 2012, Microsoft also began enforcing Secure Boot, and the original Secure Boot certificates, issued in 2011, are now, some 15 years later, about to expire. If these certificates are not replaced with new ones, Secure Boot will cease to function properly – your machine will still boot and operate, but the benefits of Secure Boot are mostly gone, and as newer vulnerabilities are discovered, systems without updated Secure Boot certificates will be increasingly exposed. Microsoft has already been rolling out new certificates through Windows updates, but only for users of supported versions of Windows, which means Windows 11. If you’re using Windows 10 without the Extended Security Updates, you won’t be getting the new certificates through Windows Update. Even if you use Windows 11, you may need a UEFI update from your laptop or motherboard OEM, assuming they still support your device. Linux users who rely on Secure Boot are probably covered by fwupd, which will update the certificates through your system’s regular update mechanism, like KDE’s Discover. Of course, you can also use fwupd manually in the terminal, if you’d like. For everyone else not using Secure Boot, none of this will matter and you’re going to be just fine. I honestly doubt there will be much fallout from this updating process, but there’s always bound to be a few people who fall between the cracks. All we can do is hope whoever is responsible for Secure Boot at Microsoft hasn’t started slopcoding yet.
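For the terminal route, the usual fwupd workflow looks roughly like this – a sketch only, since the exact devices listed (and whether certificate updates are offered at all) depend on your machine and firmware:

```shell
# Refresh update metadata from the LVFS, then list and apply
# any available firmware updates.
fwupdmgr refresh
fwupdmgr get-updates
fwupdmgr update

# The Secure Boot signature/revocation databases appear as devices
# (e.g. "UEFI dbx"); names vary per machine.
fwupdmgr get-devices
```

Graphical frontends like KDE’s Discover drive the same fwupd daemon underneath, so either route ends up in the same place.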
What happens when you slopcode a bunch of bloat into your basic text editor? Well, you add a remote code execution vulnerability to notepad.exe. Improper neutralization of special elements used in a command (‘command injection’) in Windows Notepad App allows an unauthorized attacker to execute code over a network. An attacker could trick a user into clicking a malicious link inside a Markdown file opened in Notepad, causing the application to launch unverified protocols that load and execute remote files. ↫ CVE-2026-20841 I don’t know how many more obvious examples one needs to understand that Microsoft simply does not care, in any way, shape, or form, about Windows. A lot of people seem very hesitant to accept that, but with even LinkedIn generating more revenue for Microsoft than Windows, the writing is on the wall. Anyway, the fix has been released through the Microsoft Store.
If you’re a developer and use KDE, you’re going to be interested in a new feature KDE is working on for KDE Linux. In my last post, I laid out the vision for Kapsule—a container-based extensibility layer for KDE Linux built on top of Incus. The pitch was simple: give users real, persistent development environments without compromising the immutable base system. At the time, it was a functional proof of concept living in my personal namespace. Well, things have moved fast. ↫ Herp De Derp Not only is Kapsule now available in KDE Linux, it’s also properly integrated with Konsole: you can launch Kapsule containers right from Konsole’s new tab menu for even easier access. They’re also working on allowing users to launch graphical applications from the containers and have them appear in the host desktop environment, and they intend to make the level of integration with the host more configurable so developers can better tailor their containers to their needs.
Another month, another Redox progress report. January turned out to be a big month for the Rust-based general-purpose operating system, as they’ve got cargo and rustc working on Redox. Cargo and rustc are now working on Redox! Thanks to Anhad Singh and his southern-hemisphere Redox Summer of Code project, we are now able to compile your favorite Rust CLI and TUI programs on Redox. Compilers are often one of the most challenging things for a new operating system to support, because of the intensive and somewhat scattershot use of resources. ↫ Ribbon and Ron Williams That’s not all for January, though. An initial capability-based security infrastructure has been implemented for granular permissions, SSH support has been improved and now works properly for remoting into Redox sessions, and USB input latency has been massively reduced. You can now also add, remove, and change boot parameters in a new text editing environment in the bootloader, and the login manager now has power and keyboard layout menus. January also saw the first commit made entirely from within Redox, which is pretty neat. Of course, there’s much more, as well as the usual slew of kernel, relibc, and application bugfixes and small changes.
I’m currently building an 80386-compatible core in SystemVerilog, driven by the original Intel microcode extracted from real 386 silicon. Real mode is now operational in simulation, with more than 10,000 single-instruction test cases passing successfully, and work on protected-mode features is in progress. In the course of this work, corners of the 386 microcode and silicon have been examined in detail; this series documents the resulting findings. In the previous post, we looked at multiplication and division — iterative algorithms that process one bit per cycle. Shifts and rotates are a different story: the 386 has a dedicated barrel shifter that completes an arbitrary multi-bit shift in a single cycle. What’s interesting is how the microcode makes one piece of hardware serve all shift and rotate variants — and how the complex rotate-through-carry instructions are handled. ↫ nand2mario I understood some of this.
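To make the “one piece of hardware, many variants” idea concrete, here is a hypothetical software sketch – my own illustration, not the actual 386 microcode or netlist. A single log-stage rotator handles any rotate amount in one pass through five conditional stages (16, 8, 4, 2, 1), and a plain shift falls out of the same rotate result plus a mask:

```python
# Hypothetical illustration of a 386-style 32-bit barrel shifter:
# one log-stage rotator serves every rotate amount, and SHL is
# derived from the same rotate result by masking wrapped-around bits.
MASK32 = 0xFFFFFFFF

def rotate_left(value, amount):
    """Log-stage rotator: five conditional stages of 16, 8, 4, 2, 1."""
    amount &= 31  # the 386 masks shift counts to 5 bits
    for stage in (16, 8, 4, 2, 1):
        if amount & stage:
            value = ((value << stage) | (value >> (32 - stage))) & MASK32
    return value

def shift_left(value, amount):
    """SHL reuses the rotator, then masks off the bits that wrapped."""
    amount &= 31
    rotated = rotate_left(value, amount)
    return rotated & (MASK32 << amount) & MASK32

# Rotate-through-carry (RCL/RCR) is the awkward case the article
# discusses: it effectively rotates a 33-bit value (CF plus 32 data
# bits), which a 32-bit rotator cannot produce in one clean pass.

print(hex(rotate_left(0x80000001, 1)))  # → 0x3 (top bit wraps around)
print(hex(shift_left(0x80000001, 1)))   # → 0x2 (top bit falls off)
```

The log-stage structure is why a hardware barrel shifter finishes any shift amount in a single cycle: the five stages are fixed wiring, and the shift count simply selects which stages pass data straight through.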
For me, vim is a combination of genuine improvements in vi’s core editing behavior (cf), frustrating (to me) bits of trying too hard to be smart (which I mostly disable when I run across them), and an extension mechanism I ignore but people use to make vim into a superintelligent editor with things like LSP integrations. Some of the improvements and additions to vi’s core editing may be things that Bill Joy either didn’t think of or didn’t think were important enough. However, I feel strongly that some or even many of the omitted features and differences are a product of the limited environments vi had to operate in. The poster child for this is vi’s support of only a single level of undo, which drastically constrains the potential memory requirements (and implementation complexity) of undo, especially since a single editing operation in vi can make sweeping changes across a large file (consider a whole-file ‘:…s/../../’ substitution, for example). ↫ Chris Siebenmann I have only very limited needs when it comes to command-line text editors, and as such, I absolutely swear by the simplicity of nano. In other words, I’m probably not the right person to dive into the editor debate that’s been raging for decades, but reading Siebenmann’s points I can’t help but agree. In this day and age, defaulting to an editor that has only one level of undo is insanity, and I can’t imagine doing the kind of complex work people who use command-line editors do while being limited to just one window. As for the debate about operating systems that symlink the vi command to vim or a similar improved variant of vi, I feel like that’s the wrong thing to do. Much like I absolutely despise how macOS hides its UNIX-y file system structure from the GUI, leading to bizarre ls results in the terminal, I don’t think you should be tricking users. If a user enters vi, it should launch vi, and not something that kind of looks like vi but isn’t. Computers shouldn’t be lying to users. 
If they don’t want their users to be using vi, they shouldn’t be installing vi in the first place.
Update: we’ve already hit the €5000 goal, in a little over 24 hours. Considering I thought this would take weeks – assuming we’d hit the goal at all – I’m a bit overwhelmed with all the love and support. Thank you so, so much. Since people are still donating, I upped the goal to €7500 to give people something to donate to. You people are wild. Amazing. It’s time for an OSNews fundraiser! This time, it’s unplanned due to a financial emergency after our car unexpectedly had to be scrapped (you can find more details below). If you want to support one of the few independent technology news websites left, this is your chance. OSNews is entirely supported by you, our readers, so go to our Ko-Fi and donate to our emergency fundraiser today! Why support OSNews? In short, we are truly independent. After turning off our ads, our Patreons and donors are our sole source of income, and since I know many of you prefer the occasional individual donation over recurring Patreon ones, I run a fundraiser a few times a year to rally the troops, so to speak. This particular fundraiser wasn’t planned; however, given the circumstances described below, several readers have urged me to run one now. We’re incredibly grateful for even having the opportunity to do something like this, and as always, I’d like to stress that OSNews will never be paywalled, and that access to our website will never be predicated on your financial support. You can ignore all of this and continue on reading the site as usual. What’s going on? Sadly, and unexpectedly, we’ve had to scrap our car. Our 2007 Hyundai Santa Fe did not survive this Arctic winter, as the nearly two decades in the biting cold have taken a toll on a long list of components and parts – it would no longer start. After consulting an expert, we determined that repairs would’ve been too expensive to make financial sense for such an old vehicle. Sometimes, you have to take the loss lest you throw money down a pit. 
An unreliable car in an Arctic climate is a really bad idea, since getting stranded on a back road somewhere when it’s -30°C (or colder) with two toddlers is not going to be a fun time. On top of that, my wife uses our car to commute to work, and while using the bus is going to be fine for a little while, her job in home care for the very elderly and recovering alcoholics is incredibly stressful and intensive. Dealing with bus schedules and wait times at such low temperatures is not exactly compatible with her job. Since she’s just recovering from a doctor-mandated rest period – very common in her line of work – her income has taken a hit. Taking professional care of people with severe dementia or other old-age related conditions is a thankless and underpaid job, and it’s no surprise those working in this profession often require mandated rest (and thus a temporary pay cut). And so, urged on by readers on Mastodon, I’m doing an OSNews fundraiser to help us pay for the “new” car. Of course, we’re looking for a used car, not a new one, and based on our needs we’ve set a budget of around €10,000. This should allow us to buy something like a used Mazda 6 or Volvo V60 from around 2014-2015, or something similar in size and age, with a reasonable petrol engine (an EV is well out of our price range). We consider this the sweet spot for safety features, size, age, longevity, and reliability. We’ve got some savings, but most of the purchase price will have to come in the form of a car loan. We’ve already made some changes to our monthly expenses to cover part of the monthly repayments, including a lucky break where our daycare expenses will be going down considerably next month. Based on this, I’ve set the fundraising goal at €5000. If we manage to hit that – and the last few times we hit our goals quite fast – it won’t cover the entire purchase price, but it will cut down on the amount we need to loan considerably. 
I’m feeling a little apprehensive about all of this, since this isn’t really an OSNews-related expense I can easily get some content out of. However, I’m entirely open to suggestions about how I could get some OSNews content out of this – perhaps buying and installing one of those Android head units with a large display? They make them tailored for almost every vehicle at low prices on AliExpress, and the installation process and user experience might be something interesting to write about, as it’s potentially a great way to add some modern features to an older car. Feel free to make any suggestions. I’m also open to other crazy ideas. If you happen to work at an automaker and need some testing done in an Arctic environment – including ice roads – I’m open to ideas. A few random notes Since about half of our audience hails from the United States, I figured I’d make a few notes about car pricing in Europe, and in Arctic Sweden in particular. Cars are definitely more expensive here in Europe, doubly so in the sparsely populated area where we live (low supply leads to higher prices). Buying a brand new car is entirely out of the question due to pricing, and leasing is also far too expensive (well over €500/month for even a basic, small car). Used electric cars are still well out of our budget as well, and since we don’t have our own driveway, we wouldn’t be able to charge at home anyway. Opting to forego a car entirely is sadly not an option either. With two small children, the Arctic climate, the remoteness, my wife’s stressful job and commute, and long distances to basic amenities, we can’t “go Dutch” and live car-free.
About a year ago I mentioned that I had rediscovered the Dillo Web Browser. Unlike some of my other hobbies, endeavours, and interests, my appreciation for Dillo has not wavered. I only have a moment to gush today, so I’ll cut right to it. Dillo has been plugging along nicely (see the Git forge) and adding little features. Features that even I, a guy with a blog, can put to use. Here are a few of my favourites. ↫ Bobby Hiltz If you’re looking for a more minimalist, less distracting browser experience that gives you a ton of interesting UNIXy control, you should really consider giving Dillo a try.