After researching the first commercial transistor computer, the British Metrovick 950, Nina Kalinina wrote an emulator, simple assembler, and some additional “toys” (her word) so we can enjoy this machine today. First, what, exactly, is the Metrovick 950? Metrovick 950, the first commercial transistor computer, is an early British computer, released in 1956. It is a direct descendant of the Manchester Baby (1948), the first electronic stored-program computer ever. ↫ Nina Kalinina The Baby, formally known as Small-Scale Experimental Machine, was a foundation for the Manchester Mark I (1949). Mark I found commercial success as the Ferranti Mark I. A few years later, Manchester University built a variant of Mark I that used magnetic drum memory instead of Williams tubes and transistors instead of valves. This computer was called the Manchester Transistor Computer (1955). Engineers from Metropolitan-Vickers released a streamlined, somewhat simplified version of the Transistor Computer as Metrovick 950. The emulator she developed is “only” compatible on a source code level, and emulates “the CPU, a teleprinter with a paper tape punch/reader, a magnetic tape storage device, and a plotter”, at 200-300 operations per second. It’s complete enough you can play Lunar Lander on it, because is a computer you can’t play games on really a computer? Nina didn’t just create this emulator and its related components, but also wrote a ton of documentation to help you understand the machine and to get started. There’s an introduction to programming and using the Metrovick 950 emulator, additional notes on programming the emulator, and much more. She also posted a long thread on Fedi with a ton more details and background information, which is a great read, as well. This is amazing work, and interesting not just to programmers interested in ancient computers, but also to historians and people who really put the retro in retrocomputing.
A very exciting set of kernel patches has just been proposed for the Linux kernel, adding multikernel support to Linux. This patch series introduces multikernel architecture support, enabling multiple independent kernel instances to coexist and communicate on a single physical machine. Each kernel instance can run on dedicated CPU cores while sharing the underlying hardware resources. ↫ Cong Wang on the LKML The idea is that you can run multiple instances of the Linux kernel on different CPU cores using kexec, with a dedicated IPI framework taking care of communication between these kernels. The benefits for fault isolation and security are obvious, and it supposedly uses fewer resources than running virtual machines through KVM and similar technologies. The main feature I’m interested in is that this would potentially allow for “kernel handover”, in which the system goes from using one kernel to the other. I wonder if this would make it possible to implement a system similar to what Android currently uses for updates, where new versions are installed alongside the one you’re running right now, with the system switching over to the new version upon reboot. If you could do something similar with this technology without even having to reboot, that would be quite amazing and a massive improvement to the update experience. It’s obviously just a proposal for now, and there will be much, much discussion to follow, I’m sure, but the possibilities are definitely exciting.
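For a rough sense of what an IPI framework builds on: mainline Linux already has a primitive for running a function on another core via an inter-processor interrupt, smp_call_function_single(). The patch series defines its own, much richer messaging layer between whole kernels, which isn’t shown here; the toy module below is only a sketch of the underlying mechanism, using the existing in-tree API.

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/smp.h>

    /* Runs on the target CPU, in interrupt (IPI) context. */
    static void ping(void *info)
    {
        pr_info("ipi-demo: handler running on CPU %d\n", smp_processor_id());
    }

    static int __init demo_init(void)
    {
        /* Ask CPU 1 to execute ping() and wait for it to complete. */
        return smp_call_function_single(1, ping, NULL, 1);
    }

    static void __exit demo_exit(void)
    {
    }

    module_init(demo_init);
    module_exit(demo_exit);
    MODULE_LICENSE("GPL");

Multikernel communication has to work between kernels that don’t share a scheduler or memory allocator, which is presumably exactly why the series introduces its own framework instead of reusing calls like this one.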
People notice speed more than they realize. Whether they’re ordering food online, watching a video, or checking out of an e-commerce store, that near-instant response gives a quiet kind of reassurance. It tells them, without saying a word, that the system behind the screen is working properly. When everything moves smoothly, people tend to believe the platform knows what it’s doing. Speed becomes less about impatience and more about reliability; it’s how a website or app earns a user’s confidence without ever asking for it outright. When things slow down, even slightly, the feeling changes. A spinning wheel or delayed confirmation sends a small jolt of uncertainty through the user’s mind. It’s subtle, but it’s enough. People start wondering if the system is secure or if something’s gone wrong in the background. Most companies understand this reaction now, which is why they spend so much time and money making sure their sites load quickly and transactions go through smoothly. Fast performance doesn’t just please customers; it convinces them they can trust the process. Online casinos show this relationship between speed and trust especially well. Players want games that run without lag, deposits that clear quickly, and withdrawals that arrive when promised. The platforms that do this consistently don’t just look professional. They build lasting reputations. That’s one reason many players pick trusted sites with the best payouts, where the speed of payments matches the fairness of the games themselves. These casinos often have their systems tested by independent reviewers to confirm both payout accuracy and security, showing that real credibility comes from proof, not promises. There’s also something psychological about how we respond to quick actions. When things happen instantly, it gives people a sense of control. A fast confirmation email or immediate transaction approval makes them feel safe, like the system is responsive and alive. Think about how quickly we lose patience when a message doesn’t send right away. That hesitation we feel isn’t really about time. It’s about trust. Slow responses leave room for worry, and in the digital space, worry spreads faster than anything else. The speed of a platform often mirrors how transparent it feels. A site that runs smoothly gives off the impression that its systems are well managed. Even users who know little about technology pick up on that. Industries that handle sensitive data (finance, entertainment, healthcare) depend heavily on this perception. When transactions lag or screens freeze, people begin to question what’s happening underneath. So speed becomes more than a technical achievement; it’s an emotional one that reassures users everything is in working order. Fast payments are one of the clearest examples of this idea. Digital wallets and cryptocurrency platforms, for instance, have won users over because transfers happen almost in real time. That pace builds comfort. People like knowing their money moves when they move. The influence of speed stretches far beyond finance. Social networks depend on it to keep people connected. When messages appear instantly and feeds refresh without effort, users feel present and engaged. But when those same tools slow down, even slightly, people lose interest or suspect something’s broken. We’ve grown accustomed to instant feedback, and that expectation has quietly become the baseline for trust online. Still, being fast isn’t enough by itself. 
A website that rushes through interactions but delivers half-finished results won’t hold anyone’s confidence for long. Reliability takes consistency, not just quickness. The companies that succeed online tend to combine performance with honesty. They respond quickly, yes, but they also follow through, fix problems, and keep communication open. Those qualities, together, make speed meaningful. If there’s one lesson that stands out, it’s that quick service reflects genuine respect for people’s time. Every second saved tells the user that their experience matters. From confirming a payment to collecting winnings, that seamless, responsive flow builds a kind of trust no marketing campaign can replace. This efficiency becomes the quiet proof of reliability in a world where attention is short and expectations are high.
The growth of AI in recent years has faced criticism in a number of fields, especially generative AI, which has been accused of plagiarizing from creatives. Still, its implementation across a variety of industries has transformed how they operate, improving efficiency and reducing costs, savings which can be passed on to consumers. When it comes to investments, AI can be an extremely useful tool that provides valuable insights into potential investment opportunities. This is true across the board and is exemplified in the crypto industry, which is enjoying a period of growth amidst regulatory restructuring. In the case of sites like Coinfutures.io, consumers can follow markets and make money by predicting whether cryptocurrencies will rise or fall in value, all without having to invest in specific coins. Where consumers might have been put off by traditional stock market investments, many feel more comfortable with cryptocurrencies, which can be used as an alternative to traditional currencies or sat on as an investment. Improving access via mobile apps has also helped to open up the market, and many are now exploring what AI can offer when making investments while adhering to the ethics relevant to its use.

Automated Decision Making and Personalization

The pressure of making investment decisions can be overwhelming at times, with results that often only become clear with the benefit of hindsight. Using AI algorithms to suggest opportunities based on data analysis can help take personal feelings out of the equation and help people focus on the facts. Users can also personalize this by adding specific requirements, making AI do all the heavy lifting and producing a list of options with all the relevant data in an easily accessible form.

Intensive Data Analytics

Data analysis is at the heart of AI use in investment, as AI can cover significantly more new and historical data than a person could when weighing a decision. Making an investment based purely on gut instinct relies on a lot of luck, whereas studying as much relevant data as possible gives investors a better idea of the potential investment opportunity, the factors that might affect it, and the market as a whole. This would take people a significant amount of time, and even then, they might not be able to go over all the information available to them. AI can do this and collate it in a way that is manageable.

High-Quality Forecasting Models

Predicting the stock market, financial markets, cryptocurrencies, or any other investment opportunity is not an exact science. However, AI forecasting models are able to pull in data from every available source, run it against historical data, and come up with predictions based on fact rather than feeling. These predictions might not always work out exactly, but they can provide valuable information about how similar opportunities have reacted to different market conditions.

Portfolio Automation

The thought of handing over finances to AI might seem daunting for some, but it is possible to automate portfolios in a way that won’t get out of control. Parameters can be set that require investment opportunities to tick a certain number of boxes before investments are made, and the same is true of selling. AI automation can follow your instructions with more flexibility than traditional computer programs, with ML technology helping it improve as it goes.
Sentiment Analysis

Basing investments purely on facts is one way to go, but making the most of all the available information includes sentiment analysis. This is a form of analysis carried out by AI language models to track the general feeling towards investment opportunities and markets. It can cover everything from analyzing breaking market news and expert opinions to gauging the reaction of regular consumers via social media.

Risk and Fraud Detection

The use of AI as a security tool has helped a wide variety of industries, and it can be used to mitigate risk and identify potentially fraudulent activities. Its use in websites, apps, and exchanges can help protect accounts, and when used on a broader scale, it can also help assess the risk of investment opportunities.

While care must be taken with how far we let AI go, especially with generative AI, there are definite applications that can benefit users and operators.
The 1980s saw a flurry of graphical user interfaces pop up, almost all of them in some way made by people who got to see the work done by Xerox. Today’s topic is no exception – GEM was developed by Lee Jay Lorenzen, who worked at Xerox and wished to create a cheaper, less resource-intensive alternative to the Xerox Star, which he got to do at DRI after leaving Xerox. His work was then shown off to Atari, who were interested in using it. The entire situation was pretty hectic for a while: DRI’s graphics group worked on the PC version of GEM on MS-DOS; Atari developers were porting it to Apple Lisas running CP/M-68K; and Loveman was building GEMDOS. Against all odds, they succeeded. The operating system for Atari ST consisting of GEM running on top of GEMDOS was named TOS which simply meant “the operating system”, although many believed “T” actually stood for “Tramiel”. Atari 520 ST, soon nicknamed “Jackintosh”, was introduced at the 1985 Consumer Electronics Show in Las Vegas and became an immediate hit. GEM ran smoothly on the powerful ST’s hardware, and there were no clones to worry about. Atari developed its branch of GEM independently of Digital Research until 1993, when the Atari ST line of computers was discontinued. ↫ Nemanja Trifunovic at Programming at the right level Other than through articles like these and the occasional virtual machine, I have no experience with the various failed graphical user interfaces of the 1980s, since I was too young at the time. Even from the current day, though, it’s easy to see how all of them can be traced back directly to the work done at Xerox, and just how much we owe to the people working there at the time. Now that the technology industry is as massive as it is, with the stakes being so high, it’s unlikely we’ll ever see a place like Xerox PARC ever again. Everything is secretive now, and if a line of research doesn’t obviously lead to massive short-term gains, it’s canned before it even starts. The golden age of wild, random computer research without a profit motive is clearly behind us, and that’s sad.
Last night, my wife looks up from her computer, troubled. She tells me she can’t log into her computer running Windows 11, as every time she enters the PIN code to her account, the login screen throws up a cryptic error: “Your credentials could not be verified”. She’s using the correct PIN code, so that surely isn’t it. We opt for the gold standard in troubleshooting and perform a quick reboot, but that doesn’t fix it. My initial instinct is that since she’s using an online account instead of a local one, perhaps Microsoft is having some server issues? A quick check online indicates that no, Microsoft’s servers seem to be running fine, and to be honest, I don’t even know if that would have an effect on logging into Windows in the first place. The Windows 11 login screen does give us a link to click in case you forget your PIN code. Despite the fact the PIN code she’s entering is correct, we try to go through this process to see if it goes anywhere. This is where things really start to get weird. A few dialogs flash in and out of existence, until it’s showing us a dialog telling us to insert a security USB key of some sort, which we don’t have. Dismissing it gives us an option to try other login methods, including a basic password login. This, too, doesn’t work; just like with the PIN code, Windows 11 claims the accurate, correct password my wife is entering is invalid (just to be safe, we tested it by logging into her Microsoft account on her phone, which works just fine). In the account selection menu in the bottom-left, an ominous new account mysteriously appears: WsiAccount. The next option we try is to actually change the PIN code. This doesn’t work either. Windows wants us to use a second factor using my wife’s phone number, but this throws up another weird error, this time claiming the SMS service to send the code isn’t working. A quick check online once again confirms the service seems to be working just fine for everybody else. I’m starting to get really stumped and frustrated. Of course, during all of this, we’re both searching the web to find anything that might help us figure out what’s going on. None of our searches bring up anything useful, and none of our findings seem to be related to or match up with the issue we’re having. While she’s looking at her phone and I’m browsing on my Fedora/KDE PC next to hers, she quickly mentions she’s getting a notification that OneDrive is full, which is odd, since she doesn’t use OneDrive for anything. We take this up as a quick sidequest, and we check up on her OneDrive account on her phone. As OneDrive loads, our jaws drop in amazement: a big banner warning is telling her she’s using over 5500% of her 5GB free account. We look at each other and burst out laughing. We exchange some confused words, and then we realise what is going on: my wife just got a brand new Samsung Galaxy S25, and Samsung has some sort of deal with Microsoft to integrate its services into Samsung’s variant of Android. Perhaps during the process of transferring data and applications from her old to her new phone, OneDrive syncing got turned on? A quick trip to the Samsung Gallery application confirms our suspicions: the phone is synchronising over 280GB of photos and videos to OneDrive. My wife was never asked for consent to turn this feature on, so it must’ve been turned on by default. We quickly turn it off, delete the 280GB of photos and videos from OneDrive, and move on to the real issue at hand. 
Since nothing seems to work, and none of what we find online brings us any closer to what’s going on with her Windows 11 installation, we figure it’s time to bring out the big guns. For the sake of brevity, let’s run through the things we tried. Booting into safe mode doesn’t work; we get the same login problems. Trying to uninstall the latest updates, an option in WinRE, doesn’t work either, and throws up an unspecified error. We try to use a restore point, but despite knowing for 100% certain that the feature to periodically create restore points is enabled, the only available restore point is from 2022, and it’s located on a drive other than her root drive (or “C:\” in Windows parlance). Using the reset option in WinRE doesn’t work either, as it also throws up an error, this time about not having enough free space. I also walk through a few more complex suggestions, like a few manual registry hacks related to the original error using cmd.exe in WinRE. None of it yields any results. It’s now approaching midnight, and we need to get up early to drop the kids off at preschool, so I tell my wife I’ll reinstall her copy of Windows 11 tomorrow. We’re out of ideas. The next day, I decide to give it one last go before opting for the trouble of going through a reinstallation. The one idea I still have left is to enable the hidden administrator account in Windows 11, which gives you password-free access to what is basically Windows’ root account. It involves booting into WinRE, loading up cmd.exe, and replacing utilman.exe in system32 with cmd.exe. If you then proceed to boot into Windows 11 and click on the Accessibility icon in the bottom-right, it will open “utilman.exe”, but since that’s just cmd.exe with the utilman.exe name, you get a command prompt to work with, right on the login screen. From here, you can launch regedit, find the correct key, change a REG_BINARY, save, and reboot. At the login screen, you’ll see a new “administrator” account with full access to your computer. During the various reboots, I do some more web searching, and I stumble upon a post on
Intel is in very dire straits, and as such, the company needs investments and partnerships more than anything. Today, NVIDIA and Intel announced just such a partnership, in which NVIDIA will invest $5 billion into the troubled chip giant, while the two companies will develop products that combine Intel’s x86 processors with NVIDIA’s GPUs. For data centers, Intel will build NVIDIA-custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms and offer to the market. For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs. ↫ NVIDIA press release My immediate reaction to this news was to worry about the future of Intel’s ARC graphics efforts. Just as the latest crop of their ARC GPUs has received a ton of good press and positive feedback, with some of their cards becoming the go-to suggestion for a budget-friendly but almost on-par alternative to offerings from NVIDIA and AMD, it would be a huge blow to user choice and competition if Intel were to abandon the effort. I think this news pretty much spells the end for the ARC graphics effort. Making dedicated GPUs able to compete with AMD and NVIDIA must come at a pretty big financial cost for Intel, and I wouldn’t be surprised if they’ve been itching to find an excuse to can the whole project. With NVIDIA GPUs fulfilling the role of more powerful integrated GPUs, all Intel really needs is a skeleton crew developing the basic integrated GPUs for cheaper and non-gaming oriented devices, which would be a lot cheaper to maintain. For just $5 billion, NVIDIA most likely just eliminated a budding competitor in the GPU space. That’s cheap.
All good things come to an end eventually, and that includes support for 32bit Windows in Steam. As of January 1 2026, Steam will stop supporting systems running 32-bit versions of Windows. Windows 10 32-bit is the only 32-bit version that is currently supported by Steam and is only in use on 0.01% of systems reported through the Steam Hardware Survey. Windows 10 64-bit will still be supported and 32-bit games will still run. ↫ Steam support article While existing installations will continue to work, they will no longer receive any Steam updates or support. Valve obviously advises the small sliver of users still using 32bit Windows – unbeknownst to them, I’m sure – to upgrade to a 64bit release. Upcoming versions of Steam will only work on 64bit systems.
GNOME 49 has been released, and it’s got a lot of nice updates, improvements, and fixes for everyone. GNOME 49 finally replaces the ageing Totem video player with Showtime, and Evince, GNOME’s document viewer, is replaced by the new Papers. Both of these new applications bring a modern GTK4 user interface to replace their older GTK3 counterparts. Papers supports a ton of both document-oriented and comic book formats, and has annotation features. We’ve already touched on the extensive accessibility improvements in GNOME Calendar, but other applications have been improved as well, such as Maps, Software, and Web. Software’s improvements focus on improving the application’s performance, especially when dealing with Flatpaks from Flathub, while Web, GNOME’s web browser, comes with improved ad blocking and optional regional blocklists, better bookmark management, improved security features, and more. The remote desktop experience also saw a lot of work, with multitouch input support, extended virtual monitors, and relative mouse input. For developers, GNOME 49 comes with the new GTK 4.20, the latest version of GLib, and Libadwaita 1.8, released only a few days ago. Libadwaita 1.8 brings a brand new shortcuts information dialog as its most user-facing feature, on top of a whole bunch of other, developer-oriented features. GNOME 49 will find its way to your distribution of choice soon enough.
It’s 2025, and yes, you can still install and run a modern Linux distribution like Debian through a real hardware terminal. While I have used a terminal with the Pi, I’ve never before used it as a serial console all the way from early boot, and I have never installed Debian using the terminal to run the installer. A serial terminal gives you a login prompt. A serial console gives you access to kernel messages, the initrd environment, and sometimes even the bootloader. This might be fun, I thought. ↫ John Goerzen at The Changelog It seems Debian does a lot of the correct configuration for you, but there are still a few things you’ll need to change manually, though none of it seems particularly complicated. Once the installation is completed, you have a system that’s completely accessible and usable from a hardware terminal, which, while maybe not particularly important in this day and age of effortless terminal emulators, is still quite a cool thing to have.
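Goerzen’s article covers the Debian-side configuration, which I won’t reproduce here; but if you’re curious what talking to a serial line looks like programmatically, here’s a minimal, generic C sketch using the standard termios interface. The device path is an assumption, not something from his article (a Pi’s UART usually shows up as /dev/serial0 or /dev/ttyAMA0; a PC serial port as /dev/ttyS0).

    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* Device path is an assumption; adjust for your hardware. */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) {
            perror("tcgetattr");
            return 1;
        }

        cfmakeraw(&tio);            /* raw 8-bit mode, no echo or line editing */
        cfsetispeed(&tio, B115200); /* 115200 baud in...                       */
        cfsetospeed(&tio, B115200); /* ...and out                              */

        if (tcsetattr(fd, TCSANOW, &tio) < 0) {
            perror("tcsetattr");
            return 1;
        }

        const char msg[] = "hello from the serial line\r\n";
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
        return 0;
    }

The point of the serial console setup in the article is that the kernel and bootloader write to this same line long before any userspace program like this one could run.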
Another month, another summary of changes in Haiku, the BeOS-inspired operating system. The main focus this past month has been improving the performance of git status, which has been measurably worse on Haiku than on Linux running on similar hardware. This work has certainly paid off, as the numbers demonstrate. The results are clearly more than worth the trouble, though: in one test setup, git status in Haiku’s buildtools repository (which contains the entirety of the gcc and binutils source code, among other things – over 160,000 files) went from around 33 seconds with a cold disk cache to around 20 seconds; and with a hot disk cache, from around 15 seconds to around 2.5 seconds. This is still a ways off from Linux (with a similar setup in the same repository, git status there with a hot disk cache takes only 0.3 seconds). Performance on Haiku will likely be measurably faster on builds without KDEBUG enabled, but not by that much. Still, this is clearly a significant improvement over the way things were before now. ↫ Haiku Activity & Contract Report, August 2025 There’s more than this, of course, such as initial support for Intel’s Apollo Lake GPU in the Intel modesetting driver, improvements to USB disk performance, a reduction in power usage when in KDL, and much, much more.
Some time ago, people noticed that buried in the Windows Bluetooth drivers is the hard-coded name of the Microsoft Wireless Notebook Presenter Mouse 8000. What’s going on there? Does the Microsoft Wireless Notebook Presenter Mouse 8000 receive favorable treatment from the Microsoft Bluetooth drivers? Is this some sort of collusion? No, it’s not that. ↫ Raymond Chen So, what is the actual problem? It’s a funny one: an encoding mistake. The local name string for a Bluetooth device needs to be encoded in UTF-8, and that’s where the developers of the Microsoft Wireless Notebook Presenter Mouse 8000 made a mistake. The string contains a registered trademark symbol – ® – but they encoded it in code page 1252, which not only isn’t allowed, but gets rejected completely. So, Windows’ Bluetooth drivers have a table that contains the wrong name for a device, accompanied by the right name to use. This mouse is the only entry.
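To make the encoding mix-up concrete: the registered trademark sign is the single byte 0xAE in code page 1252, but in UTF-8 it must be the two-byte sequence 0xC2 0xAE. Any byte in the 0x80–0xBF range is a continuation byte in UTF-8 and can never start a character, so a bare 0xAE poisons the whole string. A small C illustration follows; the bytes are standard, but the program itself is just a sketch, not Windows’ actual validation code.

    #include <stdio.h>

    int main(void)
    {
        /* The registered trademark sign as Windows-1252 stores it: one byte. */
        unsigned char cp1252 = 0xAE;
        /* The same character as UTF-8 requires it: a two-byte sequence. */
        unsigned char utf8[] = { 0xC2, 0xAE };

        /* In UTF-8, only bytes below 0x80 (ASCII) or lead bytes in the
           0xC2-0xF4 range may start a character; 0x80-0xBF are continuation
           bytes. A bare 0xAE can therefore never begin a character, which
           makes the entire name string invalid. */
        int can_start = cp1252 < 0x80 || (cp1252 >= 0xC2 && cp1252 <= 0xF4);
        printf("0x%02X can start a UTF-8 character: %s\n",
               cp1252, can_start ? "yes" : "no");
        printf("correct UTF-8 for the same character: 0x%02X 0x%02X\n",
               utf8[0], utf8[1]);
        return 0;
    }

A strict parser has no choice but to reject the string outright, which is exactly why the drivers carry a hard-coded replacement name instead.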
Java 25 has been released. JDK 25, the reference implementation of Java 25, is now Generally Available. We shipped build 36 as the second Release Candidate of JDK 25 on 15 August, and no P1 bugs have been reported since then. Build 36 is therefore now the GA build, ready for production use. ↫ Java 25/JDK 25 release announcement If you want to dive into the details about this new release, feel free to peruse the long, long list of improvements and changes.
Proper care of a medical air regulator ensures safety and effective use for individuals and professionals. These devices manage airflow in sensitive settings, which makes their upkeep a priority. Regular attention to maintenance extends their life while reducing possible risks. A regulator must be handled with precision to avoid pressure inconsistencies that may affect its role. Just like precision tools, every part requires timely checks and care. The information below explores how to use, monitor, and preserve regulators effectively. Alongside these steps, you will also learn practical ways to prevent failures or unsafe conditions.

Device Overview

A medical air regulator functions as a control device that adjusts airflow to a desired level. By monitoring its gauge, you can determine the output and regulate pressure accordingly. Many variations exist, though the principle remains the same: regulating consistent pressure for safe operation. Proper inspection is essential for sustained performance. At this stage, it is useful to recognize the importance of reliable suppliers; for instance, nitrogen regulators for sale often serve as a trustworthy reference for quality. Careful selection of parts makes ongoing maintenance more efficient while supporting safety objectives.

Regular Inspection

A regulator requires periodic checks to ensure optimal function.

Safe Handling

Safe use requires awareness of common practices. Never attempt sudden adjustments, as they can strain the regulator. Gentle turns of the control knob are recommended to avoid damaging internal components. Always make sure the regulator is securely fitted before releasing pressure. If unusual sounds occur, stop operation immediately and identify the cause. Clean handling minimizes the chance of contamination entering sensitive parts. Each action should be deliberate and controlled to maintain steady airflow.

Cleaning Practices

Clean surroundings reduce the risk of contamination, and some specific cleaning steps enhance regulator life.

Pressure Monitoring

Replacement Needs

Every regulator has a lifespan that depends on use and conditions. Over time, certain parts may no longer hold the desired pressure or may show visible wear. Signs like irregular flow or stuck knobs suggest it is time for a replacement. Failing to replace on time risks unsafe conditions that can escalate quickly. Regular planning ensures that spare regulators or parts are available whenever required.

Storage Methods

Safety Awareness

Smart Care

Sustaining the regulator requires constant attention with simple yet vital actions. By checking, cleaning, and storing with care, you ensure long-term reliability. Taking timely action on replacements protects people from sudden breakdowns. Creating a practice of training ensures safe handling for everyone. Careful monitoring of pressure levels reduces hidden dangers. If quality spares are needed, search wisely for trusted suppliers such as nitrogen regulators for sale, because genuine materials increase both efficiency and protection. A consistent approach transforms these tasks into habits that save effort while providing reassurance.
It’s been a little over a month since OSNews went completely ad-free for everyone. I can say the support has been overwhelming, with the accompanying fundraiser currently sitting at 67% of the €5000 goal! Of course things slowed down a bit after the initial week of one donation after the next, so I’m throwing out this reminder that without your support, OSNews can’t exist – doubly so now that I’ve removed any and all advertising. Help us reach that 100%! So, what can you do to support OSNews? By being entirely free from the corrupting influence of advertising, I have even less desire to chase views, entrap users with slop content, game search engines with shitty SEO spam, or turn on the taps of “AI”-generated trash to spew forth as much “articles” and thus views as possible. This also means that OSNews is one of the few technology news websites remaining that is not part of a massive corporate media conglomerate, so there’s no pressure from “corporate” to go easy on advertisers or write favourable stuff about corporate’s friends. You’d be surprised to learn how many technology sites out there are not independent. The response to OSNews no longer having any advertising has been overwhelmingly positive – unsurprisingly – and that has taken away any reservations I might have had about taking this step. In a world where so many websites are disappearing, turning into corporate mouthpieces, or becoming glorified content farms, OSNews can keep on doing what it does, independent of any outside influence, thanks to the countless contributions from all of you. Thank you.
It’s release day for all of Apple’s operating systems, so if you’re fully or only partway into the ecosystem, you’ve got some upgrades ahead of you. Version 26 of macOS, iOS and iPadOS, watchOS, tvOS, visionOS, and HomePod Software has been released today, so if you own any device running any of these operating systems, it’s time to head on over to the update section of the settings application and wait for that glass to slowly and sensually liquefy all over your screens. Do put a sock on the doorknob.
I recently implemented a minimal proof of concept time-sharing operating system kernel on RISC-V. In this post, I’ll share the details of how this prototype works. The target audience is anyone looking to understand low-level system software, drivers, system calls, etc., and I hope this will be especially useful to students of system software and computer architecture. Finally, to do things differently here, I implemented this exercise in Zig, rather than traditional C. In addition to being an interesting experiment, I believe Zig makes this experiment much more easily reproducible on your machine, as it’s very easy to set up and does not require any installation (which could otherwise be slightly messy when cross-compiling to RISC-V). ↫ Uros Popovic This is not the first, and certainly not the last, operating system implemented from scratch as a teaching exercise, both for the creator and for others wanting to follow along. This time it’s developed for RISC-V, and in an interesting twist, programmed in Zig (no Rust for once!).
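To give a flavour of how minimal the beginnings of such a kernel are: most toy RISC-V kernels on QEMU’s virt machine start by printing characters through the SBI firmware call interface. Popovic’s code is in Zig, which I won’t try to reproduce; the sketch below expresses the same classic first step in C, assuming OpenSBI’s legacy console_putchar call (extension ID 1) and a small assembly stub that sets up a stack before jumping to kmain.

    /* Legacy SBI console_putchar: extension ID 1 goes in a7, the
       character in a0; the ecall instruction traps into the firmware. */
    static void sbi_putchar(long c)
    {
        register long a0 asm("a0") = c;
        register long a7 asm("a7") = 1;
        asm volatile("ecall" : "+r"(a0) : "r"(a7) : "memory");
    }

    static void kputs(const char *s)
    {
        while (*s)
            sbi_putchar(*s++);
    }

    /* Entry point, reached from a tiny assembly stub that has already
       set up a stack for this hart. */
    void kmain(void)
    {
        kputs("hello from kernel land\r\n");
        for (;;)
            asm volatile("wfi"); /* idle until an interrupt arrives */
    }

Everything else in a time-sharing kernel, such as trap handling, the scheduler, and system calls, gets bootstrapped on top of primitives this small.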
And the beatings continue until “AI” improves. Except if you live in the European Union/EEA, that is. Windows devices with the Microsoft 365 desktop client apps will automatically install the Microsoft 365 Copilot app. This app installation takes place in the background and would not disrupt the user. This app installation will start in Fall 2025. ↫ Microsoft support document Basically, if you have Microsoft 365 desktop applications installed – read my article about some deep Microsoft lore to figure out what that means – Microsoft is going to force-install all the Copilot stuff onto your computer, whether you like it or not. Thanks to more robust consumer protection legislation in the European Union/EEA, like the Digital Markets Act and Digital Services Act, this force-install will not take place there. Administrators managing Office 365 deployments get an option to opt out through the Microsoft 365 Apps admin center, but I’m not sure if regular users can use this method as well. Remember, when you’re using Windows (or macOS, for that matter), you don’t own your computer. Plan accordingly.
It may be arcane knowledge to most users of UNIX-like systems today, but there is supposed to be a difference between /usr/bin and /usr/sbin; the latter is supposed to be for “system binaries”, not needed by most normal users. The Filesystem Hierarchy Standard states that sbin directories are intended to contain “utilities used for system administration (and other root-only commands)”, which is quite vague when you think about it. This has led to UNIX-like systems basically just winging it, making the distinction almost entirely arbitrary. For a long time, there has been no strong organizing principle to /usr/sbin that would draw a hard line and create a situation where people could safely leave it out of their $PATH. We could have had a principle of, for example, “programs that don’t work unless run by root”, but no such principle was ever followed for very long (if at all). Instead programs were more or less shoved in /usr/sbin if developers thought they were relatively unlikely to be used by normal people. But ‘relatively unlikely’ is not ‘never’, and shortly after people got told to ‘run traceroute’ and got ‘command not found’ when they tried, /usr/sbin (probably) started appearing in $PATH. ↫ Chris Siebenmann As such, Fedora 42 unifies /usr/bin and /usr/sbin, which is kind of a follow-up to the /usr merge, and serves as a further simplification and clean-up of the file system layout by removing divisions and directories that used to make sense, but no longer really do. Decisions like these have a tendency to upset a small but very vocal group of people, people who often do not even use the distribution implementing the decisions in question in the first place. My suggestion to those people would be to stick to distributions that more closely resemble classic UNIX. Or use a real UNIX. Anyway, these are good moves, and I’m glad most prominent Linux distributions are not married to decisions made in the ’70s, especially not when they can be undone without users really noticing anything.
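If you want to check whether your own distribution has made this move, the unified layout shows up as /usr/sbin being a plain symlink rather than a real directory (running readlink /usr/sbin from a shell tells you the same thing). A trivial C sketch, under the assumption that a merged system links /usr/sbin to bin the way Fedora 42 does:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char target[256];
        ssize_t n = readlink("/usr/sbin", target, sizeof(target) - 1);

        if (n < 0) {
            /* Not a symlink: the classic split layout. */
            puts("/usr/sbin is a real directory");
            return 0;
        }
        target[n] = '\0';
        /* On a merged system this prints the link target, e.g. "bin". */
        printf("/usr/sbin -> %s\n", target);
        return 0;
    }

Once the link is in place, the old question of whether /usr/sbin belongs in $PATH simply evaporates: both names resolve to the same directory.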
Google continues putting nails in the coffin that is the Android Open Source Project. This time, they’re changing the way they handle security updates to appease slow, irresponsible Android OEMs, while screwing over everyone else. The basic gist is that instead of providing monthly security updates for OEMs to implement on their Android devices, Google will now move to a quarterly model, publishing only extremely severe issues on a monthly basis. The benefit for OEMs is that for most vulnerabilities, they get three months to distribute (most) fixes instead of just one month, but the downsides are also legion. Vulnerabilities will now be out in the wild for three months instead of just one, and while they’re shared with OEMs “privately”, we’re talking tens of thousands of pairs of eyes here, so “privately” is a bit of a misnomer. The dangers are obvious; these vulnerabilities will be leaked, and they will be abused by malicious parties. Another massive downside related to this change is that Google will no longer be providing the monthly patches as open source within AOSP, instead only releasing the quarterly patch drops as open source. This means exactly what you think it does: no more monthly security updates from third-party ROMs, unless those third-party ROMs choose to violate the embargo themselves and thus invite all sorts of problems. Extending the patch access window from one month to three is absolutely insane. Google should be striving to shorten this window as much as possible, but instead, they’re tripling it in length to create a false sense of security. OEMs can now point at their quarterly security updates and claim to be patching vulnerabilities as soon as Google publishes them, while in fact, the unpatched vulnerabilities will have been out in the wild for months by that point. This change is irresponsible, misguided, and done only to please lazy, shitty OEMs to create a false sense of security for marketing purposes.