UNIX99: UNIX for the TI-99/4A

I’ve been working on developing an operating system for the TI-99 for the last 18 months or so. I didn’t intend this—my original plan was to develop enough of the standard C libraries to help with writing cartridge-based and EA5 programs. But that trek led me quickly towards developing an OS. As Unix is by far my preferred OS, this OS is an approximation. Developing an OS within the resources available, particularly the RAM, has been challenging, but also surprisingly doable.
↫ UNIX99 forum announcement post

We’re looking at a quite capable UNIX for the TI-99, with support for its sound, speech, sprites, and legacy 9918A display modes, GPU-accelerated scrolling, stdio (for text and binary files) and stdin/out/err support, a shell (of course), multiple user support, cooperative tasks support, and a ton more. And remember – all of this is running on a machine with a 16-bit processor running at 3MHz and a mere 16KB of RAM. Absolutely wild.
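To put the stdio support in perspective: the aim is for ordinary, boring ISO C like the sketch below to build and run against the project’s libc. This is a generic illustration, not code from UNIX99 itself – the file name and any platform conventions are assumptions.

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("notes.txt", "w");  /* buffered text file via stdio */
        if (f == NULL) {
            fprintf(stderr, "could not open notes.txt\n");  /* stderr */
            return 1;
        }
        fprintf(f, "hello from a 3MHz TMS9900\n");
        fclose(f);
        puts("wrote notes.txt");  /* stdout */
        return 0;
    }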

Another win for the Digital Markets Act: Microsoft gives truly free access to additional year of Windows 10 updates to EU users

A few months ago, Microsoft finally blinked and provided a way for Windows 10 users to gain “free” access to the Windows 10 Extended Security Update program. For regular users to gain access to this program, their options are to either pay around $30, redeem 1,000 Microsoft Rewards points, or sign up for the Windows Backup application to synchronise their settings to Microsoft’s computers (the “cloud”). In other words, in order to get “free” access to extended security updates for Windows 10 after the 14 October end-of-support deadline, you have to start using OneDrive, and will have to start paying for additional storage, since the base 5GB of OneDrive storage won’t be enough for backups. And we all know OneDrive is hell.

Thanks to the European Union’s Digital Markets Act, though, Microsoft has dropped the OneDrive requirement for users within the European Economic Area (the EU plus Norway, Iceland, and Liechtenstein). Citing the DMA, consumer rights organisations in the EU complained that Microsoft’s OneDrive requirement was in breach of EU law, and Microsoft has now given in. Of course, dropping the OneDrive requirement only applies to consumers in the EU/EEA; users in places with much weaker consumer protection legislation, like the United States, will not benefit from this move.

Consumer rights organisations are lauding Microsoft’s move, but they’re not entirely satisfied just yet. The main point of contention is that access to the Extended Security Update program is only valid for one year, which they consider too short. In a letter, Euroconsumers, one of the consumer rights organisations, details this issue.

At the same time, several points from our original letter remain relevant. The ESU program is limited to one year, leaving devices that remain fully functional exposed to risk after October 13, 2026. Such a short-term measure falls short of what consumers can reasonably expect for a product that remains widely used and does not align with the spirit of the Digital Content Directive (DCD), nor the EU’s broader sustainability goals. Unlike previous operating system upgrades, which did not typically require new hardware, the move to Windows 11 does. This creates a huge additional burden for consumers, with some estimates suggesting that over 850 million active devices still rely on Windows 10 and cannot be upgraded due to hardware requirements. By contrast, upgrades from Windows 7 or 8 to Windows 10 did not carry such limitations.
↫ Euroconsumers’ letter

According to the group, the problem is exacerbated by the fact that Microsoft is phasing out support for Windows 10 far more aggressively than it did for previous versions of Windows: Windows 10 is being taken behind the shed four years after the launch of Windows 11, while Windows XP and Windows 7 enjoyed seven to eight years after their successors arrived. With how many people are still using Windows 10, often with no way to upgrade other than buying new hardware, it’s odd that Microsoft is trying to kill it so quickly. In any event, we can chalk this up as another win for consumers in the European Union, with the Digital Markets Act once again creating better outcomes than in other regions of the world.

NFS at 40: a treasure trove of documents and other material about Sun’s Network File System

The contributions of Sun Microsystems to the world of computing are legion – definitely more than its ignominious absorption into Oracle implies – and one of those is NFS, the Network File System. This month, NFS more or less turned 40 years old, and to mark the milestone, Russel Berg, Russ Cox, Steve Kleiman, Bob Lyon, Tom Lyon, Joseph Moran, Brian Pawlowski, David Rosenthal, Kate Stout, and Geoff Arnold created a website to honour NFS.

This website gathers material related to the Sun Microsystems Network File System, a project that began in 1983 and remains a fundamental technology for today’s distributed computer systems. The core of the collection is design documents, white papers, engineering specifications, conference and journal papers, and standards material. However, it also covers marketing materials, trade press, advertising, books, “swag”, and personal ephemera. We’re always looking for new contributions.
↫ NFS at 40

There are so many amazing documents here, such as the collection of predecessors of NFS that served as inspiration, like the Cambridge File Server or the Xerox Alto’s Interim File System, but also tons of fun marketing material for things like NFS server accelerators and nerdy NFS buttons. Even if you’re not specifically interested in the history of NFS, there’s great joy in browsing these old documents and photos.

yt-dlp will soon require a full JS runtime to overcome YouTube’s JS challenges

If you download YouTube videos, there’s a real chance you’re using yt-dlp, the long-running and widely-used command-line program for downloading YouTube videos. Even if you’re not using it directly, many other tools for downloading YouTube videos are built on top of yt-dlp, and even some media players which offer YouTube playback use it in the background. Now, yt-dlp has always had a built-in basic JavaScript “interpreter”, but due to changes at YouTube, yt-dlp will soon require a proper JavaScript runtime in order to function.

Up until now, yt-dlp has been able to use its built-in JavaScript “interpreter” to solve the JavaScript challenges that are required for YouTube downloads. But due to recent changes on YouTube’s end, the built-in JS interpreter will soon be insufficient for this purpose. The changes are so drastic that yt-dlp will need to leverage a proper JavaScript runtime in order to solve the JS challenges.
↫ yt-dlp’s announcement on GitHub

The yt-dlp team suggests using Deno, but compatibility with some alternatives has been added as well. The “interpreter” yt-dlp already includes is really a massive set of very complex regex patterns for solving the JS challenges; those patterns are difficult to maintain and no longer sufficient, so a real runtime is necessary for YouTube downloads. Deno is advised because it’s entirely self-contained and sandboxed, with no network or filesystem access of any kind, and it also happens to be a single, portable executable. As time progresses, it seems yt-dlp is slowly growing into a web browser just to be able to download YouTube videos. I wonder what kind of barriers YouTube will throw up next, and what possible solutions from yt-dlp might look like.
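For most users, preparing for the change should simply mean putting a supported runtime on the PATH. A sketch, assuming the runtime-discovery behaviour described in the announcement (the install script URL is Deno’s standard one):

    # install Deno, a single self-contained executable
    curl -fsSL https://deno.land/install.sh | sh

    # yt-dlp can then find the runtime on PATH to solve the JS challenges
    yt-dlp "https://www.youtube.com/watch?v=<video-id>"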

Legacy Update 1.12 released

If you’re still running old versions of Windows from Windows 2000 and up, either for retrocomputing purposes or because you need to keep an old piece of software running, you’ve most likely heard of Legacy Update. This tool allows you to keep Windows Update running on Windows versions no longer supported by the service, and has basically become a must-have for anyone still playing around with older Windows versions. The project released a fairly major update today.

Legacy Update 1.12 features a significant rewrite of our ActiveX control, and a handful of other bug fixes. The rewrite allows us to more easily work on the project, and ensures we can continue providing stable releases for the foreseeable future, despite Microsoft recently breaking the Windows XP-compatible compiler included with Visual Studio 2022.
↫ Legacy Update 1.12 release notes

The project switched away from compiling with Visual C++ 2008 (and 2010, and 2017, and 2022…), which Microsoft recently broke, and now uses an open-source MinGW/GCC toolchain. This has cut the size of the binary in half, which is impressive considering it was already smaller than 1MB. This new version also adds a three-minute timer before performing any required restarts, and considerably speeds up the installation of the slowest type of updates (.NET Framework).

Virtual Machine Markets Built on Token Economies

A new wave of decentralized cloud platforms is exploring ways to turn virtual machines and container instances into tradable digital assets. These systems allow users to stake tokens, rent computing resources, and guarantee uptime through token-based incentives. What was once managed by centralized operating systems is beginning to shift toward experimental, tokenized marketplaces where CPU, memory, and bandwidth can carry economic value.

Digital assets no longer exist only in cloud systems. The same token mechanics that underpin decentralized computing now appear across multiple blockchain-based sectors, from data markets to entertainment and even gaming economies. In this wider landscape, 99Bitcoins reviews gambling with crypto as part of an emerging ecosystem where value exchange happens through verifiable and transparent transactions rather than intermediaries. It illustrates how blockchain logic reshapes participation — not by promoting speculation, but by redefining trust through code and consensus.

In these environments, tokens move beyond currency to act as access keys, loyalty markers, incentives, and performance metrics. Players, developers, and system operators interact in systems where every transaction is auditable, and rewards are governed by shared protocols instead of centralized oversight. The same infrastructure that secures decentralized finance or peer-to-peer gaming can inform how compute power and digital storage are distributed across open networks. This growing convergence between digital economies and technical architecture hints at a broader reorganization of the internet’s foundations. From tokenized marketplaces to programmable infrastructure, blockchain-based coordination continues to blur the line between resource, asset, and incentive.

In this new structure, compute providers issue tokens that represent access rights or performance guarantees for virtual machines. Users can stake or hold tokens to reserve processing power, bid for priority access, or secure guaranteed uptime. It’s a radical blend of OS-level virtualization and market mechanics, where digital infrastructure becomes liquid and programmable. Instead of long-term contracts or static pricing, virtual resources move dynamically, traded like assets on a decentralized exchange.

Several decentralized compute projects are already experimenting with this model. Networks are emerging that tokenize idle GPU and CPU capacity, letting operators rent their unused power to global users. Some of these experimental markets use smart contracts to verify availability, performance, and reliability without relying on central authorities. The idea is simple but powerful: transform global excess compute into a self-regulating, incentive-driven economy.

Tokens play three major roles in these environments. They function as payment for virtual machine time and container leases, as staking instruments to prioritize workloads or signal demand, and as governance tools that enforce uptime and reliability. If a provider underperforms, staked tokens can be slashed. This transforms reliability from a promise into an enforceable economic mechanism. The more reliable the node, the more valuable its stake.

Architecturally, proposed tokenized VM markets rely on four main components. A registry lists available machines and containers. A marketplace layer handles bids and leases through smart contracts. Tokens serve as both the transaction medium and the performance bond. Finally, an automated monitoring system tracks uptime and resource performance to ensure transparency. Together, these parts form a self-sustaining cycle of demand and supply governed by code rather than corporate policy.

This approach challenges the traditional cloud model, where centralized data centers dominate. Instead, decentralized platforms aggregate spare compute resources from thousands of contributors. They could reduce infrastructure waste, lower entry barriers for developers, and spread control across a global network. What emerges is an open computing fabric that rewards reliability, efficiency, and availability through transparent token economics.

For operating systems and container orchestration layers, this shift could be transformative. Instead of static allocation rules, OS-level schedulers might one day integrate market signals directly into their decision-making. Tokens may eventually act as micropayments that dynamically steer resource distribution, allowing workloads to compete for compute power in real time. The result is a computing economy that balances load and value through transparent incentives. Virtual machines operate as dynamic assets, containers as units of productivity, and the entire computing stack edges toward autonomous coordination. Token economies don’t just change how compute is bought or sold—they redefine how digital resources are organized and shared.

Would you trust Google to remain committed to Android on laptops and desktops?

It’s no secret that Google wants to bring Android to laptops and desktops, and is even sacrificing Chrome OS to get there. It seems this effort is gaining some serious traction lately, as evidenced by a conversation between Rick Osterloh, Google’s SVP of Platforms and Devices, and Qualcomm’s CEO, Cristiano Amon, during Qualcomm’s Snapdragon Summit.

Google may have just dropped its clearest hint yet that Android will soon power more than phones and tablets. At today’s Snapdragon Summit kickoff, Qualcomm CEO Cristiano Amon and Google’s SVP of Devices and Services Rick Osterloh discussed a new joint project that will directly impact personal computing. “In the past, we’ve always had very different systems between what we are building on PCs and what we are building on smartphones,” Osterloh said on stage. “We’ve embarked on a project to combine that. We are building together a common technical foundation for our products on PCs and desktop computing systems.”
↫ Adamya Sharma at Android Authority

Amon eventually exclaimed that he’s seen the prototype devices, and that “it is incredible”. He added that “it delivers on the vision of convergence of mobile and PC. I cannot wait to have one.” Now, marketing nonsense aside, this further confirms that soon, you’ll be able to buy laptops running Android, and possibly even desktop systems running Android.

The real question, though, is – would you want to? What’s the gain of buying an Android laptop over a traditional Windows or macOS laptop? Then there’s Google’s infamously fickle nature, launching and killing products seemingly at random, without any clear long-term plans and commitments. Would you buy an expensive laptop running Android, knowing full well Google might discontinue or lose interest in its attempt to bring Android to laptops, leaving you with an unsupported device? I’m sure schools that bought into Chromebooks will gradually move over to the new Android laptops as Chrome OS features are merged into Android, but what about everyone else? I always welcome more players in the desktop space, and anything that can challenge Microsoft and Apple is welcome, but I’m just not sure I have faith in Google sticking with it in the long run.

Benjamin Button reviews macOS

Apple’s first desktop operating system was Tahoe. Like any first version, it had a lot of issues. Users and critics flooded the web with negative reviews. While mostly stable under the hood, the outer shell — the visual user interface — was jarringly bad. Without much experience in desktop UX, Apple’s first OS looked like a Fisher-Price toy: heavily rounded corners, mismatched colors, inconsistent details, and very low information density. Obviously, the tool was designed mostly for kids, or perhaps light users or elderly people.

Credit where credit is due: Apple had listened to their users, and the next version — macOS Sequoia — shipped with lots of fixes. Border radius was heavily reduced, transparent glass-like panels were replaced by less transparent ones, buttons were made more serious and less toyish. Most system icons were made more serious, too, with a focus on more detail. Overall, it seemed like the 2nd version was a giant leap from infancy to teenage years.
↫ Rakhim Davletkali

A top-quality operating systems shitpost.

Exploring GrapheneOS’ secure allocator: hardened malloc

GrapheneOS is a security and privacy-focused mobile operating system based on a modified version of Android (AOSP). To enhance its protection, it integrates advanced security features, including its own memory allocator for libc: hardened malloc. Designed to be as robust as the operating system itself, this allocator specifically seeks to protect against memory corruption. This technical article details the internal workings of hardened malloc and the protection mechanisms it implements to prevent common memory corruption vulnerabilities. It is intended for a technical audience, particularly security researchers or exploit developers, who wish to gain an in-depth understanding of this allocator’s internals.
↫ Nicolas Stefanski at Synacktiv

GrapheneOS is quite possibly the best way to keep your smartphone secure, and even law enforcement is not particularly amused that people are using it. If the choice is between security and convenience, GrapheneOS chooses security every time, and that’s the reason it’s favoured by many people who deeply care about (smartphone) security. The project’s social media accounts can be a bit… much at times, but their dedication to security is without question, and if you want a secure smartphone, there’s really nowhere else to turn – unless you opt to trust the black-box security approach from Apple.

Sadly, GrapheneOS is effectively under attack not from criminals, but from Google itself. As Google tightens its grip on Android more and more, as we’ve been reporting on for years now, it will become ever harder for GrapheneOS to deliver the kind of security and fast updates it has been able to deliver. I don’t know just how consequential Google’s increasing pressure is for GrapheneOS, but I doubt it’s making the lives of its developers any easier. It’s self-defeating, too; GrapheneOS has a long history of basically serving as a test bed for highly advanced security features Google later implements for Android in general. A great example is the Memory Tagging Extension, a feature implemented by ARM in hardware, which GrapheneOS implements much more widely and extensively than Google does. This way, GrapheneOS users have basically been serving as testers to see if applications and other components experience any issues when using the feature, paving the way for Google to eventually, hopefully, follow in GrapheneOS’ footsteps. Google benefits from GrapheneOS, and trying to restrict its ability to properly support devices and its access to updates is shortsighted.
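To make the bug class concrete, here’s a minimal C sketch of the kind of linear heap overflow hardened malloc is designed to catch. This is generic illustrative code, not from the article; the behaviour described in the comments follows the hardened_malloc documentation (slab canaries verified no later than free()), and on a desktop Linux system you could try it by preloading the allocator, e.g. via LD_PRELOAD.

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(16);
        if (buf == NULL)
            return 1;

        /* Classic linear heap overflow: writes one byte past the end of
           the allocation. hardened_malloc places a random canary after
           small (slab) allocations, so this corruption is designed to
           trigger an abort no later than the free() below; a conventional
           allocator may silently let it pass. */
        memset(buf, 'A', 17);

        free(buf);  /* canary checked here at the latest */
        return 0;
    }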

DXGI debugging: Microsoft put me on a list

Why does Space Station 14 crash with ANGLE on ARM64? 6 hours later… So. I’ve been continuing work on getting ARM64 builds out for Space Station 14. The thing I was working on yesterday were launcher builds, specifically a single download that supports both ARM64 and x64. I’d already gotten the game client itself running natively on ARM64, and it worked perfectly fine in my dev environment. I wrote all the new launcher code, am pretty sure I got it right. Zip it up, test it on ARM64, aaand… The game client crashes on Windows ARM64. Both in my VM and on Julian’s real Snapdragon X laptop.
↫ PJB at A stream of consciousness

Debugging stories can be great fun to read, and this one is a prime example. Trust me, you’ll have no idea what the hell is going on here until you reach the very end, and it’s absolutely wild. Very few people are ever going to run into this exact same set of highly unlikely circumstances, but of course, with a platform as popular as Windows, someone was eventually bound to. Sidenote: the game in question looks quite interesting.

Yes, Redox can run on some smartphones

I had the pleasure of going to RustConf 2025 in Seattle this year. During the conference, I met lots of new people, but in particular, I had the pleasure of spending a large portion of the conference hanging out with Jeremy Soller of Redox and System76. Eventually, we got chatting about EFI and bootloaders, my contributions to postmarketOS, and my experience booting EFI-based operating systems (Linux) on smartphones using U-Boot. Redox OS is also booted via EFI, and so the nerdsnipe began. Could I run Redox OS on my smartphone the same way I could run postmarketOS Linux? Spoilers, yes.
↫ Paul Sajna

The hoops required to get this to work are, unsurprisingly, quite daunting, but it turns out it’s entirely possible to run the ARM build of Redox on a Qualcomm-based smartphone. The big caveat is that there’s not much you can actually do with it: among the various missing drivers is the one for touch input, so once you arrive at Redox’ login screen, you can’t go any further. Still, it’s quite impressive, and it highlights both the amazing work done on the postmarketOS/Linux side and the Redox side.

MV 950 Toy: an emulator of the Metrovick 950, the first commercial transistor computer

After researching the first commercial transistor computer, the British Metrovick 950, Nina Kalinina wrote an emulator, a simple assembler, and some additional “toys” (her word) so we can enjoy this machine today. First, what, exactly, is the Metrovick 950?

Metrovick 950, the first commercial transistor computer, is an early British computer, released in 1956. It is a direct descendant of the Manchester Baby (1948), the first electronic stored-program computer ever.
↫ Nina Kalinina

The Baby, formally known as the Small-Scale Experimental Machine, was the foundation for the Manchester Mark I (1949). The Mark I found commercial success as the Ferranti Mark I. A few years later, Manchester University built a variant of the Mark I that used magnetic drum memory instead of Williams tubes and transistors instead of valves. This computer was called the Manchester Transistor Computer (1955). Engineers from Metropolitan-Vickers released a streamlined, somewhat simplified version of the Transistor Computer as the Metrovick 950.

The emulator she developed is “only” compatible at the source-code level, and emulates “the CPU, a teleprinter with a paper tape punch/reader, a magnetic tape storage device, and a plotter”, at 200-300 operations per second. It’s complete enough that you can play Lunar Lander on it – because is a computer you can’t play games on really a computer? Nina didn’t just create this emulator and its related components; she also wrote a ton of documentation to help you understand the machine and get started. There’s an introduction to programming and using the Metrovick 950 emulator, additional notes on programming the emulator, and much more. She also posted a long thread on Fedi with a ton more details and background information, which is a great read as well. This is amazing work, and interesting not just to programmers interested in ancient computers, but also to historians and people who really put the retro in retrocomputing.

Multikernel architecture proposed for Linux

A very exciting set of kernel patches has just been proposed for the Linux kernel, adding multikernel support to Linux.

This patch series introduces multikernel architecture support, enabling multiple independent kernel instances to coexist and communicate on a single physical machine. Each kernel instance can run on dedicated CPU cores while sharing the underlying hardware resources.
↫ Cong Wang on the LKML

The idea is that you can run multiple instances of the Linux kernel on different CPU cores using kexec, with a dedicated IPI framework taking care of communication between these kernels. The benefits for fault isolation and security are obvious, and it supposedly uses fewer resources than running virtual machines through KVM and similar technologies.

The main feature I’m interested in is that this would potentially allow for “kernel handover”, in which the system goes from using one kernel to another. I wonder if this would make it possible to implement a system similar to what Android currently uses for updates, where new versions are installed alongside the one you’re running right now, with the system switching over to the new version upon reboot. If you could do something similar with this technology without even having to reboot, that would be quite amazing and a massive improvement to the update experience. It’s obviously just a proposal for now, and there will be much, much discussion to follow, I’m sure, but the possibilities are definitely exciting.
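For context on the mechanism: kexec is the existing Linux facility that lets a running kernel load and boot another kernel without going through firmware – today it replaces the running kernel, which is exactly what makes reboot-free handover interesting. A sketch of the existing single-kernel flow, with illustrative paths (the new multikernel interface from the patch series isn’t shown here):

    # stage a new kernel from the running system
    kexec -l /boot/vmlinuz-new --initrd=/boot/initramfs-new.img --reuse-cmdline

    # jump straight into it, skipping firmware and bootloader
    kexec -e

The multikernel series builds on this machinery so that a new kernel can boot on spare CPU cores while the current one keeps running, with the IPI framework handling communication between the two.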

Why Speed Translates to Trust in the Online Space 

People notice speed more than they realize. Whether they’re ordering food online, watching a video, or checking out of an e-commerce store, that near-instant response gives a quiet kind of reassurance. It tells them, without saying a word, that the system behind the screen is working properly. When everything moves smoothly, people tend to believe the platform knows what it’s doing. Speed becomes less about impatience and more about reliability; it’s how a website or app earns a user’s confidence without ever asking for it outright.

When things slow down, even slightly, the feeling changes. A spinning wheel or delayed confirmation sends a small jolt of uncertainty through the user’s mind. It’s subtle, but it’s enough. People start wondering if the system is secure or if something’s gone wrong in the background. Most companies understand this reaction now, which is why they spend so much time and money making sure their sites load quickly and transactions go through smoothly. Fast performance doesn’t just please customers; it convinces them they can trust the process.

Online casinos show this relationship between speed and trust especially well. Players want games that run without lag, deposits that clear quickly, and withdrawals that arrive when promised. The platforms that do this consistently don’t just look professional. They build lasting reputations. That’s one reason many players pick trusted sites with the best payouts, where the speed of payments matches the fairness of the games themselves. These casinos often have their systems tested by independent reviewers to confirm both payout accuracy and security, showing that real credibility comes from proof, not promises.

There’s also something psychological about how we respond to quick actions. When things happen instantly, it gives people a sense of control. A fast confirmation email or immediate transaction approval makes them feel safe, like the system is responsive and alive. Think about how quickly we lose patience when a message doesn’t send right away. That hesitation we feel isn’t really about time. It’s about trust. Slow responses leave room for worry, and in the digital space, worry spreads faster than anything else.

The speed of a platform often mirrors how transparent it feels. A site that runs smoothly gives off the impression that its systems are well managed. Even users who know little about technology pick up on that. Industries that handle sensitive data (finance, entertainment, healthcare) depend heavily on this perception. When transactions lag or screens freeze, people begin to question what’s happening underneath. So speed becomes more than a technical achievement; it’s an emotional one that reassures users everything is in working order.

Fast payments are one of the clearest examples of this idea. Digital wallets and cryptocurrency platforms, for instance, have won users over because transfers happen almost in real time. That pace builds comfort. People like knowing their money moves when they move.

The influence of speed stretches far beyond finance. Social networks depend on it to keep people connected. When messages appear instantly and feeds refresh without effort, users feel present and engaged. But when those same tools slow down, even slightly, people lose interest or suspect something’s broken. We’ve grown accustomed to instant feedback, and that expectation has quietly become the baseline for trust online.

Still, being fast isn’t enough by itself. A website that rushes through interactions but delivers half-finished results won’t hold anyone’s confidence for long. Reliability takes consistency, not just quickness. The companies that succeed online tend to combine performance with honesty. They respond quickly, yes, but they also follow through, fix problems, and keep communication open. Those qualities, together, make speed meaningful.

If there’s one lesson that stands out, it’s that quick service reflects genuine respect for people’s time. Every second saved tells the user that their experience matters. From confirming a payment to collecting winnings, that seamless, responsive flow builds a kind of trust no marketing campaign can replace. This efficiency becomes the quiet proof of reliability in a world where attention is short and expectations are high.

How Can AI Help You Make Better Investments?

The growth of AI in recent years has faced criticism in a number of fields, especially when it comes to generative AI, which has been accused of plagiarizing from creatives, but its implementation across a variety of industries has transformed how they operate, improving efficiency and reducing costs that can be passed on to consumers. When it comes to investments, AI can be an extremely useful tool that provides valuable insights into potential investment opportunities.

This is true across the board, and is exemplified in the crypto industry, which is enjoying a period of growth amidst regulatory restructuring. In the case of sites like Coinfutures.io, consumers can follow markets and make money by predicting whether cryptocurrencies will rise or fall in value, all without having to invest in specific coins. Where consumers might have been put off by traditional stock market investments, many feel more comfortable with cryptocurrencies, which can be used as an alternative to traditional currencies or sat on as an investment. Improving access via mobile apps has also helped to open up the market, and many are now exploring what AI can offer when making investments while adhering to the ethics relevant to its use.

Automated Decision Making and Personalization

The pressure of making investment decisions can be overwhelming at times, with results impacting consumers with the benefit of hindsight. Using AI algorithms to suggest opportunities based on data analysis can help take personal feelings out of the equation and help people focus on the facts. Users can also personalize this by adding specific requirements, making AI do all the heavy lifting and producing a list of options with all the relevant data in an easily accessible form.

Intensive Data Analytics

Data analysis is at the heart of AI use in investment, as AI is able to cover significantly more new and historical data that can help make a decision. Making an investment based purely on gut instinct relies on a lot of luck, whereas studying as much relevant data as possible will give investors a better idea about the potential investment opportunity, the factors that might affect it, and the market as a whole. This would take people a significant amount of time, and even then, they might not be able to go over all the information available to them. AI can do this and collate it in a way that is manageable.

High-Quality Forecasting Models

Predicting the stock market, financial markets, cryptocurrencies, or any other investment opportunity is not an exact science. However, AI forecasting models are able to pull in data from every available source, run it against historical data, and come up with predictions based on fact rather than feeling. These predictions might not always work out exactly, but they can provide valuable information about how similar opportunities have reacted to different market conditions.

Portfolio Automation

The thought of handing over finances to AI might seem daunting for some, but it is possible to automate portfolios in a way that won’t get out of control. Parameters can be set that require investment opportunities to tick a certain number of boxes before investments are made, and the same is true of selling. AI automation can follow your instructions with more flexibility than traditional computer programs, with ML technology helping it improve as it goes.

Sentiment Analysis

Basing investments purely on facts is one way to go, but making the most of all the available information includes sentiment analysis. This is a form of analysis carried out by AI language models to track the general feeling towards investment opportunities and markets. It can cover everything from analyzing breaking market news and expert opinions to gauging the reactions of regular consumers via social media.

Risk and Fraud Detection

The use of AI as a security tool has helped a wide variety of industries, and it can be used to mitigate risk and identify potentially fraudulent activities. Its use in websites, apps, and exchanges can help protect accounts, and when used on a broader scale, it can also help assess the risk of investment opportunities.

While care must be taken with how far we let AI go, especially with generative AI, there are definite applications that can benefit users and operators.

History of the GEM desktop environment

The 1980s saw a flurry of graphical user interfaces pop up, almost all of them in some way made by people who got to see the work done at Xerox. Today’s topic is no exception – GEM was developed by Lee Jay Lorenzen, who worked at Xerox and wished to create a cheaper, less resource-intensive alternative to the Xerox Star, which he got to do at DRI after leaving Xerox. His work was then shown off to Atari, who were interested in using it.

The entire situation was pretty hectic for a while: DRI’s graphics group worked on the PC version of GEM on MS-DOS; Atari developers were porting it to Apple Lisas running CP/M-68K; and Loveman was building GEMDOS. Against all odds, they succeeded. The operating system for the Atari ST, consisting of GEM running on top of GEMDOS, was named TOS, which simply meant “the operating system”, although many believed “T” actually stood for “Tramiel”. The Atari 520 ST, soon nicknamed “Jackintosh”, was introduced at the 1985 Consumer Electronics Show in Las Vegas and became an immediate hit. GEM ran smoothly on the powerful ST’s hardware, and there were no clones to worry about. Atari developed its branch of GEM independently of Digital Research until 1993, when the Atari ST line of computers was discontinued.
↫ Nemanja Trifunovic at Programming at the right level

Other than through articles like these and the occasional virtual machine, I have no experience with the various failed graphical user interfaces of the 1980s, since I was too young at the time. Even from the current day, though, it’s easy to see how all of them can be traced back directly to the work done at Xerox, and just how much we owe to the people working there at the time. Now that the technology industry is as massive as it is, with the stakes being so high, it’s unlikely we’ll ever see a place like Xerox PARC again. Everything is secretive now, and if a line of research doesn’t obviously lead to massive short-term gains, it’s canned before it even starts. The golden age of wild, random computer research without a profit motive is clearly behind us, and that’s sad.

Dark patterns killed my wife’s Windows 11 installation

Last night, my wife looks up from her computer, troubled. She tells me she can’t log into her computer running Windows 11, as every time she enters the PIN code to her account, the login screen throws up a cryptic error: “Your credentials could not be verified”. She’s using the correct PIN code, so that surely isn’t it. We opt for the gold standard in troubleshooting and perform a quick reboot, but that doesn’t fix it. My initial instinct is that since she’s using an online account instead of a local one, perhaps Microsoft is having some server issues? A quick check online indicates that no, Microsoft’s servers seem to be running fine, and to be honest, I don’t even know if that would have an effect on logging into Windows in the first place.

The Windows 11 login screen does give us a link to click in case you forget your PIN code. Despite the fact the PIN code she’s entering is correct, we try to go through this process to see if it goes anywhere. This is where things really start to get weird. A few dialogs flash in and out of existence, until it’s showing us a dialog telling us to insert a security USB key of some sort, which we don’t have. Dismissing it gives us an option to try other login methods, including a basic password login. This, too, doesn’t work; just like with the PIN code, Windows 11 claims the accurate, correct password my wife is entering is invalid (just to be safe, we tested it by logging into her Microsoft account on her phone, which works just fine). In the account selection menu in the bottom-left, an ominous new account mysteriously appears: WsiAccount.

The next option we try is to actually change the PIN code. This doesn’t work either. Windows wants us to use a second factor using my wife’s phone number, but this throws up another weird error, this time claiming the SMS service to send the code isn’t working. A quick check online once again confirms the service seems to be working just fine for everybody else. I’m starting to get really stumped and frustrated.

Of course, during all of this, we’re both searching the web to find anything that might help us figure out what’s going on. None of our searches bring up anything useful, and none of our findings seem to be related to or match up with the issue we’re having. While she’s looking at her phone and I’m browsing on my Fedora/KDE PC next to hers, she quickly mentions she’s getting a notification that OneDrive is full, which is odd, since she doesn’t use OneDrive for anything.

We take this up as a quick sidequest, and we check up on her OneDrive account on her phone. As OneDrive loads, our jaws drop in amazement: a big banner warning is telling her she’s using over 5500% of her 5GB free account. We look at each other and burst out laughing. We exchange some confused words, and then we realise what is going on: my wife just got a brand new Samsung Galaxy S25, and Samsung has some sort of deal with Microsoft to integrate its services into Samsung’s variant of Android. Perhaps during the process of transferring data and applications from her old to her new phone, OneDrive syncing got turned on? A quick trip to the Samsung Gallery application confirms our suspicions: the phone is synchronising over 280GB of photos and videos to OneDrive. My wife was never asked for consent to turn this feature on, so it must’ve been turned on by default. We quickly turn it off, delete the 280GB of photos and videos from OneDrive, and move on to the real issue at hand.
Since nothing seems to work, and none of what we find online brings us any closer to what’s going on with her Windows 11 installation, we figure it’s time to bring out the big guns. For the sake of brevity, let’s run through the things we tried. Booting into safe mode doesn’t work; we get the same login problems. Trying to uninstall the latest updates, an option in WinRE, doesn’t work, and throws up an unspecified error. We try to use a restore point, but despite knowing with 100% certainty that the feature to periodically create restore points is enabled, the only available restore point is from 2022, and is located on a drive other than her root drive (or “C:\” in Windows parlance). Using the reset option in WinRE doesn’t work either, as it also throws up an error, this time about not having enough free space. I also walk through a few more complex suggestions, like a few manual registry hacks related to the original error using cmd.exe in WinRE. None of it yields any results.

It’s now approaching midnight, and we need to get up early to drop the kids off at preschool, so I tell my wife I’ll reinstall her copy of Windows 11 tomorrow. We’re out of ideas.

The next day, I decide to give it one last go before going through the trouble of a reinstallation. The one idea I still have left is to enable the hidden administrator account in Windows 11, which gives you password-free access to what is basically Windows’ root account. It involves booting into WinRE, loading up cmd.exe, and replacing utilman.exe in system32 with cmd.exe; the usual commands are sketched at the end of this item. If you then proceed to boot into Windows 11 and click on the Accessibility icon in the bottom-right, it will open “utilman.exe”, but since that’s just cmd.exe under the utilman.exe name, you get a command prompt to work with, right on the login screen. From here, you can launch regedit, find the correct key, change a REG_BINARY, save, and reboot. At the login screen, you’ll see a new “administrator” account with full access to your computer.

During the various reboots, I do some more web searching, and I stumble upon a post on
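The utilman.exe swap mentioned above usually boils down to two commands at the WinRE command prompt. This is the well-known generic trick rather than the exact commands used that night, and note that the Windows volume isn’t always mounted as C: inside WinRE:

    move c:\windows\system32\utilman.exe c:\windows\system32\utilman.exe.bak
    copy c:\windows\system32\cmd.exe c:\windows\system32\utilman.exe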

Intel to build x86 CPUs with NVIDIA graphics, most likely spelling the end of Arc

Intel is in very dire straits, and as such, the company needs investments and partnerships more than anything. Today, NVIDIA and Intel announced just such a partnership, in which NVIDIA will invest $5 billion into the troubled chip giant, while the two companies will develop products that combine Intel’s x86 processors with NVIDIA’s GPUs.

For data centers, Intel will build NVIDIA-custom x86 CPUs that NVIDIA will integrate into its AI infrastructure platforms and offer to the market. For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.
↫ NVIDIA press release

My immediate reaction to this news was to worry about the future of Intel’s Arc graphics efforts. Just as the latest crop of Arc GPUs has received a ton of good press and positive feedback, with some of the cards becoming the go-to suggestion for a budget-friendly but almost on-par alternative to offerings from NVIDIA and AMD, it would be a huge blow to user choice and competition if Intel were to abandon the effort.

I think this news pretty much spells the end for the Arc graphics effort. Making dedicated GPUs that can compete with AMD and NVIDIA must come at a pretty big financial cost for Intel, and I wouldn’t be surprised if they’ve been itching to find an excuse to can the whole project. With NVIDIA GPUs fulfilling the role of more powerful integrated GPUs, all Intel really needs is a skeleton crew developing basic integrated GPUs for cheaper and non-gaming-oriented devices, which would be a lot cheaper to maintain. For just $5 billion, NVIDIA most likely just eliminated a budding competitor in the GPU space. That’s cheap.

Steam drops 32-bit Windows support

All good things come to an end eventually, and that includes support for 32-bit Windows in Steam.

As of January 1 2026, Steam will stop supporting systems running 32-bit versions of Windows. Windows 10 32-bit is the only 32-bit version that is currently supported by Steam and is only in use on 0.01% of systems reported through the Steam Hardware Survey. Windows 10 64-bit will still be supported and 32-bit games will still run.
↫ Steam support article

While existing installations will continue to work, they will no longer receive any Steam updates or support. Valve obviously advises the small sliver of users still on 32-bit Windows – unbeknownst to them, I’m sure – to upgrade to a 64-bit release. Upcoming versions of Steam will only work on 64-bit systems.

GNOME 49 released

GNOME 49 has been released, and it’s got a lot of nice updates, improvements, and fixes for everyone. GNOME 49 finally replaces the ageing Totem video player with Showtime, and Evince, GNOME’s document viewer, is replaced by the new Papers. Both of these new applications bring a modern GTK4 user interface to replace their older GTK3 counterparts. Papers supports a ton of document-oriented as well as comic book formats, and has annotation features.

We’ve already touched on the extensive accessibility improvements in GNOME Calendar, but other applications have been improved as well, such as Maps, Software, and Web. Software’s improvements focus on performance, especially when dealing with Flatpaks from Flathub, while Web, GNOME’s web browser, comes with improved ad blocking and optional regional blocklists, better bookmark management, improved security features, and more. The remote desktop experience also saw a lot of work, with multitouch input support, extended virtual monitors, and relative mouse input.

For developers, GNOME 49 comes with the new GTK 4.20, the latest version of GLib, and Libadwaita 1.8, released only a few days ago. Libadwaita 1.8 brings a brand new shortcuts information dialog as its most user-facing feature, on top of a whole bunch of other, developer-oriented features. GNOME 49 will find its way to your distribution of choice soon enough.