Google’s Generic Kernel Image now required on all Android form factors

New TVs that launch with Android TV 14 or later on Linux kernel 5.15 or higher will be required to meet Google’s Generic Kernel Image (GKI) requirements in order to pass certification!

This means that GKI is now enforced on all major Android form factors with AArch64 chipsets: handhelds, watches, automotive, & televisions.

↫ Mishaal Rahman

What this means is that all the major Android form factors will be running kernels that adhere to the GKI requirements, with SoC and board support no longer built into the core kernel but provided through loadable vendor modules instead. This should, in theory, make it easier to provide long-term support.
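To make that a bit more concrete, here’s a minimal sketch of what such a loadable vendor module looks like – the kind of out-of-tree module a SoC vendor would ship alongside an unmodified GKI kernel. The “examplesoc” name and the module itself are made up purely for illustration; only the boilerplate is the standard Linux kernel module API.

```c
/* Minimal sketch of a loadable vendor module, the mechanism GKI uses to
 * keep SoC and board support out of the core kernel image.
 * "examplesoc" is a hypothetical name used purely for illustration. */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init examplesoc_init(void)
{
	pr_info("examplesoc: vendor support module loaded\n");
	return 0;
}

static void __exit examplesoc_exit(void)
{
	pr_info("examplesoc: vendor support module unloaded\n");
}

module_init(examplesoc_init);
module_exit(examplesoc_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Hypothetical SoC support module for a GKI kernel");
```

The core GKI image stays identical across devices, and modules like this one are loaded from the vendor partition(s) at boot, which is what makes it feasible to keep the shared kernel updated independently of the silicon vendors.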

Fedora intends to fully embrace “AI”, but doesn’t address sourcing or its environmental impact

All weekend, I’ve been mulling over a recent blog post by Fedora Project Leader Matthew Miller, which he wrote and published on behalf of the Fedora Council. Fedora (the KDE version) is my distribution of choice, I love using it, and I consider it the best distribution for desktop use, and not by a close margin either. As such, reading a blog post in which Fedora is announcing plans to make extensive use of “AI” was bound to make me feel a little uneasy.

Miller states – correctly – that the “AI” space as it stands right now is dominated so much by hyperbole and over-the-top nonsense that it’s hard to judge the various technologies underpinning “AI” on merit alone. He continues that he believes that stripped of all the hyperbole and techbro bullshit, there’s “something significant, powerful”, and he wants to make “Fedora Linux the best community platform for AI”.

So, what exactly does that look like?

In addition to the big showy LLM-based tools for chat and code generation, these advances have brought big jumps for more tailored tasks: for translation, file search, home automation, and especially for accessibility (already a key part of our strategy). For example, open source speech synthesis has long lagged behind proprietary options. Now, what we have in Fedora is not even close to the realism, nuance, and flexibility of AI-generated speech.

↫ Matthew Miller

Some of these are things we can all agree are important and worthwhile, but lacking on the Linux desktop. If we can make use of technologies labelled as “AI” to improve, say, text-to-speech on Linux for those who require it for accessibility reasons, that’s universally a great thing. Translation, too, is, at its core, a form of accessibility, and if we can improve machine translations so that people who, for instance, don’t speak English gain more access to English content, or if we can make the vast libraries of knowledge locked into foreign languages accessible to more people, that’s all good news.

However, Fedora aims to take its use of “AI” even further, and wants to start using it in the process of developing, making, and distributing Fedora. This is where more and more red flags are starting to pop up for me, because I don’t feel like the processes and tasks they want to inject “AI” into are the kinds of processes and tasks where you want humans taken out of the equation.

We can use AI/ML as part of making the Fedora Linux OS. New tools could help with package automation and bug triage. They could note anomalies in test results and logs, maybe even help identify potential security issues. We can also create infrastructure-level features for our users. For example, package update descriptions aren’t usually very meaningful. We could automatically generate concise summaries of what’s new in each system update — not just for each package, but highlighting what’s important in the whole set, including upstream change information as well.

↫ Matthew Miller

Even the tools built atop billions and billions of euros of investments by Microsoft, Google, OpenAI, Facebook, and similar juggernauts are not exactly good at what they’re supposed to do, and suck at even the most basic of tasks, like providing answers to simple questions. They lie, they make stuff up, they bug out and produce nonsense, they’re racist, and so on. I don’t want any of that garbage near the process of making and updating the operating system I rely on every day.

Miller laments how “AI” is currently a closed-source, black box affair, which obviously doesn’t align with Fedora’s values and goals. He doesn’t actually explain how Fedora’s use of “AI” is going to address this. They’re going to have to find ethical, open source models that are also of high quality, and that’s a lot easier said than done. Sourcing doesn’t even get a single mention in this blog post, even though I’m fairly sure that’s one of the two major issues many of us have with the current crop of “AI” tools.

The blog post also completely neglects to mention the environmental cost of training these “AI” tools. It costs an insane amount of electricity to train these new tools, and with climate change ever accelerating and the destruction of our environment visible all around us, not mentioning this problem when you’re leading a project like Fedora seems disingenuous at best, and malicious at worst.

While using “AI” to improve accessibility tools in Fedora and the wider Linux world is laudable, some of the other intended targets seem more worrisome, especially when you take into account that the blog post makes no mention of the two single biggest problems with “AI”: sourcing, and its environmental impact. If Fedora truly intends to fully embrace “AI”, it’s going to have to address these two problems first, because otherwise they’re just trying to latch onto the hype without really understanding the cost.

And that’s not something I want to hear from the leaders of my Linux distribution.

Framework’s software and firmware have been a mess, but it’s working on them

Framework puts a lot of effort into making its hardware easy to fix and upgrade and into making sure that hardware can stay useful down the line when it’s been replaced by something newer. But supporting that kind of reuse and recycling works best when paired with long-term software and firmware support, and on that front, Framework has been falling short.

Framework will need to step up its game, especially if it wants to sell more laptops to businesses—a lucrative slice of the PC industry that Framework is actively courting. By this summer or fall, we’ll have some idea of whether its efforts are succeeding.

↫ Andrew Cunningham at Ars Technica

A very painful read, and I’m disappointed to learn that the software support from Framework has been so lacklustre – or non-existent, to be more accurate. Leaving several security vulnerabilities in firmware unpatched is a disgrace, and puts users at risk, while promising but not delivering updates that will unlock faster Thunderbolt speeds is just shitty. They have to do better, especially since their pitch is all about repairability and longevity.

This article has made me more wary of spending any money on Framework – not that I have the money for a new laptop, because reasons – and I suspect more people will feel the same way after reading it.

Radxa ROCK 5 ITX: a first look

A couple of weeks ago I wrote about the ROCK 5 ITX coming soon and since then, samples of the Rockchip RK3588-based Radxa ROCK 5 ITX have been landing on doorsteps (or service points, screw you, UPS) of a lucky group of people and somehow I was one of those, so here’s a first look at Radxa’s latest Single Board Computer in a Mini ITX form-factor!

It’s going to be a photo-heavy post and I make no apologies for that, it’s a very nice-looking PCB, with the black and gold colour scheme looking very stylish. I imagine that was a very conscious decision seeing as, as expected, they’re marketing this as a low-power desktop option and you probably don’t want a plain Jane motherboard taking pride of place in your new system, right?

↫ Bret Weber

Now this – this, my friends, is exactly what the doctor ordered. I can’t wait for standard ITX and ATX motherboards sporting ARM processors to become more common and readily available, hopefully better standardised than what we’re used to from the ARM world. I want my next (non-gaming) machines to be ARM-powered, and that means we’re going to need more of these standard-form-factor ARM boards, spanning a wider range of performance levels.

GestureX: control your Linux machine with hand gestures

GestureX enables you to control your Linux PC using hand gestures. You can assign specific commands or functionalities to different hand gestures, allowing for hands-free interaction with your computer.

↫ GestureX GitHub page

I personally see no use for any of this, but I’m sure there are some interesting accessibility uses for technology like this, which in and of itself makes it a worthwhile endeavour to work on. Do note, though, that this is all beta, so there are bound to be issues.

Apple’s mysterious fisheye projection

If you’ve read my first post about Spatial Video, the second about Encoding Spatial Video, or if you’ve used my command-line tool, you may recall a mention of Apple’s mysterious “fisheye” projection format. Mysterious because they’ve documented a CMProjectionType.fisheye enumeration with no elaboration, they stream their immersive Apple TV+ videos in this format, yet they’ve provided no method to produce or playback third-party content using this projection type.

Additionally, the format is undocumented, they haven’t responded to an open question on the Apple Discussion Forums asking for more detail, and they didn’t cover it in their WWDC23 sessions. As someone who has experience in this area – and a relentless curiosity – I’ve spent time digging-in to Apple’s fisheye projection format, and this post shares what I’ve learned.

↫ Mike Swanson
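For background – and this is generic lens-projection math, not Apple’s undocumented variant – fisheye projections map the angle θ between an incoming ray and the optical axis directly to a radial distance r on the image, rather than using the rectilinear perspective mapping most cameras approximate:

$$
r_{\mathrm{rectilinear}} = f\tan\theta, \qquad
r_{\mathrm{equidistant}} = f\,\theta, \qquad
r_{\mathrm{equisolid}} = 2f\sin\!\left(\tfrac{\theta}{2}\right)
$$

Which mapping Apple actually uses for its immersive content – and with what lens parameters, image circle, and orientation – is exactly the part that remains undocumented, and is what Swanson’s post digs into.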

There is just so much cool technology crammed into the Vision Pro, from the crazy displays down to, apparently, the encoding format for spatial video. Too bad Apple seems to have forgotten that a technology is not a product, as even the most ardent Apple supporters – like John Gruber, or the hosts of ATP – have stated their Vision Pro devices are lying unused, collecting dust, just months after launch.

Why good external SSDs are faster with Apple silicon

After several days testing the latest Express 1M2 enclosure from OWC, I have changed my recommendations for the best external SSDs. Previously I had chosen the relatively reliable Thunderbolt 3 up to 3 GB/s, even though few drives ever seemed capable of achieving that up to. If you’re still needing good performance with an Intel Mac, that makes sense.

But if you need best performance with an Apple silicon Mac, you’re far better off with a high-quality USB 40Gbps enclosure such as OWC’s Express 1M2, which should reliably return over 3 GB/s even through a compatible hub. I much prefer the word over to up to.

↫ Howard Oakley

If you have an Apple Silicon Mac, and you’re looking for an external drive – this is some good advice to follow.
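As a rough sanity check on those figures – my own back-of-the-envelope arithmetic, not Oakley’s – a 40 Gbps USB4 link has a raw ceiling of

$$
\frac{40\ \mathrm{Gb/s}}{8\ \mathrm{bits/byte}} = 5\ \mathrm{GB/s}
$$

so even after encoding, tunnelling, and NVMe protocol overhead, sustained transfers north of 3 GB/s from a good enclosure are entirely plausible, which lines up with the numbers Oakley reports.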

Linux 6.10 to merge NTSYNC driver for emulating Windows NT synchronization primitives

Going through my usual scanning of all the “-next” Git subsystem branches of new code set to be introduced for the next Linux kernel merge window, a very notable addition was just queued up… Linux 6.10 is set to merge the NTSYNC driver for emulating the Microsoft Windows NT synchronization primitives within the kernel for allowing better performance with Valve’s Steam Play (Proton) and Wine of Windows games and other apps on Linux.

↫ Michael Larabel

The performance improvements this new driver will bring to games running under Proton are legitimately insane. We’re looking at a game-changing addition to the Linux kernel here, and it’s no surprise, then, to see this effort being spearheaded by companies like Valve and CodeWeavers.
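To get a feel for what is being emulated here, consider the semantics Wine has to reproduce: NT semaphores carry both a current count and a maximum, a release that would exceed the maximum must fail, and waits must block until the count is non-zero. Below is a plain pthreads sketch of just those semantics – emphatically not the NTSYNC driver’s actual API, merely an illustration of the bookkeeping involved:

```c
/* Illustrative sketch of NT semaphore semantics (count + maximum) that
 * Wine has to emulate. This is NOT the ntsync driver's API; it is a plain
 * pthreads version showing what must happen on every wait and release. */
#include <pthread.h>
#include <stdbool.h>

struct nt_semaphore {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    unsigned int    count;
    unsigned int    max;
};

static void nt_sem_init(struct nt_semaphore *s, unsigned int initial, unsigned int max)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->count = initial;
    s->max = max;
}

/* ReleaseSemaphore-style: add 'release' to the count, but never beyond max;
 * NT reports an error instead of silently overflowing. */
static bool nt_sem_release(struct nt_semaphore *s, unsigned int release)
{
    bool ok = true;
    pthread_mutex_lock(&s->lock);
    if (s->count + release > s->max) {
        ok = false;
    } else {
        s->count += release;
        pthread_cond_broadcast(&s->cond);
    }
    pthread_mutex_unlock(&s->lock);
    return ok;
}

/* WaitForSingleObject-style: block until the count is non-zero, then take one. */
static void nt_sem_wait(struct nt_semaphore *s)
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->cond, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}
```

The really expensive part is that Windows code constantly waits on many such objects at once (WaitForMultipleObjects), sometimes requiring all of them to be acquired atomically, which is slow and awkward to emulate from userspace – historically via round-trips to Wine’s wineserver process. Moving those primitives into the kernel is where the big performance wins come from.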

KDE’s Kate on all platforms

Kate, KDE’s programming-focused text editor, is, of course, a Qt application, and as such is also available on a variety of other platforms. Christoph Cullmann, one of the developers of Kate, published a short blog post with screenshots of Kate running on the three biggest platforms – Linux/BSD, Windows, and macOS. Sadly, while Haiku gets a mention, there’s no screenshot of the Haiku version of Kate.

Still, it’s interesting to see the family resemblance.

VMS Software guts its community licensing program

VMS Software, the company developing OpenVMS, has announced some considerable changes to its licensing program for hobbyists, and the news is, well, bad. The company claims that demand for hobbyist licenses has been so high that they were unable to process requests fast enough, and as such, that the program is not delivering the “intended benefits”. Despite this apparent high demand, contributions from the community, such as writing and porting open-source software, creating wiki articles, and providing assistance on their forums, have “not matched the scale of the program”.

Now, I want to stop them right here. The OpenVMS hobbyist program was riddled with roadblocks, restrictions, unclear instructions, restrictive licensing, and similar barriers to entry. As such, it’s entirely unsurprising that the community around what is – with all due respect – largely a relic of an operating system simply hasn’t grown enough to become self-sustaining. The blame here lies entirely with VMS Software itself, and not at all with whatever community managed to form around OpenVMS despite the countless restrictions.

So, you’d expect them to expand the program, right? Perhaps embrace open source, or make the various versions and releases more freely and easily available?

No, they’re going to do the exact opposite. To address not getting enough out of their community, they’re going to limit that community’s options even more. First, they’re ending the community program for Alpha and Itanium (which they call Integrity, since it covers HP’s Integrity machines), effective immediately, so they won’t be granting any new licenses for these architectures. Existing licenses will continue to work into 2025.

Effective immediately, we will discontinue offering new community licenses for non-commercial use for Alpha and Integrity. Existing holders of community licenses for these architectures will get updates for those licenses and retain their access to the Service Portal until March 2025 for Alpha and December 2025 for Integrity. All outstanding requests for Alpha and Integrity community licenses will be declined.

↫ VMS Software announcement

This sucks, but with both Alpha and Itanium being end-of-life, there are at least some arguments to be made for ending the program for these architectures. Much less defensible are the changes to x86-64 community licensing, which basically just come down to more bureaucracy for both users and VMS Software.

For x86 community licenses, we will be transitioning to a package-based distribution model (which will also replace the student license that used to be distributed as a FreeAXP emulator package). A vmdk of a system disk with OpenVMS V9.2-2 and compilers installed and licensed will be provided, along with instructions to create a virtual machine and the SYSTEM password. The license installed on that system will be valid for one year, at which point we will provide a new package. While this may entail some inconvenience for users, it enables us to continue offering licenses at no cost, ensuring accessibility without compromising our sustainability.

↫ VMS Software announcement

The vibe I’m getting from this announcement is that by offering some rudimentary and complicated form of community licensing, OpenVMS hoped to gain the advantages of a vibrant open source community, without all the downsides. They must’ve hoped that by throwing the community a bone, they’d get them to do a bunch of work for them, and now that this is not panning out, they’re taking their ball and going home. That’s entirely within their right, of course, but I doubt these changes are going to make anyone more excited to dig into OpenVMS.

All of this feels eerily similar to the attempts by QNX – before it was acquired by BlackBerry – to do pretty much the same thing. QNX, too, required you to sign up and jump through a bunch of hoops to get QNX releases, and the company steeped it all in talk of building a community, but of course it didn’t pan out, because people are simply not interested in a one-way relationship where you work for free for a corporation that then takes your work and uses it to sell its product – in this case, an operating system.

This particular mistake is made time and time again, and it seems VMS Software simply did not learn this lesson.

Microsoft tests ads in the Start menu

Building on top of recent improvements like grouping recently installed apps and showing your frequently used apps, we are now trying out recommendations to help you discover great apps from the Microsoft Store under Recommended on the Start menu. This will appear only for Windows Insiders in the Beta Channel in the U.S. and will not apply to commercial devices (devices managed by organizations). This can be turned off by going to Settings > Personalization > Start and turning off the toggle for “Show recommendations for tips, app promotions, and more”. As a reminder, we regularly try out new experiences and concepts that may never get released with Windows Insiders to get feedback. Should you see this experience on the Start menu, let us know what you think. We are beginning to roll this out to a small set of Insiders in the Beta Channel at first.

↫ Amanda Langowski and Brandon LeBlanc

The Start menu, August 24, 1995 – April 12, 2024. You made it almost 30 years, buddy.

Do not use Kagi

For quite a while now, you might have noticed various people recommending a search engine called “Kagi”. From random people on the internet, to prominent bloggers like John Gruber and David Pierce, they’ve all been pushing this seemingly new search engine as a paid-for alternative to Google that respects your privacy. Over the past few months to a year, though, more and more cracks started to appear in Kagi’s image, and I’ve been meaning to assemble those cracks and tie a bow on them.

Well, it turns out I don’t have to, because lori (I’m not aware of their full name, so I’ll stick to lori) already did it for me in a blog post titled “Why I lost faith in Kagi”. Even though I knew all of these stories, and even though I was intending to list them in more or less the same way, it’s still damning to see it all laid out so well (both the story itself, as well as the lovely, accessible, approachable, and simple HTML, but that’s neither here nor there).

Lori’s summary hits on all the pain points (but you should really read the whole thing):

Between the absolute blase attitude towards privacy, the 100% dedication to AI being the future of search, and the completely misguided use of the company’s limited funds, I honestly can’t see Kagi as something I could ever recommend to people. Is the search good? I mean…it’s not really much better than any other search, it heavily leverages Bing like DDG and the other indie search platforms do, the only real killer feature it has to me is the ability to block domains from your results, which I can currently only do in other search engines via a user script that doesn’t help me on mobile. But what good is filtering out all of the AI generated spamblogs on a search platform that wants to spit more AI generated bullshit at me directly? Sure I can turn it off, but who’s to say that they won’t start using my data to fuel their own LLM? They already have an extremely skewed idea of what counts as PII or not. They could easily see using people’s searches as being “anonymized” and decide they’re fine to use, because their primary business isn’t search, it’s AI.

↫ lori at lori’s blog

The examples underpinning all these pain points are just baffling, like how the company was originally an “AI” company, made a search engine that charges people for Bing results, and now is going full mask-off with countless terrible, non-working, privacy-invasive “AI” tools. Or that thing where the company spent one third of their funding round of $670,000 on starting a T-shirt company in Germany (Kagi is US-based) to print 20,000 free T-shirts for their users that don’t even advertise Kagi. Or that thing where they claimed they “forgot” to pay sales tax for two years and had to raise prices to pay their back taxes. And I can just keep on going.

To make matters worse, after publication of the blog post, Kagi’s CEO started harassing lori over email, and despite lori stating repeatedly they wanted him to stop emailing them, he just kept on going. Never a good look.

The worst part of it, though, is the lack of understanding about what privacy means, while telling their users they are super serious about it. Add to that the CEO’s “trust me, bro” attitude, their deals with the shady and homophobic crypto company Brave, and many other things, and the conclusion is that, no, your data is not safe at Kagi at all, and with their primary business being “AI” and not search, you know exactly what that means.

Do not use Kagi.

Amazon virtually kills efforts to develop Alexa Skills, disappointing dozens

There was a time when it thought that Alexa would yield a robust ecosystem of apps, or Alexa Skills, that would make the voice assistant an integral part of users’ lives. Amazon envisioned tens of thousands of software developers building valued abilities for Alexa that would grow the voice assistant’s popularity—and help Amazon make some money.

But about seven years after launching a rewards program to encourage developers to build Skills, Alexa’s most preferred abilities are the basic ones, like checking the weather. And on June 30, Amazon will stop giving out the monthly Amazon Web Services credits that have made it free for third-party developers to build and host Alexa Skills. The company also recently told devs that its Alexa Developer Rewards program was ending, virtually disincentivizing third-party devs to build for Alexa.

↫ Scharon Harding at Ars Technica

I’ve never used Alexa – Amazon doesn’t really have a footprint in either The Netherlands or Sweden, so I never really had to care – but I always thought the Skills were the reason it was so loved. Killing off this feature makes little sense to me, but then, I’m assuming Amazon has the data to show that people aren’t using them.

It sucks, I guess? Can someone who uses Alexa fill in the blanks for me here?

Discord is nuking Nintendo Switch emulator devs and their entire servers

Discord has shut down the Discord servers for the Nintendo Switch emulators Suyu and Sudachi and has completely disabled their lead developers’ accounts — and the company isn’t answering our questions about why it went that far. Both Suyu and Sudachi began as forks of Yuzu, the emulator that Nintendo sued out of existence on March 4th.

↫ Sean Hollister at The Verge

This is exactly what people were worried about when Nintendo and Yuzu settled for millions of dollars. Even though it’s a settlement and not a court ruling, and even though the code to Yuzu is entirely unaffected by the settlement and freely shareable and usable by anyone, and even though emulators are legal – the chilling effect this settlement is having is absolutely undeniable. Here we have Discord going far beyond its own official policy, without even giving the affected parties any recourse. It’s absolutely wild, and highlights just how dangerous it is to rely on Discord for, well, anything.

I wish that, for once, we’d actually see a case related to console emulation go to court in either the EU or the US, to make it even clearer that yes, unless you distribute copyrighted code like game ROMs or console firmware, emulators are entirely legal and without any risk. You know, a recent court ruling we could point to in order to dissuade bullies like Nintendo from threatening innocent developers and ruining their lives over entirely legal activities.

And let me reiterate: don’t use Discord for anything other than basic chat. This platform ain’t got your back.

DwarFS: a read-only compression file system

DwarFS is a read-only file system with a focus on achieving very high compression ratios in particular for very redundant data.

[…]

DwarFS also doesn’t compromise on speed and for my use cases I’ve found it to be on par with or perform better than SquashFS. For my primary use case, DwarFS compression is an order of magnitude better than SquashFS compression, it’s 6 times faster to build the file system, it’s typically faster to access files on DwarFS and it uses less CPU resources.

↫ DwarFS GitHub page

DwarFS supports Linux, macOS, and Windows, but macOS and Windows support is experimental at this point. It seems to offer higher compression ratios at faster speeds than various alternatives, so if you have a use case for compression file systems – give DwarFS a look.

OpenBSD is a cozy operating system

With the recent release of OpenBSD 7.5, I decided to run through my personal OpenBSD “installer” for laptop/desktop devices. The project is built off of the dwm tiling window manager and only installs a few basic packages. The last time I updated it was with the release of 7.3, so it’s been due for an minor rework.

While making these minor changes, I remembered how incredibly easy the entire install process for OpenBSD is and how cozy the entire operating system feels. All the core systems just work out the box. Yes, you need to “patch” in WiFi with a firmware update, so you’ll need an Ethernet connection during the initial setup. Yes, the default desktop environment is not intuitive or ideal for newcomers.

But the positives heavily outweigh the negatives (in my opinion).

↫ Bradley Taunt

OpenBSD has a very dedicated community, and I’ve noticed they tend to be very helpful and friendly. It’s making me curious about trying it out, and both this article and the helpful posts it links to will be a great way to start.

Android 15 Beta 1 is here, but details are still under wraps

After two months of developer previews, Google has finally released Android 15 Beta 1. While the beta usually offers more user-facing changes, Google is still pretty light on details with this build, giving us only a few more details on what we can expect. Instead, the company is pointing to Google I/O for more details, which will take place on May 14 this year, basically confirming that this is when we will get the second beta with more features.

↫ Manuel Vonau

There’s very little of interest in this beta, so unless you’re really into Android development, I’d hold off on installing any betas until after Google I/O.

GNU Hurd ported to AArch64, and more Hurd news

Hurd, the kernel that is supposed to form the basis of the GNU operating system, is perpetually a research project that doesn’t get anywhere close to being a replacement for Linux, but that doesn’t mean the project doesn’t make progress or have a place in the world of operating systems. Its most recent major improvement has been porting GNU Hurd to AArch64, spearheaded by Hurd developer Sergey Bugaev.

Since then, however, I have been (some may say, relentlessly) working on filling in the missing piece, namely porting GNU Mach (with important help & contributions by Luca D.). I am happy to report that we now have an experimental port of GNU Mach that builds and works on AArch64! While that may sound impressive, note that various things about it are in an extremely basic, proof-of-concept state rather than being seriously production-ready; and also that Mach is a small kernel (indeed, a microkernel), and it was designed from the start (back in the 80s) to be portable, so most of the “buisness logic” functionality (virtual memory, IPC, tasks/threads/scheduler) is explicitly arch-independent.

Despite the scary “WIP proof-of-concept” status, there is enough functionality in Mach to run userland code, handle exceptions and syscalls, interact with the MMU to implement all the expected virtual memory semantics, schedule/switch tasks and threads, and so on. Moreover, all of GNU Mach’s userspace self-tests pass!

↫ Sergey Bugaev

On top of all this, glibc works on the AArch64 port, and several important Hurd servers work as well – namely ext2fs, exec, startup, auth, and proc – as do a number of basic UNIX programs. This is an exceptional effort, and highlights that while people tend to make fun of Hurd, it’s got some real talent working on it, bringing the platform forward. While we may not see any widely usable release any time soon, every bit of progress helps and is welcome.

Speaking of progress, the progress report for GNU Hurd covering the first quarter of 2024 has also been published, and it lists a number of other improvements and fixes made aside from the AArch64 port. For instance, the console will now use xkbcommon instead of X11 for handling keyboard layouts, which reduced code complexity a lot and improved keyboard layout coverage, to boot. The port of GDB to the 64 bit version of Hurd is also progressing, and SMP has seen a ton of fixes too.

Another awesome bit of news comes from, once again, Sergey Bugaev, as he announced a new Hurd distribution based on Alpine Linux. Work on this project has only recently begun, but he’s already had some success, with some 299 Alpine packages available so far. His reason for starting this new project is that while Debian GNU/Hurd is a great base for Hurd users and developers to work from, Debian is also a bit strict and arcane in its packaging requirements, which might make sense for Debian GNU/Linux, but is annoying to work with when you’re trying to get a lot of low-level work done. For now, there’s no name yet, and he’s asking the Hurd community for help with name ideas, hosting, and so on.

That’s a lot of GNU Hurd progress this quarter, and that’s good news.

Humane AI Pin reviews confirm what we already expected: it’s useless trash

I didn’t want to spend too much time on this thing, but I feel like we can all use a good laugh at a stupid product hyped only by the tech media. The Verge reviewed the Humane AI pin, and entirely predictably, it’s a complete and utter trashfire.

But until all of that happens, and until the whole AI universe gets better, faster, and more functional, the AI Pin isn’t going to feel remotely close to being done. It’s a beta test, a prototype, a proof of concept that maybe someday there might be a killer device that does all of these things. I know with absolute certainty that the AI Pin is not that device. It’s not worth $700, or $24 a month, or all the time and energy and frustration that using it requires. It’s an exciting idea and an infuriating product. 

AI gadgets might one day be great. But this isn’t that day, and the AI Pin isn’t that product. I’ll take my phone back now, thanks.

↫ David Pierce at The Verge

It takes dozens of seconds to reply to any query, the battery is severely lacking, the answers you get are mostly wrong or useless, sending text messages is effectively broken, and tons of promised features don’t work because they’re not implemented. In another video review, MrMobile also shows the device overheating all the time, a problem that seems common to all the review devices. I don’t think trashfire is harsh enough to describe this junk.

So it begins: Microsoft starts showing full-screen ads about the end of Windows 10 support

We are about 18 months away from the end of mainstream Windows 10 support, but Microsoft thinks it is time to start nagging warning Windows 10 users about the inevitable. Users on Reddit report spotting a new full-screen ad with a notification that Windows 10 is about to reach its end of life in October 2025, even though it is still getting new features (there are even rumors about Microsoft re-opening the Windows Insider Program for Windows 10).

↫ Taras Buria at Neowin

I mean, I have a long history of crying foul over Windows being adware now, but I don’t think warning users that their operating system is losing support and that they should upgrade to a new version really constitutes an ad. Sure, technically it does, but I think we can all agree that such a warning is useful and informative.