ExectOS is a preemptive, reentrant multitasking operating system that implements the XT architecture, which derives from the NT architecture. It is modular, and consists of two main layers: microkernel and user modes. Its kernel mode has full access to the hardware and system resources and runs code in a protected memory area. It consists of executive services, which are themselves made up of many modules that do specific tasks, a kernel, and drivers. Unlike NT, the system does not feature a separate Hardware Abstraction Layer (HAL) between the physical hardware and the rest of the OS. Instead, the XT architecture integrates hardware-specific code with the kernel. The user mode is made up of subsystems and has been designed to run applications written for many different types of operating systems. This allows us to implement any environment subsystem to support applications written strictly to the corresponding standard (e.g. DOS or POSIX). Thanks to that, ExectOS will allow running existing software, including Win32 applications. ↫ ExectOS website What ExectOS seems to be is an implementation very close to what Windows NT originally was – implementing the theory of Windows NT, not the reality. It’s clearly still in very early development, but in theory, I really like the idea of what they’re trying to achieve here. Windows NT is, after all, in and of itself not a bad concept – it’s just been tarred and feathered by decades of mismanagement from Microsoft. Implementing something that closely resembles the original, minimalist theories behind NT could lead to an interesting operating system for sure. ExectOS is open source, contains its own boot loader, only runs on EFI, and installation on real hardware, while technically possible, is discouraged.
Today just so happens to be the 40th birthday of X, the venerable windowing system that’s on its way out, at least in the Linux world. From the original announcement by Robert W. Scheifler: I’ve spent the last couple weeks writing a window system for the VS100. I stole a fair amount of code from W, surrounded it with an asynchronous rather than a synchronous interface, and called it X. Overall performance appears to be about twice that of W. The code seems fairly solid at this point, although there are still some deficiencies to be fixed up. We at LCS have stopped using W, and are now actively building applications on X. Anyone else using W should seriously consider switching. This is not the ultimate window system, but I believe it is a good starting point for experimentation. Right at the moment there is a CLU (and an Argus) interface to X; a C interface is in the works. The three existing applications are a text editor (TED), an Argus I/O interface, and a primitive window manager. There is no documentation yet; anyone crazy enough to volunteer? I may get around to it eventually. ↫ Robert W. Scheifler Reading this announcement email made me wonder if way back then, in 1984, the year of my birth, there were also people poo-pooing this new thing called “X” for not having all the features W had. There must’ve been people posting angry messages on various BBS servers about how X is dumb and useless since it doesn’t have their feature in W that allows them to use an acoustic modem to send a signal over their internal telephone system by slapping their terminal in just the right spot to activate their Betamax that’s hotwired into the telephone system. I mean, W was only about a year old at the time, so probably not, but there must’ve been a lot of complaining and whining about this newfangled X thing, and now, 40 years later, long after it has outgrown its usefulness, we’re again dealing with people so hell-bent on keeping an outdated system running, while hoping – nay, demanding – that others do the actual work of maintaining it. X served its purpose. It took way too long, but we’ve moved on. Virtually every new Linux user from the last 12-24 months or so will most likely never use X, and never even know what it was. They’re using a more modern, more stable, more performant, more secure, and better maintained system, leading to a better user experience, and that’s something we should all agree is a good thing.
Framework, the company making modular, upgradeable, and repairable laptops, and DeepComputing, the same company that’s making the DC ROMA II RISC-V laptop we talked about last week, have announced something incredibly cool: a brand new RISC-V mainboard that fits right into existing Framework 13 laptops. Sporting a RISC-V StarFive JH7110 SoC, this groundbreaking Mainboard was independently designed and developed by DeepComputing. It’s the main component of the very first RISC-V laptop to run Canonical’s Ubuntu Desktop and Server, and the Fedora Desktop OS and represents the first independently developed Mainboard for a Framework Laptop. ↫ The DeepComputing website For a company that was predicted to fail by a popular Apple spokesperson, it seems Framework is doing remarkably well. This new mainboard is the first one not made by Framework itself, and is the clearest validation yet of the concept put into the market by the Framework team. I can’t recall the last time you could buy a laptop powered by one architecture, and then upgrade to an entirely different architecture down the line, just by replacing the mainboard. The news of this RISC-V mainboard has made me dream of other possibilities – like someone crazy enough to design, I don’t know, a POWER10 or POWER11 mainboard? Entirely impossible and unlikely due to heat constraints, but one may dream, right?
There’s incredibly good news for people who use accessibility tools on Linux, but who were facing serious, game-breaking problems when trying to use Wayland. Matt Campbell, of the GNOME accessibility team, has been hard at work on an entirely new accessibility architecture for modern free desktops, and he’s got some impressive results to show for it already. I’ve now implemented enough of the new architecture that Orca is basically usable on Wayland with some real GTK 4 apps, including Nautilus, Text Editor, Podcasts, and the Fractal client for Matrix. Orca keyboard commands and keyboard learn mode work, with either Caps Lock or Insert as the Orca modifier. Mouse review also works more or less. Flat review is also working. The Orca command to left-click the current flat review item works for standard GTK 4 widgets. ↫ Matt Campbell One of the major goals of the project was to enable such accessibility support for Flatpak applications without having to pass an exception for the AT-SPI bus. What this means is that the new accessibility architecture can run as part of a Flatpak application without having to break out of its sandbox, which is obviously a hugely important feature to implement. There’s still a lot of work to be done, though. Something like the GNOME shell doesn’t yet support Newton, of course, so that’s still using the older, much slower AT-SPI bus. Wayland also doesn’t support mouse synthesizing yet, things like font, size, style, and colour aren’t exposed yet, and there are many more limitations due to this being such a new project. The project also isn’t trying to be GNOME-specific; Campbell wants to work with the other desktops to eventually end up with an accessibility architecture that is truly cross-desktop. The blog post goes into great detail about the implementation, current and possible future shortcomings, and a lot more.
After the very successful release of KDE Plasma 6.0, which moved the entire desktop environment and most of its applications over to Qt 6, fixed a whole slew of bugs, and streamlined the entire KDE desktop and its applications, it’s now time for KDE Plasma 6.1, where we’re going to see a much stronger focus on new features. While it’s merely a point release, it’s still a big one. The tentpole new feature of Plasma 6.1 is access to remote Plasma desktops. You can go into Settings and log into any Plasma desktop, and this is built entirely and directly into KDE’s own Wayland compositor, avoiding the use of third-party applications or hacky extensions to X.org. Having such remote access built right into the desktop environment and its compositor is a much cleaner implementation than in the before times with X. Another feature that worked just fine under X but was still missing from KDE Plasma on Wayland is something they now call “persistent applications” – basically, KDE will now remember which windows you had open when you closed KDE or shut down your computer, and open them back up right where you left off when you log back in. It’s one of those things that got lost in the transition to Wayland, and having it back is really, really welcome. Speaking of Wayland, KDE Plasma 6.1 also introduces two major new rendering features. Explicit sync removes flickering and glitches most commonly seen on NVIDIA hardware, while triple buffering provides smoother animations and screen rendering. There’s more here, too, such as a completely reworked edit desktop view, support for controlling the keyboard LED backlighting traditionally found in gaming laptops, and more. KDE Plasma 6.1 will find its way to your distribution of choice soon enough, but of course, you can compile and install it yourself, too.
I’ve always found the world of DOS versions and variants to be confusing, since most of it took place when I was very young (I’m from 1984), so I wasn’t paying much attention to computing quite yet, other than playing DOS games. One of the variants of DOS whose origins I never quite understood until much, much later was DR-DOS. To this day, I pronounce this as “Doctor DOS”. If you’re also a little unclear on what, exactly, DR-DOS was, Bradford Morgan White has an excellent article detailing the origins and history of DR-DOS, making it very easy to get up to speed and expand your knowledge on DOS, which is surely a very marketable skill in the days of Electron and Electron for Developers. DR DOS was a great product. It was superior to other DOS versions in many ways, and it is certainly possible that it could have been more successful were it not for Microsoft Windows having been so wildly successful. Starting with Windows 95, the majority of computer users simply didn’t much care about which DOS loaded Windows so long as it worked. There’s quite a bit of lore regarding legal battles and copyrights surrounding CP/M and DOS involving Microsoft and Digital Research. This has been covered in previous articles to some extent, but I am not really certain how much would have changed had Microsoft and Digital Research got on. Gates and Kildall had been quite friendly at one point, and we know that the two mutually chose not to work together due to differences in business practices and beliefs. Kildall chose to be quite a bit more friendly and less competitive while Gates very much chose to be competitive and at times a bit ruthless. Additionally, Kildall sold DRI rather than continue the fight, and DRI had never really attempted to combine DR DOS with GEM as a cohesive product to fight Windows before Windows became the ultimate ruler of the OS market following Windows 3.1’s release. Still, it was an absolutely brilliant product and part of me will always feel that it ought to have won. ↫ Bradford Morgan White I can definitely imagine an alternative timeline in which Digital Research managed to combine DR-DOS with GEM in a more attractive way, stealing Microsoft’s thunder before Gates got the ball properly rolling with Windows 3.x. It’s one of the many, many what-ifs in this sector, but not one you often hear or read about.
To lock subscribers into recurring monthly payments, Adobe would typically pre-select by default its most popular “annual paid monthly” plan, the FTC alleged. That subscription option locked users into an annual plan despite paying month to month. If they canceled after a two-week period, they’d owe Adobe an early termination fee (ETF) that costs 50 percent of their remaining annual subscription. The “material terms” of this fee are hidden during enrollment, the FTC claimed, only appearing in “disclosures that are designed to go unnoticed and that most consumers never see.” ↫ Ashley Belanger at Ars Technica There’s a sucker for every corporation, but I highly doubt there’s anyone out there who would consider this a fair business practice. This is so obviously designed to hide costs during sign-up, and then unveil them when the user considers quitting. If this is deemed legal or allowed, you can expect everyone to jump on this bandwagon to scam users out of their money. It goes further than this, though. According to the FTC, Adobe knew this practice was shady, but continued it anyway because altering it would negatively affect the bottom line. The FTC is actually targeting two Adobe executives directly, which is always nice to hear – it’s usually management that pushes such illegal practices through, leaving the lower ranks little choice but to comply or lose their job. Stuff like this is exactly why confidence in the major technology companies is at an all-time low.
Cinnamon, the popular GTK desktop environment developed by the Linux Mint project, pushed out Cinnamon 6.2 today, which will serve as the default desktop for Linux Mint 22. It’s a relatively minor release, but it does contain a major new feature which is actually quite welcome: a new GTK frontend for GNOME Online Accounts, part of the XApp project. This makes it possible to use the excellent GNOME Online Accounts framework, without having to resort to a GNOME application – and will come in very handy on other GTK desktops, too, like Xfce. The remainder of the changes consist of a slew of bugfixes, small new features, and nips and tucks here and there. Wayland support is still an in-progress effort for Cinnamon, so you’ll be stuck with X for now.
Less than a month after 3.5.0, IceWM is already shipping version 3.6.0. Once again not a major, earth-shattering release, it does contain at least one really cool feature that I think is pretty nifty: if you double-click on a window border, it will maximise just that side of the window. Pretty neat. For the rest, it’s small changes and bug fixes for this venerable window manager.
It seems that if you want to steer clear of having Facebook use your Facebook, WhatsApp, Instagram, etc. data for machine learning training, you might want to consider moving to the European Union. Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe. The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools. ↫ Ashley Belanger These are just the opening salvos of the legal war that’s brewing here, so who knows how it’s going to turn out. For now, though, European Union Facebook users are safe from Facebook’s machine learning training.
Way, way back in the cold and bleak days of 2021, I mentioned Vinix on OSNews, an operating system written in the V programming language. A few days ago, over on Mastodon, the official account for the V programming language sent out a screenshot showing Solitaire running on Vinix, showing off what the experimental operating system can do. The project doesn’t seem to really publish any changelogs or release notes, so it’s difficult to figure out what, exactly, is going on at the moment. The roadmap indicates they’ve already got a solid base to work from, such as mlibc, bash, GCC/G++, X and an X window manager, and more – with things like Wayland, networking, and more on the roadmap.
I have a feeling Microsoft is really starting to feel some pressure about its plans to abandon Windows 10 next year. Data shows that 70% of Windows users are still using Windows 10, and this percentage has proven to be remarkably resilient, making it very likely that hundreds of millions of Windows users will be out of regular, mainstream support and security patches next year. It seems Microsoft is, therefore, turning up the PR campaign, this time by publishing a blog post about myths and misconceptions about Windows 11. The kind of supposed myths and misconceptions Microsoft details are exactly the kind of stuff corporations with large deployments worry about at night. For instance, Microsoft repeatedly bangs the drum on application compatibility, stating that despite the change in number – 10 to 11 – Windows 11 is built on the same base as its predecessor, and as such, touts 99.7% application compatibility. Furthermore, Microsoft adds that if businesses do suffer from an incompatibility, they can use something called App Assure – which I will intentionally mispronounce until the day I die because I’m apparently a child – to fix any issues. Apparently, the visual changes to the user interface in Windows 11 are also a cause of concern for businesses, as Microsoft dedicated an entire entry to this, citing a study claiming that the visual changes do not negatively impact productivity. The blog post then goes on to explain how the changes are actually really great and enhance productivity – you know, the usual PR speak. There’s more in the blog post, and I have a feeling we’ll be seeing more and more of this kind of PR offensive as the cut-off date for Windows 10 support nears. Windows 10 users will probably also see more and more Windows 11 ads when using their computers, too, urging them to upgrade even when they very well cannot because of missing TPMs or unsupported processors. I don’t think any of these things will work to bring that 70% number down much over the next 12 months, and that’s a big problem for Microsoft. I’m not going to make any predictions, but I wouldn’t be surprised if Microsoft is simply forced by, well, reality to extend official support for Windows 10 well beyond 2025. Especially with all the recent investigations into Microsoft’s shoddy internal security culture, there’s just no way they can cut 70% of their users off from security updates and patches.
Stranded on a desert island; lost in the forest; stuck in the snow; injured and unable to get back to civilization. Human beings have used their ingenuity for millennia to try to signal for rescue. There’s been a progression of technological innovations: smoke signals, mirrors, a loud whistle, a portable radio, a mobile phone. With each invention, it’s been possible to venture a little farther from populated areas and still have peace of mind about being able to call for help. But once you get past the range of a terrestrial radio tower, whether it’s into the wilderness or out at sea, it starts to get more complicated and expensive to be able to call for rescue. In the next year or so, it’s going to become a lot simpler and less expensive. Probably enough to become ubiquitous. Hardware infrastructure is already in place, and the relevant software and service support is rolling out now. It’s been possible for decades for adventurers to keep in contact via satellite. The first commercial maritime satellite communications system was launched in 1976. Globalstar and Iridium launched in the late 90s and drove down the device size and service cost of satellite phones. However, the service was a lot more expensive than cellular phone service, not enough people were willing to pay for remote comms to cover the massive infrastructure costs, and both companies went bankrupt. Their investors lost their money, but the satellites still worked, so once the bankruptcies were hashed out they fulfilled their promise, at least technologically. On a parallel track, in the late 1980s the International Cospas-Sarsat Programme was set up to develop a satellite-aided search and rescue system that detects and locates emergency beacons activated by aircraft, ships, and people engaged in recreational activities in remote areas, and then sends these distress alerts to search-and-rescue (SAR) authorities. Many types of beacons are available, and nowadays they send exact GPS coordinates along with the call for rescue. In the 2010s, the Satellite Emergency Notification Device, or SEND, was brought to market. These are portable beacons that connect to the Globalstar and Iridium networks and allow people in remote areas not only to call for help in emergencies, but also to communicate via text messaging. Currently the two most popular SEND devices are the Garmin inReach Mini 2 and the Spot X. These devices cost $400 and $250 USD respectively, and require monthly service fees of $12-40. For someone undertaking a long and dangerous expedition into the backcountry, these are very reasonable costs, especially for someone who does it often. But for most people, it’s just not practical to pay for and carry a device like that “just in case.” In 2022, the iPhone 14 included a feature that was the first step in taking satellite-based communication into the mainstream. It allows iPhone users to share their location via the Find My feature, using new radio hardware that connects to the Globalstar service. So if you’re out adventuring, your friends can keep track of where you are. And if there’s an emergency, you can make an emergency SOS. It’s not just a generic Mayday: you can text specific details about your emergency and it will be transmitted to the local authorities. You can also choose to notify your personal emergency contacts. Last week, at WWDC, Apple announced the next stage: in iOS 18, iMessage users will be able to send text messages over satellite, using the same Globalstar network as its SOS features.
Initially at least, this feature is expected to be free. With this expansion, iPhone users will have the basic functionality of a SPOT or inReach device, without special hardware or a monthly fee. SpaceX’s Starlink, which first offered service in 2021, has much higher bandwidth and lower latency than the Globalstar and Iridium networks. Starlink’s current offering requires a dinner-plate-sized antenna and conventional networking hardware to enable high-bandwidth mobile internet. It’s great for a vehicle, but impractical for a backpacker. However, SpaceX has announced 2nd generation satellites that can connect to 1900MHz spectrum mobile phone radios. T-Mobile has announced that it will be enabling the service for its customers in late 2024, and Apple, Google, and Samsung devices are confirmed to be supported. Initially, like Apple’s service, this will be restricted to text messaging and other low-bandwidth applications. Phone calls and higher-bandwidth internet connectivity are promised in 2025. The other two big US carriers, AT&T and Verizon, have announced they will be partnering with a competing service, AST SpaceMobile, but it’s unlikely those plans will come to fruition very soon. Mobile phone users outside the US will also need to wait. Apple’s Messages via satellite is only announced for US users, as is T-Mobile’s offering. So if you’re in the US, and have an iPhone, or are a T-Mobile subscriber with an Apple, Samsung, or Google device, you’ll soon be able to point your phone at the sky, even in remote areas, to call for help, give your friends an update on your expedition, or just stay in touch. Pretty soon, Tom Hanks won’t have to make friends with a volleyball when he crash-lands on a deserted island, at least not until his battery dies.
Way, way, way back in 2009, we reported on a small hobby operating system called StreamOS – version 0.21-RC1 had just been released that day. StreamOS was a 32-bit operating system written in Object Pascal using the Free Pascal Compiler, running on top of FreeDOS. It turns out that its creator, Oleksandr Natalenko (yes, the same person), recovered the old code, and republished it on Codeberg for posterity. It’s not a complete history, rather a couple of larger breadcrumbs stuck together with git. I didn’t do source code management much back in the days, and there are still some intermediate dev bits scattered across my backup drive that I cannot even date properly, but three branches I pushed (along with binaries, btw; feel free to fire up that qemu of yours and see how it crashes!) should contain major parts of what was done. ↫ Oleksandr Natalenko It may not carry the same import as Doom for the SNES, but it’s still great to see such continuity 15 years apart. I hope Natalenko manages to recover the remaining bits and bobs too, because you never know – someone might be interested in picking up this 15-year-old baton.
The complete source code for the Super Nintendo Entertainment System (SNES) version of Doom has been released on archive.org. Although some of the code was partially released a few years ago, this is the first time the full source code has been made publicly available. ↫ Shaun James at GBAtemp The code was very close to being lost forever, right down to a corrupted disk that had to be repaired first. It’s crazy how much valuable, historically relevant code we’re just letting rot away for no reason.
Howard Oakley has written an interesting history of secure enclaves on the Mac, and when he touches upon “exclaves”, a new concept that doesn’t have a proper term yet, he mentions something interesting. While an enclave is a territory entirely surrounded by the territory of another state, an exclave is an isolated fragment of a state that exists separately from the main part of that state. Although exclave isn’t a term normally used in computing, macOS 14.4 introduced three kernel extensions concerned with exclaves. They seem to have appeared first in iOS 17, where they’re thought to be code domains isolated from the kernel that protect key functions in macOS even when the kernel becomes compromised. This in turn suggests that Apple is in the process of refactoring the kernel into a central micro-kernel with protected exclaves. This has yet to be examined in Sequoia. ↫ Howard Oakley I’m not going to add too much here since I’m not well-versed enough in the world of macOS to add anything meaningful, but I do think it’s an interesting theory worth looking into by people who possess far more knowledge about this topic than I do.
Sometimes you come across a story that’s equally weird and delightful, and this is definitely one of them. Oleksandr Natalenko posted a link on Mastodon to a curious email sent to the Linux Kernel Mailing List, which apparently gets sent to the LKML every single year. The message is very straightforward. Is it possible to write a kernel module which, when loaded, will blow the PC speaker? ↫ R.F. Burns on the LKML Since this gets sent every year, it’s most likely some automated thing that’s more of a joke than a real request at this point. However, originally, there was a real historical reason behind the inquiry, as Schlemihl Schalmeier on Mastodon points out. They link to the original rationale behind the request, posted to the LKML after the request was first made, all the way back in 2007. At the time, the author was helping a small school system manage a number of Linux workstations, and the students there were abusing the sound cards on those workstations for shenanigans. They addressed this by only allowing users with root privileges access to the sound devices. However, kids are smart, and they started abusing the PC speaker instead, and even unloading the PC speaker kernel module didn’t help because the kids found ways to abuse the PC speaker outside of the operating system (the BIOS maybe? I have no idea). And so, the author notes, the school system wanted them to remove the PC speakers entirely, but this would be a very fiddly and time-consuming effort, since there were a lot of PCs, and of course, this would all have to be done on-site – unlike the earlier solutions which could all be done remotely. So, the idea was raised about seeing if there was a way to blow the PC speaker by loading a kernel module. If so, a mass-deployment of a kernel module overnight would take care of the PC speaker problem once and for all. ↫ R.F. Burns on the LKML So, that’s the original story behind the request. It’s honestly kind of ingenious, and it made me wonder if the author got a useful reply on the LKML, and if such a kernel module was ever created. The original thread didn’t seem particularly conclusive to me, and the later yearly instances of the request don’t seem to yield much either. It seems unlikely to me this is possible at all. Regardless, this is a very weird bit of Linux kernel lore, and I’d love to know if there’s more going on. Various parts of the original rationale seem dubious to me, such as the handwavy thing about abusing the PC speaker outside of the operating system, and what does “abusing” the PC speaker even mean in the first place? As Natalenko notes, it seems there’s more to this story, and I’d love to find out what it is.
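For the curious, here’s a rough sketch (my own illustration, not code from the LKML thread, and the module and symbol names are made up) of what driving the PC speaker from a Linux kernel module looks like, using the legacy 8253/8254 PIT registers. Getting it to make noise on load is trivial; actually destroying the speaker through software, which is what the original request asked for, almost certainly isn’t, since all the hardware does is gate a square wave through the speaker.

    /* pcspkr_noise.c - illustrative sketch only */
    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/io.h>

    #define PIT_BASE_HZ 1193182UL  /* base clock of the legacy 8253/8254 PIT */
    #define TONE_HZ     2000UL     /* arbitrary tone to emit on load */

    static int __init pcspkr_noise_init(void)
    {
            unsigned long divisor = PIT_BASE_HZ / TONE_HZ;

            /* Program PIT channel 2 (the speaker channel) as a square-wave generator. */
            outb(0xb6, 0x43);
            outb(divisor & 0xff, 0x42);
            outb((divisor >> 8) & 0xff, 0x42);

            /* Set bits 0 and 1 of port 0x61 to gate the speaker on. */
            outb(inb(0x61) | 0x03, 0x61);
            pr_info("pcspkr_noise: speaker on at %lu Hz\n", TONE_HZ);
            return 0;
    }

    static void __exit pcspkr_noise_exit(void)
    {
            /* Gate the speaker back off on unload. */
            outb(inb(0x61) & ~0x03, 0x61);
            pr_info("pcspkr_noise: speaker off\n");
    }

    module_init(pcspkr_noise_init);
    module_exit(pcspkr_noise_exit);
    MODULE_LICENSE("GPL");

Mass-deploying something like this overnight would certainly make a computer lab unpleasant, but it only drives the speaker within its normal operating range, which is probably why the thread never produced an actual speaker-blowing module.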
Brussels is set to charge Apple over allegedly stifling competition on its mobile app store, the first time EU regulators have used new digital rules to target a Big Tech group. The European Commission has determined that the iPhone maker is not complying with obligations to allow app developers to “steer” users to offers outside its App Store without imposing fees on them, according to three people with close knowledge of its investigation. ↫ Javier Espinoza and Michael Acton This was always going to happen for as long as Apple’s malicious compliance kept dragging on. The rules in the Digital Markets Act are quite clear and simple, and despite the kind of close cooperation with EU lawmakers no normal EU citizen is ever going to get, Apple has been breaking this law from day one without any intent to comply. European Union regulators have given Apple far, far more leeway and assistance than any regular citizen or small business would get, and that has to stop. The possible fines under the DMA are massive. If Apple is found guilty, it could be fined up to 10% of its global revenue, or 20% for repeated violations. This is no laughing matter, and this is not one of those cases where a company like Apple could calculate fines as a mere cost of doing business – this would have a material impact on the company’s numbers, and shareholders are definitely not going to like it if Apple gets fined such percentages. As these are preliminary findings, Apple could still implement changes, but if past behaviour is any indication, any possible changes will just be ever more malicious compliance.
Former employee says software giant dismissed his warnings about a critical flaw because it feared losing government business. Russian hackers later used the weakness to breach the National Nuclear Security Administration, among others. ↫ Renee Dudley at ProPublica In light of Recall, a very dangerous game.
Google’s own Project Zero security research effort, which often finds and publishes vulnerabilities in both other companies’ and its own products, set its sights on Android once more, this time focusing on third-party kernel drivers. Android’s open-source ecosystem has led to an incredible diversity of manufacturers and vendors developing software that runs on a broad variety of hardware. This hardware requires supporting drivers, meaning that many different codebases carry the potential to compromise a significant segment of Android phones. There are recent public examples of third-party drivers containing serious vulnerabilities that are exploited on Android. While there exists a well-established body of public (and In-the-Wild) security research on Android GPU drivers, other chipset components may not be as frequently audited so this research sought to explore those drivers in greater detail. ↫ Seth Jenkins They found a whole host of security issues in these third-party kernel drivers in phones both from Google itself as well as from other companies. An interesting point the authors make is that because it’s getting ever harder to find 0-days in core Android, people with nefarious intent are now looking at other parts of an Android system, and these kernel drivers are an inviting avenue for them. They seem to focus mostly on GPU drivers, for now, but it stands to reason they’ll be targeting other drivers, too. As usual with Android, the discovered exploits were often fixed, but the patches took way, way too long to find their way to end users due to the OEMs lagging behind when it comes to sending those patches to users. The authors propose wider adoption of Android APEX to make it easier for OEMs to deliver kernel patches to users faster. I always like the Project Zero studies and articles, because they really take no prisoners, and whether they’re investigating someone else like Microsoft or Apple, or their own company Google, they go in hard, do not sugarcoat their findings, and apply the same standards to everyone.