When you’re shopping online, you’ll likely find yourself jumping between multiple tabs to read reviews and research prices. It can be cumbersome doing all that back and forth tab switching, and online comparison is something we hear users want help with. In the next few weeks, starting in the U.S., Chrome will introduce Tab compare, a new feature that presents an AI-generated overview of products from across multiple tabs, all in one place. Imagine you’re looking for a new Bluetooth portable speaker for an upcoming trip, but the product details and reviews are spread across different pages and websites. Soon, Chrome will offer to generate a comparison table by showing a suggestion next to your tabs. By bringing all the essential details — product specs, features, price, ratings — into one tab, you’ll be able to easily compare and make an informed decision without the endless tab switching. ↫ Parisa Tabriz Is this really what people want from their browser, or am I just completely out of touch? I’m not at all convinced the latter isn’t the case, but this just seems like a filler feature. Is this really what all the AI hype is about? Is this kind of nonsense the end game we’re killing the planet even harder for?
It seems to be bootloader season, because we’ve got another one – this time, a research project with very limited application for most people. SentinelBoot is a cryptographically secure bootloader aimed at enhancing boot flow safety of RISC-V through memory-safe principles, predominantly leveraging the Rust programming language with its ownership, borrowing, and lifetime constraints. Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality. SentinelBoot achieves these objectives with a 20.1% hashing overhead (approximately 0.27s additional runtime) when compared to an example U-Boot binary (mainline at time of development), and produces a resulting binary one-tenth the size of an example U-Boot binary with half the memory footprint. ↫ Lawrence Hunter SentinelBoot is a project undertaken at the University of Manchester, and its goal is probably clear from the description: to develop a more secure bootloader for RISC-V devices. An additional element is that they looked specifically at devices that receive updates over-the-air, like smartphones. In addition, scenarios where an attacker has physical access to the device in question were not considered, for obvious reasons – in such cases, the attacker can just replace the bootloader altogether anyway, and no amount of fancy Rust code is going to save you there. The details of the implementation as described in the article are definitely a little bit over my head, but the gist seems to be that the project’s been able to achieve a much more secure boot process without giving up much in performance. This being a research project with an intentionally limited scope does mean it’s not something that’ll immediately benefit all of us, but it’s these kinds of projects that can really push the state of the art and test the viability of new ideas.
As you all know, I continue to use WordStar for DOS 7.0 as my word-processing program. It was last updated in December 1992, and the company that made it has been defunct for decades; the program is abandonware. There was no proper archive of WordStar for DOS 7.0 available online, so I decided to create one. I’ve put weeks of work into this. Included are not only full installs of the program (as well as images of the installation disks), but also plug-and-play solutions for running WordStar for DOS 7.0 under Windows, and also complete full-text-searchable PDF versions of all seven manuals that came with WordStar — over a thousand pages of documentation. ↫ Robert J. Sawyer WordStar for DOS is definitely a bit of a known entity in our circles for still being used by a number of world-famous authors. WordStar 4.0 is still being used by George R. R. Martin – assuming he’s still even working on The Winds of Winter – and there must be some sort of reason as to why it’s still so oddly popular. Thanks to this work by author Robert J. Sawyer, accessing and using version 7 of WordStar for DOS is now easier than ever. One of the reasons Sawyer set out to do this was making sure that if he passes away, the people responsible for his estate and works will have an easy way to access his writings. It’s refreshing to see an author think ahead this far, and it will surely help a ton of other people too, since there’s quite a few documents lingering around using the WordStar format.
That sure is a big news drop for a random Tuesday. A federal judge ruled that Google violated US antitrust law by maintaining a monopoly in the search and advertising markets. “After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly,” according to the court’s ruling, which you can read in full at the bottom of this story. “It has violated Section 2 of the Sherman Act.” ↫ Lauren Feiner at The Verge Among many other things, the judge mentions Google’s own admissions that the company can do pretty much whatever it wants with Google Search and its advertisement business, without having to worry about users opting to go elsewhere or ad buyers leaving the Google platform. Studies from inside Google itself made it very clear that Google could systematically make Search worse without it affecting user and/or usage numbers in any way, shape, or form – because users have nowhere else to realistically go. While the ability to raise prices at will without fear of losing customers is a sure sign of being a monopoly, so is being able to make a product worse without fear of losing customers, the judge argues. Google plans to appeal, obviously, and this ruling has nothing yet to say about potential remedies, so what, exactly, is going to change is as of yet unknown. Potential remedies will be handled during the next phase of the proceedings, with the wildest and most aggressive remedy being a potential break-up of Google, Alphabet, or whatever it’s called today. My sights are definitely set on a break-up – hopefully followed by Apple, Amazon, Facebook, and Microsoft – to create some much-needed breathing room in the technology market, and pave the way for a massive number of newcomers to compete on much fairer terms.
Of note is that the judge also put yet another nail in the coffin of Google’s various exclusivity deals, most notably with Apple and, for our interests, with Mozilla. Google pays Apple well over 20 billion dollars a year to be the default search engine on iOS, and it pays about 80% of Mozilla’s revenue to be the default search engine in Firefox. According to the judge, such deals are anticompetitive. Mehta rejected Google’s arguments that its contracts with phone and browser makers like Apple were not exclusionary and therefore shouldn’t qualify it for liability under the Sherman Act. “The prospect of losing tens of billions in guaranteed revenue from Google — which presently come at little to no cost to Apple — disincentivizes Apple from launching its own search engine when it otherwise has built the capacity to do so,” he wrote. ↫ Lauren Feiner at The Verge If the end of these deals becomes part of the package of remedies, it will be a massive financial blow to Apple – 20 billion dollars a year is about 15% of Apple’s total annual operating profits, and I’m also pretty sure those Google billions are counted as part of Tim Cook’s much-vaunted services revenue, so losing it would definitely impact Apple directly where it hurts. Sure, it’s not like it’ll make Apple any less of a dangerous behemoth, but it will definitely have some explaining to do to investors. Much more worrisome, however, is the similar deal Google has with Mozilla. About 80% of Mozilla’s total revenue comes from a search deal with Google, and if that deal were to be dissolved, the consequences for Mozilla, and thus for Firefox, would be absolutely immense. This is something I’ve been warning about for years now, and the end of this deal would be yet another worry that I’ve voiced repeatedly becoming reality, right after Mozilla becoming an advertising company and making Firefox worse in the name of quick profits.
One by one, every single concern I’ve voiced about the future of Firefox is becoming reality. Canonical, Fedora, KDE, GNOME, and many other stakeholders – ignore these developments at your own peril.
After a number of very big security incidents involving Microsoft’s software, the company promised it would take steps to put security at the top of its list of priorities. Today we got another glimpse of the steps it’s taking: the company is going to take security into account during performance reviews. Kathleen Hogan, Microsoft’s chief people officer, has outlined what the company expects of employees in an internal memo obtained by The Verge. “Everyone at Microsoft will have security as a Core Priority,” says Hogan. “When faced with a tradeoff, the answer is clear and simple: security above all else.” A lack of security focus for Microsoft employees could impact promotions, merit-based salary increases, and bonuses. “Delivering impact for the Security Core Priority will be a key input for managers in determining impact and recommending rewards,” Microsoft is telling employees in an internal Microsoft FAQ on its new policy. ↫ Tom Warren at The Verge Now, I’ve never worked in a corporate environment or anything even remotely close to it, but something about this feels off to me. Often, it seems that individual, lower-level employees know all too well they’re cutting corners, but they’re effectively forced to because management expects almost inhuman results from its workers. So, in the case of a technology company like Microsoft, this means workers are pushed to write as much code as possible, or to implement as many features as possible, and the only way to achieve the goals set by management is to take shortcuts – like not caring as much about code quality or security. In other words, I don’t see how Microsoft employees are supposed to make security their top priority, while also still having to achieve any unrealistic goals set by management and other higher-ups.
What I’m missing from this memo and associated reporting is Microsoft telling its employees that if unrealistic targets, crunch, low pay, and other factors that contribute to cutting corners get in the way of putting security first, they have the freedom to choose security. If employees are not given such freedom, demanding even more from them without anything in return seems like a recipe for disaster to me, making this whole memo quite moot. We’ll have to see what this will amount to in practice, but with how horrible employees are treated in most industries these days, especially in countries with terrible union coverage and laughable labour protection laws like the US, I don’t have high hopes for this.
CP/M is turning 50 this year. The ancient Control Program for Microcomputers, or CP/M for short, has been enjoying a modest renaissance in recent years. By 21st century standards, it’s unimaginably tiny and simple. The whole OS fits into under 200 kB, and the resident bit of the kernel is only about 3 kB. Today, in the era of end-user OSes in the tens-of-gigabytes size range, this exerts a fascination to a certain kind of hobbyist. Back when it was new, though, this wasn’t minimalist – it was all that early hardware could support. ↫ Liam Proven I’m a little too young to have experienced CP/M as anything other than a retro platform – I’m from 1984, and we got our first computer in 1990 or so – but its importance and influence cannot be overstated. Many of the conventions set by CP/M made their way to the various DOS variants, and in turn, we still see some of those conventions in Windows today. Had Digital Research, the company CP/M creator Gary Kildall set up to sell CP/M, accepted the deal with IBM to make CP/M the default operating system for the then newly-created IBM PC, we’d be living in a very different world today. Digital Research would also create several other popular and/or influential software products beyond CP/M, such as DR DOS and GEM, as well as various other DOS variants and CP/M versions with DOS compatibility. It would eventually be acquired by Novell, where it faded into obscurity.
Not too long ago I linked to a blog post by long-time OSNews reader (and silver Patreon) and friend of mine Morgan, about how to set up OpenBSD as a workstation operating system – and in fact, I personally used that guide in my own OpenBSD journey. Well, Morgan’s back with another, similar article, this time covering FreeBSD. After going through the basic steps needed to make FreeBSD a bit more amenable to desktop use, Morgan notes about performance: Now let’s compare FreeBSD. Well, quite frankly, there is no comparison! FreeBSD just feels snappier and more responsive on the desktop; at the same 170Hz refresh it actually feels like 170Hz. Void Linux always felt fast enough and I thought it had no lag at all at that refresh rate, but comparing them side by side (FreeBSD installed on the NVMe drive, Void running from a USB 4 SSD with similar performance), FreeBSD is smooth as glass and I started noticing just the slightest lag/stutter on Void. The same holds true for Firefox; I use smooth scrolling and on FreeBSD it really is perfectly smooth. Similarly, Youtube performance is unreal, with no dropped frames at any resolution all the way up to 4Kp60, and the videos look so much smoother! ↫ Morgan/kaidenshi This is especially relevant for me personally, since the prime reason I switched my workstation back to Fedora KDE was OpenBSD’s performance issues. While those performance issues were entirely expected and the result of the operating system’s focus on security and hardening, it did mean it’s just not suitable for me as a workstation operating system, even though I like the internals and find it a joy to use under the hood. If FreeBSD delivers more solid desktop and workstation performance, it might be time I set up a FreeBSD KDE installation and see if it can handle my workstation’s 270Hz 4K display.
As I keep reiterating – the BSD world has a lot to offer those wishing to run a UNIX-like workstation operating system, and it’s articles like these that help people get started. A lot of the steps taken may seem elementary to many of us, but for people coming from Linux or even Windows, they may be unfamiliar and daunting, so having it all laid out in a straightforward manner is quite helpful.
As uBlock Origin lead developer and maintainer Raymond Hill explained on Friday, this is the result of Google deprecating support for the Manifest v2 (MV2) extensions platform in favor of Manifest v3 (MV3). “uBO is a Manifest v2 extension, hence the warning in your Google Chrome browser. There is no Manifest v3 version of uBO, hence the browser will suggest alternative extensions as a replacement for uBO,” Hill explained. ↫ Sergiu Gatlan at Bleeping Computer If you’re still using Chrome, or any Chrome skins that haven’t committed to keeping Manifest v2 extensions enabled, it’s really high time to start thinking about jumping ship if ad blocking matters to you. Of course, we don’t know for how long Firefox will remain able to properly block ads either, but for now, it’s obviously the better choice for those of us who care about a better browsing experience. And just to reiterate: I fully support anyone’s right to block ads, even on OSNews. Your computer, your rules. There are a variety of other means to support OSNews – our Patreon, individual donations through Ko-Fi, or buying our merch – that are far better for us than ads will ever be.
Limine is an advanced, portable, multiprotocol bootloader that supports Linux, multiboot1 and 2, the native Limine boot protocol, and more. Limine is lightweight, elegant, fast, and the reference implementation of the Limine boot protocol. The Limine boot protocol’s main target audience is operating system and kernel developers that want to use a boot protocol which supports modern features in an elegant manner, that GRUB’s aging multiboot protocols do not (or do not properly). ↫ Limine website I wish trying out different bootloaders was an easier thing to do. Personally, since my systems only run Fedora Linux, I’d love to just move them all over to systemd-boot and not deal with GRUB at all anymore, but since it’s not supported by Fedora I’m worried updates might break the boot process at some point. On systems where only one operating system is installed, as a user I should really be given the choice to opt for the simplest, most basic boot sequence, even if it can’t boot any other operating systems or if it’s more limited than GRUB.
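For what it’s worth, part of what makes systemd-boot attractive for a single-OS machine is how simple its configuration is: each boot entry is a small text file following the Boot Loader Specification. A minimal example entry might look like this (the kernel version, UUID, and filenames below are made up for illustration):

```
# /boot/loader/entries/fedora.conf — hypothetical example entry
title   Fedora Linux
linux   /vmlinuz-6.8.9-300.fc40.x86_64
initrd  /initramfs-6.8.9-300.fc40.x86_64.img
options root=UUID=1234-abcd ro quiet
```

That’s the entire boot entry – no scripting, no generated configuration – which is exactly the kind of simple boot sequence I’d like to be able to opt into.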
Following our recent work with Ubuntu 24.04 LTS where we enabled frame pointers by default to improve debugging and profiling, we’re continuing our performance engineering efforts by evaluating the impact of O3 optimization in Ubuntu. O3 is a GCC optimization level that applies more aggressive code transformations compared to the default O2 level. These include advanced function inlining and the use of sophisticated algorithms aimed at enhancing execution speed. While O3 can increase binary size and compilation time, it has the potential to improve runtime performance. ↫ Ubuntu Discourse If these optimisations deliver performance improvements, and the only downside is larger binaries and longer compilation times, it seems like a bit of a no-brainer to enable these, assuming those mentioned downsides are within reason. Are there any downsides they’re not mentioning? Browsing around and doing some minor research it seems that -O3 optimisations may break some packages, and can even lead to performance degradation, defeating the purpose altogether. Looking at a set of benchmarks from Phoronix from a few years ago, in which the Linux kernel was compiled with either O2 or O3 and their performance compared, the results were effectively tied, making it seem not worth it at all. However, during these benchmarks, only the kernel was tested; everything else was compiled normally in both cases. Perhaps compiling the entire system with O3 will yield improvements in other parts of the system that do add up. For now, you can download unsupported Ubuntu ISOs compiled with O3 optimisations enabled to test them out.
Another month, another chunk of progress for the Servo rendering engine. The biggest addition is enabling table rendering to be spread across CPU cores. Parallel table layout is now enabled, spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo’s layout engine. ↫ Servo blog On top of this, there’s tons of improvements to the flexbox layout engine, support for generic font families like ‘sans-serif’ and ‘monospace’ has been added, and Servo now supports OpenHarmony, the operating system developed by Huawei. This month also saw a lot of work on the development tools.
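Servo’s real implementation is Rust on top of Rayon’s work-stealing, but the underlying idea – each table row’s layout is independent of the others, so rows can be laid out in parallel and only the final positioning pass is sequential – can be sketched in a few lines. This is a toy model for illustration only; the “layout” here is just computing row heights:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model: a table is a list of rows, a row is a list of cell heights.

def layout_row(row):
    # A row is as tall as its tallest cell; independent of other rows,
    # which is what makes the per-row work embarrassingly parallel.
    return max(row)

def layout_table(rows):
    # Parallel phase: lay out every row concurrently.
    with ThreadPoolExecutor() as pool:
        heights = list(pool.map(layout_row, rows))
    # Sequential phase: accumulate heights into vertical offsets.
    offsets, y = [], 0
    for h in heights:
        offsets.append(y)
        y += h
    return heights, offsets
```

So `layout_table([[10, 20], [5, 5], [30, 1]])` gives row heights `[20, 5, 30]` and offsets `[0, 20, 25]` – the expensive per-row work parallelises, and only the cheap prefix-sum stays serial.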
Most application on GNU/Linux by convention delegate to xdg-open when they need to open a file or a URL. This ensures consistent behavior between applications and desktop environments: URLs are always opened in our preferred browser, images are always opened in the same preferred viewer. However, there are situations when this consistent behavior is not desired: for example, if we need to override default browser just for one application and only temporarily. This is where xdg-override helps: it replaces xdg-open with itself to alter the behavior without changing system settings. ↫ xdg-override GitHub page I love this project ever since I came across it a few days ago. Not because I need it – I really don’t – but because of the story behind its creation. The author of the tool, Dmytro Kostiuchenko, wanted Slack, which he only uses for work, to only open his work browser – which is a different browser from his default browser. For example, imagine you normally use Firefox for everything, but for all your work-related things, you use Chrome. So, when you open a link sent to you in Slack by a colleague, you want that specific link to open in Chrome. Well, this is not easily achieved in Linux. Applications on Linux tend to use freedesktop.org’s xdg-open for this, which looks at the file mimeapps.list to learn which application opens which file type or URL. To solve Kostiuchenko’s issue, changing the variable $XDG_CONFIG_HOME just for Slack to point xdg-open to a different configuration file doesn’t work, because the setting will be inherited by everything else spawned from Slack itself. Changing mimeapps.list doesn’t work either, of course, since that would affect all other applications, too. So, what’s the actual solution? We’d like also not to change xdg-open implementation globally in our system: ideally, the change should only affect Slack, not all other apps. But foremost, diverging from upstream is very unpractical.
However, in the spirit of this solution, we can introduce a proxy implementation of xdg-open, which we’ll “inject” into Slack by adding it to PATH. ↫ Dmytro Kostiuchenko xdg-override takes this idea and runs with it: It is based on the idea described above, but the script won’t generate proxy implementation. Instead, xdg-override will copy itself to /tmp/xdg-override-$USER/xdg-open and will set a few $XDG_OVERRIDE_* variables and the $PATH. When xdg-override is invoked from this new location as xdg-open, it’ll operate in a different mode, parsing $XDG_OVERRIDE_MATCH and dispatching the call appropriately. I tested this script briefly, but automated tests are missing, so expect some rough edges and bugs. ↫ Dmytro Kostiuchenko I don’t fully understand how it works, but I get the overall gist of what it’s doing. I think it’s quite clever, and solves a very specific issue in a non-destructive way. While it’s not something most people will ever need, it feels like something that if you do need it, it will quickly become a default part of your toolbox or workflow.
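The real xdg-override is a shell script, but the dispatch step at its heart – match the URL being opened against per-application rules, fall back to the regular handler otherwise – is easy to illustrate. The rule format and names below are invented for the sake of the sketch, not xdg-override’s actual syntax:

```python
import fnmatch

def pick_handler(rules, url, default="xdg-open"):
    """Pick the command to open `url` with.

    `rules` is a hypothetical "pattern=command;pattern=command" string,
    loosely mirroring what a proxy xdg-open injected via $PATH might
    read from an environment variable. First matching pattern wins;
    no match falls through to the system default.
    """
    for rule in rules.split(";"):
        pattern, _, command = rule.partition("=")
        if command and fnmatch.fnmatch(url, pattern):
            return command
    return default
```

With a rule like `"https://*.slack.com/*=google-chrome"`, a Slack file link would dispatch to `google-chrome`, while any other URL would fall through to the real `xdg-open` – which is the whole trick: only the proxied application sees the altered behavior.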
Today, every Unix-like system can trace their ancestry back to the original Unix. That includes Linux, which uses the GNU tools – and the GNU tools are based on the Unix tools. Linux in 2024 is removed from the original Unix design, and for good reason – Linux supports architectures and tools not dreamt of during the original Unix era. But the core command line experience in Linux is still very similar to the Unix command line of the 1970s. The next time you use ls to list the files in a directory, remember that you’re using a command line that’s been with us for more than fifty years. ↫ Jim Hall An excellent overview of some of the more ancient UNIX commands that are still with us today. One thing I always appreciate when I dive into an operating system closer to “real” UNIX, like OpenBSD, or an actual UNIX, like HP-UX, is just how much more logical sense they make under the hood than a Linux system does. This is not a dunk on modern Linux – it has to cater to far more modern needs than something ancient and dead like HP-UX – but what I’ve learned while using these systems closer to real UNIX has made me appreciate proper UNIX more than I used to in the past. In what surely sounds like utter lunacy to system administrators who actually had to seriously administer HP-UX systems back in the day, I genuinely love using HP-UX, setting it up, configuring it, messing around with it, because it just makes so much more logical sense than the systems we use today. The knowledge gained from using BSD, HP-UX, and others, while not always directly applicable to Linux, does aid me in understanding certain Linux things better than I did before. What I’m trying to say is – go and load up an old UNIX, or at least a modern BSD. Aside from being great operating systems in their own right, they’re much easier to grasp than a modern Linux system, and you’ll learn a lot from the experience.
Android 14 introduced the ability for application stores to claim ownership over application updates, to ensure other installation sources won’t accidentally update applications they shouldn’t. What is still lacking, however, is a way for users to easily change the update ownership of applications. In other words, say you install an application by downloading an APK from GitHub, and the application later makes its way to F-Droid: you’ll get warning popups when F-Droid tries to update that application. That’s about to change, it seems, as Android Authority discovered that the Play Store application seems to be getting a new feature where it can take ownership of an application’s updates. A new flag spotted in the latest Google Play Store release suggests that users may see the option to install updates for apps downloaded from a different source. As you can see in the attached screenshots, the Play Store will show available updates for apps downloaded from different sources. On the app listing, you’ll also see a new “Update from Play” button that will switch the update ownership from the original source to the Play Store. ↫ Pranob Mehrotra at Android Authority Assuming this functionality is just an API other application stores can also tap into, this will be a great addition to Android for power users who use multiple application stores and want to properly manage which store updates what applications. It’s not something most people will ever really use or need, but if you’re the kind of person who does need it – it’ll become indispensable.
This is my second book written with Sphinx, after the new Learn TLA+. Sphinx uses a peculiar markup called reStructured Text (rST), which has a steeper learning curve than markdown. I only switched to it after writing a couple of books in markdown and deciding I needed something better. So I want to talk about why rst was that something. ↫ Hillel Wayne I’ve never liked Markdown – I find it quite arbitrary and unpleasant to look at, and the fact there’s countless variants that all differ a tiny bit doesn’t help – so even though I don’t actually use Markdown for anything, I always have a passing interest in possible alternatives, if only to see what other, different, and unique ideas are out there when it comes to relatively simple markup languages. Now, I’m quite sure reStructured Text isn’t for me either, since I feel like it’s far more powerful than Markdown, and serves a different, more complex purpose. That being said, I figured I’d highlight it here since it seems it may be interesting to some of you who work on documentation for your software projects or similar endeavours.
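To give a flavour of what sets rST apart from Markdown – most of its power comes from roles and directives rather than raw syntax – here’s a tiny sample (the contents are my own invented example, not from Wayne’s book):

```rst
A Section Title
===============

Inline markup uses *emphasis* and ``literals`` much like Markdown,
but the extensibility comes from directives:

.. note::

   Tools like Sphinx can define their own directives and roles,
   which is a big part of why rST suits larger documentation projects.
```

Unlike Markdown’s many dialects, there is a single reStructuredText specification maintained as part of Docutils, which goes some way toward addressing the fragmentation complaint.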
Serpent OS, a new Linux distribution with a completely custom package management system written in Rust, has released its very very rough pre-alpha release. They’ve been working on this for four years, and they’re making some interesting choices regarding packaging that I really like, at least on paper. This will of course appear to be a very rough (crap) prealpha ISO. Underneath the surface it is using the moss package manager, our very own package management solution written in Rust. Quite simply, every single transaction in moss generates a new filesystem tree (/usr) in a staging area as a full, stateless, OS transaction. When the package installs succeed, any transaction triggers are run in a private namespace (container) before finally activating the new /usr tree. Through our enforced stateless design, usr-merge, etc, we can atomically update the running OS with a single renameat2 call. As a neat aside, all OS content is deduplicated, meaning your last system transaction is still available on disk allowing offline rollbacks. ↫ Ikey Doherty Since this is only a very rough pre-alpha release, I don’t have much more to say at this point, but I do think it’s interesting enough to let y’all know about it. Even if you’re not the kind of person to dive into pre-alphas, I think you should keep an eye on Serpent OS, because I have a feeling they’re on to something valuable here.
Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to need to work harder and more, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN). One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components. Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed. ↫ Esther Shein at ACM I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of my time in high school, when I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our “work”, the steps taken to get from the question to the answer, instead of only writing down the answer itself.
Since I was quite good “at computers”, and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers – but since I knew, and know, absolutely nothing about math, I couldn’t for the life of me explain how I got to the answers. Using ChatGPT to fix your programming problem feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you aren’t aware of the steps between problem and solution, you aren’t actually learning anything. By using ChatGPT, you’re not actually learning how to program or how to improve your skills – you’re just hitting the right buttons on a graphing calculator and writing down what’s on the screen, without understanding why or how. I can totally see how using ChatGPT for boring boilerplate code you’ve written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I’m just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.
Is machine learning, also known as “artificial intelligence”, really aiding workers and increasing productivity? A study by Upwork – which, as Baldur Bjarnason so helpfully points out, sells AI solutions and hence did not promote this study on its blog as it does with its other studies – reveals that this might not actually be the case. Nearly half (47%) of workers using AI say they have no idea how to achieve the productivity gains their employers expect. Over three in four (77%) say AI tools have decreased their productivity and added to their workload in at least one way. For example, survey respondents reported that they’re spending more time reviewing or moderating AI-generated content (39%), invest more time learning to use these tools (23%), and are now being asked to do more work (21%). Forty percent of employees feel their company is asking too much of them when it comes to AI. ↫ Upwork research This shouldn’t come as a surprise. We’re in a massive hype cycle when it comes to machine learning, and we’re being told it’s going to revolutionise work and lead to massive productivity gains. In practice, however, it seems these tools just can’t measure up to the hyped promises, and are in fact making people do less and work more slowly. There’s countless stories of managers being told by upper management to shove machine learning into everything, from products to employee workflows, whether it makes any sense to do so or not. I know from experience as a translator that machine learning can greatly improve my productivity, but the fact that certain types of tasks benefit from ML doesn’t mean every job suddenly thrives with it. I’m definitely starting to see some cracks in the hype cycle, and this study highlights a major one.
I hope we can all come down to earth again, and really take a careful look at where ML makes sense and where it does not, instead of giving every worker a ChatGPT account and blanket demanding massive productivity gains that in no way match the reality on the office floor. And of course, despite demanding massive productivity increases, it’s not like workers are getting an equivalent increase in salary. We’ve seen massive productivity increases for decades now, while paychecks have not followed suit at all, and many people can actually buy less with their salary today than their parents could decades ago. Demands imposed by managers introducing AI are only going to make this discrepancy even worse.
Logitech CEO Hanneke Faber talked about something called the “forever mouse”, which would be, as the name implies, a mouse that customers could use for a very long time. While you may think this would mean an incredibly well-built mouse, or one that can be easily repaired, which Logitech already makes somewhat possible through a partnership with iFixIt, another option the company is thinking about is a subscription model. Yes. Faber said subscription software updates would mean that people wouldn’t need to worry about their mouse. The business model is similar to what Logitech already does with video conferencing services (Logitech’s B2B business includes Logitech Select, a subscription service offering things like apps, 24/7 support, and advanced RMA). Having to pay a regular fee for full use of a peripheral could deter customers, though. HP is trying a similar idea with rentable printers that require a monthly fee. The printers differ from the idea of the forever mouse in that the HP hardware belongs to HP, not the user. However, concerns around tracking and the addition of ongoing expenses are similar. ↫ Scharon Harding at Ars Technica Now, buying a mouse whose terrible software requires a subscription would still be a choice you can avoid, but my mind immediately conjured up a far darker scenario. PC makers have a long history of adding crapware to their machines in return for payments from the producers of said crapware. I can totally see what’s going to happen next. You buy a brand new laptop, unbox it at home, and turn it on. Before you know it, a dialog pops up right after the crappy Windows out-of-box experience asking you to subscribe to your laptop’s touchpad software in order to unlock its more advanced features like gestures. But why stop there? The keyboard of that new laptop has RGB backlighting, but if you want to change its settings, you’re going to have to pay for another subscription.
Your laptop’s display has additional features and modes for specific types of content and more settings sliders, but you’ll have to pay up to unlock them. And so on. I’m not saying this will happen, but I’m also not saying it won’t. I’m sorry for birthing this idea into the world.
Microsoft has published a post-mortem of the CrowdStrike incident, and goes into great depth to describe where, exactly, the error lies, and how it could lead to such massive problems. I can’t offer anything insightful on the technical details and code they show to illustrate all of this – I’ll leave that discussion up to you – but Microsoft also spends a considerable amount of time explaining why security vendors choose to use kernel-mode drivers. Microsoft lists three major reasons why security vendors opt for kernel modules, and none of them will come as a great surprise to OSNews readers: kernel drivers provide more visibility into the system than a userspace tool would, there are performance benefits, and they’re more resistant to tampering. The downsides are legion, too, of course, as any crash or similar issue in kernel mode has far-reaching consequences. The goal, then, according to Microsoft, is to balance the need for greater insight, performance, and tamper resistance with stability. And while the company doesn’t say it directly, this is clearly where CrowdStrike failed – and failed hard. While you would want a security tool like CrowdStrike to perform as little as possible in kernelspace, and conversely as much as possible in userspace, that’s not what CrowdStrike did. They are running a lot of stuff in kernelspace that really shouldn’t be there, such as the update mechanism and related tools. In total, CrowdStrike loads four kernel drivers, and much of their functionality could run in userspace instead. It is possible today for security tools to balance security and reliability. For example, security vendors can use minimal sensors that run in kernel mode for data collection and enforcement limiting exposure to availability issues. The remainder of the key product functionality includes managing updates, parsing content, and other operations can occur isolated within user mode where recoverability is possible.
This demonstrates the best practice of minimizing kernel usage while still maintaining a robust security posture and strong visibility. Windows provides several user mode protection approaches for anti-tampering, like Virtualization-based security (VBS) Enclaves and Protected Processes that vendors can use to protect their key security processes. Windows also provides ETW events and user-mode interfaces like Antimalware Scan Interface for event visibility. These robust mechanisms can be used to reduce the amount of kernel code needed to create a security solution, which balances security and robustness. ↫ David Weston, Vice President, Enterprise and OS Security at Microsoft In what is surely an unprecedented event, I agree with the CrowdStrike criticism bubbling under the surface of this post-mortem by Microsoft. Everything seems to point towards CrowdStrike stuffing way more things in kernelspace than is needed, and as such creating a far larger surface for things to go catastrophically wrong than necessary. While Microsoft obviously isn’t going to openly and publicly throw CrowdStrike under the bus, it’s very clear what they’re hinting at here, and this is about as close to a public flogging as we’re going to get. Microsoft’s post-mortem further details a ton of work Microsoft has recently done, is doing, and will soon be doing to further strengthen Windows’ security, to lessen the need for kernelspace security drivers even more, including adding support for Rust to the Windows kernel, which should also aid in mitigating some common problems present in other, older programming languages (while not being a silver bullet either, of course).
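To make the architectural point concrete, here’s a minimal, purely illustrative sketch of the split Microsoft describes. Every name and the file format in it are hypothetical – this is not CrowdStrike’s or Microsoft’s actual code or API – but it shows why parsing untrusted update content in user mode matters: a malformed content file becomes a rejected update instead of a kernel crash.

```python
# Hypothetical sketch: parse update ("channel") content in user mode, where a
# bad file is a recoverable error, and only hand validated data onward.
import struct

MAGIC = b"RULE"  # made-up 4-byte header for an imagined content file format

def parse_content_file(blob: bytes) -> list[bytes]:
    """User-mode parser: raises ValueError on malformed input instead of
    letting bad data reach kernel-mode code."""
    if len(blob) < 8 or blob[:4] != MAGIC:
        raise ValueError("bad header")
    (count,) = struct.unpack_from("<I", blob, 4)
    rules, offset = [], 8
    for _ in range(count):
        if offset + 2 > len(blob):
            raise ValueError("truncated rule table")
        (size,) = struct.unpack_from("<H", blob, offset)
        offset += 2
        if offset + size > len(blob):
            raise ValueError("rule overruns file")
        rules.append(blob[offset:offset + size])
        offset += size
    return rules

def apply_update(blob: bytes) -> str:
    """User-mode supervisor: a parse failure rejects the update and keeps the
    old rules running -- the machine keeps booting."""
    try:
        rules = parse_content_file(blob)
    except ValueError as err:
        return f"update rejected: {err}"
    # In a real design, validated rules would be handed to a minimal
    # kernel-mode sensor here via a driver interface.
    return f"loaded {len(rules)} rules"

good = (MAGIC + struct.pack("<I", 2)
        + struct.pack("<H", 3) + b"abc"
        + struct.pack("<H", 1) + b"x")
bad = MAGIC + struct.pack("<I", 5) + b"\x00"  # claims 5 rules, contains none

print(apply_update(good))  # loaded 2 rules
print(apply_update(bad))   # update rejected: truncated rule table
```

The contrast with doing this parsing inside a kernel driver is the whole story: in user mode the `except` branch is an ordinary code path, while in kernel mode the equivalent failure can be a system-wide crash on every boot.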