
Booting Sun SPARC servers

In early 2022 I got several Sun SPARC servers for free off of a FreeCycle ad: I was recently called out for not providing any sort of update on those devices… so here we go!
↫ Sidneys1.com
Some information on booting old-style SPARC machines, as well as pretty pictures. Nice palate-cleanser if you’ve had to deal with something unpleasant this weekend. This world would be a better place if we all had our own Sun machines to play with when we get sad.

Chromium’s influence on Chromium alternatives

I don’t think most people realize how Firefox and Safari depend on Google for more than “just” revenue from default search engine deals and prototyping new web platform features. Off the top of my head, Safari and Firefox use the following Chromium libraries: libwebrtc, libbrotli, libvpx, libwebp, some color management libraries, libjxl (Chromium may eventually contribute a Rust JPEG-XL implementation to Firefox; it’s a hard image format to implement!), much of Safari’s cryptography (from BoringSSL), Firefox’s 2D renderer (Skia)… the list goes on. Much of Firefox’s security overhaul in recent years (process isolation, site isolation, user namespace sandboxes, effort on building with Control Flow Integrity) is directly inspired by Chromium’s architecture.
↫ Rohan “Seirdy” Kumar
Definitely an interesting angle on the browser debate I hadn’t really stopped to think about before. The argument is that while Chromium’s dominance is not exactly great, the other side of the coin is that non-Chromium browsers also make use of a lot of Chromium code all of us benefit from, and without Google doing that work, Mozilla would have to do it by themselves, and let’s face it, it’s not like they’re in a great position to do so. I’m not saying I buy the argument, but it’s an argument nonetheless. I honestly wouldn’t mind a slower development pace for the web, since I feel a lot of energy and development goes into things making the web worse, not better. Redirecting some of that development into things users of the web would benefit from seems like a win to me, and with the dominant web engine, Chromium, being run by an advertising company, we all know where their focus lies, and it ain’t on us as users. I’m still firmly on the side of less Chromium, please.

Google’s ad-blocking crackdown underway

Google has gotten a bad reputation as of late for being a bit overzealous when it comes to fighting ad blockers. Most recently, it’s been spotted automatically turning off popular ad blocking extension uBlock Origin for some Google Chrome users. To a degree, that makes sense—Google makes its money off ads. But with malicious ads and data trackers all over the internet these days, users have legitimate reasons to want to block them. The uBlock Origin controversy is just one facet of a debate that goes back years, and it’s not isolated: your favorite ad blocker will likely be affected next. Here are the best ways to keep blocking ads now that Google is cracking down on ad blockers.
↫ Michelle Ehrhardt at Lifehacker
Here’s the cold and harsh reality: ad blocking will become ever more difficult as time goes on. Not only is Google obviously fighting it, other browser makers will most likely follow suit. Microsoft is an advertising company, so Edge will also drop Manifest v2 support. Apple is an advertising company, and will do whatever they can to make at least their own ads appear. Mozilla is an advertising company, too, now, and will continue to erode their users’ trust in favour of nebulous nonsense like privacy-respecting advertising in cooperation with Facebook. The best way to block ads is to move to blocking at the network level. Get a cheap computer or Raspberry Pi, set up Pi-Hole, and enjoy some of the best ad blocking you’re ever going to get. It’s definitely more involved than just installing a browser extension, but it also happens to be much harder for advertising companies to combat. If you’re feeling generous, set up Pi-Holes for your parents, friends, and relatives. It’s worth it to make their browsing experience faster, safer, and more pleasant. And once again I’d like to reiterate that I have zero issues with anyone blocking the ads on OSNews. Your computer, your rules. It’s not like display ads are particularly profitable anyway, so I’d much rather you support us through Patreon or a one-time donation through Ko-Fi, which is a more direct way of ensuring OSNews continues to exist. Also note that the OSNews Matrix room – think IRC, but more modern, and fully end-to-end encrypted – is now up and running and accessible to all OSNews Patreons as well.
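For the curious, here’s a minimal Go sketch of what network-level blocking looks like from a client’s point of view – not an official Pi-Hole tool, and the 192.168.1.2 address and ads.example.com domain are placeholders. Every DNS query is sent to the Pi-Hole, and names on its blocklists come back pointing at a sinkhole address instead of the real ad server:
```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder address of a Pi-Hole on the local network; adjust for your setup.
	const pihole = "192.168.1.2:53"

	// A resolver that sends every DNS query to the Pi-Hole instead of the system
	// default – the same effect a DHCP-advertised Pi-Hole has for the whole network.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", pihole)
		},
	}

	for _, host := range []string{"example.com", "ads.example.com"} {
		addrs, err := r.LookupHost(context.Background(), host)
		if err != nil {
			fmt.Println(host, "lookup failed:", err)
			continue
		}
		// A blocked name typically resolves to a sinkhole address such as 0.0.0.0,
		// so the browser never contacts the ad server at all.
		fmt.Println(host, "->", addrs)
	}
}
```
Because the blocking happens at the DNS level, it covers every device and application pointed at the Pi-Hole, which is why it’s so much harder to combat from inside the browser than an extension is.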

Qualcomm cancels its mini PC with the Snapdragon X Elite processor

Something odd happened to Qualcomm’s Snapdragon Dev Kit, an $899 mini PC powered by Windows 11 and the company’s latest Snapdragon X Elite processor. Qualcomm decided to abruptly discontinue the product, refund all orders (including for those with units on hand), and cease its support, claiming the device “has not met our usual standards of excellence.”
↫ Taras Buria at Neowin
The launch of the Snapdragon X Plus and Elite chips seems to have mostly progressed well, but there have been a few hiccups for those of us who want ARM but aren’t interested in Windows and/or laptops. There’s this story, which is just odd all around, with an announced, sold, and even shipped product suddenly taken off the market – and I think it was, at this point, the only non-laptop device with an X Elite or Plus chip. If you are interested in developing for Qualcomm’s new platform, but don’t want a laptop, you’re out of luck for now. Another note is that the SoC SKU in the Dev Kit was clocked a tiny bit higher than the laptop SKUs, which perhaps plays a role in its cancellation. The bigger hiccup is the problematic Linux bring-up, which is posing many more problems and taking a lot longer than Qualcomm very publicly promised it would. For now, if you want to run Linux on a Snapdragon X Elite or Plus device, you’re going to need a custom version of your distribution of choice, tailored to a specific laptop model, using a custom kernel. It’s an absolute mess, and it basically means that at this point in time, months and months after release, buying one of these to run Linux on is a bad idea. Quite a few important bits will arrive with Linux 6.12 to supposedly greatly improve the experience, but seeing is believing. Qualcomm made a lot of grandiose promises about Linux support, and they simply haven’t delivered.

Go Plan9 memo, speeding up calculations 450%

I want to take advantage of Go’s concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD: Single Instruction, Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However, (as far as I can tell) Go’s compiler does not utilize SIMD, and I could not find a general purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform’s instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format?
↫ Jacob Ray Pehringer
Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don’t fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan9 the operating system, which I find a mildly entertaining ruse to subject you to.
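To make that a little more concrete, here’s a rough sketch of the approach the article describes – my own illustrative example, not code from the linked package: a Go function with no Go body, whose implementation lives in a Plan9-style assembly file for amd64 and adds four float32 values with a single SSE instruction.
```go
// simd.go (amd64-only sketch)
package simd

// AddFloat32x4 adds the four float32 lanes of a and b and stores the result in dst.
// There is no Go body: the implementation lives in simd_amd64.s, written in Go's
// Plan9-flavoured assembly.
//
//go:noescape
func AddFloat32x4(a, b, dst *[4]float32)
```
```
// simd_amd64.s
#include "textflag.h"

// func AddFloat32x4(a, b, dst *[4]float32)
TEXT ·AddFloat32x4(SB), NOSPLIT, $0-24
	MOVQ   a+0(FP), AX     // load the three pointer arguments from the frame
	MOVQ   b+8(FP), BX
	MOVQ   dst+16(FP), CX
	MOVUPS (AX), X0        // four float32 values from a into an SSE register
	MOVUPS (BX), X1        // four float32 values from b
	ADDPS  X1, X0          // one instruction, four additions
	MOVUPS X0, (CX)        // write the result to dst
	RET
```
The Go toolchain links the bodiless declaration to the TEXT symbol in the _amd64.s file – and a thin wrapper like this, repeated across many operations and element types (plus fallbacks for other architectures), is essentially what a package like the author’s has to provide.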

How to install Windows 11 on supported and unsupported PCs, 24H2 edition

We’ve pulled together all kinds of resources to create a comprehensive guide to installing and upgrading to Windows 11. This includes advice and some step-by-step instructions for turning on officially required features like your TPM and Secure Boot, as well as official and unofficial ways to skirt the system-requirement checks on “unsupported” PCs, because Microsoft is not your parent and therefore cannot tell you what to do. There are some changes in the 24H2 update that will keep you from running it on every ancient system that could run Windows 10, and there are new hardware requirements for some of the operating system’s new generative AI features. We’ve updated our guide with everything you need to know.
↫ Andrew Cunningham at Ars Technica
In the before time, the things you needed to do to make Windows somewhat usable mostly came down to installing applications replicating features other operating systems had been enjoying for decades, but as time went on and Windows 10 came out, users now also had to deal with disabling a ton of telemetry, deleting preinstalled adware, dodging the various dark patterns around Edge, and more. You have to wonder if it was all worth it, but alas, Windows 10 at least looked like Windows, if you squinted. With Windows 11, Microsoft really ramped up the steps users have to take to make it usable. There’s all of the above, but now you also have to deal with an ever-increasing number of ads, even more upsells and Edge dark patterns, even more data gathering, and the various hacks you have to employ to install it on perfectly fine and capable hardware. With Windows 10’s support ending next year, a lot of users are in a rough spot, since they can’t install Windows 11 without resorting to hacks, and they can’t keep using Windows 10 if they want to keep getting updates. And here comes 24H2, which makes it all even worse. Not only have various avenues to make Windows 11 installable on capable hardware been closed, it also piles on a whole bunch of “AI” garbage, with accompanying upsells and dark patterns Windows users are going to have to deal with. Who doesn’t want Copilot regurgitating nonsense in their operating system’s search tool, or Paint strongly suggesting it will “improve” your quick doodle to illustrate something to a friend with that unique AI Style™ we all love and enjoy so much? Stay strong out there, Windows folks. Maybe it’ll get better. We’re rooting for you.

The costs of the i386 to x86-64 upgrade

If you read my previous article on DOS memory models, you may have dismissed everything I wrote as “legacy cruft from the 1990s that nobody cares about any longer”. After all, computers have evolved from sporting 8-bit processors to 64-bit processors and, on the way, the amount of memory that these computers can leverage has grown orders of magnitude: the 8086, a 16-bit machine with a 20-bit address space, could only use 1MB of memory, while today’s 64-bit machines can theoretically access 16EB. All of this growth has been in service of ever-growing programs. But… even if programs are now more sophisticated than they were before, do they all really require access to a 64-bit address space? Has the growth from 8 to 64 bits been a net positive in performance terms? Let’s try to answer those questions to find some very surprising answers. But first, some theory.
↫ Julio Merino
It’s not quite the weekend yet, but I’m still calling this some light reading for the weekend.
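The heart of the question is easy to see for yourself: pointers double in size on a 64-bit build, so pointer-heavy data structures grow – and fewer of them fit in a cache line – even if the program never needs more than 4GB of memory. A quick illustration in Go (exact sizes depend on architecture and alignment):
```go
package main

import (
	"fmt"
	"unsafe"
)

// A typical pointer-chasing structure: a linked list node with a small payload.
type node struct {
	next    *node
	payload int32
}

func main() {
	// On a 64-bit build (GOARCH=amd64) a pointer is 8 bytes, so node is
	// 8 + 4 + padding = 16 bytes. On a 32-bit build (GOARCH=386) a pointer
	// is 4 bytes and the same node fits in 8 bytes – half the memory, and
	// twice as many nodes per cache line, for identical source code.
	fmt.Println("pointer size:", unsafe.Sizeof(uintptr(0)))
	fmt.Println("node size:   ", unsafe.Sizeof(node{}))
}
```
Build it with GOARCH=386 and again with GOARCH=amd64 and compare the output; that size difference is one of the costs the article weighs against the benefits of the larger address space.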

Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There is always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts. In Android 15, some of the most notable involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.
↫ Kevin Purdy at Ars Technica
It’s a welcome collection of changes and features to better align Android’s theft and personal privacy protection with how thieves steal phones in this day and age. I’m not sure I understand all of them, though – the Private Space, where you can drop applications to lock them behind an additional PIN code, confuses me, since everyone can see it’s there. I assumed Private Space would also give people in vulnerable positions – victims of abuse, journalists, dissidents, etc. – the option to truly hide parts of their life to protect their safety, but it doesn’t seem to work that way. Android 15 will also use “AI” to recognise when a device is yanked out of your hands and lock it instantly, which is a great use case for “AI” that actually benefits people. Of course, it will be even more useful once thieves are aware this feature exists, so that they won’t even try to steal your phone in the first place, but since this is Android, it’ll be a while before Android 15 makes its way to enough users for it to matter.

Huawei’s Android-free ‘HarmonyOS NEXT’ will go live next week

Earlier this year we talked about Huawei’s HarmonyOS NEXT, which is most likely the only serious competitor to Android and iOS in the world. HarmonyOS started out as a mere Android skin, but over time Huawei invested heavily into the platform to expand it into a full-blown, custom operating system with a custom programming language, and it seems the company is finally ready to take the plunge and release HarmonyOS NEXT into the wild.
It’s indicated that HarmonyOS made up 17% of China’s smartphone market in Q1 of 2024. That’s a significant amount of potential devices breaking off from Android in a market dominated by either it or iOS. HarmonyOS NEXT is set to begin rolling out to Huawei devices next week. The OS will first come to the Mate 60, Mate X5, and MatePad Pro on October 15.
↫ Andrew Romero at 9To5Google
Huawei has been hard at work making sure there’s no ‘application gap’ for people using HarmonyOS NEXT, claiming it has 10,000 applications ready to go that cover “99.9%” of their users’ use cases. That’s quite impressive, but of course, we’ll have to wait and see if the numbers line up with the reality on the ground for Chinese consumers. Here in the west HarmonyOS NEXT is unlikely to gain any serious traction, but that doesn’t mean I would mind taking a look at the platform if at all possible. It’s honestly not surprising the most serious attempt at creating a third mobile ecosystem is coming from China, because here in the west the market is so grossly rusted shut we’re going to be stuck with Android and iOS until the day I die.

Google is preparing to let you run Linux apps on Android, just like Chrome OS

Engineers at Google started work on a new Terminal app for Android a couple of weeks ago. This Terminal app is part of the Android Virtualization Framework (AVF) and contains a WebView that connects to a Linux virtual machine via a local IP address, allowing you to run Linux commands from the Android host. Initially, you had to manually enable this Terminal app using a shell command and then configure the Linux VM yourself. However, in recent days, Google began work on integrating the Terminal app into Android as well as turning it into an all-in-one app for running a Linux distro in a VM.
↫ Mishaal Rahman at Android Authority
There already are a variety of ways to do this today, but having it as a supported feature implemented by Google is very welcome. This is also going to greatly increase the number of spammy articles and lazy YouTube videos telling you how to “run Ubuntu on your phone”, which I’m not particularly looking forward to.

A Google breakup is on the table, say DOJ lawyers

Next up in my backlog of news to cover: the US Department of Justice’s proposed remedies for Google’s monopolistic abuse.
Now that Judge Amit Mehta has found Google is a monopolist, lawyers for the Department of Justice have begun proposing solutions to correct the company’s illegal behavior and restore competition to the market for search engines. In a new 32-page filing (included below), they said they are considering both “behavioral and structural remedies.” That covers everything from applying a consent decree to keep an eye on the company’s behavior to forcing it to sell off parts of its business, such as Chrome, Android, or Google Play.
↫ Richard Lawler at The Verge
While I think it would be a great idea to break Google up, such an action taken in a vacuum seems rather pointless. Say Google is forced to spin off Android into a separate company – how is that relatively small Android, Inc. going to compete with the behemoth that is Apple and its iOS, to which such restrictions do not apply? How is Chrome Ltd. going to survive Microsoft’s continued attempts at forcing Edge down our collective throats? Being a dedicated browser maker is working out great for Firefox, right? This is the problem with piecemeal, retroactive measures to try and “correct” a market position that you have known for years is being abused – sure, this would knock Google down a peg, but other, even larger megacorporations like Apple or Microsoft will be the ones to benefit most, not any possible new companies or startups. This is exactly why a market-wide, equally-applied set of rules and regulations, like the European Union’s Digital Markets Act, is a far better and more sustainable approach. Unless similar remedies are applied to Google’s massive competitors, these Google-specific remedies will most likely only make things worse, not better, for the American consumer.

Internet Archive hacked and victim of DDoS attacks

Internet Archive’s “The Wayback Machine” has suffered a data breach after a threat actor compromised the website and stole a user authentication database containing 31 million unique records. News of the breach began circulating Wednesday afternoon after visitors to archive.org began seeing a JavaScript alert created by the hacker, stating that the Internet Archive was breached. “Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!,” reads a JavaScript alert shown on the compromised archive.org site.
↫ Lawrence Abrams at Bleeping Computer
To make matters worse, the Internet Archive was also suffering from waves of distributed denial-of-service attacks, forcing the IA to take down the site while strengthening everything up. It seems the attackers have no real motivation, other than the fact that they can, but it’s interesting, shall we say, that the Internet Archive has been under legal assault by big publishers for years now, too. I highly doubt the two are related in any way, but it’s an interesting note nonetheless. I’m still catching up on all the various tech news stories, but this one was hard to miss. A lot of people are rightfully angry and dismayed about this, since attacking the Internet Archive like this kind of feels like throwing Molotov cocktails at a local library – there’s literally not a single reason to do so, and the only people you’re going to hurt are underpaid librarians and chill people who just want to read some books. Whoever is behind this, they’re just assholes, no ifs and buts about it.

Goodbye Windows 7

I finally seem to be recovering from a nasty flu that is now wreaking havoc all across my tiny Arctic town – better now than when we hit -40, I guess – so let’s talk about something that’s not going to recover because it actually just fucking died: Windows 7.
For nearly everyone, support for Windows 7 ended on January 14th, 2020. However, if you were a business who needed more time to migrate off of it because your CEO didn’t listen to the begging and pleading IT department until a week before the deadline, Microsoft did have an option for you. Businesses could pay to get up to 3 years of extra security updates. This pushes the EOL date for Windows 7 to January 10th, 2023. Okay but that’s still nearly 2 years earlier than October 8th, 2024?
↫ The Cool Blog
I’d like to solve the puzzle! It’s POSReady, isn’t it? Of course it is! Windows Embedded POSReady’s support finally ended a few days ago, and this means that for all intents and purposes, Windows 7 is well and truly dead. In case you happen to be a paleontologist, think of Windows Embedded POSReady adding an extra two years of support to Windows 7 as the mammoths that managed to survive on Wrangel Island until as recently as 4,000 years ago. Windows 7 was one of the good ones, for sure, and all else being equal, I’d choose it over any of the releases that came after. It feels like Windows 7 was the last release designed primarily for users of the Windows platform, whereas later releases were designed more to nickel-and-dime people with services, ads, and upsells that greatly cheapened the operating system. I doubt we’ll ever see such a return to form again, so Windows 7 might as well be the last truly beloved Windows release. If you’re still using Windows 7 – please don’t, unless you’re doing it for the retrocomputing thrill. I know Windows 8, 10, and 11 are scary, and as much as it pains me to say this, you’re better off with 10 or 11 at this point, if only for security concerns.

OS/2 TCPBEUI name resolution

Sometimes I have the following problem to deal with: An OS/2 system uses NetBIOS over TCP/IP (aka TCPBEUI) and should communicate with an SMB server (likewise using TCPBEUI) on a different subnet. This does not work on OS/2 out of the box without a little bit of help.
↫ Michal Necasek
My 40° fever certainly isn’t helping, but this goes way over my head. Still, it seems like an invaluable article for a small group of people, and anyone playing with OS/2 and networking from here on out can refer back to this excellent and detailed explanation.

Unlocking Potential Through the Impact of Dedicated Development Teams on Material Digitization

In today’s world, everything is turning digital: manufacturing, retail, and agriculture. The global digital transformation market is set to reach a worth of $1,009.8 billion by 2025, according to a report from Grand View Research, and this is one of the many reasons why technology has become the go-to method for streamlining operations, creating efficiency, and unlocking new possibilities. Development teams – specialized groups of tech talent – are at the heart of this transformation, moving material digitization forward. Their influence is felt across many industries, redefining how firms approach innovation, sustainability, and customer interaction.
The Role of Dedicated Development Teams in Material Digitization
The consistency, expertise, and focus that dedicated development teams bring often provide the necessary impetus for tackling the complexities of material digitization in depth. It is not all about coding; it is about teams made up of project managers, analysts, engineers, and designers who integrate digital technologies into material handling and processing.
Why a Dedicated Team?
Choosing a dedicated team model for digitization projects offers several advantages.
Driving Innovation and Efficiency
Dedicated development teams have been making revolutionary contributions to material digitization. They digitize conventional materials and, in the process, create completely new avenues for innovation and efficiency in handling them.
Case Studies of Success
Navigating Challenges Together
Of course, material digitization comes with its problems. Data security, integration with existing systems, and guaranteeing true-to-life digital representations of materials are specific difficulties facing most dedicated development teams. Partnering with an IT outstaffing company can add skill and teamwork, helping to overcome these setbacks.
Overcoming Data Security Concerns
Among the most critical issues in any digitization project is data security. Dedicated teams address this with solid protection measures, including encryption and secure access controls for digital materials. Additionally, regular security audits and updates are needed to locate weaknesses that emerging threats could exploit. By prioritizing data security, organizations earn user trust and ensure their services comply with regulatory standards.
Seamless Integration With Existing Systems
Similarly, dedicated teams work to integrate digital materials seamlessly into existing systems so that they can be put to practical use. In most cases, this demands bespoke API development or middleware solutions that keep data flowing smoothly across platforms. Rigorous testing and validation are required to establish that all systems communicate effectively and that data integrity is not compromised. Done well, integration means increased productivity and a greater ability for users to apply digital resources usefully.
The Multifaceted Benefits of Material Digitization
The impact of dedicated development teams on material digitization reaches well beyond operational efficiencies, driving it toward sustainability and personalization.
Sustainability Through Digitization
By digitizing materials, companies can reduce waste and optimize resources. For example, digital inventory systems prevent overproduction and excess inventory through efficient demand forecasting. This helps not only the environment but also the company’s bottom line. Besides, real-time data analytics enable organizations to make more informed decisions and respond promptly to changes in markets and industries. Sustainable practices also help companies remain competitive in their respective industries.
Enhancing Customer Engagement
Material digitization also opens up several new opportunities for customer experiences. Immersive experiences offered by VR and AR enable customers to try out a product virtually before buying it. Not only does this improve the buying experience, it also helps develop a stronger brand relationship. Moreover, personalized experiences can be built around user preferences, making customers feel unique and understood. By offering memorable and unique interactions, businesses can build customer loyalty and encourage repeat purchases.
The Road Ahead: Collaborating for a Digitized Future
Material digitization is an ongoing journey full of potential and challenges. As companies continue to explore it, the role of dedicated development teams will only become more important. Specialized teams are not simple service providers but strategic partners in innovation that help businesses navigate the complexities of the digital landscape.
A Collaborative Ecosystem
The digitization of materials needs an ecosystem approach in which businesses, developers, and even end users work together. Encouraging open communication, feedback, and co-innovation leads to more practical digitization solutions. Partnerships across different sectors let stakeholders draw on diverse experience and insight for continuous improvement. This collaborative approach accelerates the development of new technologies and ensures solutions that fit real user needs.
Staying Ahead of the Curve
Keeping up is only possible with continuous learning and adaptation in a constantly changing digital world. Development teams should continually explore new technologies, methodologies, and practices to ensure that the digitization of materials not only meets current needs but also addresses future trends and opportunities. This allows teams to be more proactive in introducing innovative solutions that maximize efficiency and improve the user experience. With a culture of continuous improvement, organizations can lead their industries and be prepared for whatever complications arise from the ever-changing digital landscape.
Conclusion
The influence of dedicated development teams on material digitization runs deep and wide. With their expertise, innovation, and eye on the future, they help industries across the value chain unlock new potential, efficiency, and sustainability while making the customer experience more engaging. This collaboration between teams and businesses will form a cornerstone of the digital transformation journey and of the way we interact with materials in our everyday lives.

KDE Plasma 6.2 released

Entirely coincidentally, the KDE team released Plasma 6.2 yesterday, the latest release in the well-received 6.x series. As the version number implies, it’s not a groundbreaking release, but it does contain a number of improvements that are very welcome to a few specific, often underserved groups. For instance, 6.2 overhauls the Accessibility settings panel, and adds, among other things, colourblindness filters for a variety of types of colourblindness. This condition affects roughly 8-9% of the population, so it’s an important new feature. Another group of people served by Plasma 6.2 are artists. Plasma 6.2 includes a smorgasbord of new features for users of drawing tablets.
Open System Settings and look for Drawing Tablet to see various tools for configuring drawing tablets. New in Plasma 6.2: a tablet calibration wizard and test mode; a feature to define the area of the screen that your tablet covers (the whole screen or a section); and the option to re-bind pen buttons to different kinds of mouse clicks.
↫ KDE Plasma 6.2 release announcement
Artists and regular users alike can now also enjoy better colour management, more complete HDR support, a tone-mapping feature in KWin, and much more. Power management has been improved as well, so you can now manage brightness per individual monitor, control which applications block the system from going to sleep, and so on. There’s also the usual array of bug fixes, UI tweaks, and so on. Plasma 6.2 is already available in at least Fedora and openSUSE, and it will find its way to your distribution soon enough, too.

Why I use KDE

Over the decades, my primary operating system of choice has changed a few times. As a wee child of six years old, we got our first PC through one of those employer buy-a-PC programs, where an employer would subsidize its employees buying PCs for use in the home. The goal here was simple: if people get comfortable with a computer in their private life, they’ll also get comfortable with it in their professional life. And so, through my mother’s employer, we got a brand new 286 desktop running MS-DOS and Windows 3.0. I still have the massive and detailed manuals and original installation floppies it came with.
So, my first operating system of ‘choice’ was MS-DOS, and to a far lesser extent Windows 3.0. As my childhood progressed, we got progressively better computers, and the new Windows versions that came with them – Windows 95, 98, and yes, even ME, which I remarkably liked just fine. Starting with Windows 95, DOS became an afterthought, and with my schools, too, being entirely Windows-only, my teenage years were all Windows, all the time.
So, when I bought my first very own, brand new computer – instead of the old 386 machines my parents took home from work – right around when Windows XP came out, I bought a totally legal copy of Windows XP from some dude at school that somehow came on a CD-R with a handwritten label but was really totally legit you guys. I didn’t like Windows XP at all, and immediately started looking for alternatives, trying out Mandrake Linux before discovering something called BeOS – and despite BeOS already being over by that point, I had found my operating system of choice. I tried to make it last as long as the BeOS community would let me, but that wasn’t very long.
The next step was a move to the Mac, something that was quite rare in The Netherlands at that time. During that same time, Microsoft released Windows Server 2003, the actually good version of Windows XP, and a vibrant community of people, including myself, started using it as a desktop operating system instead. I continued using this mix of Mac OS X and Windows – even Vista – for a long time, while having various iterations of Linux installed on the side. I eventually lost interest in Mac OS X because Apple lost interest in it (I think around the Snow Leopard era?), and years later, six or seven years ago or so, I moved to Linux exclusively, fully ditching Windows even for gaming like four or so years ago when Valve’s Proton started picking up steam. Nowadays all my machines run Fedora KDE, which I consider to be by far the best desktop operating system experience you can get today.
Over the last few years or so, I’ve noticed something fun and interesting in how I set up my machines: you can find hints of my operating system history all over my preferred setup and settings. I picked up all kinds of usage patterns and expectations from all those different operating systems, and I’d like to enable as many of those as possible in my computing environment. In a way, my setup is a reflection of the operating systems I used in the past, an archaeological record of my computing history, an evolutionary tree of good traits that survived, and bad traits bred out.
Taking a look at my bare desktop, you’ll instantly pick up on the fact I used to use Mac OS X for a long time. The Mac OS X-like dock at the bottom of the screen has been my preferred way of opening and managing running applications since I first got an iBook G4 more than 20 years ago, and to this day I find it far superior to any alternatives.
KDE lets me easily recreate a proper dock, without having to resort to any third-party dock applications. I never liked the magnification trick Mac OS X wowed audiences with when it was new, so I don’t use it.
The next dead giveaway I used to be a Mac OS X user a long time ago is the top bar, which shares quite a few elements with the Mac OS X menubar, while also containing elements not found in Mac OS X. I keep the KDE equivalent of a start menu there, a button that brings up my home folder in a KDE folder view, a show desktop button that’s mostly there for aesthetic reasons, KDE’s global menubar widget for that Mac OS X feel, a system tray, the clock, and then a close button that opens up a custom system menu with shutdown/reboot/etc. commands and some shortcuts to system tools.
Another feature coming straight from my days using Mac OS X is KDE’s equivalent of Exposé, called Overview, without which I wouldn’t know how to find a window if my life depended on it. I bind it to the top-left hot corner for easy access with my mouse, while the bottom-right hot corner is set to show my desktop (and is the reason why I technically don’t really need that show desktop button I mentioned earlier). I fiddled with the hot corner trigger timings so that they fire virtually instantly. Waiting on my computer is so ’90s.
It’s not really possible to see in screenshots, but my stint using BeOS as my main operating system back when that was a thing you could do also shines through, specifically in the way I manage windows. In BeOS, double-clicking a titlebar tab would minimise a window, and right-clicking the tab would send the window to the bottom of the Z-stack. I haven’t maximised a non-video window in several decades, so I find double-clicking a titlebar to maximise a window utterly baffling, and a ridiculous Windows-ism I want nothing to do with. Once again, KDE lets me set this up exactly the way I want, and I genuinely feel lost when I can’t manipulate my windows in this way.

OpenBSD 7.6 released

OpenBSD 7.6, the release in which every single line of the original code from the first release has been edited or removed, has been released. There’s a lot of changes, new features, bug fixes, and more in 7.6, but for desktop users, the biggest new feature is undoubtedly hardware-accelerated video decoding through VA-API. Or, as the changelog puts it:
Imported libva 2.22.0, an implementation for VA-API (video acceleration API). VA-API provides access to graphics hardware acceleration capabilities for video processing.
↫ OpenBSD 7.6 release announcement
This is a massive improvement for anyone using OpenBSD for desktop use, especially on power-constrained devices like laptops. Problematic video playback was one of the reasons I went back to Fedora KDE after running OpenBSD on my workstation, and it seems this would greatly improve that situation. I can’t wait until I find some time to reinstall OpenBSD and see how much difference this will make for me personally. There’s more, of course. OpenBSD 7.6 starts the bring-up for Snapdragon X Elite devices, and in general comes with a whole slew of low-level improvements for the ARM64 architecture. AMD64 systems don’t have to feel left out, thanks to AVX-512 support, several power management improvements to make sleep function more optimally, and several other low-level improvements I don’t fully understand. RISC-V, PowerPC, MIPS, and other architectures also saw small numbers of improvements. The changelog is vast, so be sure to dig through it to see if your pet bug has been addressed, or support for your hardware has been improved. OpenBSD users will know how to upgrade, and for new installations, head on over to the download page.

Google must crack open Android for third-party stores, rules Epic judge

Late last year, Google’s Play Store was ruled to be a monopoly in the US, and today the judge in that case has set out what Google must do to address this situation.
Today, Judge James Donato issued his final ruling in Epic v. Google, ordering Google to effectively open up the Google Play app store to competition for three whole years. Google will have to distribute rival third-party app stores within Google Play, and it must give rival third-party app stores access to the full catalog of Google Play apps, unless developers opt out individually.
↫ Sean Hollister at The Verge
On top of these rather big changes, Google also cannot mandate the use of Google’s own billing solution, nor can it prohibit developers from informing users of other ways to download and/or pay for an application. Furthermore, Google can’t make sweetheart deals with device makers to entice them to install the Play Store or to block them from installing other stores, and Google can’t pay developers to only use the Play Store or not use other stores. It’s a rather comprehensive set of remedies that will remain in force for three years. Many of these remedies are taken straight from the European Union’s Digital Markets Act, but they will be far less effective since they’re only applied to one company, and only for three years. On top of that, Google can appeal, and the company has already stated that it’s going to ask for an immediate stay on these remedies, and if they get that stay, the remedies won’t have to be implemented any time soon. This legal tussling is far from over, and does very little to protect consumer choice. A clear law that simply prohibits this kind of market abuse, like the DMA, is much fairer to everyone involved, and creates a consistent level playing field for everyone, instead of only affecting random companies based on the whims of something as unpredictable as juries. In other words, I don’t think much is going to change in the United States after this ruling, and we’ll likely be hearing more back and forths in the court room for years to come, all while US consumers are being harmed. It’s better than nothing in lieu of a working Congress actually doing, well, anything, but that’s not saying much.

macOS 15.0 now UNIX 03-certified

You have to wonder how meaningful this news is in 2024, but macOS 15.0 Sequoia running on either Apple Silicon or Intel processors is now UNIX 03-certified.
The UNIX 03 Product Standard is the mark for systems conforming to Version 3 of the Single UNIX Specification. It is a significantly enhanced version of the UNIX 98 Product Standard. The mandatory enhancements include alignment with ISO/IEC 9899:1999 C Programming Language, IEEE Std 1003.1-2001 and ISO/IEC 9945:2002. This Product Standard includes the following mandatory Product Standards: Internationalized System Calls and Libraries Extended V3, Commands and Utilities V4, C Language V2, and Internationalized Terminal Interfaces.
↫ UNIX 03 page
The questionable usefulness of this news stems from a variety of factors. The UNIX 03 specification hails from the before time of 2002, when UNIX-proper still had some footholds in the market and being a UNIX meant something to the industry. These days, Linux has pretty much taken over the traditional UNIX market, and UNIX certification seems to have all but lost its value. Only one operating system can boast conformance to the latest UNIX specification – AIX is UNIX V7- and 03-certified – while macOS and HP-UX are only UNIX 03-certified. OpenServer, UnixWare, and z/OS only conform to even older standards. On top of all this, it seems being UNIX-certified by The Open Group feels a lot like a pay-to-play scheme, making it unlikely that community efforts like, say, FreeBSD, Debian, or similarly popular server operating systems could ever achieve UNIX certification even if they wanted to. This makes the whole UNIX certification world feel more like the dying vestiges of a job security program than something meaningful for an operating system to aspire to. In any event, you can now write a program that compiles and runs on all two UNIX 03-certified operating systems, as long as it only uses POSIX APIs.