New Raspberry Pi OS switches everyone over to Wayland

The slow rise of Wayland hasn’t really been slow anymore for years now, and today another major part of the Linux ecosystem is making the jump from X to Wayland. So we made the decision to switch. For most of this year, we have been working on porting labwc to the Raspberry Pi Desktop. This has very much been a collaborative process with the developers of both labwc and wlroots: both have helped us immensely with their support as we contribute features and optimisations needed for our desktop. After much optimisation for our hardware, we have reached the point where labwc desktops run just as fast as X on older Raspberry Pi models. Today, we make the switch with our latest desktop image: Raspberry Pi Desktop now runs Wayland by default across all models. ↫ Simon Long Raspberry Pi Desktop already used Wayland on some of the newer models, through the use of Wayfire. However, it turned out Wayfire wasn’t a good fit for the older Pi models, and Wayfire’s development direction would move it even further away from that goal, which is obviously important to the Raspberry Pi Foundation. They eventually settled on using labwc instead, which can also be used on older Pi models. As such, all Pi models will now switch to using Wayland with the latest update to the operating system. This new update also brings vastly improved touchscreen support, a rewritten panel application that won’t keep removed plugins in memory, a new display configuration utility, and more.

Best Windows Settings for Gaming

Is your Windows PC not performing well when gaming? Learn how to optimize, step by step, the key Windows settings you will only need to touch once – Game Mode, power options, and driver updates – for a smoother gaming experience.

Maximize Your PC Performance

Gaming on computers is a popular choice for players across genres, whether playing video poker, exploring large RPG worlds, or engaging in fierce MOBA matches. How well these games perform is often determined by how well your system is optimized. Minor changes to Windows settings can significantly improve gameplay by ensuring that your computer allocates resources efficiently. Game Mode, featured in Windows 10 and later, prioritizes gaming workloads for improved performance and frame rates. These adjustments, combined with regular updates, revised power settings, and the removal of unneeded background processes, ensure that your PC is ready for uninterrupted, high-performance gaming sessions in any genre.

Enable Game Mode

Game Mode is a feature built into Windows 10 and 11 that optimizes your computer’s resources for gaming by reducing the impact of other apps. To enable it, open Settings, select Gaming from the menu, and turn Game Mode on. The feature works by reducing background tasks and pausing updates while you play. This is especially useful with resource-intensive current games, since it helps maintain the optimal performance level needed to reach at least 60 frames per second (FPS), which keeps motion smooth. By managing system resources for you, Game Mode leads to higher frame rates and reduced latency – important perks when playing online. Be it speedy engagements with opposing shooters or long trips through large open-world settings, Game Mode gives gamers a competitive edge and a deeper dive into their virtual worlds.

Adjust Power Settings for Maximum Performance

Optimizing power configuration is key to increasing PC gaming performance, especially on gaming laptops. A high-performance power plan prioritizes system performance, allocating power to the system rather than to longer battery life. It’s easy to enable: go to Settings > System > Power & Sleep, open Additional Power Settings, and choose High Performance. If the option isn’t there, check under Show Additional Plans or create your own plan. Running from an external power supply rather than the battery also avoids power throttling. With these settings changed, your setup can execute resource-demanding programs, resulting in a smoother, more engaging experience.

Optimize Graphics Settings in Nvidia Control Panel

Adjusting a few settings in the Nvidia Control Panel can also maximize gaming performance. Select ‘Prefer maximum performance’ in the power management settings to ensure that your graphics card always runs at full capacity, even when demand spikes unexpectedly. Getting the other settings right, such as texture filtering and monitor refresh rate, improves visual quality and smooths gameplay – the highest available refresh rate is preferred, since reducing stutter is what delivers the fluid experience we’re after. Nvidia’s G-SYNC likewise reduces stuttering and screen tearing, improving the overall gaming experience, while turning off VSync may yield more frames per second at the cost of possible tearing. If you typically game on one display, selecting Single Display Performance Mode is another adjustment that can improve performance.

Keeping these settings tuned and updating GPU drivers regularly is critical to ensuring that your graphics card performs at its best across all games.

Disable Unnecessary Background Processes

Extraneous background tasks can degrade gaming performance, as they draw on system resources the game needs. To improve your gaming experience, close non-essential software and free up resources for smoother gameplay. To see what is running, right-click the taskbar, select Task Manager, and go to the Processes tab. Right-click a redundant process in the list and choose ‘End Task’ to reclaim its resources. The Startup tab in Task Manager lets you disable unnecessary programs from launching at boot, which shortens startup times and spares resources for the whole system.

Update Graphics Card Drivers

Keep your video drivers updated to get the highest performance out of your graphics card. Ignore this, and you may encounter glitches, drops in frame rate, or even system crashes during play. Driver updates that improve responsiveness, speed, and general stability provide a solid foundation for gaming. Use utilities such as GeForce Experience for Nvidia cards or Radeon Software for AMD cards to handle driver upgrades conveniently; these programs detect your GPU model and download recent driver releases. Consistent updates can reportedly increase frame rates by up to 23%, significantly improving gameplay quality. Make a habit of checking for the latest driver updates regularly to avoid common gaming problems and keep the system running smoothly. This proactive move not only leads to a high-quality gaming experience but also helps keep games running without disruptions.

Configure Display Settings

Optimizing display settings is vital for achieving the best gaming performance. Make sure your monitor runs at its highest refresh rate to enhance visual smoothness while playing. Enabling hardware-accelerated GPU scheduling can lower latency and improve graphics output; to activate it, navigate to Settings > System > Display > Graphics settings and switch on the hardware-accelerated GPU scheduling option. You can lower the resolution to relieve the GPU and make games run more smoothly, though sticking to the monitor’s recommended resolution keeps the display sharp, which benefits both image quality and overall efficiency.

Disable Windows Notifications and Game Bar

The Xbox Game Bar and Windows notifications
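As a footnote to the Game Mode tip above: the Settings toggle is backed by a per-user registry value, so it can also be flipped programmatically. A minimal sketch, assuming the commonly documented DWORD value AutoGameModeEnabled under HKCU\Software\Microsoft\GameBar (and doing nothing on other platforms):

```python
import sys

def enable_game_mode():
    """Turn on Windows Game Mode for the current user (no-op elsewhere)."""
    if sys.platform != "win32":
        return "not Windows; registry tweak skipped"
    import winreg  # Windows-only standard library module
    # Assumed location of the toggle: HKCU\Software\Microsoft\GameBar,
    # DWORD value AutoGameModeEnabled (1 = on, 0 = off).
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER,
                            r"Software\Microsoft\GameBar",
                            0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "AutoGameModeEnabled", 0, winreg.REG_DWORD, 1)
    return "Game Mode enabled"

print(enable_game_mode())
```

A sign-out or reboot may be needed before the Settings app reflects the change.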

TDE R14.1.3 released, and KDE developers hold impromptu TDE installfest at Akademy 2024

The Trinity Desktop Environment, a fork of the last release in the KDE 3.x series, has just released their latest version, R14.1.3. Despite its rather small version number change, it contains some very welcome new features. TDE started the process of integrating the XDG Desktop Portal API, which will bring a lot of welcome integration with applications from the wider ecosystem. There’s also a brand new touchpad settings module, which was something I was sorely missing when I tried out TDE a few months ago. Furthermore, there’s of course a ton of bugfixes and improvements, but also things like support for tiling windows, some new theme and colour scheme options, and a lot more. Not too long ago, when KDE’s Akademy 2024 took place, a really fun impromptu event happened. A number of KDE developers got together – I think in a restaurant or coffee place – and ended up organising an unplanned TDE installation party. Several photos floated around Mastodon of KDE developers using TDE, and after a few fun interactions between KDE and TDE developers on Mastodon, TDE developers ended up being invited to next year’s Akademy. We’ll have to wait and see if the schedules line up, but if any of this can lead to both projects benefiting from some jolly cooperation, it can only be seen as a good thing. Regardless, TDE is an excellent project with a very clear goal, and they’re making steady progress all the time. It’s not a fast-paced environment chasing the latest and greatest technologies, but instead builds upon a solid foundation, bringing it into the modern world where it makes sense. If you like KDE 3.x, TDE is going to be perfect for you.

World’s first Haiku ransomware/malware

There are many ways to judge whether an operating system has made it to the big leagues, and one of the more unpleasant ones is the availability of malware. Haiku, the increasingly capable and daily-driveable successor to BeOS, is now officially a mainstream operating system, as it just had its first piece of malware. HaikuRansomware is an experimental ransomware project designed for educational and investigative purposes. Inspired by the art of poetry and the challenge of cryptography, this malware encrypts files with a custom extension and provides a ransom note with a poetic touch. This is a proof of concept aimed to push the boundaries of how creative ransomware can be designed. ↫ HaikuRansomware’s GitHub page Now this is obviously a bit of a tongue-in-cheek, experimental kind of thing, but it’s still something quite unique to happen to Haiku. I’m not entirely sure how the ransomware is supposed to spread, but my guess would be through social engineering. With Haiku being a relatively small project, and one wherein every user runs as root – baron, in BeOS parlance – I’m sure anything run through social engineering can do some serious damage without many guardrails in place. Don’t quote me on that, though, as Haiku may have more advanced guardrails and mitigations in place than classic BeOS did. This proof-of-concept has no ill intent, and is more intended as an art project to highlight what you can do with encryption and ransomware on Haiku today, and I definitely like the art-focused approach of the author.

What’s new in POSIX 2024 – XCU

As of the previous release of POSIX, the Austin Group gained more control over the specification, having it be more working group oriented, and they got to work making the POSIX specification more modern. POSIX 2024 is the first release that bears the fruits of this labor, and as such, the changes made to it are particularly interesting, as they will define the direction of the specification going forwards. This is what this article is about! Well, mostly. POSIX is composed of a couple of sections. Notably XBD (Base Definitions, which talk about things like what a file is, how regular expressions work, etc), XSH (System Interfaces, the C API that defines POSIX’s internals), and XCU (which defines the shell command language, and the standard utilities available for the system). There’s also XRAT, which explains the rationale of the authors, but it’s less relevant for our purposes today. XBD and XRAT are both interesting as context for XSH and XCU, but those are the real meat of the specification. This article will focus on the XCU section, in particular the utilities part of that section. If you’re more interested in the XSH section, there’s an excellent summary page by sortix’s Jonas Termansen that you can read here. ↫ im tosti The weekend isn’t over yet, so here’s some more light reading.

The MIPS ‘ThinkPad’ and the unreleased Commodore HHC-4

Old Vintage Computing Research, by the incredibly knowledgeable Cameron Kaiser, is one of the best resources on the web about genuinely obscure retrocomputing, often diving quite deep into topics nobody else covers – or even can cover, considering how rare some of the hardware Kaiser covers is. I link to Old VCR all the time, and today I’ve got two more great articles by Kaiser for you. First, we’ve got the more well-known – relatively speaking – of the two devices covered today, and that’s the MIPS ThinkPad, officially known as the IBM WorkPad z50. This was a Windows CE 2.11 device powered by a NEC VR4120 MIPS processor, running at 131 MHz, released in 1999. Astute readers might note the WorkPad branding, which IBM also used for several rebranded Palm Pilots. Kaiser goes into his usual great detail covering this device, with tons of photos, and I couldn’t stop reading for a second. There’s so much good information in here I have no clue what to highlight, but since OSNews has OS in the name, this section makes sense to focus on: The desktop shortcuts are pre-populated in ROM along with a whole bunch of applications. The marquee set that came on H/PC Pro machines was Microsoft Pocket Office (Pocket Word, Pocket Excel, Pocket Access and Pocket PowerPoint), Pocket Outlook (Calendar, Contacts, Inbox and Tasks) and Pocket Internet Explorer, but Microsoft also included Calculator, InkWriter (not too useful on the z50 without a touch screen), Microsoft Voice Recorder, World Clock, ActiveSync (a la Palm HotSync), PC Link (direct connect, not networked), Remote Networking, Terminal (serial port and modem), Windows Explorer and, of course, Solitaire. IBM additionally licensed and included some of bSquare’s software suite, including bFAX Pro for sending and receiving faxes with the softmodem, bPRINT for printing and bUSEFUL Backup Plus for system backups, along with a battery calibrator and a Rapid Access quick configuration tool. 
There is also a CMD.EXE command shell, though it too is smaller and less functional than its desktop counterpart. ↫ Old Vintage Computing Research Using especially these older versions of Windows CE is a wild experience, because you can clearly tell Microsoft was trying really hard to make it look and feel like ‘normal’ Windows, but as anyone who used Windows CE back then can attest, it was a rather poor imitation with a ton of weird limitations and design decisions borne from the limited hardware it was designed to run on. I absolutely adore the various incarnations of Windows CE and associated graphical shells it ran – especially the PocketPC days – but there’s no denying it always felt quite clunky. Moving on, the second Old VCR article I’m covering today is more difficult for me to write about, since I am too young to have any experience with the 8-bit era – save for some experience with the MSX platform as a wee child – so I have no affinity for machines like the Commodore 64 and its contemporaries. And, well, this article just so happens to be covering something called the Commodore HHC-4. Once upon a time (and that time was Winter CES 1983), Commodore announced what was to be their one and only handheld computer, the Commodore HHC-4. It was never released and never seen again, at least not in that form. But it turns out that not only did the HHC-4 actually exist, it also wasn’t manufactured by Commodore — it was a Toshiba. Like Superman had Clark Kent, the Commodore HHC-4 had a secret identity too: the Toshiba Pasopia Mini IHC-8000, the very first portable computer Toshiba ever made. And like Clark Kent was Superman with glasses, compare the real device to the Commodore marketing photo and you can see that it’s the very same machine modulo a plastic palette swap. Of course there’s more to the story than that. 
↫ Old Vintage Computing Research Of course, Kaiser hunted down an IHC-8000, and details his experiences with the little handheld, calculator-like machine. It turns out it’s most likely using some unspecified in-house Toshiba architecture, running at a few hundred kHz, and it’s apparently quite sluggish. It never made it to market in Commodore livery, most likely because of its abysmal performance. The amount of work required to make this little machine more capable and competitive probably couldn’t be recouped by its intended list price, Kaiser argues.

A brief history of Mac firmware

Firmware, software that’s intimately involved with hardware at a low level, has changed radically with each of the different processor architectures used in Macs. ↫ Howard Oakley A quick but still detailed overview of the various approaches to Mac firmware Apple has employed over the years, from the original 68k firmware and Mac OS ROMs, to the modern Apple M-specific approach.

What can Windows 10 users do once support ends in October 2025?

There’s a date looming on the horizon for the vast majority of Windows users. While Windows 11 has been out for a long time now, most Windows users are using Windows 10 – about 63% – while Windows 11 is used by only about 33% of Windows users. In October 2025, however, support for Windows 10 will end, leaving two-thirds of Windows users without the kind of updates they need to keep their system secure and running smoothly. Considering Microsoft is in a lot of hot water over its security practices once again lately, this must be a major headache for the company. The core of the problem is that Windows 11 has a number of very strict hardware requirements that are largely arbitrary, and make it impossible for huge swaths of Windows 10 users to upgrade to Windows 11 even if they wanted to. And that is a problem in and of itself too: people don’t seem to like Windows 11 very much, and definitely prefer to stick to Windows 10 even if they can upgrade. It’s going to be quite difficult for Microsoft to convince those people to upgrade, which likely won’t happen until these people buy a new machine, which in turn is something that just isn’t necessary as often as it used to be. That first group of users – the ones who want to upgrade, but can’t – do have unofficial options, a collection of hacks to coax Windows 11 into installing on unsupported hardware. This comes with a number of warnings from Microsoft, so you may wonder how much of a valid option this really is. Ars Technica has been running Windows 11 on some unsupported machines for a while, and concludes that while it’s problem-free in day-to-day use, there’s a big caveat you won’t notice until it’s time for a feature update. These won’t install without going through the same hacks you needed to use when you first installed Windows 11 and manually downloading the update in question. 
This essentially means you’ll need to repeat the steps for doing a new unsupported Windows 11 install every time you want to upgrade. As we detail in our guide, that’s relatively simple if your PC has Secure Boot and a TPM but doesn’t have a supported processor. Make a simple registry tweak, download the Installation Assistant or an ISO file to run Setup from, and the Windows 11 installer will let you off with a warning and then proceed normally, leaving your files and apps in place. Without Secure Boot or a TPM, though, installing these upgrades in place is more difficult. Trying to run an upgrade install from within Windows just means the system will yell at you about the things your PC is missing. Booting from a USB drive that has been doctored to overlook the requirements will help you do a clean install, but it will delete all your existing files and apps. ↫ Andrew Cunningham at Ars Technica The only way around this that may work is yet another hack, which tricks the update into thinking it’s installing Windows Server, which seems to have less strict requirements. This way, you may be able to perform an upgrade from one Windows 11 version to the next without losing all your data and requiring a fresh installation. It’s one hell of a hack that no sane person should have to resort to, but it looks like it might be an inevitability for many. October 2025 is going to be a slaughter for Windows users, and as such, I wouldn’t be surprised to see Microsoft postponing this date considerably to give the two-thirds of Windows users more time to move to Windows 11 through their regular hardware replacement cycles. I simply can’t imagine Microsoft leaving the vast majority of its Windows users completely unprotected. Spare a thought for our Windows 10-using friends. They’re going to need it.
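For the record, the “simple registry tweak” mentioned in the quote is the one Microsoft itself documented when Windows 11 launched (assuming it still applies to current builds): a single DWORD under HKLM\SYSTEM\Setup\MoSetup. As a .reg file:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\Setup\MoSetup]
"AllowUpgradesWithUnsupportedTPMOrCPU"=dword:00000001
```

Note that this only relaxes the CPU and TPM-version checks; as the quote explains, machines without any TPM or Secure Boot at all need more invasive workarounds.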

A deep dive into Linux’s new mseal syscall

If you love exploit mitigations, you may have heard of a new system call named mseal landing into the Linux kernel’s 6.10 release, providing a protection called “memory sealing.” Beyond notes from the authors, very little information about this mitigation exists. In this blog post, we’ll explain what this syscall is, including how it’s different from prior memory protection schemes and how it works in the kernel to protect virtual memory. We’ll also describe the particular exploit scenarios that mseal helps stop in Linux userspace, such as stopping malicious permissions tampering and preventing memory unmapping attacks. ↫ Alan Cao The goal of mseal is to, well, literally seal a part of memory and protect its contents from being tampered with. It makes regions of memory immutable so that while a program is running, its memory contents cannot be modified by malicious actors. This article goes into great detail about this new feature, explains how it works, and what it means for security in the Linux kernel. Excellent light reading for the weekend.
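To make the mechanics a bit more concrete, here’s a minimal sketch of invoking mseal from userspace through the raw syscall interface, since no glibc wrapper shipped at launch. The syscall number (462) and the requirement that flags be zero are as described for the 6.10 release; on older kernels the call simply fails with ENOSYS, which the sketch handles:

```python
import ctypes
import errno
import mmap
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_MSEAL = 462  # mseal's syscall number, introduced in Linux 6.10

def mseal(addr, length, flags=0):
    """Seal [addr, addr + length): later mprotect/munmap/mremap on it fail."""
    ret = libc.syscall(SYS_MSEAL, ctypes.c_void_p(addr),
                       ctypes.c_size_t(length), ctypes.c_ulong(flags))
    if ret != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

# Map one anonymous page and try to seal it.
page = mmap.mmap(-1, mmap.PAGESIZE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(page))
try:
    mseal(addr, mmap.PAGESIZE)
    status = "sealed: this page can no longer be unmapped or re-protected"
except OSError as e:
    # Kernels older than 6.10 don't have the syscall at all.
    status = "mseal unavailable: " + errno.errorcode.get(e.errno, str(e.errno))
print(status)
```

A sealed mapping stays sealed until the process exits, so this is a one-way door: it suits regions like a loader’s relocated GOT that should never change permissions again.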

Contractors training Amazon, Meta and Microsoft’s AI systems left without pay after Appen moves to new platform

One-third of payments to contractors training AI systems used by companies such as Amazon, Meta and Microsoft have not been paid on time after the Australian company Appen moved to a new worker management platform. Appen employs 1 million contractors who speak more than 500 languages and are based in 200 countries. They work to label photographs, text, audio and other data to improve AI systems used by the large tech companies and have been referred to as “ghost workers” – the unseen human labour involved in training systems people use every day. ↫ Josh Taylor at The Guardian It’s crazy that if you peel back the layers on top of a lot of tools and features sold to us as “artificial intelligence”, you’ll quite often find underpaid workers doing the labour technology companies tell us is done by computers running machine learning algorithms. The fact that so many of them are either deeply underpaid or, as in this case, not even paid at all, while companies like Google, Apple, Microsoft, and OpenAI are raking in ungodly amounts of profits, is deeply disturbing. It’s deeply immoral on so many levels, and just adds to the uncomfortable feeling people have with “AI”. Again I’d like to reiterate I’m not intrinsically opposed to the current crop of artificial intelligence tools – I just want these mega corporations to respect the rights of artists, and not use their works without permission to earn immense amounts of money. On top of that, I don’t think it should be legal for them to lie about how their tools really work under the hood, and the workers who really do the work claimed to be done by “AI” should be properly paid. Is any of that really too much to ask? Fix these issues, and I’ll stop putting quotation marks around “AI”.

Microsoft improves Windows’ update experience, and announces support for MIDI 2.0 and a new audio driver for professionals

Windows 11, version 24H2 represents significant improvements to the already robust update foundation of Windows. With the latest version, you get reduced installation time, restart time, and central processing unit (CPU) usage for Windows monthly updates. Additionally, enhancements to the handling of feature updates further reduce download sizes for most endpoints by extending conditional downloads to include Microsoft Edge. Let’s take a closer look at these advancements. ↫ Steve DiAcetis at the Windows IT Pro Blog Now this is the kind of stuff we want to see in new Windows releases. Updating Windows feels like a slow, archaic, and resource-intensive process, whereas on, say, my Fedora machines it’s such an effortless, lightweight process I barely even notice it’s happening. This is an area where Windows can make some huge strides that materially affect people – Windows updates are a meme – and it’s great to see Microsoft working on this instead of shoving more ads onto Windows users’ desktops. In this case, Microsoft managed to reduce installation time, make reboots faster, and lower CPU and RAM usage through a variety of measures roughly falling in one of three groups: improved parallel processing, faster and optimised reading of update manifests, and more optimal use of available memory. We’re looking at some considerable improvements here, such as a 45% reduction in installation time, 15-25% less CPU usage, and more. Excellent work. On a related note, at the Qualcomm Snapdragon Summit, Microsoft also unveiled a number of audio improvements for Windows on ARM that will eventually also make their way to Windows on x86. I’m not exactly an expert on audio, but from what I understand the Windows audio stack is robust and capable, and what Microsoft announced today will improve the stack even further. 
For instance, support for MIDI 2.0 is coming to Windows, with backwards compatibility for MIDI 1.0 devices and APIs, and Microsoft worked together with Yamaha and Qualcomm to develop a new USB Audio Class 2 Driver. In the company’s blog post, Microsoft explains that the current USB Audio Class 2 driver in Windows is geared towards consumer audio applications, and doesn’t fulfill the needs of professional audio engineers. This current driver does not support the standard professional software has standardised on – ASIO – forcing people to download custom, third-party kernel drivers to get this functionality. That’s not great for anybody, and as such they’re working on a new driver. The new driver will support the devices that our current USB Audio Class 2 driver supports, but will increase support for high-IO-count interfaces with an option for low-latency for musician scenarios. It will have an ASIO interface so all the existing DAWs on Windows can use it, and it will support the interface being used by Windows and the DAW application at the same time, like a few ASIO drivers do today. And, of course, it will handle power management events on the new CPUs. ↫ Pete Brown at the Dev Blogs The code for this driver will be published as open source on GitHub, so that anyone still opting to make a specialised driver can use Microsoft’s code to see how things are done. That’s a great move, and one that I think we’ll be seeing more often from Microsoft. This is great news for audio professionals using Windows.

Solving the mystery of ARM7TDMI multiply carry flag

The processor in the Game Boy Advance, the ARM7TDMI, has a weird characteristic where the carry flag is set to a “meaningless value” after a multiplication operation. What this means is that software cannot and should not rely on the value of the carry flag after multiplication executes. It can be set to anything. Any value. 0, 1, a horse, whatever. This has been a source of memes in the emulator development community for a few years – people would frequently joke about how the implementation of the carry flag may as well be cpu.flags.c = rand() & 1;. And they had a point – the carry flag seemed to defy all patterns; nobody understood why it behaves the way it does. But the one thing we did know, was that the carry flag seemed to be deterministic. That is, under the same set of inputs to a multiply instruction, the flag would be set to the same value. This was big news, because it meant that understanding the carry flag could give us key insight into how this CPU performs multiplication. And just to get this out of the way, the carry flag’s behavior after multiplication isn’t an important detail to emulate at all. Software doesn’t rely on it. And if software did rely on it, then screw the developers who wrote that software. But the carry flag is a meme, and it’s a really tough puzzle, and that was motivation enough for me to give it a go. Little did I know it’d take 3 years of on and off work. ↫ bean machine Please don’t make me understand any of this.

bhyve on FreeBSD and VM live migration: quo vadis?

When I think about bhyve Live Migration, it’s something I encounter almost daily in my consulting calls. VMware’s struggles with Broadcom’s licensing issues have been a frequent topic, even as we approach the end of 2024. It’s surprising that many customers still feel uncertain about how to navigate this mess. While VMware has been a mainstay in enterprise environments for years, these ongoing issues make customers nervous. And they should be – it’s hard to rely on something when even the licensing situation feels volatile. Now, as much as I’m a die-hard FreeBSD fan, I have to admit that FreeBSD still falls short when it comes to virtualization – at least from an enterprise perspective. In these environments, it’s not just about running a VM; it’s about having the flexibility and capabilities to manage workloads without interruption. Years ago, open-source solutions like KVM (e.g., Proxmox) and Xen (e.g., XCP-ng) introduced features like live migration, where you can move VMs between hosts with zero downtime. Even more recently, solutions like SUSE Harvester (utilizing KubeVirt for running VMs) have shown that this is now an essential part of any virtualization ecosystem. ↫ gyptazy FreeBSD has bhyve, but the part where it falls short, according to gyptazy, is the tool’s lack of live migration. While competitors and alternatives allow for virtual machines to be migrated without downtime, bhyve users still need to shut down their VMs, interrupt all connections, and thus experience a period of downtime before everything is back up and running again. This is simply not acceptable in most enterprise environments, and as such, bhyve is not an option for most users of that type. Luckily for enterprise FreeBSD users, things are improving. Live migration of bhyve virtual machines is being worked on, and basic live migration is now supported, but with limitations. 
For instance, at first only virtual machines with a maximum of 3 GB of memory could be migrated live, but that limit has been raised in recent years to 13 to 14 GB, which is a lot more palatable. There are also some remaining problems, such as memory corruption. Still, it’s a massive feat to have live migration at all, and it seems to be improving every year. The linked article goes into much greater detail about where things stand, so if you’re interested in keeping up with the latest progress regarding bhyve’s live migration capabilities, it’s a great place to start.

Qualcomm announces Snapdragon 8 Elite flagship smartphone SoC

At the Snapdragon Summit today, Qualcomm is officially announcing the Snapdragon 8 Elite, its flagship SoC for smartphones. The Snapdragon 8 Elite is a major upgrade from its predecessor, with improvements across the board. Qualcomm is also changing its naming scheme for its flagship SoCs from Snapdragon 8 Gen X to Snapdragon X Elite. ↫ Pradeep Viswanathan at Neowin It’s wild – but not entirely unexpected – how we always seem to end up in a situation in technology where crucial components, such as the operating system or processor, are made by one, or at most two, companies. While there are a few other smartphone system-on-a-chip vendors, they’re mostly relegated to low-end devices, and can’t compete on the high end, where the money is, at all. It’s sadness. Speaking of our mobile SoC overlords, they seem to be in a bit of a pickle when it comes to their core business of, well, selling SoCs. In short, Qualcomm bought Nuvia to use its technology to build the current crop of Snapdragon X Elite and Pro laptop chips. According to ARM, Qualcomm does not have an ARM license to do so, and as such, a flurry of lawsuits between the two companies followed. ARM is now cancelling certain Qualcomm ARM licenses, arguing specifically its laptop Snapdragon X chips should be destroyed. What we’re looking at here is two industry giants engaged in very public, and very expensive, contract negotiations, using the legal system as their arbiter. This will eventually fizzle out into a new agreement between the two companies with renewed terms and conditions – and flows of money – but until that dust has settled, be prepared for an endless flurry of doomerist news items about this story. As for us normal people? We don’t have to worry one bit about this legal nonsense. It’s not like we have any choice in smartphone chips anyway.

/tmp should not exist

I commented on Lobsters that /tmp is usually a bad idea, which caused some surprise. I suppose /tmp security bugs were common in the 1990s when I was learning Unix, but they are pretty rare now so I can see why less grizzled hackers might not be familiar with the problems. I guess that’s some kind of success, but sadly the fixes have left behind a lot of scar tissue because they didn’t address the underlying problem: /tmp should not exist. ↫ Tony Finch Not only is this an excellent, cohesive, and convincing argument against the existence of /tmp, it also contains some nice historical context as to why things are the way they are. Even without the arguments against /tmp, though, it just seems entirely more logical, cleaner, and more sensible to have per-user /tmp directories in per-user locations. While I never would’ve been able to explain the problem as eloquently as Finch does, it just feels wrong to have every user resort to the exact same directory for temporary files, like a complex confluence of bad decisions you just know is going to cause problems, even if you don’t quite understand the intricate interplay.
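The class of bug Finch alludes to is easy to demonstrate: any file with a predictable name in the world-writable /tmp can be pre-created or symlinked by another local user before your program gets to it. Here’s a minimal sketch in Python of the usual defensive pattern – atomic creation with a random name via mkstemp(), steered into a per-user directory when one exists. The function name and the $XDG_RUNTIME_DIR fallback logic are my own illustration, not anything from Finch’s post:

```python
import os
import tempfile

def private_tmpfile(prefix="myapp-"):
    """Create a temp file outside the shared /tmp when possible.

    mkstemp() opens the file atomically (O_CREAT | O_EXCL, mode 0600)
    with an unpredictable name, which defeats the classic symlink
    races that predictable names like /tmp/myapp.pid invite. Pointing
    it at $XDG_RUNTIME_DIR – a per-user, mode-0700 directory on most
    modern Linux systems – avoids the shared directory entirely.
    """
    # When unset, dir=None makes tempfile fall back to $TMPDIR, then /tmp.
    tmpdir = os.environ.get("XDG_RUNTIME_DIR")
    fd, path = tempfile.mkstemp(prefix=prefix, dir=tmpdir)
    return fd, path

fd, path = private_tmpfile()
print(path)  # e.g. /run/user/1000/myapp-k3j2x9 when XDG_RUNTIME_DIR is set
os.close(fd)
os.remove(path)
```

Even this is only a mitigation, of course – Finch’s point is that the per-user directory should be the default, so no application has to remember to opt in.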

Apple’s AirPods Pro hearing health features are as good as they sound

Apple announced a trio of major new hearing health features for the AirPods Pro 2 in September, including clinical-grade hearing aid functionality, a hearing test, and more robust hearing protection. All three will roll out next week with the release of iOS 18.1, and they could mark a watershed moment for hearing health awareness. Apple is about to instantly turn the world’s most popular earbuds into an over-the-counter hearing aid. ↫ Chris Welch at The Verge Most of us here have a lot of issues – rightfully so – with the major technology companies and the way they do business, but every now and then, even they accidentally stumble into doing something good for the world. AirPods are already a success story, and gaining access to hearing aid-level features at their price point is an absolute game changer for a lot of people with hearing issues – and for a lot of people who don’t even know yet that they have hearing issues in the first place. If you have people in your life with hearing issues, or whom you suspect may have them, a pair of AirPods may just be the perfect gift this Christmas season. Yes, I too think hearing aids should be something nobody has to pay for, covered as part of your country’s universal healthcare – assuming you have such a thing – but this is not a bad alternative.

System76 unveils ARM Ampere Altra workstation

System76, purveyor of Linux computers, distributions, and now also desktop environments, has just unveiled its latest top-end workstation, but this time, it’s not an x86 machine. They’ve been working together with Ampere to build a workstation based around Ampere’s Altra ARM processors: the Thelio Astra. Phoronix, fine purveyor of Linux-focused benchmarks, was lucky enough to benchmark one, and has more information on the new workstation. System76 designed the Thelio Astra in collaboration with Ampere Computing. The System76 Thelio Astra makes use of Ampere Altra processors up to the Ampere Altra Max 128-core ARMv8 processor that in turn supports 8-channel DDR4 ECC memory. The Thelio Astra can be configured with up to 512GB of system memory, choice of Ampere Altra processors, up to NVIDIA RTX 6000 Ada Generation graphics, dual 10 Gigabit Ethernet, and up to 16TB of PCIe 4.0 NVMe SSD storage. System76 designed the Thelio Astra ARM64 workstation to be complemented by NVIDIA graphics given the pervasiveness of NVIDIA GPUs/accelerators for artificial intelligence and machine learning workloads. The Astra is contained within System76’s custom-designed, in-house-manufactured Thelio chassis. Pricing on the System76 Thelio Astra will start out at $3,299 USD with the 64-core Ampere Altra Q64-22 processor, 2 x 32GB of ECC DDR4-3200 memory, 500GB NVMe SSD, and NVIDIA A400 graphics card. ↫ Michael Larabel This pricing is actually remarkably favourable considering the hardware you’re getting. System76 and its employees have been dropping hints for a while now that they were working on an ARM variant of their Thelio workstation, and knowing some of the prices others are asking, I definitely expected the base price to hit $5000, so this is a pleasant surprise. With the Altra processors getting a tiny bit long in the tooth, you do notice some oddities here, specifically the DDR4 RAM instead of more modern DDR5, as well as the lack of PCIe 5.0.
The problem is that while the Altra has a successor in the AmpereOne processor, its availability is quite limited, and most of them probably end up in datacentres and expensive servers for big tech companies. This newer variant does come with DDR5 and PCIe 5.0 support, but doesn’t yet have a lower core count version, so even if it were readily available it might simply push the price too far up. Regardless, the Altra is still a ridiculously powerful processor, and at anywhere between 64 and 128 cores, it’s got power to spare. The Thelio Astra will be available come 12 November, and while I would perform a considerable number of eyebrow-raising acts to get my hands on one, it’s unlikely System76 will ship one over for a review. Edit: here’s an excellent and detailed reply to our Mastodon account from an owner of an Ampere Altra workstation, highlighting some of the challenges related to your choice of GPU. Required reading if you’re interested in a machine like this.

The Impact of Accidents on Insurance Rates

Understanding how a vehicle’s accident history influences insurance premiums is key to making a better decision about a car purchase. A history of accidents translates to higher premiums; hence, the impact on the overall cost of owning and running the car can be enormous. This article explores the relationship between a vehicle’s accident history and the insurance premiums owed on that vehicle, and underlines the importance of knowing that history before purchasing. Consumers must also consider how old accidents might impact the car’s ability to protect its occupants or to last without significant failures. Moreover, some insurance companies will refuse to insure such vehicles, or will charge higher deductibles on them, because of a history of high-severity crashes. Knowing about such indirect impacts can help consumers negotiate terms and avoid surprise financial liabilities. How Accident History Affects Insurance Premiums Insurance companies assess risk when they calculate premiums, and a car’s history of accidents reflects part of that risk: the more accidents a vehicle has been associated with, the higher the premium. A car that has been in several accidents may be considered high-risk, whereas a vehicle with no accident history will generally be given lower rates. Potential buyers should thoroughly research a vehicle’s history to make informed decisions. Resources like a Texas VIN Lookup can provide essential insights into whether the car has been involved in any accidents. By entering the Vehicle Identification Number (VIN) into a reliable service, buyers can access detailed reports that outline the car’s past, including any reported accidents and their severity. The Role of Insurance Companies Insurance companies consider a variety of factors when determining premiums. Understanding these factors can help consumers anticipate the costs of insuring a used vehicle.
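As an aside, the VIN itself carries a built-in integrity check that any lookup tool can apply before querying a report service: position 9 of the 17-character VIN is a check digit computed from the other characters under the standard North American scheme. A minimal sketch in Python – the function name, and the idea of using it as a pre-flight check before paying for a report, are my own illustration:

```python
# Check-digit validation for 17-character North American VINs.
# Letters transliterate to digits (I, O, and Q are never used in VINs),
# each position has a fixed weight, and the weighted sum mod 11 must
# match position 9 (remainder 10 is written as "X").
TRANSLITERATION = {
    **{str(d): d for d in range(10)},
    "A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8,
    "J": 1, "K": 2, "L": 3, "M": 4, "N": 5, "P": 7, "R": 9,
    "S": 2, "T": 3, "U": 4, "V": 5, "W": 6, "X": 7, "Y": 8, "Z": 9,
}
WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit_ok(vin: str) -> bool:
    """Return True if the VIN's position-9 check digit is consistent."""
    vin = vin.upper()
    if len(vin) != 17 or any(c not in TRANSLITERATION for c in vin):
        return False
    total = sum(TRANSLITERATION[c] * w for c, w in zip(vin, WEIGHTS))
    remainder = total % 11
    expected = "X" if remainder == 10 else str(remainder)
    return vin[8] == expected

# A commonly cited valid example VIN:
print(vin_check_digit_ok("1M8GDM9AXKP042788"))  # True
```

A failing check digit means the VIN was mistyped or forged, so any accident report retrieved for it would describe the wrong car; running this sanity check first can save a wasted lookup fee.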
The Importance of Vehicle History Reports These reports contain important information about the vehicle’s past involvement in accidents, the status of its title, and other details that could affect insurance rates. Services like StatReport give detailed VIN checks showing whether the motor vehicle has been involved in an accident and the resultant damage. The reports sometimes include previous owners, service records, accident history, and more. This transparency allows prospective buyers to make informed decisions about the vehicles they are considering. Knowing a vehicle’s history helps buyers avoid cars with concealed problems that could cost them dearly in repairs, and tells them whether the asking price is in line with the car’s book value. Hidden Costs of Accident-Damaged Vehicles Buying a car with an accident history may cost more in insurance, and it may also carry hidden costs, such as poor-quality past repairs or diminished resale value. Checking for Stolen Vehicles In addition to examining accident history, checking whether the vehicle has been reported stolen is crucial. A stolen vehicle check is essential to ensure you are not inadvertently purchasing a car with legal complications; many VIN check services provide this information as part of their reports. Knowing whether a vehicle has been reported stolen can save you from potential legal issues and financial losses: if you unknowingly buy a stolen vehicle, you could lose both your investment and the car itself once law enforcement gets involved. Understanding State-Specific Regulations Each state has regulations regarding how accidents affect insurance rates and what information must be disclosed when selling a vehicle. Familiarising yourself with these regulations is vital for protecting your rights as a consumer. For example, some states require sellers to disclose any known accidents or damage when selling a used car.
Understanding these laws can empower buyers to ask the right questions and demand transparency from sellers. The Role of Professional Inspections In addition to reviewing the vehicle history report, every used car should be taken for a professional inspection before purchase. A qualified mechanic can reveal accident-related issues that would not be visible in a casual inspection: a careful examination might uncover poor repairs or structural damage that could impact safety and performance. This additional level of scrutiny safeguards buyers by ensuring they know what they are getting into before making any financial commitment. Conclusion A vehicle’s accident history has a significant relationship with its insurance rates, yet the effect of accidents on insurance remains one of the most misunderstood aspects of buying a vehicle. Buyers need to understand how a car’s accident history affects its premiums before making a purchase decision. Texas VIN Lookup services, full vehicle history reports, and the like come into play here, helping buyers avoid hidden costs and risks when buying a pre-owned car. Accidents in a vehicle’s history affect its insurance rates, resale price, and the cost of keeping it on the road. Researching a vehicle before purchase, combined with a professional inspection, allows buyers to make choices that serve both their financial goals and their personal safety. Knowledge is power: the more informed one is about these factors, the better a used-vehicle purchase decision can be made.

Microsoft maintains its own Windows debloat scripts on GitHub

It’s no secret that a default Windows installation is… hefty. In more ways than one, Windows is a bit on the obese side of the spectrum, from taking up a lot of disk space, to its steep system requirements (artificial or not), to coming with a lot of stuff preinstalled not everyone wants to have to deal with. As such, there’s a huge cottage industry of applications, scripts, modified installers, custom ISOs, and more, that try to slim Windows down to a more manageable size. As it turns out, even Microsoft itself wants in on this action. The company that develops and sells Windows also provides a Windows debloat script. Over on GitHub, Microsoft maintains a repository of scripts that simplify setting up Windows as a development environment, and amid the collection of scripts we find RemoveDefaultApps.ps1, a PowerShell script to “Uninstall unnecessary applications that come with Windows out of the box”. The script is about two years old, and as such it includes a few applications that are no longer part of Windows, but looking through the list is a sad reminder of the kind of junk Windows comes with, most notably mobile casino games for children like Bubble Witch and March of Empires, but also other nonsense like the Mixed Reality Portal or Duolingo. It also removes something called “ActiproSoftwareLLC”, which is apparently a set of third-party, non-Microsoft UI controls for WPF? Which comes preinstalled with Windows sometimes? What is even happening over there? The entire set of scripts makes use of Chocolatey wrapped in Boxstarter, which is “a wrapper for Chocolatey and includes features like managing reboots for you”, because of course, the people at Microsoft working on Windows can’t be bothered to fix application management and required reboots themselves. Silly me, expecting Microsoft’s Windows developers to address these shortcomings internally instead of using third-party tools.
The repository seems to be mostly defunct, but the fact it even exists in the first place is such a damning indictment of the state of Windows. People keep telling us Windows is fine, but if even Microsoft itself needs to resort to scripts and third-party tools to make it usable, I find it hard to take claims of Windows being fine seriously in any way, shape, or form.

Booting Sun SPARC servers

In early 2022 I got several Sun SPARC servers for free off of a FreeCycle ad: I was recently called out for not providing any sort of update on those devices… so here we go! ↫ Sidneys1.com Some information on booting old-style SPARC machines, as well as pretty pictures. Nice palate-cleanser if you’ve had to deal with something unpleasant this weekend. This world would be a better place if we all had our own Sun machines to play with when we get sad.