Liam Proven posted a good summary of the importance of the PDP and VAX series of computers on his blog. Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It’s entitled “First new vax in …30 years?” Someone posted it on Hacker News. One of the comments said, roughly, that they didn’t see the significance and could someone “explain it like I’m a Computer Science undergrad.” This is my attempt to reply… Um. Now I feel like I’m 106 instead of “just” 53. OK, so, basically all modern mass-market OSes of any significance derive in some way from two historical minicomputer families… and both were from the same company.
Kernel extensions have long been one of the most powerful and dangerous features of macOS. They enable Apple and third-party developers to support the rich range of hardware available both within and connected to Macs, to add new features such as software firewalls and security protection, and to modify the behaviour of macOS by rerouting sound output to apps, and so on. That power comes at a price: kernel extensions can readily cause the kernel to panic, can conflict with one another and with macOS, and most of all are a security nightmare. For those who develop malicious software, they’re the next best thing to installing their own malicious kernel. For some years now, Apple has been encouraging third-party developers to move away from kernel extensions to equivalents which run at user level rather than in the kernel (Ring 0). However, it has only been in the last year or so that Apple has provided sufficient support for this to be feasible. Coupled with the fact that M1 Macs have to be run at a reduced level of security to be able to load third-party kernel extensions, almost all software and hardware which used to rely on kernel extensions should now be switching to Apple’s new alternatives such as system extensions. This article explains the differences these make to the user. A good, detailed look at what Apple is doing with kernel extensions in macOS.
Alphabet Inc.’s Google was sued by three dozen states alleging that the company illegally abused its power over the sale and distribution of apps through the Google Play store on mobile devices. State attorneys general said in a complaint filed Wednesday in federal court in San Francisco that Google used anticompetitive tactics to thwart competition and ensure that developers have no choice but to go through the Google Play store to reach users. It then collects an “extravagant” commission of up to 30% on app purchases, the states said. These lawsuits will keep on coming, and eventually one will be won – and it’s going to send shockwaves across the industry. I can’t wait.
Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd-compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2) – and enjoy the speed! Sometimes, it’s the obscure changes that can have a big impact. This change will speed up the installation of .deb packages.
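To get a feel for why this matters, here’s a rough, unscientific sketch (my own, not from the announcement) comparing decompression speed of xz – the previous default for .deb payloads – against zstd, using Python’s standard lzma module and the third-party zstandard package. The payload and settings are arbitrary stand-ins for real package contents.

```python
import lzma
import time

import zstandard  # third-party: pip install zstandard

# Arbitrary, highly compressible filler standing in for package contents.
payload = b"some representative package contents " * 200_000  # ~7.6 MB

xz_blob = lzma.compress(payload)
zstd_blob = zstandard.ZstdCompressor().compress(payload)

for name, blob, decompress in [
    ("xz", xz_blob, lzma.decompress),
    ("zstd", zstd_blob, zstandard.ZstdDecompressor().decompress),
]:
    start = time.perf_counter()
    decompress(blob)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(blob)} bytes compressed, decompressed in {elapsed:.4f}s")
```

On real packages the compression ratios vary, but zstd’s much faster decompression is the part that makes apt installs quicker, since unpacking dominates install time on modern disks.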
Microsoft’s free upgrade offer for Windows 7 and Windows 8.1 users ended way back in 2016, but you can still upgrade to Windows 10. As expected, Microsoft says it will continue to support Windows 11 users upgrading from Windows 7 or Windows 8.1 as long as they meet the minimum system requirements. However, there’s a catch – the Windows 7 to Windows 11 upgrade could wipe your apps, settings, and customizations. That’s because a proper direct upgrade path is not available for Windows 7/8.1 users, according to a support document from Lenovo, which was published on June 24 and spotted by us earlier today. This isn’t entirely unreasonable. Windows 7 was released 12 years ago, and received its last (and only) service pack 10 years ago. Mainstream support ended in 2015, six years ago. I think this is a fair point at which to say: no more in-place upgrades. I mean, I think most people do fresh Windows installations anyway.
The famous open source audio manipulation program was acquired by a company named Muse Group two months ago. The same company owns other projects in its portfolio such as Ultimate Guitar (a famous website for guitar enthusiasts) and MuseScore (open source music notation software). Ever since, Audacity has been a heated topic. The parent company is a multinational corporation, and it has been trying to introduce a data-collection mechanism into the software. While Audacity is nothing more than a desktop program, its developers want to make it phone home with various data taken from users’ machines. This is a sad situation all around – but at the same time, it highlights the incredible strength, resilience, and unique qualities of open source. The new owner of Audacity might want to turn it into spyware, but unlike with proprietary software, we don’t just have to sit back and take it. Various forks have already been made, and a few months from now, one or possibly a few of those will come out on top as the proper continuation of the project.
Anders Magnusson, writing on the Port-vax NetBSD mailing list: Some time ago I ended up in an architectural discussion (risc vs cisc etc…) and started to think about vax. Even though the vax is considered the “ultimate cisc”, I wondered if its cleanliness and nice instruction set could still be implemented efficiently enough. Well, the only way to know would be to try to implement it 🙂 I had a 15-year-old demo board with a small low-end FPGA (Xilinx XC3S400), so I just had to learn Verilog and try to implement something. And it just passed EVKAA.EXE. Along with the development of a VAX implementation in an FPGA, discussions arose about possible 64-bit extensions: For userspace, the vax architecture itself leaves the door open for expanding the word size. The instructions are all defined to use only the part of a register they need, so adding a bunch of ‘Q’ instructions is a no-brainer. Argument references will work as before. JMP/JSR/RET/… might need a Q counterpart, since they suddenly store/require 8 bytes instead of 4. For the kernel, the hardware structures (SCB, PCB, …) must all be expanded, and memory management must change (though the existing design leaves much to be desired anyway). All this is probably a quite simple update to the architecture. It’s nice to see people still putting work and effort into a nearly half-century-old, and otherwise obsolete, instruction set.
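To illustrate the point about operand widths, here’s a toy model of my own (not Magnusson’s Verilog): because each VAX instruction variant touches only as many low-order bits of a register as its data type needs, a widened 64-bit register file stays backward compatible, and quadword (“Q”) variants slot in naturally. The register values and the generic addx helper below are invented for the sketch.

```python
# Toy model of VAX-style operand widths on a 64-bit register file.
WIDTH_MASKS = {
    "B": 0xFF,                  # byte
    "W": 0xFFFF,                # word
    "L": 0xFFFFFFFF,            # longword (classic 32-bit VAX)
    "Q": 0xFFFFFFFFFFFFFFFF,    # hypothetical 64-bit quadword extension
}

def addx(regs, width, dst, src):
    """ADDB/ADDW/ADDL/ADDQ-style add: touches only `width` bits of dst."""
    mask = WIDTH_MASKS[width]
    result = (regs[dst] + regs[src]) & mask
    # Bits of dst above the operand width are preserved, so existing
    # 32-bit code runs unchanged on the widened register file.
    regs[dst] = (regs[dst] & ~mask) | result

regs = {0: 0xDEADBEEF_00000001, 1: 0x2}
addx(regs, "L", 0, 1)  # 32-bit add: upper half of R0 is untouched
assert regs[0] == 0xDEADBEEF_00000003
```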
For simplicity, let’s say you have a single-CPU system that supports “dynamic frequency scaling”, a feature that allows software to instruct the CPU to run at a lower speed, commonly known as “CPU throttling”. Assume for this scenario that the CPU has been throttled to half speed for whatever reason – could be thermal, could be energy efficiency, could be workload. Finally, let’s say that there’s a program that is CPU-intensive, calculating the Mandelbrot set or something. The question is: what percentage CPU usage should performance monitoring tools report? Should it be 100%, or 50%? This is like asking which side of the bed is the front and which is the back – you can make valid arguments either way, and nobody is wrong or right.
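Here’s a toy calculation of my own (not from the article) showing how both answers fall out of the same numbers, assuming a CPU rated at 3 GHz throttled to 1.5 GHz and a process that stays runnable for the entire one-second sampling interval.

```python
nominal_hz = 3_000_000_000   # CPU's rated (maximum) frequency
current_hz = 1_500_000_000   # throttled to half speed
busy_seconds = 1.0           # the process ran the whole interval
interval_seconds = 1.0

# View 1: fraction of available time the CPU spent busy -> 100%.
time_based = busy_seconds / interval_seconds * 100

# View 2: cycles actually delivered vs. cycles the CPU could have
# delivered at full clock -> 50%.
capacity_based = (busy_seconds * current_hz) / (interval_seconds * nominal_hz) * 100

print(f"time-based: {time_based:.0f}%, capacity-based: {capacity_based:.0f}%")
```

The time-based view answers “is anything waiting for the CPU?”, while the capacity-based view answers “how much of the machine’s potential is being used?” – which is exactly why both reports can be defended.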
We already know that Windows 11 Home will require a Microsoft account (MSA) at the beginning of the installation process. What Microsoft hasn’t publicized is whether it’s possible to log in with just a local account. It is, but only with Windows 11 Pro. A source close to Microsoft has now told us that the only way to avoid using an MSA is with Windows 11 Pro. According to our source, users who buy or own a PC with Windows 11 Pro may choose to use either a local account or an MSA from the very beginning of the installation process. The Windows 11 Home MSA requirement isn’t permanent, just unavoidable. Microsoft will allow the user to transition to a local account once the Windows 11 Home installation process has completed. Retail versions of Windows 11 Home will offer the same experience. So you’re going to need an online account to install Windows 11 Home. If, for some reason, you truly want Windows 11, I’d suggest waiting a few months and picking up a cheap OEM license for Windows 11 Pro from eBay for a few dollars to save yourself the hassle.
After announcing that OnePlus and Oppo would be merging more teams behind the scenes, the inevitable has happened. OnePlus has just announced that OxygenOS and ColorOS will come closer together, though with the benefit of OnePlus devices getting three years or more of Android updates. In a forum post today, OnePlus explains that the sub-brand of Oppo is “working on integrating the codebase of OxygenOS and ColorOS.” Apparently, the change will go unnoticed because it is happening behind the scenes. OnePlus’ OxygenOS has always been the darling among Android fans: not only is it very close to stock Android, it also has great performance and (usually) a good update schedule. Oppo’s ColorOS, on the other hand, is none of those things. I’m very skeptical that this merger will turn out well for OnePlus users.
Now, as Qualcomm looks to push 5G connectivity into laptops, it is pairing modems with a powerful central processing unit, or CPU, Amon said. Instead of using computing core blueprints from longtime partner Arm Ltd, as it now does for smartphones, Qualcomm concluded it needed custom-designed chips if its customers were to rival new laptops from Apple. As head of Qualcomm’s chip division, Amon this year led the $1.4 billion acquisition of startup Nuvia, whose ex-Apple founders helped design some of those Apple laptop chips before leaving to form the startup. Qualcomm will start selling Nuvia-based laptop chips next year. The processor industry is scrambling to catch up to Apple, and every Intel and AMD OEM is looking for something that can deliver the same – or even just vaguely similar – performance and power draw as the M1 in laptops. Qualcomm is claiming here that it can, and will – next year, without relying on Arm’s core designs. Bold claim.
For as long as Android has been around, Android apps have been launched in the APK format (which stands for Android Package). However, in 2018, Google introduced a new format called Android App Bundles, or AAB (with the filename *.aab). Google touted that this new format would result in smaller app file sizes and easier ways to control various aspects of apps. Of the millions of apps on the Google Play Store, thousands of them already use the AAB system. Today, Google announced that the AAB format will now officially replace Android APKs. This means that starting in August of this year, all new apps submitted to the Google Play Store must come in the AAB format. Apps that are currently APKs can stay that way — at least for now. Alright, where’s the catch? There’s going to be a catch, right? Unlike APKs, Android App Bundles cannot exist outside of Google Play and cannot be distributed outside of it. This means that developers switching from APK to App Bundles can no longer provide the exact same package or experience on other app sources unless they opt to maintain a separate APK version. This naturally puts third-party app stores at a disadvantage, but Google will most likely play up the Play Store’s security as a major reason to avoid those sources anyway. There it is! Of course any technological step forward in the modern monopolised world of technology has to come with anti-consumer features or limitations that take control away from users. It’s like a law.
Today, we are launching a technical preview of GitHub Copilot, a new AI pair programmer that helps you write better code. GitHub Copilot draws context from the code you’re working on, suggesting whole lines or entire functions. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code—to help you complete your work faster. Sounds like a cool and useful feature, but this does raise some interesting questions about the code it generates. Sure, generated code might be entirely new, but what about possible cases where the code it “generates” is just taken from the existing projects the AI was trained on? The AI was trained on open source code available on GitHub, including a lot of code licensed under, for instance, the GPL. GitHub says in the Copilot FAQ: GitHub Copilot is a code synthesizer, not a search engine: the vast majority of the code that it suggests is uniquely generated and has never been seen before. We found that about 0.1% of the time, the suggestion may contain some snippets that are verbatim from the training set. Here is an in-depth study on the model’s behavior. Many of these cases happen when you don’t provide sufficient context (in particular, when editing an empty file), or when there is a common, perhaps even universal, solution to the problem. We are building an origin tracker to help detect the rare instances of code that is repeated from the training set, to help you make good real-time decisions about GitHub Copilot’s suggestions. That 0.1% may not sound like a lot, but that’s misleading – another way to put it is that out of every 1000 suggestions Copilot makes, one contains verbatim code someone has written and selected a license for, and that license must, of course, be respected. On top of that, it’s hard to argue that code generated from a set of existing open source code doesn’t constitute a derivative work, and is thus covered by the copyright open source licenses are based on. I am not a lawyer, so I’m not going to argue Copilot is definitively a massive GPL violation, but as a layman, on the face of it, it definitely feels like a tool that’s going to strip a lot of code of its license – without the consent or permission of the code’s authors.
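GitHub hasn’t described how its origin tracker will work; as an illustration of the general idea only, here’s a minimal sketch that flags a suggestion when it shares a sufficiently long verbatim token run with anything in a (toy) training corpus. The function names and the n-gram threshold are invented for the example.

```python
# Hypothetical origin-tracker sketch: detect verbatim overlap between a
# generated suggestion and a training corpus via shared token n-grams.

def token_ngrams(code: str, n: int) -> set[tuple[str, ...]]:
    """All runs of n consecutive whitespace-delimited tokens in `code`."""
    tokens = code.split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def looks_verbatim(suggestion: str, corpus: list[str], n: int = 8) -> bool:
    """True if `suggestion` shares an n-token run with any corpus file."""
    suggestion_grams = token_ngrams(suggestion, n)
    return any(suggestion_grams & token_ngrams(doc, n) for doc in corpus)

corpus = ["def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a"]
print(looks_verbatim("while b: a, b = b, a % b", corpus, n=5))  # True
```

A real system would need to normalize whitespace and identifiers and scale to billions of lines (e.g. with hashed n-grams), but the licensing question is the same either way: a match means the snippet came from someone’s licensed code.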
Microsoft gave its digital imprimatur to a rootkit that decrypted encrypted communications and sent them to attacker-controlled servers, the company and outside researchers said. The blunder allowed the malware to be installed on Windows machines without users receiving a security warning or needing to take additional steps. For the past 13 years, Microsoft has required third-party drivers and other code that runs in the Windows kernel to be tested and digitally signed by the OS maker to ensure stability and security. Without a Microsoft certificate, these types of programs can’t be installed by default. One of the reasons Windows 11’s hardware requirements are so stringent is because Microsoft wants to force Trusted Platform Modules and Secure Boot down everyone’s throat, in the name of security. This way, Windows users can feel secure in knowing Microsoft looks out for them, and will prevent malware and viruses from… I can’t keep writing this with a straight face.
With Canonical announcing Ubuntu support for so much new hardware, the announcement of Ubuntu ported to a new architecture can go unnoticed. But today, we have a big one. Working with the leading RISC-V core IP designer and development board manufacturer, SiFive, we are proud to announce the first Ubuntu release for two of the most prominent SiFive boards, Unmatched and Unleashed. This is great news for RISC-V and open source hardware in general. Of course, Linux on RISC-V moves forward with or without the support of major distributions, but having Ubuntu, probably the most popular Linux distribution in the world, on board is a major boon for the architecture.
ARM64EC is a new application binary interface (ABI) for Windows 11 on ARM that runs with native speed and is interoperable with x64. An app, process, or even a module can freely mix and match ARM64EC and x64 as needed. The ARM64EC code in the app will run natively while any x64 code will run using Windows 11 on ARM’s built-in emulation. The ARM64EC ABI differs slightly from the existing ARM64 ABI in ways that make it binary compatible with x64 code. Specifically, the ARM64EC ABI follows x64 software conventions including calling convention, stack usage, and data alignment, making ARM64EC and x64 interoperable. Apps built as ARM64EC may contain x64 code but do not have to, since ARM64EC is its own complete, first-class ABI for Windows. Another tool in the toolbox for Windows developers who wish to treat ARM64 as a first-class citizen.
Microsoft has published a blog post, trying to dispel some of the confusion around Windows 11’s system requirements. First and foremost, the company makes it clear that TPM 2.0 and 8th generation Intel and 2nd generation Ryzen are hard floors. Microsoft adds that based on the feedback during Windows 11’s testing process, support for 7th generation Intel and 1st generation Ryzen processors might be added. Using the principles above, we are confident that devices running on Intel 8th generation processors and AMD Zen 2 as well as Qualcomm 7 and 8 Series will meet our principles around security and reliability and minimum system requirements for Windows 11. As we release to Windows Insiders and partner with our OEMs, we will test to identify devices running on Intel 7th generation and AMD Zen 1 that may meet our principles. There are ways around these hard floors, through registry hacks and custom Windows 11 ISOs, but updates might break those, and who knows if Microsoft will plug those holes.
A federal court on Monday dismissed the Federal Trade Commission’s antitrust complaint against Facebook, as well as a parallel case brought by 48 state attorneys general, dealing a major setback to the agency’s complaint, which could have resulted in Facebook divesting Instagram and WhatsApp. However, the court ruled Monday that the FTC failed to prove its main contention and the cornerstone of the case: that Facebook holds monopoly power in the U.S. personal social networking market. I mean, I hear Friendster and MySpace are the bomb.
One example of this was the parallel universe of FireWire hubs. If you think of FireWire as “a big USB” then a hub wouldn’t seem so strange, but FireWire was actually meant to replace SCSI. SCSI and FireWire are peer-to-peer: any device on the bus can talk to any other device, unlike USB where each bus has at most one host and the host does all the initiation of data transfer. (USB On-The-Go still has one host and one host only; it just allows certain devices like your mobile phone to swing both ways.) The point-to-point capabilities of USB 3 notwithstanding, a USB hub has one upstream port for the host and multiple downstream ports for the devices. A FireWire hub, however, is like getting a longer internal SCSI cable; more devices simply exist on the same bus. Connecting multiple FireWire hubs just makes a bigger bus because all the ports are the same. Everything you ever wanted to know about FireWire hubs, with lots of examples.
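A toy model of my own (not from the article) captures the structural difference: on USB, every transfer involves the host, so a hub only ever fans devices out below it, while on FireWire a hub just splices more peers onto one shared bus where any node can address any other. The device names are made up.

```python
# Toy model: who may talk to whom on USB vs. FireWire.

def usb_can_talk(a: str, b: str, host: str = "host") -> bool:
    # On USB, the host initiates every transfer; two devices can never
    # exchange data directly, no matter how hubs are arranged.
    return host in (a, b)

def firewire_can_talk(a: str, b: str, bus: set[str]) -> bool:
    # On FireWire (like SCSI), all nodes share one bus, and any node can
    # address any other. A hub merely adds more ports to the same bus.
    return a in bus and b in bus

bus = {"mac", "camera", "disk"}                   # one FireWire bus, hub or not
print(usb_can_talk("camera", "disk"))             # False: host must mediate
print(firewire_can_talk("camera", "disk", bus))   # True: peer-to-peer
```

This is why connecting FireWire hubs together “just makes a bigger bus”: the set of peers grows, but the topology stays flat, whereas every USB hub adds another level to a host-rooted tree.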
In my first story on the unveiling of Windows 11, I remarked that the system requirements remained largely unchanged from Windows 10. Well, as it turns out, I couldn’t have been more wrong. Since the announcement, Microsoft has been incredibly obtuse, going back and forth on the system requirements for Windows 11, and at this point, it seems like nobody has any clue anymore what’s true and what isn’t. Windows 11 is arriving later this year as a free upgrade for Windows 10 users, but many are discovering that their hardware isn’t compatible. Microsoft has altered its minimum hardware requirements, and it’s the CPU changes that are most surprising here. Windows 11 will only officially support 8th Gen and newer Intel Core processors, alongside Apollo Lake and newer Pentium and Celeron processors. Windows 11 will also only officially support AMD Ryzen 2000 and newer processors, and 2nd Gen or newer EPYC chips. That’s one hell of a hard cutoff, and one that seems entirely arbitrary. There’s nothing in Windows 11 that a first-generation Ryzen or a 6th or 7th generation Intel Core processor cannot handle, so why rule them out? A lot of people just assume Windows 11 will work on older processors than those listed, but there’s no confirmation from Microsoft that this is the case. Aside from processor support, there’s another aspect Microsoft is vague about: does Windows 11 require TPM 2.0 or TPM 1.2? Do you need a hardware TPM, or will a firmware TPM, available in just about every modern x86 processor but turned off by default, suffice? Nobody seems to have the answers, and it’s leading to a lot of speculation and uncertainty. The same applies to Secure Boot and UEFI – Microsoft lists both as requirements, but most news stories online just assume Microsoft doesn’t truly think of them as requirements, more as suggestions. There’s a lot of uncertainty in the air here for Windows users.