When AMD announced that its new Zen 3 core was a ground-up redesign and offered complete performance leadership, we had to ask them to confirm that that was exactly what they said. Despite being less than 10% the size of Intel, and very close to folding as a company in 2015, the bets that AMD made in that timeframe with its next-generation Zen microarchitecture and Ryzen designs are now coming to fruition. Zen 3 and the new Ryzen 5000 processors for the desktop market are the realization of those goals: not only performance-per-watt and performance-per-dollar leaders, but absolute performance leadership in every segment. We’ve gone into the new microarchitecture and tested the new processors. AMD is the new king, and we have the data to show it. AMD didn’t lie – these new processors are insanely good, and insanely good value, to boot. If you’re building a new PC today – AMD is the only logical choice. What a time to be alive.
Last month’s news that IBM would do a Hewlett-Packard and divide into two—an IT consultancy and a buzzword compliance unit—marks the end of “business as usual” for yet another of the great workstation companies. There really isn’t much left when it comes to proper workstations. All the major players have either left the market, been shut down, or been bought out (and shut down) – Sun, IBM, SGI, and countless others. Of course, some of them may still make workstations in the sense of powerful Xeon machines, but workstations in the sense of top-to-bottom custom architecture, like SGI’s crossbar switch technology and all the custom architectures us mere mortals couldn’t afford, are no longer being made in large numbers. And it shows. Go on eBay to try and get your hands on an old, used SGI or Sun workstation, and be prepared to pay through the nose for highly outdated and effectively useless hardware. The number of these machines still on the used market is dwindling, and with no new machines entering the used market, it’s going to become ever harder for us enthusiasts to get our hands on these sorts of exciting machines.
In the latest preview builds, Microsoft has removed all shortcuts that allowed you to access the retired pages of the Control Panel. In other words, you can no longer right-click within the File Explorer and select ‘Properties’ to open the retired ‘System’ page of the Control Panel. Likewise, Microsoft has even blocked CLSID-based access, and third-party apps such as Open Shell and Classic Shell are also no longer able to launch the hidden System applet of the Control Panel. Now, when a user tries to open the retired Control Panel page, they are brought to the About page instead. This is a good thing. The weird, split-personality nature of Windows is odd, unnecessary, and needlessly complicated, and it’s high time Microsoft fully commits to something for once when it comes to Windows. Whether or not the ‘modern’ path is the one most OSNews readers want Microsoft to take is a different matter altogether.
This release comes with new styles providing better look and feel (Baghira, Domino, Ia Ora), new widgets (KoolDock and TastyMenu), new utilities (KXMLEditor, Mathemagics, Qalculate) and new applications (Codeine, TDEDocker, TDEPacman). It also adds support for Xine 1.2.10, improves compatibility with PulseAudio, fixes various bugs, adds support for brightness control from the keyboard, and integrates a fix for CVE-2020-17507 to prevent a buffer overflow in the XBM parser. I both want and do not want to run the Trinity Desktop Environment. It harkens back to simpler times, but I’m not entirely sure that’s what people actually want.
The Wine program for running Windows games/applications on Linux and other platforms can run on a number of different architectures, but Wine doesn’t handle the emulation needed to run Windows x86/x64 binaries on other architectures like 64-bit ARM or PowerPC. That’s where the Wine-based Hangover project comes in: it currently allows those conventional Windows binaries to run on AArch64 (ARM64) and 64-bit POWER as well. Hangover started out with a focus on running Windows x64 binaries on ARM64, with an eye on the possible use case of running Windows software on ARM mobile devices and more. This year, with the help of Raptor Computing Systems, Hangover support has been added for 64-bit IBM POWER. It would be really amazing if Linux on POWER could make use of Wine like regular x86 Linux users can. It’s a long way off, still, but progress is being made.
In 1978, a memory chip stored just 16 kilobits of data. To make a 32-kilobit memory chip, Mostek came up with the idea of putting two 16K chips onto a carrier the size of a standard integrated circuit, creating the first memory module, the MK4332 “RAM-pak”. This module allowed computer manufacturers to double the density of their memory systems, and by 1982, Mostek had sold over 3 million modules. The Apple III is the best-known system that used these memory modules. A deep dive into these interesting chips.
Today may be Halloween, but what Intel is up to is no trick. Almost a year after showing off their alpha silicon, Intel’s first discrete GPU in over two decades has been released and is now shipping in OEM laptops. The first of several planned products using the DG1 GPU, Intel’s initial outing in their new era of discrete graphics is in the laptop space, where today they are launching their Iris Xe MAX graphics solution. Designed to complement Intel’s Xe-LP integrated graphics in their new Tiger Lake CPUs, Xe MAX will be showing up in thin-and-light laptops as an upgraded graphics option, and with a focus on mobile creation. With AMD stepping up to the plate with their latest high-end cards, it’s very welcome to see Intel attacking the lower end of the market. They have a roadmap to move up, though, so we might very well end up with three graphics card makers to choose from – a luxury we haven’t seen in about twenty years.
But let’s not bury the lede here: after several days of screaming, ranting and scaring the cat with various failures, this blog post is finally being typed in a fully profile-guided and link-time optimized Firefox 82 tuned for POWER9 little-endian. Although it multiplies compile time by nearly a factor of 3 and the build process intermittently can consume a terrifying amount of memory, the PGO-LTO build is roughly 25% faster than the LTO-only build, which was already 4% faster than the “baseline” -O3 -mcpu=power9 build. That’s worth an 84-minute coffee break! (-j24 on a dual-8 Talos II, 64GB RAM.) This whole post is a ringing endorsement of Firefox and why the technology landscape – especially the alternative operating systems and hardware platforms landscape – needs Firefox. There really isn’t any other viable option. Chromium? Chromium is open source, but a lot of its important functionality is hidden behind a needlessly complex process of setting up and registering API keys that’s about as intuitive as designing an atomic bomb from scratch on a deserted island. On top of that, Chromium is still a Google project, and as Google’s reluctance to support important features on Linux shows, Chromium is designed for Google’s interests – nobody else’s. WebKit? WebKit requires developers to build an entire web browser around it from scratch. While that can lead to awesome applications, it also means replicating every single bit of functionality users have come to expect from their browsers. Things like bookmark and tab sync, extensions, and so on – all have to be built and maintained from scratch. Firefox is the only complete package someone can port to another platform and end up with a complete browser package. Sure, it’s definitely not an easy undertaking to port a program as complex as Firefox, but in a lot of cases, it’s probably easier than porting WebKit/Blink and building a browser around it from scratch.
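For the curious, the combination of options described above maps onto a mozconfig along these lines. This is a sketch, not the author’s actual configuration, and assumes Firefox 82-era build options; check the Firefox build system documentation for the authoritative flag names.

```shell
# mozconfig sketch for a PGO + LTO Firefox build tuned for POWER9
ac_add_options --enable-optimize="-O3 -mcpu=power9"   # the "baseline" flags
ac_add_options --enable-lto                           # link-time optimization
ac_add_options MOZ_PGO=1                              # three-phase profile-guided build
mk_add_options MOZ_MAKE_FLAGS="-j24"                  # matches the dual-8 Talos II
```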
We talked about the state of X.org earlier this week, and the wider discussion was picked up by Adam Jackson, who works at Red Hat as the X.Org Server release manager, and has been heavily involved with X development for many years. There’s been some recent discussion about whether the X server is abandonware. As the person arguably most responsible for its care and feeding over the last 15 years or so, I feel like I have something to say about that. So, is Xorg abandoned? To the extent that that means using it to actually control the display, and not just keep X apps running, I’d say yes. But xserver is more than xfree86. Xwayland, Xwin, Xephyr, Xvnc, Xvfb: these are projects with real value that we should not give up. A better way to say it is that we can finally abandon xfree86. Seems like a fair and honest assessment.
FreeBSD 12.2 has been released. Changes include updates for the wireless stack for better 802.11n and 802.11ac support, the latest versions of OpenSSL and OpenSSH, and much more. On top of the changes comes an announcement in the release notes of a change for the i386 versions of FreeBSD, starting with FreeBSD 13.0. Starting with FreeBSD-13.0, the default CPUTYPE for the i386 architecture will change from 486 to 686. This means that, by default, binaries produced will require a 686-class CPU, including but not limited to binaries provided by the FreeBSD Release Engineering team. FreeBSD 13.0 will continue to support older CPUs, however users needing this functionality will need to build their own releases for official support. This won’t affect most users, but people with very specific needs should take note.
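For those users with very specific needs, the knob in question is CPUTYPE. A minimal sketch of overriding it for a self-built 486-compatible system might look like this; the exact release procedure is in the FreeBSD build(7) and release(7) documentation, and the source tree location is assumed to be the standard /usr/src.

```shell
# /etc/make.conf — ask the toolchain for 486-compatible code
CPUTYPE?=i486

# then rebuild world and kernel from source as usual
cd /usr/src
make buildworld buildkernel TARGET=i386
```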
Preparing to close out a major month of announcements for AMD – and to open the door to the next era of architectures across the company – AMD wrapped up its final keynote presentation of the month by announcing their Radeon RX 6000 series of video cards. Hosted once more by AMD CEO Dr. Lisa Su, AMD’s hour-long keynote revealed the first three parts in AMD’s new RDNA2 architecture video card family: the Radeon RX 6800, 6800 XT, and 6900 XT. Forming the core of AMD’s new high-end video card lineup, these cards are meant to do battle with the best of the best out of arch-rival NVIDIA. And we’ll get to see first-hand if AMD can retake the high-end market on November 18th, when the first two cards hit retail shelves. AMD’s forthcoming video card launch has been a long time coming for the company, and one they’ve been teasing particularly heavily. For AMD, the Radeon RX 6000 series represents the culmination of efforts from across the company as everyone from the GPU architecture team and the semi-custom SoC team to the Zen CPU team has played a role in developing AMD’s latest GPU technology. All the while, these new cards are AMD’s best chance in at least half a decade to finally catch up to NVIDIA at the high-end of the video card market. So understandably, the company is jazzed – and in more than just a marketing manner – about what the RX 6000 means. If AMD’s promises and performance comparisons shown today hold up, these new Radeon cards put AMD right back in the game with NVIDIA, going toe-to-toe with NVIDIA’s latest RTX 30×0 cards – all the way up to the 3090, at lower prices and lower power consumption. Of course, those are just promises and charts, but AMD has proven itself lately to be fairly accurate and fair when announcing new products. If the promises hold up, Dr. Lisa Su and her team will have not only stomped all over Intel, but will also be ready to stomp all over NVIDIA, especially if they manage to follow a similar trajectory as they did with the Zen line of processors.
If you are in the market for a new mid to high-end PC, you haven’t had this many viable options in a long, long time.
Speaking of the Amiga: Thirty five years ago I became an Amiga user. One of the first, actually. This is a meandering and reminiscent post of sorts, written to mark the Amiga’s 35th anniversary and the 35 years I have known and loved the system. The Amiga is such an odd platform. Against every single odd ever created, it is still around, and it still has an incredibly dedicated community maintaining, upgrading, and expanding both the hardware and software of not only the classic Amiga, but also the ‘modern’ Amiga OS 4 platform. And on top of all that, there’s MorphOS steadily improving every single release, and AROS as the open source alternative. The dedication the loyal Amiga fanbase has displayed every single day for 35 years now is inspiring. I’ve extensively tested, explored, and used both Amiga OS 4 and MorphOS, and while neither of those click with me in any way, I can’t help but admire the Amiga community as a whole – the usual warts that go with vibrant communities and all. Here’s to another 35 years, you crazy bastards.
Retrohax.net got their hands on an extremely rare motherboard replacement for the Amiga 1000 – the Amiga 1000 Phoenix Enhanced motherboard. It’s difficult to say exactly how many of these were made, but some people claim around 200, while others peg the number at around 2000. Either way, they are rare. They set about getting it to work, which required quite a bit of effort. There are tons of photos in the article, and you can go to this forum post for another user who came to own one of these rare motherboards for more information.
With the release of Qt 6.0 upcoming, let’s see what has happened since Qt 5.15. It will not be possible to cover every detail of the graphics stack improvements for Qt Quick here, let alone dive into the vast amount of Qt Quick 3D features, many of which are new or improved in Qt 6.0. Rather, the aim is just to give an overview of what can be expected from the graphics stack perspective when Qt 6.0 ships later this year. Exactly what it says on the tin. Qt developers especially will want to read this.
This webpage describes the MIOS Project. MIOS is a chip-for-chip replacement of the BIOS (Basic Input Output System) on the IBM 5150 Personal Computer. On the IBM PC the BIOS is contained in a ROM IC chip located on the motherboard at socket location U33. The IC is socketed and can be replaced with a custom ROM containing custom code. The purpose of this project is to explore controlling the IBM PC hardware in non-standard ways. The purpose is not to replace the BIOS with another BIOS that does exactly the same thing! We are going to describe how MIOS works by describing the path we took for development. Amazingly cool project. I’m not entirely sure how long it’s been around, but that doesn’t make it any less awesome.
Arca Noae’s approach to supporting GPT will be multi-phased, with the first phase of development currently underway and anticipated for release with ArcaOS 5.1. The design specification of our initial GPT support is to allow for partitions up to the current 2TB maximum size, with multiple partitions of this size possible on disks larger than 2TB. Our specification further provides that ArcaOS be able to create, delete, and modify GPT partitions which are identified by their GUIDs as being “OS/2-type” partitions, and lastly, that GPT support be available for both traditional BIOS (for data volumes) and UEFI-based systems (for boot and data volumes). This is one of the biggest hurdles for ArcaOS to overcome, and I’m glad they’ve committed to tackling it. Having to partition an entire disk in legacy MBR just to be able to run ArcaOS on real hardware is a major barrier to entry.
During the upcoming months, Jérôme is going to overhaul the Mm (Memory Manager) and Cc (Cache Controller) components of the kernel. Both of them are core parts of the operating system, which are involved in every memory request and file operation. Improving them is expected to have a substantial effect on the overall stability and performance of ReactOS. Always nice to see small projects gather the funds to hire a developer to do work.
Why has it taken until the last few years for speech recognition to be adopted in day-to-day use? The technology has many hidden industrial applications, but as a real-time user interface for day-to-day use, i.e. talking to your computer, adoption has been unbelievably slow. When I was studying in the 90s, I read about a sort of reverse Turing test, which demonstrated one reason why. Volunteers believed they were talking to a computer, but responses were actually provided by a human being typing “behind the curtain”. The observations and subsequent interviews showed that, back then, people simply didn’t like it. So, what’s the problem? We have a Google Home in the house, and we basically only use it to set kitchen timers and find out the outside temperature (so we know how many layers to put on – we live on the arctic circle, and −25 to −30°C is normal). That’s it. I don’t see much of a use for anything else, as our computers and smartphones are both easier to use and faster than any voice assistant or voice input. The key to modern voice assistants is that they are basically glorified command line interfaces – they need a command and parameters. What makes them so hard to use is that these commands and parameters are pretty much entirely undiscoverable and ever-changing, unlike actual command line interfaces where they are easily discoverable and static. If voice input and voice assistants really want to take off, we’ll need to make some serious advances in not just recording our voices and mapping them to commands and parameters, but in actually understanding what we as humans are saying. We’re a long way off from that.
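The “glorified command line interface” model above can be made concrete with a toy sketch. Everything here – the intent names, the patterns – is hypothetical illustration, not how any real assistant is implemented: each intent is a fixed command with slots for parameters, and any utterance that doesn’t match a known pattern is simply rejected, with no way for the user to discover what would have worked.

```python
import re

# Hypothetical intent table: each entry is a command name plus a pattern
# whose capture groups are the command's parameters ("slots").
INTENTS = [
    ("set_timer", re.compile(r"set (?:a )?timer for (\d+) (second|minute|hour)s?")),
    ("get_temperature", re.compile(r"(?:what's|what is) the (?:outside )?temperature")),
]

def parse(utterance):
    """Map a transcribed utterance to (intent, parameters), or None if unrecognized."""
    text = utterance.lower().strip()
    for intent, pattern in INTENTS:
        match = pattern.search(text)
        if match:
            return intent, match.groups()
    # The failure mode of voice UIs: a perfectly reasonable phrasing
    # that the command table simply doesn't know about.
    return None

print(parse("Set a timer for 10 minutes"))   # → ('set_timer', ('10', 'minute'))
print(parse("How warm is it out?"))          # → None
```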
System76 recently unveiled their latest entirely in-house Linux workstation, the Thelio Mega – a quad-GPU Threadripper monster with a custom case and cooling solution. System76’s CEO and founder Carl Richell penned a blog post about the design process of the Thelio Mega, including some performance, temperature, and noise comparisons. Early this year, we set off to engineer our workstation version of a Le Mans Hypercar. It started with a challenge: Engineer a quad-GPU workstation that doesn’t thermal throttle any of the GPUs. Three GPUs is pretty easy. Stack the fourth one in there and it’s a completely different animal. Months of work and thousands of engineering hours later we accomplished our goal. Every detail was scrutinized. Every part is of the highest quality. And new factory capabilities, like milling, enabled us to introduce unique solutions to design challenges. The result is Thelio Mega. A compact, high-performance quad-GPU system that’s quiet enough to sit on your desk. I’m currently wrapping up a review of the Bonobo WS, and if at all possible, I’ll see if I can get a Thelio Mega for review, too (desktops like this, which are usually custom-built for each customer, are a bit harder to get for reviews).
After two tweets that I made last week, playing around with UEFI and Rust, some people asked me to publish a blog post explaining how to create a UEFI application fully written in Rust and demonstrate the whole testing environment. So today’s objective is to create a UEFI application in Rust that prints out the memory map filtered by usable memory (described as conventional memory by the UEFI specification). But before putting our hands to work, let’s review some concepts first. uefi-rs is a Rust wrapper for UEFI.
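The end result the post builds toward can be sketched roughly like this. This is not the author’s actual code, and it targets a freestanding UEFI target (such as x86_64-unknown-uefi) rather than a normal host; the exact uefi-rs method names and signatures vary between crate versions, so treat it as an outline.

```rust
// Sketch of a UEFI application using the `uefi` (uefi-rs) crate that prints
// only the usable-RAM entries of the memory map. API details are version-
// dependent — consult the uefi-rs documentation for your crate version.
#![no_std]
#![no_main]

extern crate alloc;
use alloc::vec;
use uefi::prelude::*;
use uefi::table::boot::MemoryType;

#[entry]
fn main(_image: Handle, mut st: SystemTable<Boot>) -> Status {
    // Set up logging and a heap allocator on top of boot services.
    uefi_services::init(&mut st).expect_success("failed to initialize utilities");
    let bt = st.boot_services();

    // Ask how large the map is, then over-allocate slightly, since logging
    // can itself grow the map between the two calls.
    let map_size = bt.memory_map_size();
    let mut buf = vec![0u8; map_size + 512];
    let (_key, descriptors) = bt.memory_map(&mut buf).expect_success("failed to get memory map");

    // Keep only usable RAM: MemoryType::CONVENTIONAL in the UEFI spec.
    for desc in descriptors.filter(|d| d.ty == MemoryType::CONVENTIONAL) {
        log::info!("{:#012x}: {} pages", desc.phys_start, desc.page_count);
    }

    Status::SUCCESS
}
```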