Hardware Archive
This article reviews the most powerful BBC Micro model B disc protection scheme its author found, across an audit of most of the copy-protected discs released for the machine. It’s clever in that you don’t need specialized hardware to create the disc, or to read it – but you’re going to struggle to duplicate it. Copy protection schemes from the ’80s and early ’90s are fascinating, and this one is no exception.
The new CPU configuration gives the new SoC a good uplift in performance, although it’s admittedly less of a jump than I had hoped for this generation of Cortex-X1 designs, and I do think Qualcomm won’t be able to retain the performance crown for this generation of Android SoCs, with the performance gap against Apple’s SoCs also narrowing less than we had hoped for. On the GPU side, the new 35% performance uplift is extremely impressive. If Qualcomm is really able to maintain similar power figures this generation, it should allow the Snapdragon 888 to retake the performance crown in mobile, and actually retain it for the majority of 2021. At this point it feels like we’re far beyond the point of diminishing returns for smartphones, but with ARM moving to general purpose computers, there are still a lot of performance gains to be made. I want a Linux-based competitor to Apple’s M1-based Macs, as Linux is perfectly suited for architecture transitions like this.
Ars Technica summarises and looks at the various claims made by Micro Magic about their RISC-V core. Micro Magic Inc.—a small electronic design firm in Sunnyvale, California—has produced a prototype CPU that is several times more efficient than world-leading competitors, while retaining reasonable raw performance. We first noticed Micro Magic’s claims earlier this week, when EE Times reported on the company’s new prototype CPU, which appears to be the fastest RISC-V CPU in the world. Micro Magic adviser Andy Huang claimed the CPU could produce 13,000 CoreMarks (more on that later) at 5GHz and 1.1V while also putting out 11,000 CoreMarks at 4.25GHz—the latter all while consuming only 200mW. Huang demonstrated the CPU—running on an Odroid board—to EE Times at 4.327GHz/0.8V and 5.19GHz/1.1V. Later the same week, Micro Magic announced the same CPU could produce over 8,000 CoreMarks at 3GHz while consuming only 69mW of power. I have some major reservations about all of these claims, mostly because of the lack of benchmarks that more accurately track real-world usage. Extraordinary claims require extraordinary evidence, and I feel like some vague photos just don’t do the trick of convincing me. Then again, last time I said anything about an upcoming processor, I was off by a million miles, so what do I know?
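To put those numbers in perspective, here’s the arithmetic on Micro Magic’s own figures – nothing below beyond the claims quoted above:

```python
# Efficiency implied by Micro Magic's claims: CoreMarks per watt.
# Both data points are taken straight from the claims above; no external data.
claims = {
    "4.25 GHz @ 200 mW": (11_000, 0.200),  # (CoreMarks, watts)
    "3.00 GHz @ 69 mW": (8_000, 0.069),
}

for label, (coremarks, watts) in claims.items():
    print(f"{label}: {coremarks / watts:,.0f} CoreMarks/W")

# 4.25 GHz @ 200 mW: 55,000 CoreMarks/W
# 3.00 GHz @ 69 mW: 115,942 CoreMarks/W
```

If those efficiency figures are real, the “several times more efficient than world-leading competitors” claim holds up – which is exactly why proper, reproducible benchmarks are needed before anyone takes them at face value.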
A few weeks ago, we linked to an article that went in-depth into UEFI, and today, we have a follow-up. But the recent activity reminded me that there was one thing I couldn’t figure out how to do at the time: Enumerate all the available UEFI variables from within Windows. If you remember, Windows has API calls to get and set UEFI variable values, but not to enumerate them. So I started doing some more research to see if there was any way to do that – it’s obviously possible as the UEFI specs describe it, a UEFI shell can easily do it, and Linux does it (via a file system). My research took me to a place I wouldn’t have expected. We can always go deeper.
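For reference, the get/set half that does exist in Windows is easy enough to call. A minimal sketch, assuming an elevated process with the SeSystemEnvironmentPrivilege privilege enabled (without it, the call simply fails):

```python
# Read a single UEFI variable through the documented kernel32 API.
# Note: there is no matching documented Win32 call to *enumerate* variables,
# which is exactly the gap the article digs into.
import ctypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# EFI_GLOBAL_VARIABLE GUID, as defined by the UEFI specification.
EFI_GLOBAL = "{8BE4DF61-93CA-11D2-AA0D-00E098032B8C}"

buf = ctypes.create_string_buffer(4096)
size = kernel32.GetFirmwareEnvironmentVariableW(
    "BootCurrent", EFI_GLOBAL, buf, ctypes.sizeof(buf)
)
if size:
    print("BootCurrent:", buf.raw[:size].hex())
else:
    print("failed, Win32 error", ctypes.get_last_error())
```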
Earlier this year, we reviewed System76’s Lemur Pro, a laptop designed for portability and long battery life. This time around, we’re going entirely the opposite direction with the System76 Bonobo WS – a mobile workstation that looks like a laptop (if you squint), but packs some of the fastest desktop-grade hardware available on the market.

Specifications

System76 sent us the latest version of the Bonobo WS, with some truly bonkers specifications for what is, technically, a laptop (sort of), at a total price of $4315.22. This mobile workstation comes with an Intel Core i9-10900K, which has 10 cores and 20 threads and runs at up to 5.3 GHz – and this is not a constrained mobile chip, but the full desktop processor. It’s paired with an 8 GB RTX 2080 Super graphics card – which, again, is the desktop part, not the mobile version. It has 32 GB of RAM configured in dual-channel at 3200 MHz. To top it off, I configured it with a 250 GB NVMe drive for the operating system, and an additional 1 TB NVMe drive for storage and other stuff. These drives have theoretical sequential read and write speeds of 3500 MB/s and 2300 MB/s, respectively.

The Bonobo WS comes with a 17.3″ display, and I opted for the 1080p 144 Hz version, since the 4K option was not yet available at the time of setting up the review unit. The 4K option, which I would normally recommend on a display of this size, might not make a lot of sense here anyway: most people interested in a niche mobile workstation like this will most likely be using external displays, making the splurge for the 4K option a bit moot, especially since it’s a mere 60 Hz panel.

There are a few other specifications we need to mention – specifically the weight and battery life of a massive computer like this one. The base weight is roughly 3.8 kg, and its dimensions are 43.43 × 399.03 × 319.02 mm (height × width × depth). While this machine can technically be classified as a laptop, the mobile workstation moniker is a far more apt description. This is not a machine for carrying from classroom to classroom – this is a machine that most users will use in just two, possibly three places, and won’t move very often.

Another reason for that is battery life. A machine with this much power requires a lot of juice, and the 97 Wh battery isn’t going to give you a lot of unplugged time to work. You’ll spend all of your time plugged into not one, but two power sockets, as this machine requires two huge power bricks. It even comes with an adorable rubber thing that ties the two power bricks together in a way that maintains some space between them for cooling and safety purposes. So not only do you have to lug around the massive machine itself, but also the two giant power bricks.

As this is a mobile workstation, the ports situation is excellent. It has a USB 3.2 Gen 2×2 (ugh)/Thunderbolt 3 port (type C), 3 USB 3.2 Gen 2 (type A) ports, and a MicroSD card slot. For your external display needs, we’ve got a full-size HDMI port, 2 mini DisplayPorts (1.4), and a DisplayPort (1.4) over USB type C. Furthermore, there’s an Ethernet port, the usual audio jacks (microphone and headphones, one of which also has an optical connection), and the obligatory Kensington lock. Of course, there’s wireless networking support through an Intel dual-band WiFi 6 chip, as well as Bluetooth support.

Hardware

The hardware of this machine is entirely dictated by its internals, since cramming this much desktop power into a computer that weighs less than 4 kg doesn’t leave you with much room to mess around.
The entire design is dictated by the required cooling, and there are vents all over the place. This is not a pretty or attractive machine – but it doesn’t need to be. People who need this much mobile power to lug around don’t care about what it looks like, how thin it is, or how aluminium the aluminium is – they need this power to be properly cooled, and if that means more thickness or more vents, then please don’t skimp. If you care about form over function – which is an entirely legitimate criterion, by the way, and don’t let anybody tell you otherwise – there are other devices to choose from.

While the laptop does have some RGB flourishes here and there, they’re not overly present or distracting, and the ability to switch between several colours for the keyboard lighting is very nice to have, since I find the generic white light most laptops use to not always be ideal. You can cycle through the various lighting options with a key combination.

The keyboard has a little bit more key travel than I’m used to from most laptops, probably owing to its chunky size leaving more room for the keys to travel. The keys have a bit of wobble, but not enough to cause me to miss keystrokes. I am not a fan of the font used on the keyboard, but that’s a mere matter of taste.

The trackpad is decent, feels fine enough, and works great with Linux (obviously). In what I first thought was a blast from the past, the laptop has physical buttons for left and right click underneath the trackpad. However, after a little bit of use, I realised just how nice it was to have actual, physical buttons, and not a diving board or – god forbid – a trackpad that only supports tapping. Of course, it’s not nearly as good as Apple’s force touch trackpad that simulates an eerily realistic click wherever you press, but it does the job just fine. That being said, though, much like with the display, I doubt many people who need a machine like this will really care. They’ll most likely not only have
Last month’s news that IBM would do a Hewlett-Packard and divide into two—an IT consultancy and a buzzword compliance unit—marks the end of “business as usual” for yet another of the great workstation companies. There really isn’t much left when it comes to proper workstations. All the major players have left the market, been shut down, or been bought out (and shut down) – Sun, IBM, SGI, and countless others. Of course, some of them may still make workstations in the sense of powerful Xeon machines, but workstations in the sense of top-to-bottom custom architecture, like SGI’s crossbar switch technology and all the other custom architectures us mere mortals couldn’t afford, are no longer being made in large numbers. And it shows. Go on eBay to try and get your hands on a used and old SGI or Sun workstation, and be prepared to pay through the nose for highly outdated and effectively useless hardware. The number of these machines still on the used market is dwindling, and with no new machines entering the used market, it’s going to become ever harder for us enthusiasts to get our hands on these sorts of exciting machines.
In 1978, a memory chip stored just 16 kilobits of data. To make a 32-kilobit memory chip, Mostek came up with the idea of putting two 16K chips onto a carrier the size of a standard integrated circuit, creating the first memory module, the MK4332 “RAM-pak”. This module allowed computer manufacturers to double the density of their memory systems and by 1982, Mostek had sold over 3 million modules. The Apple III is the best-known system that used these memory modules. A deep dive into these interesting chips.
System76 recently unveiled their latest entirely in-house Linux workstation, the Thelio Mega – a quad-GPU Threadripper monster with a custom case and cooling solution. System76’s CEO and founder Carl Richell penned a blog post about the design process of the Thelio Mega, including some performance, temperature, and noise comparisons. Early this year, we set off to engineer our workstation version of a Le Mans Hypercar. It started with a challenge: Engineer a quad-GPU workstation that doesn’t thermal throttle any of the GPUs. Three GPUs is pretty easy. Stack the fourth one in there and it’s a completely different animal. Months of work and thousands of engineering hours later we accomplished our goal. Every detail was scrutinized. Every part is of the highest quality. And new factory capabilities, like milling, enabled us to introduce unique solutions to design challenges. The result is Thelio Mega. A compact, high-performance quad-GPU system that’s quiet enough to sit on your desk. I’m currently wrapping up a review of the Bonobo WS, and if at all possible, I’ll see if I can get a Thelio Mega for review, too (desktops like this, which are usually custom-built for each customer, are a bit harder to get for reviews).
So what’s the topic? Something that I started talking about almost 10 years ago, the Unified Extensible Firmware Interface (UEFI). Back then, it was more of a warning: the way you deploy Windows is going to change. Now, it’s a way of life (and fortunately, it no longer sucks like it did back in 2010 when we first started working with it). I don’t want to rehash the “why’s” behind UEFI because frankly, you no longer have much of a choice: all new Windows 10 devices ship with UEFI enabled by default (and if you are turning it off, shame on you). Instead, I want to focus much more on how it works and what’s going on behind the scenes. A really in-depth article about UEFI – you have to be a certain kind of person to enjoy stuff like this. The article’s about a year old, but still entirely relevant.
Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumer. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK Hynix is launching today, or at least the chips that go into these modules. We’re gearing up for a return to the days when, upon buying a new motherboard or new memory, you’d better make sure you’ve selected the right DDR version.
Lots of people were excited by the news of Hangover’s port to ppc64le, and while there’s a long way to go, the fact it exists is a definite step forward for improving the workstation experience on OpenPOWER. Except, of course, that many folks (including your humble author) can’t run it: Hangover currently requires a kernel with a 4K memory page size, which is the page size of the majority of extant systems (certainly x86_64, which only offers a 4K page size). ppc64 and ppc64le can certainly run on a 4K page size and some distributions do, yet the two probably most common distributions OpenPOWER users run — Debian and Fedora — default to a 64K page size. This article explains why.
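If you’re curious which camp your own system falls into, checking is a one-liner – a quick sketch using Python’s standard os.sysconf (running getconf PAGESIZE from a shell does the same):

```python
# Print the kernel's page size. On most x86_64 distributions this is 4096;
# on a default Debian or Fedora ppc64le install it is 65536.
import os

page_size = os.sysconf("SC_PAGE_SIZE")
print(f"kernel page size: {page_size} bytes ({page_size // 1024}K)")
```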
This article provides a subjective history of POWER and open source from the viewpoint of an open source developer, outlines a few trends and conclusions, and previews what the future will bring. It is based on my talk at the annual OpenPOWER North America Summit, in which I aimed to show the importance of desktop/workstation-class hardware available to developers. In this article, I will cover a few additional topics, including cloud resources available to POWER developers, as well as a glimpse into the products and technologies under development. The biggest problem for POWER that I can see at the moment is that the kind of POWER processors you want – little endian – are expensive. This precludes more affordable desktops from entering the market, let alone laptops. Big endian POWER processors aren’t exactly future-proof, as Linux distributions are dropping support for them. It’s a difficult situation, but I don’t think there’s much that can be done about it.
The Zuse Z4 is considered the oldest preserved computer in the world. Manufactured in 1945 and overhauled and expanded in 1949/1950, the relay machine was in operation on loan at the ETH Zurich from 1950 to 1955. Today the huge digital computer is located in the Deutsches Museum in Munich. The operating instructions for the Z4 were lost for a long time. In 1950, ETH Zurich was the only university in continental Europe with a functioning tape-controlled computer. From the 1940s, only one other computer survived: the Csirac vacuum tube computer (1949). It is in the Melbourne Museum, Carlton, Victoria, Australia. Evelyn Boesch from the ETH Zurich University archives let me know in early March 2020 that her father René Boesch (born in 1929), who had been working under Manfred Rauscher at the Institute for Aircraft Statics and Aircraft Construction at ETH Zurich since 1956, had kept rare historical documents. Boesch’s first employment was with the Swiss Aeronautical Engineering Association, which was housed at and affiliated with the above-mentioned institute. The research revealed that the documents included a user manual for the Z4 and notes on flutter calculations. What an astonishing discovery. Stories like this make me wonder just how much rare, valuable, irreplaceable hardware, software, and documentation is rotting away in old attics, waiting to be thrown in a dumpster after someone’s death.
Techies hailed USB-C as the future of cables when it hit the mainstream market with Apple’s single-port MacBook in 2015. It was a huge improvement over the previous generation of USB, allowing for many different types of functionality — charging, connecting to an external display, etc. — in one simple cord, all without having a “right side up” like its predecessor. Five years later, USB-C is near-ubiquitous: Almost every modern laptop and smartphone has at least one USB-C port, with the exception of the iPhone, which still uses Apple’s proprietary Lightning port. For all its improvements, USB-C has become a mess of tangled standards — a nightmare for consumers to navigate despite the initial promise of simplicity. The charging situation with USB-C, especially, can be a nightmare. I honestly have no clue which of my USB-C devices can fast-charge with which charger and which cable, and I just keep plugging stuff in until it works. Add in all my fiancée’s devices, and it’s… messy.
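Part of the mess is simple arithmetic: what you actually get depends on both the charger’s Power Delivery profile and the cable’s current rating, since standard USB-C cables carry 3 A and 5 A requires an e-marked cable. A toy sketch of that interaction (illustrative only – real negotiation also depends on what the device itself accepts):

```python
# Best-case charging power for a charger/cable pair under USB Power Delivery.
PD_VOLTAGES = [5, 9, 15, 20]  # common PD fixed-supply levels, in volts

def max_power_watts(charger_max_voltage: int, cable_max_amps: float) -> float:
    # Highest PD voltage the charger offers, times the cable's current limit.
    usable = [v for v in PD_VOLTAGES if v <= charger_max_voltage]
    return max(usable) * cable_max_amps

print(max_power_watts(20, 3.0))  # 60.0 W: 20 V charger, plain 3 A cable
print(max_power_watts(20, 5.0))  # 100.0 W: same charger, e-marked 5 A cable
```

Two identical-looking cables, a 40 W difference – no wonder nobody can keep track.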
Most information presented during the annual X.Org Developers’ Conference doesn’t tend to be very surprising or to usher in breaking news, but during today’s XDC2020 it was subtly dropped that Arm Holdings appears to now be backing the open-source Panfrost Gallium3D driver. Panfrost has been developed over the past several years, beginning as a reverse-engineering effort by Alyssa Rosenzweig to support Arm Mali Bifrost and Midgard hardware. The driver had a slow start, but Rosenzweig has been employed by Collabora for a while now, and they’ve been making steady progress on supporting newer Mali hardware and advancing the supported OpenGL / GLES capabilities of the driver. This is a major departure from ARM’s previous policy, since the company has always shied away from open source efforts around its Mali GPUs.
Update: it’s official now – NVIDIA is buying ARM. Original story: Nvidia Corp is close to a deal to buy British chip designer Arm Holdings from SoftBank Group Corp for more than $40 billion in a deal which would create a giant in the chip industry, according to two people familiar with the matter. A cash and stock deal for Arm could be announced as early as next week, the sources said. That will create one hell of a giant chip company, but at the same time – what alternatives are there? ARM on its own probably won’t make it, SoftBank has no clue what to do with ARM, and any of the other major players – Apple, Amazon, Google, Microsoft – would be even worse, since they all have platforms to lock you into, and ARM would be a great asset in that struggle. At least NVIDIA just wants to sell as many chips to as many people as possible, and isn’t that interested in locking you into a platform. That being said – who knows? Often, the downsides to deals like this don’t come out until years later. We’ll just have to wait and see.
As desktop processors were first crossing the Gigahertz level, it seemed for a while that there was nowhere to go but up. But clock speed progress eventually ground to a halt, not because of anything to do with the speed itself but rather because of the power requirements and the heat all that power generated. Even with the now-common fans and massive heatsinks, along with some sporadic water cooling, heat remains a limiting factor that often throttles current processors. Part of the problem with liquid cooling solutions is that they’re limited by having to get the heat out of the chip and into the water in the first place. That has led some researchers to consider running the liquid through the chip itself. Now, some researchers from Switzerland have designed the chip and cooling system as a single unit, with on-chip liquid channels placed next to the hottest parts of the chip. The results are an impressive boost in heat-limited performance. This seems like a very logical next step for watercooling and processor cooling in general, but this is far from easy. This article highlights that we are getting closer, though.
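The underlying problem is easy to put rough numbers on: heat has to cross several layers before it ever reaches the water, each layer adds thermal resistance, and the temperature rise is simply power times total resistance. A back-of-the-envelope sketch – the resistances below are my own illustrative guesses, not figures from the paper:

```python
# Temperature rise across a conventional junction-to-water cooling stack:
# delta_T = power * total thermal resistance (K/W). All numbers illustrative.
power_w = 200.0  # assumed chip power draw

layers = {
    "die to heat spreader (TIM1)": 0.10,
    "heat spreader to cold plate (TIM2)": 0.08,
    "cold plate to water": 0.07,
}

delta_t = power_w * sum(layers.values())
print(f"{delta_t:.0f} K above the water temperature")  # 50 K
```

Etching coolant channels into the silicon itself removes most of those layers, which is exactly the gain the Swiss researchers are chasing.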
Arm is known for its Cortex range of processors in mobile devices; however, the mainstream Cortex-A series of CPUs, which serve as the primary processing units of devices, aren’t the only CPUs the company offers. Alongside the microcontroller-grade Cortex-M CPU portfolio, Arm also offers the Cortex-R range of “real-time” processors which are used in high-performance real-time applications. The last time we talked about a Cortex-R product was the R8 release back in 2016. Back then, the company proposed the R8 to be extensively used in 5G connectivity solutions inside of modem subsystems. Another large market for the R-series is storage solutions, with the Cortex-R processors being used in HDD and SSD controllers as the main processing elements. Today, Arm is expanding its R-series portfolio by introducing the new Cortex-R82, representing the company’s first 64-bit Armv8-R architecture processor IP, meaning it’s the first 64-bit real-time processor from the company. AnandTech delivers its usual deep dive into the intricacies of this new lineup from ARM. Obviously these kinds of chips are not something most people actively work with – we tend to merely use them, often without even realising it.
Linux-capable RISC-V boards do exist, but cost several hundred dollars or more – the likes of the HiFive Unleashed and the PolarFire SoC Icicle development kit. If only there was a RISC-V board similar to the Raspberry Pi and with a similar price point… The good news is that the RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring the PicoRio RISC-V SBC to market at a price point similar to the Raspberry Pi. I’m 100% ready for fully top-to-bottom open source hardware, whether it’s Power9/Power10 at the high end, or RISC-V at the low end. ARM is a step backwards in this regard compared to x86, and while I doubt RISC-V or Power will magically displace either of those two, the surge in interest in ARM for more general purpose computing at least opens the door just a tiny little bit.
The Nanoprocessor is a mostly-forgotten processor developed by Hewlett-Packard in 1974 as a microcontroller for their products. Strangely, this processor couldn’t even add or subtract, which is probably why it was called a nanoprocessor and not a microprocessor. Despite this limitation, the Nanoprocessor powered numerous Hewlett-Packard devices ranging from interface boards and voltmeters to spectrum analyzers and data capture terminals. The Nanoprocessor’s key feature was its low cost and high speed: compared against the contemporary Motorola 6800, the Nanoprocessor cost $15 instead of $360 and was an order of magnitude faster for control tasks. Recently, the six masks used to manufacture the Nanoprocessor were released by Larry Bower, the chip’s designer, revealing details about its design. The composite mask image in the article shows the internal circuitry of the integrated circuit: the blue layer shows the metal on top of the chip, while the green shows the silicon underneath, and the black squares around the outside are the 40 pads for connection to the IC’s external pins. The author used these masks to reverse-engineer the circuitry of the processor and understand its simple but clever RISC-like design. This is a very detailed and in-depth article, so definitely not for the faint of heart. Definitely a little over my head, but I know for a fact there’s quite a few among you who love and understand this sort of stuff deeply.
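As a taste of what programming such a chip implies: a processor without an adder can still add, as long as it can increment, decrement, and test for zero. A conceptual sketch in Python, not actual Nanoprocessor assembly:

```python
# Addition on a machine with no ALU add instruction: bump one register while
# counting the other down to zero, using only 8-bit increment and decrement.
def add_without_adder(a: int, b: int) -> int:
    while b != 0:
        a = (a + 1) & 0xFF  # INC-style 8-bit increment, wrapping at 256
        b = (b - 1) & 0xFF  # DEC-style 8-bit decrement
    return a

print(add_without_adder(200, 55))  # 255
```

Slow for arithmetic, but for the control tasks the Nanoprocessor was built for, fast increments and comparisons were all it needed.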