Hardware Archive

The untimely demise of workstations

Last month’s news that IBM would do a Hewlett-Packard and divide into two – an IT consultancy and a buzzword compliance unit – marks the end of “business as usual” for yet another of the great workstation companies. There really isn’t much left when it comes to proper workstations. All the major players have left the market, been shut down, or been bought out (and then shut down) – Sun, IBM, SGI, and countless others. Of course, some of them may still make workstations in the sense of powerful Xeon machines, but workstations in the sense of top-to-bottom custom architecture – SGI’s crossbar switch technology and all the other exotic designs we mere mortals couldn’t afford – are no longer being made in any real numbers. And it shows. Go on eBay and try to get your hands on an old SGI or Sun workstation, and be prepared to pay through the nose for highly outdated and effectively useless hardware. The number of these machines still on the used market is dwindling, and with no new machines entering it, it’s only going to become harder for us enthusiasts to get hold of these kinds of exciting machines.

Inside the stacked RAM modules used in the Apple III

In 1978, a memory chip stored just 16 kilobits of data. To make a 32-kilobit memory chip, Mostek came up with the idea of putting two 16K chips onto a carrier the size of a standard integrated circuit, creating the first memory module, the MK4332 “RAM-pak”. This module allowed computer manufacturers to double the density of their memory systems, and by 1982 Mostek had sold over 3 million modules. The Apple III is the best-known system that used these memory modules. A deep dive into these interesting chips.

Behind the scenes of Thelio Mega engineering

System76 recently unveiled their latest entirely in-house Linux workstation, the Thelio Mega – a quad-GPU Threadripper monster with a custom case and cooling solution. System76’s CEO and founder Carl Richell penned a blog post about the design process of the Thelio Mega, including some performance, temperature, and noise comparisons. Early this year, we set off to engineer our workstation version of a Le Mans Hypercar. It started with a challenge: Engineer a quad-GPU workstation that doesn’t thermal throttle any of the GPUs. Three GPUs is pretty easy. Stack the fourth one in there and it’s a completely different animal. Months of work and thousands of engineering hours later, we accomplished our goal. Every detail was scrutinized. Every part is of the highest quality. And new factory capabilities, like milling, enabled us to introduce unique solutions to design challenges. The result is Thelio Mega. A compact, high-performance quad-GPU system that’s quiet enough to sit on your desk. I’m currently wrapping up a review of the Bonobo WS, and if at all possible, I’ll see if I can get a Thelio Mega for review, too (desktops like this, which are usually custom-built for each customer, are a bit harder to get for reviews).

Geeking out with UEFI

So what’s the topic? Something that I started talking about almost 10 years ago, the Unified Extensible Firmware Interface (UEFI). Back then, it was more of a warning: the way you deploy Windows is going to change. Now, it’s a way of life (and fortunately, it no longer sucks like it did back in 2010 when we first started working with it). I don’t want to rehash the “why’s” behind UEFI because frankly, you no longer have much of a choice: all new Windows 10 devices ship with UEFI enabled by default (and if you are turning it off, shame on you). Instead, I want to focus much more on how it works and what’s going on behind the scenes. A really in-depth article about UEFI – you have to be a certain kind of person to enjoy stuff like this. The article’s about a year old, but still entirely relevant.

DDR5 is coming: first 64GB DDR5-4800 modules from SK Hynix

Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumers. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK Hynix is launching today, or at least the chips that go into these modules. We’re gearing up for a return to the days when buying a new motherboard or new memory meant double-checking that you picked the right DDR version.

Where did the 64K page size come from?

Lots of people were excited by the news of Hangover’s port to ppc64le, and while there’s a long way to go, the fact it exists at all is a definite step forward for improving the workstation experience on OpenPOWER. Except, of course, that many folks (including your humble author) can’t run it: Hangover currently requires a kernel with a 4K memory page size, which is the page size of the majority of extant systems (certainly x86_64, which only offers a 4K base page size). ppc64 and ppc64le can certainly run with a 4K page size, and some distributions do, yet probably the two most common distributions OpenPOWER users run – Debian and Fedora – default to a 64K page size. This article answers the question of why.
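
If you want to check which camp your own machine falls into, the base page size is a one-line query away. Here’s a minimal sketch (the file name and output wording are mine, not from the article) using the standard sysconf() call:

```c
/* pagesize.c - print the kernel's base page size (hypothetical file
 * name; this is just a minimal sketch, not code from the article).
 * Build: cc -o pagesize pagesize.c
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Same value the shell command `getconf PAGESIZE` reports. */
    long page = sysconf(_SC_PAGESIZE);
    if (page < 0) {
        perror("sysconf");
        return 1;
    }
    printf("base page size: %ld bytes\n", page);
    return 0;
}
```

On a typical x86_64 box this prints 4096; on a default Fedora or Debian ppc64le install it should print 65536, which is exactly the difference Hangover trips over.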

I’m a POWER user

This article provides a subjective history of POWER and open source from the viewpoint of an open source developer, outlines a few trends and conclusions, and previews what the future will bring. It is based on my talk at the annual OpenPOWER North America Summit, in which I aimed to show the importance of having desktop/workstation-class hardware available to developers. In this article, I will cover a few additional topics, including cloud resources available to POWER developers, as well as a glimpse into the products and technologies under development. The biggest problem for POWER that I can see at the moment is that the POWER processors you want – little endian ones – are expensive. This precludes more affordable desktops from entering the market, let alone laptops. Big endian POWER processors aren’t exactly future-proof, as Linux distributions are dropping support for them. It’s a difficult situation, but I don’t think there’s much that can be done about it.

Discovery: user manual of the oldest surviving computer in the world

The Zuse Z4 is considered the oldest preserved computer in the world. Manufactured in 1945 and overhauled and expanded in 1949/1950, the relay machine was in operation on loan at ETH Zurich from 1950 to 1955. Today the huge digital computer is located in the Deutsches Museum in Munich. The operating instructions for the Z4 were lost for a long time. In 1950, ETH Zurich was the only university in continental Europe with a functioning tape-controlled computer. From the 1940s, only one other computer survived: the CSIRAC vacuum tube computer (1949). It is in the Melbourne Museum, Carlton, Victoria, Australia. Evelyn Boesch from the ETH Zurich University Archives let me know in early March 2020 that her father René Boesch (born in 1929), who had been working under Manfred Rauscher at the Institute for Aircraft Statics and Aircraft Construction at ETH Zurich since 1956, had kept rare historical documents. Boesch’s first employment was with the Swiss Aeronautical Engineering Association, which was housed at and affiliated with the above-mentioned institute. The research revealed that the documents included a user manual for the Z4 and notes on flutter calculations. What an astonishing discovery. Stories like this make me wonder just how much rare, valuable, irreplaceable hardware, software, and documentation is rotting away in old attics, waiting to be thrown into a dumpster after someone’s death.

USB-C was supposed to simplify our lives. Instead, it’s a total mess.

Techies hailed USB-C as the future of cables when it hit the mainstream market with Apple’s single-port MacBook in 2015. It was a huge improvement over the previous generation of USB, allowing for many different types of functionality – charging, connecting to an external display, etc. – in one simple cord, all without having a “right side up” like its predecessor. Five years later, USB-C is near-ubiquitous: almost every modern laptop and smartphone has at least one USB-C port, with the exception of the iPhone, which still uses Apple’s proprietary Lightning port. For all its improvements, USB-C has become a mess of tangled standards – a nightmare for consumers to navigate despite the initial promise of simplicity. The charging situation with USB-C in particular can be a nightmare. I honestly have no clue which of my USB-C devices can fast-charge with which charger and which cable, so I just keep plugging stuff in until something works. Add in all my fiancée’s devices, and it’s… messy.

ARM is now backing Panfrost Gallium3D as open-source Mali graphics driver

Most information presented during the annual X.Org Developers’ Conference doesn’t tend to be very surprising or usher in breaking news, but during today’s XDC2020 it was subtly dropped that Arm Holdings appears to now be backing the open-source Panfrost Gallium3D driver. Panfrost began several years ago as a reverse-engineering effort by Alyssa Rosenzweig to support Arm Mali Bifrost and Midgard hardware. The driver had a slow start, but Rosenzweig has been employed by Collabora for a while now and they’ve been making steady progress on supporting newer Mali hardware and advancing the driver’s supported OpenGL / GLES capabilities. This is a major departure from ARM’s previous policy, since the company has always shied away from open source efforts around its Mali GPUs.

Nvidia nears deal to buy chip designer Arm for more than $40 billion, sources say

Update: it’s official now – NVIDIA is buying ARM. Original story: Nvidia Corp is close to a deal to buy British chip designer Arm Holdings from SoftBank Group Corp for more than $40 billion in a deal which would create a giant in the chip industry, according to two people familiar with the matter. A cash and stock deal for Arm could be announced as early as next week, the sources said. That will create one hell of a giant chip company, but at the same time – what alternatives are there? ARM on its own probably won’t make it, SoftBank has no clue what to do with ARM, and any of the other major players – Apple, Amazon, Google, Microsoft – would be even worse, since they all have platforms to lock you into, and ARM would be a great asset in that struggle. At least NVIDIA just wants to sell as many chips to as many people as possible, and isn’t that interested in locking you into a platform. That being said – who knows? Often, the downsides to deals like this don’t come out until years later. We’ll just have to wait and see.

Researchers demonstrate in-chip water cooling

As desktop processors were first crossing the gigahertz level, it seemed for a while that there was nowhere to go but up. But clock speed progress eventually ground to a halt, not because of anything to do with the speed itself but rather because of the power requirements and the heat all that power generated. Even with the now-common fans and massive heatsinks, along with some sporadic water cooling, heat remains a limiting factor that often throttles current processors. Part of the problem with liquid cooling solutions is that they’re limited by having to get the heat out of the chip and into the water in the first place. That has led some researchers to consider running the liquid through the chip itself. Now, some researchers from Switzerland have designed the chip and cooling system as a single unit, with on-chip liquid channels placed next to the hottest parts of the chip. The result is an impressive boost in heat-limited performance. This seems like a very logical next step for water cooling and processor cooling in general, but it is far from easy. This article highlights that we are getting closer, though.

Arm announces Cortex-R82: first 64-bit realtime processor

Arm is known for its Cortex range of processors in mobile devices; however, the mainstream Cortex-A series of CPUs, which are used as the primary processing units of devices, aren’t the only CPUs the company offers. Alongside the microcontroller-grade Cortex-M CPU portfolio, Arm also offers the Cortex-R range of “real-time” processors which are used in high-performance real-time applications. The last time we talked about a Cortex-R product was the R8 release back in 2016. Back then, the company proposed the R8 to be extensively used in 5G connectivity solutions inside of modem subsystems. Another large market for the R-series is storage solutions, with the Cortex-R processors being used in HDD and SSD controllers as the main processing elements. Today, Arm is expanding its R-series portfolio by introducing the new Cortex-R82, representing the company’s first 64-bit Armv8-R architecture processor IP, meaning it’s the first 64-bit real-time processor from the company. AnandTech with its usual deep dive into the intricacies of this new lineup from ARM. Obviously these kinds of chips are not something most people actively work with – we tend to merely use them, often without even realising it.

PicoRio Linux RISC-V SBC is an open source alternative to the Raspberry Pi

Linux-capable RISC-V boards do exist, but cost several hundred dollars or more, with the likes of the HiFive Unleashed and the PolarFire SoC Icicle development kit. If only there was a RISC-V board similar to the Raspberry Pi board and with a similar price point… The good news is that the RISC-V International Open Source (RIOS) Laboratory is collaborating with Imagination Technologies to bring the PicoRio RISC-V SBC to market at a price point similar to the Raspberry Pi’s. I’m 100% ready for fully top-to-bottom open source hardware, whether it’s POWER9/POWER10 at the high end or RISC-V at the low end. ARM is a step backwards in this regard compared to x86, and while I doubt RISC-V or POWER will magically displace either of those two, the surge in interest in ARM for more general purpose computing at least opens the door just a tiny little bit.

Inside the HP Nanoprocessor: a high-speed processor that can’t even add

The Nanoprocessor is a mostly-forgotten processor developed by Hewlett-Packard in 1974 as a microcontroller for their products. Strangely, this processor couldn’t even add or subtract, which is probably why it was called a nanoprocessor and not a microprocessor. Despite this limitation, the Nanoprocessor powered numerous Hewlett-Packard devices ranging from interface boards and voltmeters to spectrum analyzers and data capture terminals. The Nanoprocessor’s key features were its low cost and high speed: compared to the contemporary Motorola 6800, the Nanoprocessor cost $15 instead of $360 and was an order of magnitude faster for control tasks. Recently, the six masks used to manufacture the Nanoprocessor were released by Larry Bower, the chip’s designer, revealing details about its design. The composite mask image shows the internal circuitry of the integrated circuit. The blue layer shows the metal on top of the chip, while the green shows the silicon underneath. The black squares around the outside are the 40 pads for connection to the IC’s external pins. I used these masks to reverse-engineer the circuitry of the processor and understand its simple but clever RISC-like design. This is a very detailed and in-depth article, so definitely not for the faint of heart. It’s a little over my head, but I know for a fact there are quite a few among you who love and understand this sort of stuff deeply.
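
A processor with no adder can still get surprisingly far as long as it can increment, decrement, and compare, which is roughly the style of operation a control-oriented chip like the Nanoprocessor leans on. Purely as my own toy illustration (not HP’s firmware or actual instruction set), here is how addition can be synthesized from nothing but increment and decrement:

```c
/* add_by_counting.c - a toy illustration (mine, not HP code) of how
 * addition can be synthesized on a processor that has no adder, using
 * nothing but increment, decrement, and a compare-with-zero branch.
 */
#include <stdint.h>
#include <stdio.h>

static uint8_t add_by_counting(uint8_t a, uint8_t b)
{
    while (b != 0) {   /* "skip if zero" style test */
        a++;           /* increment the accumulator */
        b--;           /* decrement the loop counter */
    }
    return a;          /* wraps at 8 bits, like a real 8-bit register */
}

int main(void)
{
    printf("3 + 4 = %d\n", add_by_counting(3, 4));
    return 0;
}
```

Slow, obviously, but for control tasks like polling instruments and toggling I/O lines you rarely need much more.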

E Ink demos a folding e-reader that can also take notes

Folding smartphones are slowly making their way into the mainstream. Could foldable e-readers be next? The E Ink Corporation, the company behind the digital paper tech found in the majority of e-readers, is trying to make it happen. The firm’s R&D lab has been developing foldable e-ink screens for a while, and its latest prototype clearly demonstrates the idea’s potential. This feels like such a natural fit for an e-reader. A foldable e-reader mimics a real book a lot more accurately than a regular portrait display does, and can potentially reduce the number of times you have to perform a digital page flip. Still nowhere near a real book, of course, but a tiny step closer nonetheless.

Next step in SSD evolution: NVMe zoned namespaces explained

In June we saw an update to the NVMe standard. The update defines a software interface to assist in reading and writing to the drives in a way that matches how SSDs and NAND flash actually work. Instead of emulating the traditional block device model that SSDs inherited from hard drives and earlier storage technologies, the new NVMe Zoned Namespaces optional feature allows SSDs to implement a different storage abstraction over flash memory. This is quite similar to the extensions SAS and SATA have added to accommodate Shingled Magnetic Recording (SMR) hard drives, with a few extras for SSDs. ‘Zoned’ SSDs with this new feature can offer better performance than regular SSDs, with less overprovisioning and less DRAM. The downside is that applications and operating systems have to be updated to support zoned storage, but that work is well underway. Some light reading heading into the weekend.
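
To make the abstraction a bit more concrete, here is a minimal toy model (my own sketch, not the real NVMe ZNS command set or its data structures) of what “zoned” means in practice: each zone only accepts writes at its write pointer, so random overwrites are rejected, and space is reclaimed by resetting a whole zone at once, much like a flash erase block.

```c
/* zones.c - a toy model of zoned storage semantics (not the actual
 * NVMe ZNS interface): writes must land exactly at a zone's write
 * pointer, and space is reclaimed by resetting an entire zone.
 */
#include <stdbool.h>
#include <stdio.h>

#define ZONE_SIZE 256   /* blocks per zone (illustrative value) */

struct zone {
    unsigned write_ptr;         /* next writable block in the zone */
};

/* A write succeeds only if it is sequential, i.e. at the write pointer. */
static bool zone_write(struct zone *z, unsigned block)
{
    if (block != z->write_ptr || z->write_ptr >= ZONE_SIZE)
        return false;           /* random overwrite or full zone: rejected */
    z->write_ptr++;             /* data would be appended here */
    return true;
}

/* Reclaiming space means wiping the whole zone at once. */
static void zone_reset(struct zone *z)
{
    z->write_ptr = 0;
}

int main(void)
{
    struct zone z = { 0 };
    printf("append at 0: %s\n", zone_write(&z, 0) ? "ok" : "rejected");
    printf("append at 1: %s\n", zone_write(&z, 1) ? "ok" : "rejected");
    printf("overwrite 0: %s\n", zone_write(&z, 0) ? "ok" : "rejected");
    zone_reset(&z);
    printf("after reset, append at 0: %s\n",
           zone_write(&z, 0) ? "ok" : "rejected");
    return 0;
}
```

Shifting the burden of sequential writes onto the host is what lets the drive drop most of its flash-translation bookkeeping, which is where the savings in overprovisioning and DRAM come from.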

Nvidia is reportedly in ‘advanced talks’ to buy ARM for more than $32 billion

SoftBank has been rumored to be exploring a sale of ARM — the British chip designer that powers nearly every major mobile processor from companies like Qualcomm, Apple, Samsung, and Huawei — and now, it might have found a buyer. Nvidia is reportedly in “advanced talks” to buy ARM in a deal worth over $32 billion, according to Bloomberg. Nvidia is said to be the only company that’s involved in concrete discussions with SoftBank for the purchase at this time, and a deal could arrive “in the next few weeks,” although nothing is finalized yet. If the deal does go through, it would be one of the largest deals ever in the computer chip business and would likely draw intense regulatory scrutiny. It’s not the worst option.

Upcoming review: something POWERful

I’ve got a very special piece of hardware coming my way for review: a Blackbird Secure Desktop from Raptor Computing Systems. The Blackbird is a desktop PC with an IBM POWER9 processor that is open source from top to bottom – no firmware blobs, no management engines, no proprietary BIOS. As the product page details: The Blackbird™ mainboard is an affordable, owner-controllable, desktop and entry server level mainboard. Built around the IBM POWER9 processor, and leveraging Linux and OpenPOWER™ technology, Blackbird™ allows you to secure your data without sacrificing performance. Designed with a fully owner-controlled CPU domain, you can audit and modify any portion of the open source firmware on the Blackbird™ mainboard, all the way down to the CPU microcode. This is an unprecedented level of access for any modern desktop-class machine, and one that is increasingly needed to assure safety and compliance with new regulations, such as the EU’s GDPR. I don’t yet know what exact specifications my review unit will have, but I’m assuming it’ll be the base model with the 4-core POWER9 processor with SMT4 (4-way multithreading). I do know it’ll come with an AMD Radeon Pro WX4100 LP, which will be the only piece of hardware requiring card-side proprietary firmware (but it’s optional, since the mainboard itself has basic open source graphics capability too). I don’t usually do this, but there’s a first time for everything, so here we go: do any of you have questions about this exotic hardware that you want me to try to answer? Specific things to look into? I’ll also be able to put some questions to Raptor’s CTO, so there’s a lot of opportunity to get some serious answers. I’ll try to take as many suggestions into account as I can. The current estimated delivery date is 6 August, so expect the actual review in late August or early September. Also, I’m sorry for the title pun.

Nvidia reportedly interested in acquiring ARM

Last week it came to light that SoftBank may be trying to sell chipset design firm ARM, and according to a new report from Bloomberg, Nvidia could be interested. Citing the usual “people with knowledge”, the report says Nvidia has approached ARM to court a deal with the Cambridge company. Out of the various options we have, Nvidia might actually not be the worst one. Abusive companies like Apple and Google are clearly the worst possible option, and Intel and AMD already have enough sway over the market as it is. NVIDIA, while not exactly a cute puppy kitten of a company, isn’t so big and domineering that acquiring ARM would be a complete disaster for competition.