The new BeagleV is a little different. It’s a small single-board computer with a RISC-V processor and support for several different GNU/Linux distributions, as well as FreeRTOS. With prices ranging from $120 to $150, the BeagleV is pricier than a Raspberry Pi, but it’s one of the most affordable and versatile options to feature a RISC-V processor. The makers of the BeagleV plan to begin shipping the first boards in April, and you can sign up at the BeagleV website for a chance to buy one of the first. It’s a good sign that RISC-V hardware is getting more accessible – a truly open source ISA is something we need to compete with the proprietary mess that is ARM.
For decades, my perception of USB was that of a technology both simple and reliable. You plug it in and it works. The first two iterations freed PCs from a badly fragmented connector world made of RJ-45 (Ethernet), DA-15 (Joystick), DE-9 (Serial), DIN (PS/2), and DB-25 (Parallel). When USB 3.0 came out, USB-IF had the good idea to color-code its ports. All you had to do was “check for blue” along the chain to get your 5 Gbit/s. Even better, Type-C connectors were introduced around the same time. Not only was the world a faster place, we could now plug things in on one try instead of three. Up to that point, it was a good tech stack. Yet in 2013 things started to become confusing. USB and Thunderbolt have become incredibly complex, and it feels like a lot of this could’ve been avoided with a more sensible naming scheme and clearer, stricter specifications and labeling for cables.
In a major push to give Europe pride of place in the global semiconductor design and fabrication ecosystem, 17 EU member states this week signed a joint declaration committing to work together on developing next-generation, trusted low-power embedded processors and advanced process technologies down to 2nm. The initiative will allocate up to €145bn in funding over the next 2-3 years. Recognizing the foundational nature of embedded processors, security, and leading-edge semiconductor technologies in everything from cars, medical equipment, mobile phones, and networks to environmental monitoring and smart devices and services, the European Commission said it is crucial for key industries to be able to compete globally and to have the capacity to design and produce the most powerful processors. It’s kind of odd that Europe does not command a more prominent position in the semiconductor industry, since the one company that enables the constant progress in this sector isn’t American, Chinese, or Japanese – but Dutch. ASML is by far the world’s largest developer and producer of photolithography systems, the machines companies like Intel and TSMC use to fabricate integrated circuits. Its machines are among the most advanced in the world, and all the advanced, high-end chips from Intel, Apple, AMD, and so on are built using machines from ASML. It seems odd, then, that Europe’s own semiconductor industry lags behind the rest of the world. This investment aims to correct that, and that’s a good thing for all of us, whether you’re European, American, or from anywhere else – this can only increase competition.
Fujifilm has announced that it has set a new world record by creating a magnetic storage tape that can store a staggering 580 terabytes of data. The breakthrough, developed jointly with IBM Research, uses a new magnetic particle called Strontium Ferrite (SrFe), commonly used as a raw material for making motor magnets. Fujifilm has been investigating Strontium Ferrite as a possible successor to Barium Ferrite (BaFe), which is the leading material today. Tape is still, by far, the most efficient and cheapest way to store loads of data that doesn’t need to be accessed regularly. I find tape-based storage media fascinating, and this is right up my alley.
If you were writing reality as a screenplay, and, for some baffling reason, you had to specify what the most common central processing unit used in most phones, game consoles, ATMs, and other innumerable devices was, you’d likely pick one from one of the major manufacturers, like Intel. That state of affairs would make sense and fit in with the world as people understand it; the market dominance of some industry stalwart would raise no eyebrows or any other bits of hair on anyone. But what if, instead, you decided to make those CPUs all hail from a barely-known company from a country usually not the first to come to mind as a global leader in high-tech innovations (well, not since, say, the 1800s)? And what if that CPU owed its existence, at least indirectly, to an educational TV show? Chances are the producers would tell you to dial this script back a bit; come on, take this seriously, already. And yet, somehow, that’s how reality actually is. ARM is one of Britain’s greatest contributions to the technology sector, and those men and women at Acorn, the BBC, and everyone else involved in the BBC Computer Literacy Project were far, far ahead of their time, and saw before a lot of other governments just how important computing was going to be.
The Altra overall is an astounding achievement – the company has managed to meet, and maybe even surpass, all expectations out of this first-generation design. With one fell swoop Ampere managed to position itself as a top competitor in the server CPU market. The Arm server dream is no longer a dream, it’s here today, and it’s real. AnandTech reviews the 80-core ARM server processor from Ampere – two of them in one server, in fact – and comes away incredibly impressed.
I’ve reviewed the most powerful BBC Micro model B disc protection scheme I found, across an audit of most of the copy protected discs released for the machine. It’s clever in that you don’t need specialized hardware to create the disc, or to read the disc. But you’re going to struggle to duplicate the disc. Copy protection schemes from the ’80s and early ’90s are fascinating, and this one is no exception.
The new CPU configuration gives the new SoC a good uplift in performance, although it’s admittedly less of a jump than I had hoped for this generation of Cortex-X1 designs, and I do think Qualcomm won’t be able to retain the performance crown for this generation of Android SoCs, with the performance gap against Apple’s SoCs also narrowing less than we had hoped for. On the GPU side, the new 35% performance uplift is extremely impressive. If Qualcomm is really able to maintain similar power figures this generation, it should allow the Snapdragon 888 to retake the performance crown in mobile, and actually retain it for the majority of 2021. At this point it feels like we’re far beyond the point of diminishing returns for smartphones, but with ARM moving to general-purpose computers, there’s still a lot of performance gains to be made. I want a Linux-based competitor to Apple’s M1-based Macs, as Linux is perfectly suited for architecture transitions like this.
Ars Technica summarises and looks at the various claims made by Micro Magic about their RISC-V core. Micro Magic Inc.—a small electronic design firm in Sunnyvale, California—has produced a prototype CPU that is several times more efficient than world-leading competitors, while retaining reasonable raw performance. We first noticed Micro Magic’s claims earlier this week, when EE Times reported on the company’s new prototype CPU, which appears to be the fastest RISC-V CPU in the world. Micro Magic adviser Andy Huang claimed the CPU could produce 13,000 CoreMarks (more on that later) at 5GHz and 1.1V while also putting out 11,000 CoreMarks at 4.25GHz—the latter all while consuming only 200mW. Huang demonstrated the CPU—running on an Odroid board—to EE Times at 4.327GHz/0.8V and 5.19GHz/1.1V. Later the same week, Micro Magic announced the same CPU could produce over 8,000 CoreMarks at 3GHz while consuming only 69mW of power. I have some major reservations about all of these claims, mostly because of the lack of benchmarks that more accurately track real-world usage. Extraordinary claims require extraordinary evidence, and I feel like some vague photos just don’t do the trick of convincing me. Then again, last time I said anything about an upcoming processor, I was off by a million miles, so what do I know?
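The efficiency claims are easier to put in perspective when normalized to CoreMarks per watt. A quick back-of-the-envelope calculation using only the figures quoted above – and keep in mind these are Micro Magic’s own claims, not verified measurements:

```python
# Normalize Micro Magic's claimed figures to CoreMarks per watt.
# All numbers are the vendor's claims as quoted above, not
# independently verified measurements.
claims = [
    ("4.25 GHz", 11_000, 0.200),  # 11,000 CoreMarks at 200 mW
    ("3.00 GHz", 8_000, 0.069),   # 8,000 CoreMarks at 69 mW
]

for label, coremarks, watts in claims:
    print(f"{label}: {coremarks:,} CoreMarks, "
          f"{coremarks / watts:,.0f} CoreMarks/W")
```

That works out to roughly 55,000 and 116,000 CoreMarks per watt respectively – extraordinary numbers, which is precisely why benchmarks more representative of real-world workloads would be needed before taking them at face value.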
A few weeks ago, we linked to an article that went in-depth into UEFI, and today, we have a follow-up. But the recent activity reminded me that there was one thing I couldn’t figure out how to do at the time: Enumerate all the available UEFI variables from within Windows. If you remember, Windows has API calls to get and set UEFI variable values, but not to enumerate them. So I started doing some more research to see if there was any way to do that – it’s obviously possible as the UEFI specs describe it, a UEFI shell can easily do it, and Linux does it (via a file system). My research took me to a place I wouldn’t have expected. We can always go deeper.
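As a taste of the Linux side mentioned above: efivarfs exposes each UEFI variable as a file named `Name-VendorGUID` under /sys/firmware/efi/efivars, so enumeration really is just a directory listing. A minimal sketch – the path is the conventional mount point, and on a non-UEFI system the directory simply won’t exist:

```python
import os
import re

# Conventional efivarfs mount point on most Linux distributions.
EFIVARS = "/sys/firmware/efi/efivars"

def parse_efivar_name(filename):
    """Split an efivarfs entry ('Name-VendorGUID') into its two parts.

    The vendor GUID is always the last 36 characters; variable names
    may themselves contain hyphens, so anchor the match at the end.
    """
    m = re.fullmatch(r"(.+)-([0-9a-fA-F-]{36})", filename)
    if not m:
        raise ValueError(f"not an efivarfs entry: {filename}")
    return m.group(1), m.group(2)

def list_uefi_variables():
    """Enumerate UEFI variables as (name, guid) pairs.

    Returns an empty list on systems without efivarfs (e.g. BIOS boot).
    """
    if not os.path.isdir(EFIVARS):
        return []
    return [parse_efivar_name(f) for f in os.listdir(EFIVARS)]
```

For example, the common boot-order variable shows up as `BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c`, the GUID being the EFI global variable namespace. Windows, as the article notes, gives you get/set APIs but no equivalent of this listing.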
Earlier this year, we reviewed System76’s Lemur Pro, a laptop designed for portability and long battery life. This time around, we’re going entirely the opposite direction with the System76 Bonobo WS – a mobile workstation that looks like a laptop (if you squint), but packs some of the fastest desktop-grade hardware available on the market.

Specifications

System76 sent us the latest version of the Bonobo WS, with some truly bonkers specifications for what is, technically, a laptop (sort of), at a total price of $4315.22. This mobile workstation comes with an Intel Core i9-10900K, which has 10 cores and 20 threads and runs at up to 5.3 GHz – and this is not a constrained mobile chip, but the full desktop processor. It’s paired with an 8GB RTX 2080 Super graphics card – which, again, is the desktop part, not the mobile version. It has 32 GB of RAM configured in dual-channel at 3200 MHz. To top it off, I configured it with a 250 GB NVMe drive for the operating system, and an additional 1 TB NVMe drive for storage and other stuff. These drives have theoretical sequential read and write speeds of 3500 MB/s and 2300 MB/s, respectively. The Bonobo WS comes with a 17.3″ display, and I opted for the 1080p 144Hz version, since the 4K option was not yet available at the time of setting up the review unit. The 4K option, which I would normally recommend on a display of this size, might not make a lot of sense here, since most people interested in a niche mobile workstation like this will most likely be using external displays anyway, making the splurge for the 4K option a bit moot, especially since it’s a mere 60 Hz panel. There are a few other specifications we need to mention – specifically the weight and battery life of a massive computer like this one. The base weight is roughly 3.8 kg, and its dimensions are 43.43 × 399.03 × 319.02 mm (height × width × depth).
While this machine can technically be classified as a laptop, the mobile workstation moniker is a far more apt description. This is not a machine for carrying from classroom to classroom – this is a machine that most users will use in just two, possibly three, places, and won’t move very often. Another reason for that is battery life. A machine with this much power requires a lot of juice, and the 97 Wh battery isn’t going to give you a lot of unplugged time to work. You’ll spend all of your time plugged into not one, but two power sockets, as this machine requires two huge power bricks. It even comes with an adorable rubber thing that ties the two power bricks together in a way that maintains some space between them for cooling and safety purposes. So not only do you have to lug around the massive machine itself, but also the two giant power bricks. As this is a mobile workstation, the ports situation is excellent. It has a USB 3.2 Gen 2×2 (ugh)/Thunderbolt 3 port (type C), 3 USB 3.2 Gen 2 (type A) ports, and a MicroSD card slot. For your external display needs, we’ve got a full-size HDMI port, 2 mini DisplayPorts (1.4), and a DisplayPort (1.4) over USB type C. Furthermore, there’s an Ethernet port, the usual audio jacks (microphone and headphones, one of which also has an optical connection), and the obligatory Kensington lock. Of course, there’s wireless networking support through an Intel dual-band WiFi 6 chip, as well as Bluetooth support.

Hardware

The hardware of this machine is entirely dictated by its internals, since cramming this much desktop power into a computer that weighs less than 4 kg doesn’t leave you with much room to mess around. The entire design is dictated by the required cooling, and there are vents all over the place. This is not a pretty or attractive machine – but it doesn’t need to be.
People who need this much mobile power don’t care about what it looks like, how thin it is, or how aluminium the aluminium is – they need this power to be properly cooled, and if that means more thickness or more vents, then please don’t skimp. If you care about form over function – which is an entirely legitimate criterion, by the way, and don’t let anybody tell you otherwise – there are other devices to choose from. While the laptop does have some RGB flourishes here and there, they’re not overly present or distracting, and the ability to switch between several colours for the keyboard lighting is very nice to have, since I find the generic white light most laptops use to not always be ideal. You can cycle through the various lighting options with a key combination. The keyboard has a little bit more key travel than I’m used to from most laptops, probably owing to its chunky size leaving more room for the keys to travel. The keys have a bit of wobble, but not enough to cause me to miss keystrokes. I am not a fan of the font used on the keyboard, but that’s a mere matter of taste. The trackpad is decent, feels fine enough, and works great with Linux (obviously). In what I first thought was a blast from the past, the laptop has physical buttons for right and left click underneath the trackpad. However, after a little bit of use, I realised just how nice it was to have actual, physical buttons, and not a diving board or – god forbid – a trackpad that only supports tapping. Of course, it’s not nearly as good as Apple’s force touch trackpad that simulates an eerily realistic click wherever you press, but it does the job just fine. That being said, though, much like with the display, I doubt many people who need a machine like this will really care. They’ll most likely not only have
Last month’s news that IBM would do a Hewlett-Packard and divide into two—an IT consultancy and a buzzword compliance unit—marks the end of “business as usual” for yet another of the great workstation companies. There really isn’t much left when it comes to proper workstations. All the major players have left the market, been shut down, or been bought out (and shut down) – Sun, IBM, SGI, and countless others. Of course, some of them may still make workstations in the sense of powerful Xeon machines, but workstations in the sense of top-to-bottom custom architecture, like SGI’s crossbar switch technology and all the custom architectures us mere mortals couldn’t afford, are no longer being made in large numbers. And it shows. Go on eBay to try and get your hands on an old, used SGI or Sun workstation, and be prepared to pay through the nose for highly outdated and effectively useless hardware. The number of these machines still on the used market is dwindling, and with no new machines entering the used market, it’s going to become ever harder for us enthusiasts to get our hands on these sorts of exciting machines.
In 1978, a memory chip stored just 16 kilobits of data. To make a 32-kilobit memory chip, Mostek came up with the idea of putting two 16K chips onto a carrier the size of a standard integrated circuit, creating the first memory module, the MK4332 “RAM-pak”. This module allowed computer manufacturers to double the density of their memory systems, and by 1982, Mostek had sold over 3 million modules. The Apple III is the best-known system that used these memory modules. A deep dive into these interesting chips.
System76 recently unveiled their latest entirely in-house Linux workstation, the Thelio Mega – a quad-GPU Threadripper monster with a custom case and cooling solution. System76’s CEO and founder Carl Richell penned a blog post about the design process of the Thelio Mega, including some performance, temperature, and noise comparisons. Early this year, we set off to engineer our workstation version of a Le Mans Hypercar. It started with a challenge: Engineer a quad-GPU workstation that doesn’t thermal throttle any of the GPUs. Three GPUs is pretty easy. Stack the fourth one in there and it’s a completely different animal. Months of work and thousands of engineering hours later we accomplished our goal. Every detail was scrutinized. Every part is of the highest quality. And new factory capabilities, like milling, enabled us to introduce unique solutions to design challenges. The result is Thelio Mega. A compact, high-performance quad-GPU system that’s quiet enough to sit on your desk. I’m currently wrapping up a review of the Bonobo WS, and if at all possible, I’ll see if I can get a Thelio Mega for review, too (desktops like this, which are usually custom-built for each customer, are a bit harder to get for reviews).
So what’s the topic? Something that I started talking about almost 10 years ago, the Unified Extensible Firmware Interface (UEFI). Back then, it was more of a warning: the way you deploy Windows is going to change. Now, it’s a way of life (and fortunately, it no longer sucks like it did back in 2010 when we first started working with it). I don’t want to rehash the “why’s” behind UEFI because frankly, you no longer have much of a choice: all new Windows 10 devices ship with UEFI enabled by default (and if you are turning it off, shame on you). Instead, I want to focus much more on how it works and what’s going on behind the scenes. A really in-depth article about UEFI – you have to be a certain kind of person to enjoy stuff like this. The article’s about a year old, but still entirely relevant.
Discussion of the next generation of DDR memory has been aflutter in recent months as manufacturers have been showcasing a wide variety of test vehicles ahead of a full product launch. Platforms that plan to use DDR5 are also fast approaching, with an expected debut on the enterprise side before slowly trickling down to consumer. As with all these things, development comes in stages: memory controllers, interfaces, electrical equivalent testing IP, and modules. It’s that final stage that SK Hynix is launching today, or at least the chips that go into these modules. We’re gearing up for a return of the days when, before buying a new motherboard or new memory, you’d better make sure you’ve picked the right DDR version.
Lots of people were excited by the news over Hangover’s port to ppc64le, and while there’s a long way to go, the fact it exists is a definite step forward to improving the workstation experience on OpenPOWER. Except, of course, that many folks (including your humble author) can’t run it: Hangover currently requires a kernel with a 4K memory page size, which is the page size of the majority of extant systems (certainly x86_64, which only offers a 4K page size). ppc64 and ppc64le can certainly run on a 4K page size and some distributions do, yet the two probably most common distributions OpenPOWER users run — Debian and Fedora — default to a 64K page size. This article explains why.
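If you’re unsure which page size your own kernel was built with, checking takes one line – a minimal sketch using Python’s standard library (running `getconf PAGE_SIZE` in a shell gives the same answer):

```python
import os

def page_size_kib() -> int:
    """Return the kernel's base page size in KiB.

    x86_64 kernels effectively always report 4; on ppc64le you'll see
    4 or 64 depending on how the distribution configured its kernel
    (Debian and Fedora default to 64, which is what trips up Hangover).
    """
    return os.sysconf("SC_PAGE_SIZE") // 1024

if __name__ == "__main__":
    print(f"Base page size: {page_size_kib()} KiB")
```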
This article provides a subjective history of POWER and open source from the viewpoint of an open source developer, outlines a few trends and conclusions, and previews what the future will bring. It is based on my talk at the annual OpenPOWER North America Summit, in which I aimed to show the importance of desktop/workstation-class hardware available to developers. In this article, I will cover a few additional topics, including cloud resources available to POWER developers, as well as a glimpse into the products and technologies under development. The biggest problem for POWER that I can see at the moment is that the kind of POWER processors you want – little endian – are expensive. This precludes more affordable desktops from entering the market, let alone laptops. Big endian POWER processors aren’t exactly future-proof, as Linux distributions are dropping support for them. It’s a difficult situation, but I don’t think there’s much that can be done about it.
The Zuse Z4 is considered the oldest preserved computer in the world. Manufactured in 1945 and overhauled and expanded in 1949/1950, the relay machine was in operation on loan at ETH Zurich from 1950 to 1955. Today the huge digital computer is located in the Deutsches Museum in Munich. The operating instructions for the Z4 were lost for a long time. In 1950, ETH Zurich was the only university in continental Europe with a functioning tape-controlled computer. From the 1940s, only one other computer survived: the Csirac vacuum tube computer (1949). It is in the Melbourne Museum, Carlton, Victoria, Australia. Evelyn Boesch from the ETH Zurich University archives let me know in early March 2020 that her father René Boesch (born in 1929), who had been working under Manfred Rauscher at the Institute for Aircraft Statics and Aircraft Construction at ETH Zurich since 1956, had kept rare historical documents. Boesch’s first employment was with the Swiss Aeronautical Engineering Association, which was housed at and affiliated with the above-mentioned institute. The research revealed that the documents included a user manual for the Z4 and notes on flutter calculations. What an astonishing discovery. Stories like this make me wonder just how much rare, valuable, irreplaceable hardware, software, and documentation is rotting away in old attics, waiting to be thrown into a dumpster after someone’s death.
Techies hailed USB-C as the future of cables when it hit the mainstream market with Apple’s single-port MacBook in 2015. It was a huge improvement over the previous generation of USB, allowing for many different types of functionality — charging, connecting to an external display, etc. — in one simple cord, all without having a “right side up” like its predecessor. Five years later, USB-C is near-ubiquitous: Almost every modern laptop and smartphone has at least one USB-C port, with the exception of the iPhone, which still uses Apple’s proprietary Lightning port. For all its improvements, USB-C has become a mess of tangled standards — a nightmare for consumers to navigate despite the initial promise of simplicity. The charging situation with USB-C, especially, can be a nightmare. I honestly have no clue which of my USB-C devices can fast-charge with which charger and which cable, and I just keep plugging stuff in until it works. Add in all my fiancée’s devices, and it’s… messy.