It’s a hardware day today, and since AnandTech is the most authoritative source on stuff like this, we’ve got more from them. Arm announced its next big microarchitecture, which will find its way to flagship smartphones soon. Overall, the Cortex-A77 announcement isn’t quite as big a change as what we saw last year with the A76, nor is it as big a change as today’s announcement of Arm’s new Valhall GPU architecture and G77 GPU IP. However, what Arm managed to achieve with the A77 is continued execution of its roadmap, which is extremely important in the competitive landscape. The A76 delivered on all of Arm’s promises and ended up being an extremely performant core, all while remaining astonishingly efficient and having a clear density lead over the competition. Arm’s major clients are still heavily focused on having the best PPA in their products, and Arm delivers in this regard. The one big surprise about the A77 is that its floating point performance boost of 30-35% is quite a lot higher than I had expected of the core, and in the mobile space, web browsing is the killer app that happens to be floating point heavy, so I’m looking forward to seeing how future SoCs with the A77 will perform. As linked above, the company also announced its next-generation mobile GPU architecture.
In 2017, we saw several new MCUs hit the market, as well as general trends continuing in the industry: the migration to open-source, cross-platform development environments and toolchains; new code-generator tools that integrate seamlessly (or not so seamlessly…) into IDEs; and, most notably, the continued invasion of ARM Cortex-M0+ parts into the 8-bit space. I wanted to take a quick pulse of the industry to see where everything is — and what I’ve been missing while backed into my corner of DigiKey’s web site. It’s time for a good ol’ microcontroller shoot-out.
Based on technology developed by Hewlett-Packard, Microsoft’s IntelliMouse Explorer arrived with a price tag that could be justified by even cash-strapped students like me. Even better, the underside of the mouse was completely sealed, preventing even the tiniest speck of dirt from penetrating its insides, and it improved on its predecessors by working on almost any surface that wasn’t too reflective. I remember getting back to my dorm room and plugging in the Explorer for the first time, wondering who had a rig fancy enough to use the included PS/2 to USB adapter. There were undoubtedly a few driver installation hiccups along the way, but once Windows 98 was happy, I fired up Photoshop and strapped in for the smoothest mouse experience I’d ever had. Problem solved. The changeover from ball mice to optical mice is something few will ever rave about, but I remember it as one of the biggest changes in computer use I’ve personally ever experienced. Everything about optical mice is better than ball mice, and using an optical mouse for the first time roughly two decades ago was a complete game-changer.
When OSNews covered the RISC-V architecture recently, I was struck by my own lack of excitement. I looked into it, and the project looks intriguing, but it didn’t move me on an emotional level like a new CPU architecture development would have done many years ago. I think it’s due to a change in myself as I have got older. When I first got into computers, in the early 80s, there was a vibrant environment of competing designs with different approaches. This tended to foster, in the enthusiast, an interest in what was inside the box, including the CPU architecture. Jump forwards to the current era, and the computer market is largely homogenized to a single approach for each class of computing device, which means there is less to get excited about in terms of CPU architectures in general. I want to look at what brought about this change in myself, and maybe these thoughts will resonate with some of you.
From personal experience, I am aware that heat issues on laptops are often caused by a poor application of the stock thermal paste (also known as “thermal interface material” or TIM), provided that the cooling system is otherwise functioning. The reason is simple: the thermal paste – as the name suggests – is supposed to facilitate the transfer of heat from the CPU/GPU to the heatsink. This only works efficiently, though, if a very thin layer of thermal paste is applied between CPU and heatsink in a way that minimises the chance of creating “air bubbles” (air is a poor thermal conductor). The problem is that the stock thermal paste is very often applied at the factory in ridiculously large amounts that spread out beyond the die of the CPU and most certainly achieve the opposite effect, slowing down instead of facilitating the transfer of heat from CPU to heatsink. Sadly, Apple doesn’t seem to be any different from other manufacturers in this respect, despite the higher prices and the generally wonderful design and construction quality. Plus, the stock thermal paste used by some manufacturers is often quite cheap, and not based on a very efficient thermally conductive material. This is a very common problem, and one that is actually fairly easily rectified if you have even a modicum of understanding of how a screwdriver works. I’m planning on replacing the stock thermal paste on my XPS 13 9370 just to see if it will make a difference. I run Linux on it – KDE Neon – and Linux is slightly less efficient at decoding video than Windows, causing more fan spin-up. There’s a very real chance replacing the thermal paste will give me just enough thermal headroom to address this issue.
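To get a rough sense of why air pockets matter so much, here’s a back-of-the-envelope comparison of the thermal resistance of a thin paste layer versus an air gap of the same thickness. The conductivity values, die size, and layer thickness are ballpark assumptions of mine, not measurements of any specific laptop or paste:

```python
# Rough thermal-resistance comparison: paste layer vs. air gap.
# Assumed ballpark values: a decent paste conducts around 5 W/(m*K),
# while still air conducts only about 0.026 W/(m*K).

def thermal_resistance(thickness_m, conductivity_w_mk, area_m2):
    """R = t / (k * A), in kelvin per watt."""
    return thickness_m / (conductivity_w_mk * area_m2)

DIE_AREA = 0.01 * 0.01   # an assumed 10 mm x 10 mm die, in m^2
LAYER = 50e-6            # an assumed 50 micron layer/gap

r_paste = thermal_resistance(LAYER, 5.0, DIE_AREA)
r_air = thermal_resistance(LAYER, 0.026, DIE_AREA)

print(f"paste: {r_paste:.2f} K/W, air: {r_air:.2f} K/W")
print(f"an air gap is roughly {r_air / r_paste:.0f}x worse than paste")
```

Even with these rough numbers, an air pocket resists heat flow around two orders of magnitude more than paste of the same thickness, which is why a thin, even, bubble-free application matters far more than the sheer quantity of paste.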
Here’s the interesting part. This motherboard doesn’t officially support 16 GB of RAM. The specs on the page I linked indicate that it supports a maximum of 8 GB. It only has 2 slots, so I had a suspicion that 8 GB sticks just weren’t as common back when this motherboard first came out. I decided to try anyway. In a lot of cases, motherboards do support more RAM than the manufacturer officially claims to support. I made sure the BIOS was completely updated (version 946F1P06) and put in my two 8 gig sticks. Then, I booted it up into my Ubuntu 16.04 install and everything worked perfectly. I decided that my theory about the motherboard actually supporting more RAM than the documentation claimed was correct and forgot about it. I enjoyed having all the extra RAM to work with and was happy that my gamble paid off. Then, a few months later, I tried to boot into Windows 10. I mostly use this computer in Linux. I only occasionally need to boot into Windows to check something out. That’s when the fun really started. A deeply technical exploration into this particular issue, and definitely worth a read.
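If you try a similar experiment with more RAM than the spec sheet admits to, the quickest sanity check is to confirm how much memory the kernel actually sees. Here’s a minimal sketch for Linux (it parses `/proc/meminfo`, so it won’t work elsewhere; the function name is my own):

```python
# Report the total RAM visible to the Linux kernel by parsing /proc/meminfo.
# Linux-only; other operating systems expose this information differently.

def total_ram_kib(path="/proc/meminfo"):
    """Return the MemTotal value in KiB as reported by the kernel."""
    with open(path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                # The line looks like: "MemTotal:       16314128 kB"
                return int(line.split()[1])
    raise RuntimeError("MemTotal not found in " + path)

if __name__ == "__main__":
    kib = total_ram_kib()
    print(f"Kernel sees {kib / 1024 / 1024:.1f} GiB of RAM")
```

Note that the kernel seeing all 16 GB at boot, as in the story above, doesn’t guarantee every configuration is stable, which is exactly where the Windows 10 trouble later comes in.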
A few years ago, I was out at the W6TRW swap meet at the parking lot of Northrop Grumman in Redondo Beach, California. Tucked away between TVs shaped like polar bears and an infinite variety of cell phone chargers and wall warts was a small wooden box. There was a latch, a wooden handle, and on the side a DB-25 port. There was a switch for half duplex and full duplex. I knew what this was. This was a modem. A wooden modem. Specifically, a Livermore Data Systems acoustically coupled modem from 1965 or thereabouts. Turn down the lights, close the curtains, and put on some Barry White. You’re going to need it.
I became fascinated by what is happening in the RISC-V space just by seeing it pop up every now and then in my Twitter feed. Since I am currently unemployed I have a lot of time and autonomy to dig into whatever I wish. RISC-V is a new instruction set architecture. To understand RISC-V, we must first dig into what an instruction set architecture is. This is my learning technique. I bounce from one thing to another, recursively digging deeper as I learn more. Some more RISC-V information. I wouldn’t be surprised to see more and more RISC-V articles and even hardware to buy over the coming years.
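For a concrete taste of what an instruction set architecture actually specifies, here’s a toy sketch that decodes one RISC-V instruction format in Python. The field layout follows the RV32I base ISA’s I-type encoding (used by ADDI, among others); the helper name is my own invention:

```python
# Decode a RISC-V I-type instruction word.
# Field layout per the RV32I base ISA:
#   imm[31:20] | rs1[19:15] | funct3[14:12] | rd[11:7] | opcode[6:0]

def decode_itype(word):
    opcode = word & 0x7F
    rd = (word >> 7) & 0x1F
    funct3 = (word >> 12) & 0x07
    rs1 = (word >> 15) & 0x1F
    imm = word >> 20
    if imm & 0x800:          # the 12-bit immediate is sign-extended
        imm -= 0x1000
    return opcode, rd, funct3, rs1, imm

# 0x00500093 is the standard encoding of "addi x1, x0, 5":
# ADDI has opcode 0x13 and funct3 0.
opcode, rd, funct3, rs1, imm = decode_itype(0x00500093)
print(f"opcode={opcode:#04x} rd=x{rd} rs1=x{rs1} imm={imm}")
```

The ISA is exactly this contract: which bits mean what, and what each instruction must do, with everything below that level left to the hardware implementer.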
When Wave Computing acquired MIPS, “going open source” was the plan Wave’s CEO Derek Meyer had in mind. But Meyer, a long-time MIPS veteran, couldn’t casually mention his plan then: Wave was hardly ready with the solid infrastructure it needed to support a legion of hardware developers interested in coming to the MIPS open-source community. Saying “go open source” is easy. Pulling it off has meant a huge shift for MIPS, long accustomed to the traditional IP licensing business. MIPS will compete with and exist alongside RISC-V. The future of truly open source hardware is getting more and more interesting.
In recent years, advances in AI have produced algorithms for everything from image recognition to instantaneous translation. But when it comes to applying these advances in the real world, we’re only just getting started. A new product from Nvidia announced today at GTC — a $99 AI computer called the Jetson Nano — should help speed that process. The Nano is the latest in Nvidia’s line of Jetson embedded computing boards, used to provide the brains for robots and other AI-powered devices. Plug one of these into your latest creation, and it’ll be able to handle tasks like object recognition and autonomous navigation without relying on cloud processing power. Fascinating little device that could be a great boon for the maker community.
Last year I finally bought a Kryoflux, unfortunately in the middle of moving house. Now I’m finally able to use it beyond verifying that it’s not completely broken. After imaging a few dozen floppies, I can say one thing–Kryoflux is surprisingly difficult to use with PC 5¼″ disks. There is a distinct impression that Kryoflux was designed to deal primarily with Amiga and C64 floppies, and although PC floppy formats present absolutely no difficulty for the Kryoflux hardware as such, using the software for archiving standard PC 5¼″ media is very far from simple. Let’s start with the easy part. Imaging 3½″ media is relatively simple because PC 3½″ drives are straightforward (well, let’s omit the special Japanese 1.6M media). 3½″ drives always rotate at 300 RPM and usually automatically handle media density based on the floppy itself. But if everything were easy, life wouldn’t be very interesting. Preserving the data on these ancient floppies is crucial, and it’s great to see various types of specialised hardware exist just for this purpose.
To satisfy the true geeks, Western Digital organized a SweRV Deep Dive at the Bay Area RISC-V Meetup. The meetup was well organized (free food!) and attended by roughly 100 people. A Webex recording of this meetup is currently still available here. (The first 53 minutes are empty. The meat of the presentation starts at the 53min30 mark.) Zvonimir Bandic, Senior Director of Next Generation Platform Technologies Department at Western Digital, gave an excellent presentation, well paced, with little marketing fluff and sufficient technical detail to pique my interest to dive deeper into the specifics of the core. I highly recommend watching the whole thing. There was also a second presentation about instruction tracing which I won’t talk about in this post. In this blog post, I’ll go through the presentation and add some extra details that I noted down at the meetup or gathered while going through the SweRV source code on GitHub and the RISC-V SweRV EH1 Programmer’s Reference. This goes way beyond my comfort level.
Ars Technica reports: Fulfilling its 2017 promise to make Thunderbolt 3 royalty-free, Intel has given the specification for its high-speed interconnect to the USB Implementers Forum (USB-IF), the industry group that develops the USB specification. The USB-IF has taken the spec and will use it to form the basis of USB4, the next iteration of USB following USB 3.2. Yes, it’s called USB4, which will exist alongside USB 3.2 Gen 1, USB 3.2 Gen 2, and USB 3.2 Gen 2×2. I don’t even know what to say.
USB 3.2, which doubles the maximum speed of a USB connection to 20Gb/s, is likely to materialize in systems later this year. In preparation for this, the USB-IF—the industry group that together develops the various USB specifications—has announced the branding and naming that the new revision is going to use, and… It’s awful. I won’t spoil it for you. It’s really, really bad.
The Opportunity Rover, also known as the Mars Exploration Rover B (or MER-1), was finally declared at end of mission today, after 5,352 Mars solar days, when NASA was unable to re-establish contact. It had apparently been knocked offline by a dust storm and was unable to restart, either due to power loss or some other catastrophic failure. Originally intended for a 90 Mars solar day mission, its mission became almost 60 times longer than anticipated, and it traveled nearly 30 miles on the surface in total. Spirit, or MER-2, its sister unit, had previously reached end of mission in 2010. And why would we report that here? Because Opportunity and Spirit were both in fact powered by the POWER1, or more accurately a 20MHz BAE RAD6000, a radiation-hardened version of the original IBM RISC Single Chip CPU and the indirect ancestor of the PowerPC 601. There are a lot of POWER chips in space, both the original RAD6000 and its successor the RAD750, a radiation-hardened version of the PowerPC G3. What an awesome little tidbit of information about these Mars rovers, which I’m assuming everybody holds in high regard as excellent examples of human ingenuity and engineering.
While it’s clear that the most significant opportunities for RISC-V will be in democratising custom silicon for accelerating specific tasks and enabling new applications — and it’s already driving a renaissance in novel computer architectures, e.g. for IoT and edge processing — one question that people cannot help but ask is: so when can I have a RISC-V PC? The answer is, right now. The result is a RISC-V powered system that can be used as a desktop computer, and thanks to the efforts of Atish Patra at Western Digital, installing Fedora Linux is a breeze. This is obviously not exactly commodity hardware, but it does show that the ingredients are there, and the combination provides a powerful development platform for anyone who might want to prototype a RISC-V PC — or indeed a vast array of other applications which stand to benefit from the open ISA. This has me very excited. Over the last few decades, virtually all competitors to x86 slowly died out – SPARC, PowerPC, MIPS, etc. – which turned desktop computing hardware into a rather boring affair. Recently we’ve been seeing more and more ARM desktop boards, and now it seems RISC-V is starting to dabble in this area too. Great news.
One of the things about having a pretty nice work laptop with a screen that’s large enough to have more than one real window at once is that I actually use it, and I use it with multiple windows, and that means that I need to use the mouse. I like computer mice in general so I don’t object to this, but like most modern laptops my Dell XPS 13 doesn’t have a mouse, it has a trackpad (or touchpad, take your pick). You can use a modern touchpad as a mouse, but over my time in using the XPS 13 I’ve come to understand (rather viscerally) that a touchpad is not a mouse and trying to act as if it was is not a good idea. There are some things that a touchpad makes easy and natural that aren’t very natural on a mouse, and a fair number of things that are natural on a mouse but don’t work very well on a touchpad (at least for me; they might for people who are more experienced with touchpads). Chris Siebenmann makes some good points regarding touchpads here. Despite the fact that touchpads on Windows and Linux have gotten better over the years, they’re still not nearly as good as Apple’s, and will never beat a mouse. I feel like mouse input on laptops is ripe for serious innovation.
The CADR microprocessor is a general purpose processor designed for convenient emulation of complex order codes, particularly those involving stacks and pointer manipulation. It is the central processor in the LISP machine project, where it interprets the bit-efficient 16-bit order code produced by the LISP machine compiler. (The terms “LISP machine” and “CADR machine” are sometimes confused. In this document, the CADR machine is a particular design of microprocessor, while the LISP machine is the CADR machine plus the microcode which interprets the LISP machine order code.) I’ll admit I have no idea what anything in this long, technical description means, but I’m pretty sure this is right up many readers’ alleys.
LG is going several steps further by making the TV go away completely whenever you’re not watching. It drops slowly and very steadily into the base and, with the push of a button, will rise back up in 10 seconds or so. It all happens rather quietly, too. You can’t see the actual “roll” when the TV is closed in, sadly; a transparent base would’ve been great for us nerds to see what’s happening inside the base as the TV comes in or unfurls, but the white is certainly a little more stylish. Functionally, LG tells me it hasn’t made many changes to the way the LG Display prototype worked aside from enhancing the base. I didn’t get to ask about durability testing — how many times the OLED TV R has been tested to go up and down, for example — but that’s something I’m hoping to get an answer to. We don’t really talk about TVs all that much on OSNews – it’s generally a boring industry – but this rollable display technology is just plain cool.
Without question, 2018 was the year RISC-V genuinely began to build momentum among chip architects hungry for open-source instruction sets. That was then.
By 2019, RISC-V won’t be the only game in town.
Wave Computing announced Monday that it is open-sourcing MIPS, with the MIPS Instruction Set Architecture (ISA) and MIPS’ latest core, R6, available in the first quarter of 2019.
Good news, and it makes me wonder: will we ever see a time when x86 and x86-64 are open source? I am definitely not well-versed enough in these matters to judge just how important the closed-source nature of the x86 ISA really is to Intel and AMD, but it seems like something that will never happen.