Hardware Archive
What if you have a very modern machine that is entirely UEFI-only, meaning it has no compatibility support module and thus no way of enabling a legacy BIOS mode? Well, install a CSM as an EFI application, of course!

CSMWrap is an EFI application designed to be a drop-in solution to enable legacy BIOS booting on modern UEFI-only (class 3) systems. It achieves this by wrapping a Compatibility Support Module (CSM) build of the SeaBIOS project as an out-of-firmware EFI application, effectively creating a compatibility layer for traditional PC BIOS operation. ↫ CSMWrap’s GitHub page

The need for this may not be immediately obvious, but here’s the problem: if you want to run an older operating system that absolutely requires a traditional BIOS on a modern machine that only has UEFI without any CSM options (a class 3 machine), you won’t be able to boot said operating system. CSMWrap is a possible solution, as it leverages innate EFI capabilities to run a CSM as an EFI application, thereby adding the CSM functionality back in. All you need to do is drop CSMWrap into /efi/boot on the same drive as the operating system that needs BIOS to boot, and UEFI will list it as a bootable operating system.

It does come with some limitations, however. For instance, one logical core of your processor will be taken up by CSMWrap and will be entirely unavailable to the booted BIOS-based operating system. In other words, you’re going to need a processor with more than one logical processor (even a single-core machine with hyperthreading will work). It’s also suggested to add a legacy-capable video card if you’re using an operating system that doesn’t support VESA BIOS extensions (e.g. anything older than NT).

This is an incredibly neat idea, and it even comes with advantages over built-in CSMs, since many of those are untested and riddled with issues. CSMWrap uses SeaBIOS, which is properly tested and generally a much better BIOS than whatever native CSMs contain.
All in all, a great project.
What’s the scroll lock key actually for? Scroll Lock was reportedly specifically added for spreadsheets, and it solved a very specific problem: before mice and trackpads, and before fast graphic cards, moving through a spreadsheet was a nightmare. Just like Caps Lock flipped the meaning of letter keys, and Num Lock that of the numeric keypad keys, Scroll Lock attempted to fix scrolling by changing the nature of the arrow keys. ↫ Marcin Wichary I never really put much thought into the scroll lock key, and I always just assumed that it would, you know, lock scrolling. I figured that in the DOS era, wherein the key originated, it stopped DOS from scrolling, keeping the current output of your DOS commands on the screen until you unlocked scrolling again. In graphical operating systems, I assumed it would stop any window with scrollable content from scrolling, or something – I just never thought about it, and never bothered to try. Well, its original function was a bit different: with scroll lock disabled, hitting the arrow keys would move the selection cursor. With scroll lock enabled, hitting the arrow keys would move the content instead. After reading this, it makes perfect sense, and my original assumption seems rather silly. It also seems some modern programs, like Excel, Calc, some text editors, and others, still exhibit this same behaviour when the scroll lock key is used today. The more you know.
Chips and Cheese has an excellent deep dive into Arm’s latest core design, and I have thoughts.

Arm now has a core with enough performance to take on not only laptop, but also desktop use cases. They’ve also shown it’s possible to deliver that performance at a modest 4 GHz clock speed. Arm achieved that by executing well on the fundamentals throughout the core pipeline. X925’s branch predictor is fast and state-of-the-art. Its out-of-order execution engine is truly gargantuan. Penalties are few, and tradeoffs appear well considered. There aren’t a lot of companies out there capable of building a core with this level of performance, so Arm has plenty to be proud of. That said, getting a high performance core is only one piece of the puzzle. Gaming workloads are very important in the consumer space, and benefit more from a strong memory subsystem than high core throughput. A DSU variant with L3 capacity options greater than 32 MB could help in that area. X86-64’s strong software ecosystem is another challenge to tackle. And finally, Arm still relies on its partners to carry out its vision. I look forward to seeing Arm take on all of these challenges, while also iterating on their core line to keep pace as AMD and Intel improve their cores. Hopefully, extra competition will make better, more affordable CPUs for all of us. ↫ Chester Lam at Chips and Cheese

The problem with Arm processors in the desktop (and laptop) space certainly isn’t one of performance – as this latest design by Arm once again shows. No, the real problem is a complete and utter lack of standardisation, with every chip and every device in the Arm space requiring dedicated, specific operating system images that someone has to create, maintain, and update. This isn’t just a Linux or BSD problem, as even Microsoft has had numerous problems with this, despite Windows on Arm only supporting a very small number of Qualcomm processors.
A law or rule that has held fast since the original 8086: never bet against x86. The number of competing architectures that were all surely going to kill x86 is staggering – PowerPC, Alpha, PA-RISC, Sparc, Itanium, and many more – and even when those chips were either cheaper, faster, or both, they just couldn’t compete with x86’s unique strength: its ecosystem. When I buy an x86 computer, either in parts or from an OEM, either Intel or AMD, I don’t have to worry for one second if Windows, Linux, one of the BSDs, or goddamn FreeDOS, and all of their applications, are going to run on it. They just will. Everything is standardised, for better or worse, from peripheral interconnects to the extremely crucial boot process.

On the Arm side, though? It’s a crapshoot. That’s why whenever anyone recommends a certain cool Arm motherboard or mini PC, the first thing you have to figure out is what its software support situation is like. Does the OEM provide blessed Linux images? If so, do they offer more than an outdated Ubuntu build? Have they made any update promises? Will Windows boot on this thing? Does it work with any GPUs I might already own? There are so many unknowns and uncertainties you just don’t have to deal with when opting for x86.

For its big splashy foray into general purpose laptops with its Snapdragon Elite chips, Qualcomm promised Linux support on par with Windows from day one. We’re several years down the line, and it’s still a complete mess. And that’s just one chip line, of one generation! As long as every individual Arm SoC and Arm board remains a little isolated island with unknown software and hardware support status, x86 will continue to survive, even if x86 laptops use more power, even if x86 chips end up being slower.
Without the incredible ecosystem x86 has, Arm will never achieve its full potential, and eventually, as has happened to every single other x86 competitor, x86 will catch up to and surpass Arm’s strong points, at lower prices. Never bet against x86.
There’s the two behemoth architectures, x86 and ARM, and we probably all own one or more devices using each. Then there’s the eternally up-and-coming RISC-V, which, so far, seems to be having a lot of trouble outgrowing its experimental, developmental stage. There’s a fourth, though, which is but a footnote in the west, but might be more popular in its country of origin, China: LoongArch (I’m ignoring IBM’s POWER, since there hasn’t been any new consumer hardware in that space for a long, long time). Wesley Moore got his hands on a mini PC built around the Loongson 3A6000 processor, and investigated what it’s like to run Linux on it. He opted for Chimera Linux, which supports LoongArch, and the installation process feels more like Linux on x86 than Linux on ARM, which often requires dedicated builds and isn’t standardised. Sadly, Wayland had issues on the machine, but X.org worked just fine, and it seems virtually all Chimera Linux packages are supported for a pretty standard desktop Linux experience. Performance of this chip is rather mid, at best. The Loongson-3A6000 is not particularly fast or efficient. At idle it consumes about 27W and under load it goes up to 65W. So, overall it’s not a particularly efficient machine, and while the performance is nothing special it does seem readily usable. Browsing JS heavy web applications like Mattermost and Mastodon runs fine. Subjectively it feels faster than all the Raspberry Pi systems I’ve used (up to a Pi 400). ↫ Wesley Moore I’ve been fascinated by LoongArch for years, and am waiting to pounce on the right offer for LoongArch’s fastest processor, the 3C6000, which comes in dual-socket configurations for a maximum total of 128 cores and 256 threads. The 3C6000 should be considerably faster than the low-end 3A6000 in the mini PC covered by this article. I’m a sucker for weird architectures, and it doesn’t get much weirder than LoongArch.
Cameron Kaiser comes in with another amazing article, this time diving into a unique video titler from Canada, released in 1985.

The Super Micro Script was one of several such machines this company made over its lifetime, a stylish self-contained box capable of emitting a 32×16 small or 10×4 large character layer with 64×32 block graphics in eight colours. It could even directly overlay its output over a composite video signal using a built-in genlock, one of the earliest such consumer units to do so. Crack this unit open, however, and you’ll find the show controlled by an off-the-shelf Motorola 6800-family microcontroller and a Motorola 6847 VDG video chip, making it a relative of contemporary 1980s home computers that sometimes used nearly exactly the same architecture. More important than that, though, it has socketed EPROMs we can theoretically pull and substitute with our own — though we’ll have to figure out why the ROMs look like nonsense, and there’s also the small matter of this unit failing to generate a picture. Nevertheless, when we’re done, another homegrown Canadian computer will rise and shine. We’ll even add a bitbanged serial port and write a MAME emulation driver for it so we can develop software quickly … after we fix it first. ↫ Cameron Kaiser

I know I keep repeating myself, but Kaiser’s work on so many of these rare and unique systems is not only worthwhile and amazing to read, it’s also incredibly valuable from a historical and preservation perspective. This article in hand, anyone who stumbles upon one of these machines can get the most out of it, possibly fix one, and use it for fun projects. I’m incredibly grateful for this sort of work. Video titlers are such an interesting relic of the past. These days, adding titles to a video is child’s play, but back when computing power came at a massive premium and digital video was but a distant dream, overlaying text via analog video hardware was the best way to go about it.
Video titler makers did try to move the technology from professional settings to home settings, but from what I can gather, this move never really paid off. Still, I’d love to buy one of these at some point and mess around with it. There are some really cool retro effects you can create with these.
Guest post by Nils Andreas
2026-01-22
Hardware
In January 1976, MOS Technology presented a demonstration computer for their recently developed 6502 processor. MOS, which was acquired by Commodore later that year, needed to show the public what their low-cost processor was capable of. The KIM-1 single board computer came fully assembled with an input keypad, a six-digit LED display, and complete documentation. It was intended for developers, but it turned out that at a price of only $249 the computer was the ideal playground for hobbyists, who could now afford a complete computer. The unforgettable Jim Butterfield described it like this back in 1999:

But suddenly there was the KIM-1. It was fully assembled (although you had to add a power supply). Everybody’s KIM-1 was essentially the same (although the CPU added an extra instruction during the KIM-1’s production life). And this created something that was never before part of the home computer phenomenon: users could quite happily exchange programs with each other; magazines could publish such programs; and people could talk about a known system. We knew the 6502 chip was great, but it took quite a while to convince the majority of computer hobbyists. MOS Technology offered this CPU at a price that was a fraction of what the other available chips cost. We faced the attitude that “it must be no good because it’s too cheap,” even though the 6502, with its pipelined architecture, outperformed the 8080 and the 6800. ↫ Jim Butterfield

Even though there would soon be better equipped and faster home computers (mostly based on the 6502) and the KIM-1 vanished from collective memory, the home computer revolution started 50 years ago in January 1976. Hans Otten keeps the memory alive on his homepage, where you can find a full collection of information about single-board computers and especially the KIM-1.
The 9020 is a fascinating system, exemplary of so many of the challenges and excitement of the birth of the modern computer. On the one hand, a 9020 is a sophisticated, fault-tolerant, high-performance computer system with impressive diagnostic capabilities and remarkably dynamic resource allocation. On the other hand, a 9020 is just six to seven S/360 computers married to each other with a vibe that is more duct tape and bailing wire than aerospace aluminum and titanium. ↫ J. B. Crawford I was hooked from beginning to end. An absolutely exceptional article.
Can you use a cheap FPGA board as a base for a new computer inspired by the original IBM PC? Well, yes, of course, and that’s exactly what Yuri Zaporozhets has set out to do. Based on the GateMateA1-EVB, the project’s got some of the basics worked out already – video output, keyboard support, etc. – and work is underway on a DOS-like operating system. A ton of work is still ahead, of course, but it’s definitely an interesting project.
Inter-corporation bullshit screwing over consumers – a tale as old as time.

Major laptop vendors have quietly removed hardware decode support for the H.265/HEVC codec in several business and entry-level models, a decision apparently driven by rising licensing fees. Users working with H.265 content may face reduced performance unless they verify codec support or rely on software workarounds. ↫ Hilbert Hagedoorn at The Guru of 3D

You may want to know how much these licensing fees are, and by how much they’re increasing next year, making these laptop OEMs remove features to avoid the costs. The HEVC licensing fee is $0.20 per device, and in 2026 it’s increasing to $0.24. Yes, a $0.04 increase per device is “forcing” these giant companies to screw over their consumers. Nobody’s coming out a winner here, and everyone loses. We took a wrong turn, but nobody seems to know when and where.
Wait, what?

The term 3.5 inch floppy disc is in fact a misnomer. Whilst the specification for 5.25 inch floppy discs employs Imperial units, the later specification for the smaller floppy discs employs metric units. The standards for these discs all specify the measurements in metric, and only metric. These standards explicitly give the dimensions as 90.0mm by 94.0mm. It’s in clause 6 of all three. ↫ Jonathan de Boyne Pollard

Even the applicable standard in the US, ANSI X3.171-1989, specifies the size in metric. We could’ve been referring to these things using proper measurements instead of archaic ones based on the size of a monk’s left testicle at dawn at room temperature in 1375 or whatever nonsense imperial or customary used to be based on. I feel dirty for thinking I had to use “inches” for this. If we ever need to talk about these disks on OSNews from here on out, I’ll be using proper units of measurement.
Earlier this year, popular NAS vendor Synology announced it would start requiring some of its more expensive models to only use Synology-branded drives. It seems the uproar this announcement caused has had some real chilling effect on sales, and the company just cancelled its plans. Synology has backtracked on one of its most unpopular decisions in years. After seeing NAS sales plummet in 2025, the company has decided to lift restrictions that forced users to buy its own Synology hard drives. The policy, introduced earlier this year, made third-party HDDs from brands like Seagate and WD practically unusable in newer models such as the DS925+, DS1825+, and DS425+. That change didn’t go over well. Users immediately criticised Synology for trying to lock them into buying its much more expensive drives. Many simply refused to upgrade, and reviewers called out the move as greedy and shortsighted. According to some reports, sales of Synology’s 2025 NAS models dropped sharply in the months after the restriction was introduced. ↫ Hilbert Hagedoorn at Guru3D.com If you want to screw over your users to make a few more euros, it’s generally a good idea to first assess just how locked-in your users really are. Synology is but one of many companies making and selling NAS devices, and even building one yourself is stupidly easy these days. There’s an entire cottage industry of motherboards and enclosures specifically designed for this purpose, and there are countless easy-to-use software options out there, too. In other words, nobody is really locked into Synology, so any unpopular move by the company was bound to make people look elsewhere, only to discover there are tons of competing options to choose from. The market seems to have spoken, and Synology can only respond by reversing its decision. Honestly, I had almost forgotten what a healthy tech market with tons of competing options looks like.
It was good while it lasted, I guess.

Arduino will retain its independent brand, tools, and mission, while continuing to support a wide range of microcontrollers and microprocessors from multiple semiconductor providers as it enters this next chapter within the Qualcomm family. Following this acquisition, the 33M+ active users in the Arduino community will gain access to Qualcomm Technologies’ powerful technology stack and global reach. Entrepreneurs, businesses, tech professionals, students, educators, and hobbyists will be empowered to rapidly prototype and test new solutions, with a clear path to commercialization supported by Qualcomm Technologies’ advanced technologies and extensive partner ecosystem. ↫ Qualcomm’s press release

Qualcomm’s track record when it comes to community engagement, open source, and long-term support is absolutely atrocious, and there’s no way Arduino will be able to withstand the pressures from management. We’ve seen this exact story play out a million times, and it always begins with lofty promises, and always ends with all of them being broken. I have absolutely zero faith Arduino will be able to continue to do its thing like it has. Arduino devices are incredibly popular, and it makes sense for Qualcomm to acquire them. If I were using Arduinos for my open source projects, I’d be a bit on edge right now.
I am a huge fan of my Rock 5 ITX+. It wraps an ATX power connector, a 4-pin Molex, PoE support, 32 GB of eMMC, front-panel USB 2.0, and two Gen 3×2 M.2 slots around a Rockchip 3588 SoC that can slot into any Mini-ITX case. Thing is, I never put it in a case because the microSD slot lives on the side of the board, and pulling the case out and removing the side panel to install a new OS got old with a quickness. I originally wanted to rackmount the critter, but adding a deracking difficulty multiplier to the microSD slot minigame seemed a bit souls-like for my taste. So what am I going to do? Grab a microSD extender and hang that out the back? Nay! I’m going to neuralyze the SPI flash and install some Kelvin Timeline firmware that will allow me to boot and install generic ARM Linux images from USB. ↫ Interfacing Linux

Using EDK2 to add UEFI to an ARM board is awesome, as it solves one of the most annoying problems of these ARM boards: they require custom images specifically prepared for the board in question. After flashing EDK2 to this board, you can just boot any ARM Linux distribution – or Windows, NetBSD, and so on – from USB and install it from there. There’s still a ton of catches, but it’s a clear improvement. The funniest detail for sure, at least for this very specific board, is that the SPI flash is exposed as a block device, so you can just use, say, the GNOME Disk Utility to flash any new firmware into it. The board in question is a Radxa ROCK 5 ITX+, and they’re not all that expensive, so I’m kind of tempted here. I’m not entirely sure what I’d need yet another computer for, honestly, but it’s not like that’s ever stopped any of us before.
Over the years, we’ve seen a good number of interfaces used for computer monitors, TVs, LCD panels and other all-things-display purposes. We’ve lived through VGA and the large variety of analog interfaces that preceded it, then DVI, HDMI, and at some point, we’ve started getting devices with DisplayPort support. So you might think it’s more of the same. However, I’d like to tell you that you probably should pay more attention to DisplayPort – it’s an interface powerful in a way that we haven’t seen before. ↫ Arya Voronova at HackADay

DisplayPort is a better user experience in every way compared to HDMI. I am so, so sad that HDMI has won out in the consumer electronics space, with all of its countless anti-user features as detailed in the linked article. I refuse to use HDMI when DisplayPort is available, so all of my computers’ displays are hooked up over DP. Whenever I did try to use HDMI, I always ran into issues with resolution, refresh rates, improper monitor detection, and god knows what else. Plug in a DP cable, and everything always just works. Sadly, in consumer electronics, DisplayPort isn’t all that common. Game consoles, Hi-Fi audio, televisions, and so on, all push HDMI hard and often don’t offer a DisplayPort option at all. It takes me back to the early-to-late 2000s, when my entire audio setup was hooked up using optical cables, simply because I was a MiniDisc user and had accepted the gospel of optical cables. Back then, too, I refused to buy or use anything that used unwieldy analog cables. Mind you, this had nothing to do with audio quality – it was a usability thing. If anyone is aware of home audio devices and televisions that do offer DisplayPort, feel free to jump into the comments.
A simple instruction-stepped Z80 CPU emulator written in Go, inspired by the cycle-accurate emulation techniques described in floooh’s blog posts. ↫ Zen80 GitHub page

It supports all documented Z80 instructions, runs most games and applications, and much more.
You’ve seen them everywhere, especially on older computer equipment: the classic 9-pin serial connector. You probably know it as a DB9. It’s an iconic connector for makers, engineers, and anyone who’s ever used an RS232 serial device. Here’s a little secret, though: calling it a DB9 is technically wrong. The correct name is actually DE9. ↫ Christopher at Sparkfun Electronics

I honestly had no idea, and looking through the Wikipedia page, it seems this isn’t the only common misnomer when it comes to D-sub connectors.
When it comes to open hardware, choices are not exactly abundant. Truly open source hardware – open down to the firmware level of individual components – that also has acceptable performance is rare, with one of the few options being the Talos II and Blackbird POWER9 workstations from Raptor Computing Systems (which I reviewed). Another option that can be fully open source with the right configuration is the laptops made by MNT, which use the ARM architecture (which I also reviewed). Both of these are excellent options, but they do come with downsides; the Talos II/Blackbird are expensive and getting a bit long in the tooth (and a possible replacement is at least a year away), and the MNT Reform and Pocket Reform simply aren’t for everyone due to their unique and opinionated design. Using an architecture other than x86 also simply isn’t an option for a lot of people, ruling out POWER9 and ARM hardware entirely.

In the x86 world, it’s effectively impossible to avoid proprietary firmware blobs, but there are companies out there trying to build x86 laptops that at least minimise the reliance on such unwelcome blobs. One of these companies is NovaCustom, a Dutch laptop (and now desktop!) OEM that sells x86 computers that come with Dasharo open firmware (based on coreboot) and a strong focus on privacy, open source, customisability, and repairability. NovaCustom sent over a fully configured NovaCustom V54 laptop, so let’s dive into what it’s like to configure and use an x86 laptop with Dasharo open firmware and a ton of unique customisation options.

Hardware configuration

I opted for the 14″ laptop model, the V54, since the 16″ V65 is just too large for my taste. NovaCustom offers a choice between a 1920×1200 60Hz and a 2880×1800 120Hz panel, and I unsurprisingly chose the latter.
This higher-DPI panel strikes a perfect balance between a 4K panel, which takes a lot more processing power to drive, and a basic 1080p panel, which I find unacceptable on anything larger than 9″ or so. The refresh rate of 120Hz is also a must on any modern display, as anything lower looks choppy to my eyes (I’m used to 1440p/280Hz on my gaming PC, and 4K/160Hz on my workstation – I’m spoiled). The display also gets plenty bright, but disappointingly, the V54 does not offer a touch option. I don’t miss it, but I know it’s a popular feature, so be advised.

While the V54 can be equipped with a dedicated mobile RTX 4060 or 4070 GPU, I have no need for such graphical power in a laptop, so I stuck with the integrated Intel Arc GPU. Note that if you do go for the dedicated GPU, you’ll lose the second M.2 slot, and the laptop will gain some weight and thickness. I did opt for the more powerful CPU option with the Intel Core Ultra 7 155H, which packs 6 performance cores (with hyperthreading), 8 efficiency cores, and 2 low-power cores, for a total of 16 cores and 22 threads maxing out at 4.8GHz. Unless you intend to do GPU-intensive work, this combination is stupid fast and ridiculously powerful. Throw in the 32GB of DDR5 5600MHz RAM in a dual-channel configuration (2×16, replaceable) and a speedy 7,400 MB/s (read)/6,500 MB/s (write) 1TB SSD, and I sometimes feel like this is the sort of opulence Marie Antoinette would indulge herself in if she were alive today. It won’t surprise you to learn that with this configuration, you won’t be experiencing any slowdowns, stuttering, or other performance issues.

Ports-wise, the V54 has a USB-C port (3.2 Gen 2), a Thunderbolt 4 port (with Display Alt Mode supporting DP 2.1), a USB-A port (3.2 Gen 2) and a barrel power jack on the right side, a combo audio jack, USB-A port (3.2 Gen 1), microSD card slot, and a Kensington lock on the left, and an Ethernet and HDMI port on the back.
The Ethernet port especially is such a welcome affordance in this day and age, and we’ll get back to it, since we need it for Dasharo.

The trackpad is large, smooth, and pleasant to use – for a diving board type trackpad, that is. More and more manufacturers are adopting Apple-style haptic trackpads, which I greatly prefer, but I suspect there might be some patent and IP shenanigans going on that explain why uptake of those in the PC space hasn’t exactly been universal. If you’re coming from a diving board trackpad, you’ll love this one. If you’re coming from a haptic trackpad, it’s a bit of a step down.

A standout on the V54 is the keyboard. The keys are perfectly spaced, have excellent travel and a satisfying, silent click, and they are very stable. It’s an absolute joy to type on, and about as good as a laptop keyboard can be. On top of that, at least when you opt for the US-international keyboard layout like I do, you get a keyboard that actually properly lists the variety of special characters on its keys. This may look chaotic and messy to people who don’t need to use those special characters, but as someone who does, this is such a breath of fresh air compared to all those modern, minimalist keyboards where you end up randomly mashing key combinations to find that one special character you need. Considering my native Dutch uses diacritics, and my wife’s native Swedish uses the extra letters å, ä, and ö (they’re letters!), this is such a great touch. The keyboard also has an additional layer for a numeric pad, as well as the usual set of function keys you need on a modern laptop, including a key that will max out the fan speed in case you need it (the little fan glyph on my keyboard seems double-printed, though, which is a small demerit). I especially like the angry moon glyph on the sleep key. He’s my grumpy friend and I love him. Of course, the
There’s quite a few ways to mess around with home automation, with the most popular communication methods being things like ZigBee, plain Wi-Fi, and so on. One of the more promising new technologies is Thread, and Dennis Schubert decided to try and use it for a new homebrew project he was working on. After diving into the legalese of the matter, though, he discovered that Thread is a complete non-starter due to excessive mandatory membership fees without any exceptions for non-commercial use. To summarize: if you’re a hobbyist without access to some serious throwaway money to join the Thread Group, there is no way to use Thread legally – the license does not include an exception for non-commercial uses. If you’re like me and want to write a series of blog posts about how Thread works, there’s also no legal way. A commercial membership program for technology stacks like Thread isn’t new; it’s somewhat common in that space. Same with requiring certifications for your commercial products if you want to use a logo like the “Works with Thread” banner. And that’s fine with me. If you’re selling a commercial electronics product, you have to go through many certification processes anyway, so that seems fair. But having a blanket ban on implementations, even for non-commercial projects, is absolutely bonkers. This means that no hobbyist should ever get close to it, and that means that the next generation of electrical engineers and decision-makers don’t get to play around with the tech before they enter the industry. But of course, that doesn’t really matter to the Thread Group: their members list includes companies like Apple, Google, Amazon, Nordic, NXP, and Qualcomm – they can just force Thread into being successful by making sure it’s shipped in the most popular “home hubs”. So it’s just us that get screwed over. Anyway, if you planned to look at Thread… well, don’t. You’re not allowed to use it. 
↫ Dennis Schubert So you can buy Thread dev kits to create your own devices at home, but even such non-commercial use is not allowed. The situation would be even more complex for anyone trying to sell a small batch of fun devices using Thread, because they’d first have to fork over the exorbitant yearly membership fee. What this means is that Thread is a complete non-starter for anyone but an established name, which is probably exactly why the big names are pushing it so hard. They want to control our home automation just as much as everything else, and it seems like Thread is their foot in the door. Be advised.
GlobalFoundries today announced a definitive agreement to acquire MIPS, a leading supplier of AI and processor IP. This strategic acquisition will expand GF’s portfolio of customizable IP offerings, allowing it to further differentiate its process technologies with IP and software capabilities. ↫ Press release about the acquisition

MIPS has a long and storied history, most recently abandoning its namesake instruction set architecture in favour of RISC-V. MIPS processors are still found in a ton of devices, though usually not in high-profile devices like smartphones. Their new RISC-V cores haven’t yet seen a lot of uptake, but that’s a problem all across the RISC-V ecosystem.
The GPU in your computer is about 10 to 100 times more powerful than the CPU, depending on workload. For real-time graphics rendering and machine learning, you are enjoying that power, and doing those workloads on a CPU is not viable. Why aren’t we exploiting that power for other workloads? What prevents a GPU from being a more general purpose computer? ↫ Raph Levien

Fascinating thoughts on parallel computation, including some mentions of earlier projects like Intel’s Larrabee or the Connection Machine with its 64k processors in the ’80s, as well as a defense of the PlayStation 3’s Cell architecture.