The Intel 386SX CPU quickly replaced the 286 in the early 1990s. For a time, it was a very popular CPU, especially among people who wanted to run Microsoft Windows. Yet the two CPUs run at nearly identical speeds. So what was the big deal? The 286 vs. 386SX argument could be confusing in 1991, and it’s not much clearer today. Here at OSNews we pride ourselves on pointing you to the most relevant, up-to-date buying advice available on the internet.
This is the Commodore 64 KERNAL, modified to run on the Atari 8-bit line of computers. They’re practically the same machine; why didn’t someone try this 30 years ago? No time like the present.
Remember Framework, the company building a repairable, modular laptop? The first reviews are in, and it seems they’re quite positive – people are wondering why none of the other big OEMs are capable of making a thin, light, and sturdy laptop with this amount of upgradeability and repairability. Linus from Linus Tech Tips made a long, detailed video about the laptop as well, and was so impressed he bought one right away. I have to say – this laptop has me very, very intrigued. It hits all the right buttons, with the only major uncertainty being just how long a relatively small company like this can stay afloat, to ensure a steady stream of future upgrades. It seems anyone can make new modules and new parts for this laptop, though, so hopefully a community of makers springs up around it as well. In any event, I’m hoping to get my hands on a review unit, because we really need to know how well Linux runs on this machine.
While Loongson has long been known for their open-source-friendly, MIPS-based chips, with MIPS now a dead end the Chinese company has begun producing chips using its own “LoongArch” ISA. The first Loongson 3A5000 series hardware was just announced, and thanks to the company apparently using the Phoronix Test Suite and OpenBenchmarking.org, we have some initial numbers. Announced this week, the Loongson 3A5000 is their first LoongArch ISA chip: a quad-core design with clock speeds up to 2.3~2.5GHz. The Loongson 3A5000 offers a reported 50% performance boost over their prior MIPS-based chips while consuming less power, and it now also supports DDR4-3200 memory. The Loongson 3A5000 series is intended for domestic Chinese PCs that don’t rely on foreign IP, and there is also the 3A5000LL processor, intended for servers. Performance isn’t even remotely interesting – for now. The Loongson processors will improve by leaps and bounds over the coming years, if only because they will have the backing of the regime. I hope some enterprising people import these to the west, because I’d love to see them in action. Nothing in technology excites me more than odd architectures.
After a month of reverse-engineering, we’re excited to release documentation on the Valhall instruction set, available as a PDF. The findings are summarized in an XML architecture description for machine consumption. In tandem with the documentation, we’ve developed a Valhall assembler and disassembler as a reverse-engineering aid. Valhall is the fourth Arm Mali architecture and the fifth Mali instruction set. It is implemented in the Arm Mali-G78, the most recently released Mali hardware, and Valhall will continue to be implemented in Mali products yet to come. Excellent and important work.
Fifty years ago, IBM introduced the first-ever floppy disk drive, the IBM 23FD, and the first floppy disks. Floppies made punched cards obsolete, and their successors ruled software distribution for the next 20 years. Here’s a look at how and why the floppy disk became an icon. It’s still amazing to me just how quickly they fell out of favour.
The Libre-SOC project, a team of engineers and creative personas aiming to provide a fully open System-on-Chip, has today posted a layout that the team sent for chip fabrication of the OpenPOWER-based processor. Currently being manufactured on TSMC’s 180 nm node, the Libre-SOC processor is a huge achievement in many ways. To get to a tape out, the Libre-SOC team was accompanied by engineering from Chips4Makers and Sorbonne Université, funded by NLnet Foundation. Based on IBM’s OpenPOWER instruction set architecture (ISA), the Libre-SOC chip is a monumental achievement for open-source hardware. It’s also the first independent OpenPOWER chip to be manufactured outside IBM in over 12 years. Every component, from hardware design files, documentation, mailing lists to software, is open-sourced and designed to fit with the open-source spirit and ideas. This is an impressive milestone, and I can’t wait until this is ready for more general use. With things like RISC-V and OpenPOWER, there’s a lot of progress being made on truly open source hardware, and that has me very, very excited. This also brings an OpenPOWER laptop closer to being a real thing, and that’s something I’d buy in less than a heartbeat.
Liam Proven posted a good summary of the importance of the PDP and VAX series of computers on his blog. Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It’s entitled “First new vax in …30 years?” Someone posted it on Hacker News. One of the comments said, roughly, that they didn’t see the significance and asked if someone could “explain it like I’m a Computer Science undergrad.” This is my attempt to reply… Um. Now I feel like I’m 106 instead of “just” 53. OK, so, basically all modern mass-market OSes of any significance derive in some way from two historical minicomputer families… and both were from the same company.
Anders Magnusson, writing on the port-vax NetBSD mailing list: Some time ago I ended up in an architectural discussion (risc vs cisc etc…) and started to think about vax. Even though the vax is considered the “ultimate cisc”, I wondered if its cleanliness and nice instruction set could still be implemented efficiently enough. Well, the only way to know would be to try to implement it 🙂 I had a 15-year-old demo board with a small low-end FPGA (Xilinx XC3S400), so I just had to learn Verilog and try to implement something. And it just passed EVKAA.EXE. Along with the development of a VAX implementation in an FPGA, discussions arose about possible 64-bit extensions: For userspace, the vax architecture itself leaves the door open for expanding the word size. The instructions are all defined to use only the part of a register they need, so adding a bunch of ‘Q’ instructions is a no-brainer. Argument reference will work as before. JMP/JSR/RET/… might need a Q counterpart, since they would suddenly store/require 8 bytes instead of 4. For the kernel, the hardware structures (SCB, PCB, …) must all be expanded, and memory management changed (but the existing design leaves much to be desired anyway). All this is probably a quite simple update to the architecture. It’s nice to see people still putting work and effort into what is a nearly half-century-old, and otherwise obsolete, instruction set.
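The key claim – that VAX instructions touch only the part of a register they address, so registers can be widened without breaking existing code – can be illustrated with a small sketch. This is a hypothetical model, not VAX code; the names and register file are invented for illustration:

```python
# Hypothetical model of VAX-style operand widths on a widened register file.
# 'B' = byte, 'W' = word, 'L' = longword, and a new 'Q' = quadword.
# Because B/W/L writes touch only the low bits of a register, widening the
# registers to 64 bits and adding 'Q' instructions leaves old code unchanged.
MASKS = {
    "B": 0xFF,
    "W": 0xFFFF,
    "L": 0xFFFFFFFF,
    "Q": 0xFFFFFFFFFFFFFFFF,
}

def mov(regs, dst, value, width):
    """Write only the addressed part of dst, preserving the upper bits."""
    m = MASKS[width]
    regs[dst] = (regs[dst] & ~m) | (value & m)

regs = {"r0": 0xDEADBEEF00000000}
mov(regs, "r0", 0x12345678, "L")  # a 32-bit MOVL leaves the high half alone
mov(regs, "r0", 0x1, "Q")         # only the new 'Q' form touches all 64 bits
```

A 32-bit `mov` into a 64-bit register behaves exactly as it did on a 32-bit register file, which is why the mailing-list discussion calls the userspace side of the extension a no-brainer.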
For simplicity, let’s say you have a single-CPU system that supports “dynamic frequency scaling”, a feature that allows software to instruct the CPU to run at a lower speed, commonly known as “CPU throttling”. Assume for this scenario that the CPU has been throttled to half-speed for whatever reason – it could be thermal, could be energy efficiency, could be due to workload. Finally, let’s say that there’s a program that is CPU-intensive, calculating the Mandelbrot set or something. The question is: what percentage of CPU usage should performance monitoring tools report? Should they report 100%, or 50%? This is like asking which side of the bed is the front and which side is the back – you can make valid arguments either way, and nobody is wrong or right.
Now, as Qualcomm looks to push 5G connectivity into laptops, it is pairing modems with a powerful central processing unit, or CPU, Amon said. Instead of using computing core blueprints from longtime partner Arm Ltd, as it now does for smartphones, Qualcomm concluded it needed custom-designed chips if its customers were to rival new laptops from Apple. As head of Qualcomm’s chip division, Amon this year led the $1.4 billion acquisition of startup Nuvia, whose ex-Apple founders helped design some of those Apple laptop chips before leaving to form the startup. Qualcomm will start selling Nuvia-based laptop chips next year. The processor industry is scrambling to catch up to Apple, and every Intel and AMD OEM is looking for something that can deliver the same – or even vaguely similar – performance and power draw in laptops as the M1. Qualcomm is claiming here that they can, and will – next year, without relying on Arm’s core designs. Bold claim.
One example of this was the parallel universe of FireWire hubs. If you think of FireWire as “a big USB” then a hub wouldn’t seem so strange, but FireWire was actually meant to replace SCSI. SCSI and FireWire are peer-to-peer: any device on the bus can talk to any other device, unlike USB where each bus has at most one host and the host does all the initiation of data transfer. (USB On-The-Go still has one host and one host only; it just allows certain devices like your mobile phone to swing both ways.) The point-to-point capabilities of USB 3 notwithstanding, a USB hub has one upstream port for the host and multiple downstream ports for the devices. A FireWire hub, however, is like getting a longer internal SCSI cable; more devices simply exist on the same bus. Connecting multiple FireWire hubs just makes a bigger bus because all the ports are the same. Everything you ever wanted to know about FireWire hubs, with lots of examples.
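The topology difference described above can be sketched in a few lines. This is a toy model with invented names, not real bus-protocol code:

```python
# Toy model of the topology difference: a FireWire hub just puts more
# devices on the same flat peer-to-peer bus, while a USB hub adds a
# subtree under the single host that initiates all transfers.

def merge_firewire_buses(bus_a, bus_b):
    """Connecting FireWire hubs yields one bigger bus: a flat set of peers."""
    return bus_a | bus_b

def usb_attach_hub(tree, parent, hub, devices):
    """A USB hub hangs off its upstream port; devices hang off the hub."""
    tree[parent].append(hub)
    tree[hub] = list(devices)
    return tree

# FireWire: every device ends up a peer of every other.
bus = merge_firewire_buses({"disk", "camera"}, {"scanner"})

# USB: the host stays the root, and the hub adds a level of hierarchy.
tree = usb_attach_hub({"host": []}, "host", "hub1", ["disk", "camera"])
```

In the FireWire model there is no root at all – which is exactly why daisy-chaining hubs just makes a bigger bus rather than a deeper tree.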
Today, RISC-V CPU design company SiFive launched a new processor family with two core designs: P270 (a Linux-capable CPU with full support for RISC-V’s vector extension 1.0 release candidate) and P550 (the highest-performing RISC-V CPU to date). There’s quite a bit to unpack here today. Not only did SiFive announce these two new core designs, it also partnered with Intel. Intel will be the main development partner on the P550 core on Intel’s 7nm process, and most likely, Intel will also build its own SoCs using these P550 cores. In other words, there’s a lot of IP sharing going on here. This is a big step for both RISC-V and SiFive, and bodes well for the open-source ISA as a whole.
From the January 1996 issue of PC World: Sony has great hopes for its MiniDisc Data format as the next-generation mass storage medium. And why not? On the surface, it has a lot going for it. A blank 2.5-inch magneto-optical MiniDisc offers 140MB of rewritable storage, and Sony promises the discs can be rewritten more than a million times with no loss of data integrity. MD Data was emblematic of the MiniDisc format as a whole: great technology, but far too expensive for most people, and always outdone by emerging competing formats (CD-R, MP3 players). Still, I used MiniDisc all the way through high school and university, well into the smartphone era, and I will always consider it my favourite music format.
It’s that time of the year again, and after last month’s unveiling of Arm’s newest infrastructure Neoverse V1 and Neoverse N2 CPU IPs, it’s now time to cover the client and mobile side of things. This year, Arm is shaking things up quite a bit more than usual, as we’re seeing three new-generation microarchitectures for mobile and client: the flagship Cortex-X2 core, a new A78 successor in the form of the Cortex-A710, and, for the first time in years, a brand-new little core in the new Cortex-A510. The three new CPUs form a trio of Armv9-compatible designs that mark the kind of larger architectural/ISA shift that comes only very seldom in the industry. Alongside the new CPU cores, we’re also seeing a new L3 and cluster design with the DSU-110, and Arm is also making a big upgrade to its interconnect IP with the new cache-coherent CI-700 mesh network and NI-700 network-on-chip IPs. AnandTech’s usual deep dive into the processors Android devices will be using next year.
Though it can’t match the high-quality screens and discrete GPUs available in some competing laptops (like the Dell XPS 13 and Alienware m15 r4), Framework offers a unique feature customers can’t find anywhere else right now: control. Laptops have steadily gotten less repairable and upgradeable over time, to the horror of many computing enthusiasts. While we’re starting to see manufacturers ship more notebooks with upgradeable storage and graphics card options, the rest of the components are typically off-limits — and often soldered down in a way that makes trying to replace or upgrade them a dicey proposition at best. By contrast, Framework’s laptop has been designed from the ground up for socket-based modularity — a decision Patel claims hasn’t prevented Framework from achieving nearly the same heights of thinness and lightness as competitors like Apple and Dell. This is the first review of the Framework Laptop I’ve seen, and it is very positive. I’m unreasonably excited about this machine, and I’ll try to get my hands on a review unit. This machine seems like a perfect fit for the average OSNews reader.
One reason these legislative efforts have failed is the opposition, which happens to sell boatloads of new devices every year. Microsoft’s top lawyer advocated against a repair bill in its home state. Lobbyists for Google and Amazon.com Inc. swooped into Colorado this year to help quash a proposal. Trade groups representing Apple Inc. successfully buried a version in Nevada. Telecoms, home appliance firms and medical companies also opposed the measures, but few have the lobbying muscle and cash of these technology giants. While tech companies face high-profile scrutiny in Washington, they quietly wield power in statehouses to shape public policy and stamp out unwelcome laws. Tech companies argue that right-to-repair laws would let pirates rip off intellectual property and expose consumers to security risks. In several statehouses, lobbyists told lawmakers that unauthorized repair shops could damage batteries on devices, posing a threat of spontaneous combustion. What’s good enough for the car industry is more than good enough for these glorified toaster makers. Cars are basically murder weapons we kind of screwed ourselves into being reliant on, while Apple and Microsoft make complicated toasters that you need to really screw up in order to hurt anyone with. Computer and device makers must be forced to make parts and schematics available to any independent repair shop, just like car makers have to. So many perfectly capable devices end up in dangerous, toxic landfills in third-world countries simply because Apple, Microsoft, and other toaster makers want to pad their bottom line. It’s disgusting behaviour, especially given how sanctimonious they are about protecting the environment and hugging baby seals.
Today, we’re pivoting towards the future and the new Neoverse V1 and Neoverse N2 generation of products. Arm had already tested the new products last September, teasing a few characteristics of the new designs, but falling short of disclosing more concrete details about the new microarchitectures. Following last month’s announcement of the Armv9 architecture, we’re now finally ready to dive into the two new CPU microarchitectures as well as the new CMN-700 mesh network. These are looking really good.
Over the last few months I have been on and off digging into the history of early PC networking products, especially Ethernet-based ones. In that context, it is impossible to miss the classic NE2000 adapter with all its offshoots and clones. Especially in the Linux community, the NE2000 seems to have had a rather bad reputation – one that was partly understandable, but partly based on claims that simply make no sense upon closer examination. A deep dive into this very popular and widespread NE2000 adapter.
ServeTheHome attended Arm Vision Day 2021 and posted a quick overview. At the event, the company introduced Armv9, which will bring key advancements for machine learning, digital signal processing, and security. One of the key drivers behind Arm’s expected massive shipment growth is the need for specialized compute. Another way to look at this is that a number of traditional analog devices will become some level of “smart” and connected over the next few years. An example was given of a mechanical pump (like a water pump) that could be monitored for failure signs and efficiency, versus just pumping water. Each of those applications will have different needs in terms of sensor connectivity and processing, general-purpose and accelerated compute (CPU and AI, for example), memory, and communications infrastructure. Arm sees the lower power cost of new chips enabling a wider array of chips, and therefore more chips being sold. Another key push will be Arm SystemReady. This builds on Arm ServerReady, which helped Arm servers go from being a science experiment – where getting each server to boot was an adventure – to our experience with the Ampere Altra Wiwynn Mt. Jade server, where it worked (mostly) out of the box using a standard image. Arm SystemReady is probably the biggest thing for OS enthusiasts. One of the weaknesses of the Arm hardware ecosystem, compared to the x86 ecosystem, is the lack of a standardized boot environment. x86 has a BIOS or UEFI, while Arm has UEFI on servers and something else everywhere else (probably devicetrees and a fork of Das U-Boot). Going forward, Arm SystemReady systems will be able to boot via UEFI to allow for a standard OS image, like x86. They could have picked something else (coreboot, Barebox, Das U-Boot), but UEFI is at least better than what came before.