The 6502 was the CPU in my first computer (an Apple II Plus), as well as in many other popular home computers of the late 1970s and '80s. It lived on well into the 1990s in game consoles and chess computers, mostly in its updated “65C02” CMOS version. Here's a re-implementation of the 65C02 in an FPGA, in a pin-compatible format that lets you upgrade those old computers and games to a 100 MHz clock rate! Interesting project.
This Atari 1040ST is still in use after 36 years! Frans Bos bought this Atari in 1985 to run his camp site (Camping Böhmerwald). Over the years, he wrote his own software to manage the camp site, as well as reservations and the registration of guests. He really likes the speed of the machine compared to newer computers, and for six months every year, the machine is on day and night.
Tracking quantum computing has been a bit confusing in that there are multiple approaches to it. Most of the effort goes toward what are called gate-based computers, which allow you to perform logical operations on individual qubits. These are well understood theoretically and can perform a variety of calculations. Gate-based systems can be built from a variety of qubits, including photons, ions, and electronic devices called transmons, and companies have grown up around each of these hardware options. But there's a separate form of computing called quantum annealing that also involves manipulating collections of interconnected qubits. Annealing hasn't been as thoroughly worked out in theory, but it appears to be well matched to a class of optimization problems. And when it comes to annealing hardware, there's only a single company: D-Wave. Now, things are about to get more confusing still. On Tuesday, D-Wave released its roadmap for upcoming processors and software for its quantum annealers. But D-Wave also announced that it's going to be developing its own gate-based hardware, which it will offer in parallel with the quantum annealer. We talked with company CEO Alan Baratz to understand all the announcements. I think I understood some of those words because I, too, watch Space Time.
The European Commission, the executive arm of the European Union, has announced plans to force smartphone and other electronics manufacturers to fit a common USB-C charging port on their devices. The proposal is likely to have the biggest impact on Apple, which continues to use its proprietary Lightning connector rather than the USB-C connector adopted by most of its competitors. The rules are intended to cut down on electronic waste by allowing people to re-use existing chargers and cables when they buy new electronics. In addition to phones, the rules will apply to other devices like tablets, headphones, portable speakers, videogame consoles, and cameras. Manufacturers will also be forced to make their fast-charging standards interoperable, and to provide information to customers about what charging standards their devices support. Under the proposal, customers will be able to buy new devices without an included charger. It was the European Union that spearheaded the change from one-charger-per-device to standardising on Micro-USB, which was later followed by USB-C. There are a lot of pro-Apple, anti-government, right-wing talking points going around the internet today, especially coming from the United States, but contrary to what they want you to believe, laws like this do not stop or even hinder innovation or the arrival of newer charging ports or standards. New charging standards can be rolled into USB-C, and the law can be changed for newer ports if the industry asks for it and there is sufficient consensus. Nobody liked the situation we had, where every single device came with its own incompatible charger. In fact, I have countless Palm OS devices I have a hard time charging because I lost some of their chargers over time. It was an infuriating time, and it's thanks to EU pressure that the situation has improved as much as it has.
However, due to Apple’s reluctance to play ball, the EU now has to step in and regulate – had Apple been a good citizen and adopted USB-C like everyone else, we probably wouldn’t have needed this law. Too bad for Apple. They most likely won’t be able to buy their way out of this one, and we don’t have historically black colleges Apple can take back promised funding from, either.
The open source Panfrost driver for Mali GPUs has now achieved official conformance on Mali-G52 for OpenGL ES 3.1, as seen on the Khronos adopters list. This important milestone is a step forward for the open source driver, as it now certifies Panfrost for use in commercial products containing Mali G52 and paves the way for further conformance submissions on other Mali GPUs. Excellent news, and great progress.
I’ve long been intrigued by Thunderbolt add-in cards: apparently regular-looking PCIe expansion cards, but shipped with a mystery interface cable to the motherboard, and with only a small list of supported motherboard models. It’s not a secret that these cards may work in a motherboard which isn’t supported, but full functionality is not a given. I have spent the past few evenings trawling through many forums, reading about the many different experiences people are having, and have also purchased some hardware to play around with myself, so we can dig into these problems and see what (if any) solutions there are. Excellent deep dive into a topic I had never once in my life stopped to think about. As the author concludes, it would be cool if we ever got working, reliable Thunderbolt add-in cards for AMD or earlier Intel systems, but it seems unlikely.
How do you write a review of a laptop when you’re struggling to find truly negative things to say? This is rarely an issue – every laptop is a compromise – but with the KDE Slimbook, I feel like I’ve hit this particular problem for the first time. A luxury, for sure, but it makes writing this review a lot harder than it’s supposed to be. First, let’s talk about Slimbook itself. Slimbook is a Linux OEM from Spain, founded in 2015, which sells various laptops and desktops with a variety of preinstalled Linux distributions to choose from (including options for no operating system, or Windows). A few years ago, Slimbook partnered with KDE to sell the KDE Slimbook – a Slimbook laptop with KDE Neon preinstalled, and the KDE logo engraved on the laptop’s lid. The current KDE Slimbook is – I think – the third generation, and the first to make the switch from Intel to AMD. With the help of the KDE organisation, Slimbook sent over a KDE Slimbook for me to review, and here are my impressions.

Power and quality

The KDE Slimbook is the first modern AMD laptop I’ve tested and used, and it feels great to see AMD at the top again when it comes to laptops. The laptop Slimbook sent me comes in at € 1149, and packs the AMD Ryzen 7 4800H, which has 8 cores and 16 threads, running at a base clock of 2.9 GHz and a boost clock of 4.2 GHz. That’s more cores and threads than in any of my desktop PCs (save for the dual-processor POWER9 workstation I’m currently reviewing as well), which I still find kind of bonkers. Integrated onto the processor die is the Radeon RX Vega 7 GPU, with 7 compute units running at 1600 MHz. This obviously isn’t a gaming-oriented GPU, but it can run less intensive games in a pinch, and since it’s AMD, it works perfectly fine with Wayland, too. My unit was configured with a total of 16GB of RAM, in dual-channel mode (as it should be), running at 3200 MT/s.
The motherboard has two RAM slots, both accessible, and can be configured with a maximum of 64GB of RAM – making this a rather future-proof laptop when it comes to memory. It won’t surprise you in 2021 that my review unit came with an NVMe SSD – a 256GB, PCIe 3.0 model from Gigabyte, good for a maximum sequential read speed of 1700 MB/s and a maximum sequential write speed of 1100 MB/s. This isn’t exactly the fastest SSD on the market, but Slimbook offers the option for faster – and more expensive – Samsung EVO SSDs as well. On top of that, the M.2 2280 slot is user-accessible, so you can always upgrade later. Slimbook sent me the 15.6″ model, which comes with a 15.6″ 1920×1080 60Hz panel. There is also a 14″ model with the same resolution and refresh rate. The panel covers 100% of sRGB, and is plenty bright and pleasant to look at. Sadly, Slimbook does not offer 1440p, 4K, or high refresh rate options, which is a big downside in 2021. If it were up to me, I’d love to see at least a 1440p/144Hz option on both the 14″ and 15.6″, and I hope the next generation of the KDE Slimbook will offer this as an option. Battery life has been outstanding. The device loses little charge when sleeping, and I easily get 7-8 hours of regular use out of the battery. The keyboard deviates from the norm a little bit, in that it’s not the usual island chiclet type keyboard where the keys are surrounded by metal. Instead, the keys float in the keyboard deck, which instantly brought back memories of Apple’s aluminium PowerBook line. I prefer this type of keyboard design over the chiclet island design, and typing is a delight on the KDE Slimbook – the keys are stable, clicky, and require just the right amount of force. I also happen to think it looks really, really nice, and it has full-height inverted-T arrow keys. Nice. The keyboard does have two minor niggles, though, and they both relate to the backlight.
First, it takes 1-2 seconds for the keyboard backlight to come back on after it has faded off, and that’s a lot more annoying than you would think. The second issue has to do with the lettering on the keyboard. The backlight shines through the lettering on the keyboard, but in some places, it just does not shine through at all. I’m not sure what the underlying issue is – the placement of the individual LEDs or the lettering etching process – but it makes some keys hard to read when the backlight is on. The trackpad is excellent – smooth, pleasant, and responsive – and I haven’t experienced any issues. It’s of the diving board design, and I think it’s glass, but I’m not entirely sure. Even if it’s plastic – if it feels and works well, that’s not an issue to me. I am, however, deeply intrigued by that little LED in the top-left corner. I have no idea what it’s for, and I am fairly sure I’ve seen it come on at least a few times. I made it a point not to look it up to see if I could figure it out, but here we are, and I still have no clue. The KDE Slimbook comes packed with ports, which is a godsend in the modern world. On the left side, there’s a microSD slot, a headphone/microphone jack, a USB 3.0 port, a USB 2.0 port, an Ethernet jack, and a Kensington lock. On the right side, there’s a USB-C port (no Thunderbolt, since this is an AMD machine), a USB 3.0 port, a full-size HDMI port, and the barrel plug power connector. That’s a solid set of ports, and I have no complaints about the selection. The one big miss here is that the machine does not support charging over USB-C.
Dubbed OMG Cables, these new variants are more capable than their counterparts. According to their creator, payloads can be triggered from over one mile away. Attackers can use them to log keystrokes and change keyboard mappings. There is also a geofencing feature, a kill switch, and the ability to forge the identity of specific USB devices, like those that can leverage a specific vulnerability. While it’s unlikely that random, generic people like us will ever be the target of tools like this, there’s no doubt in my mind they’re being used all over the world to monitor dissidents, spy on competing companies, and so on.
The story of NEC’s FPUs is interesting, but as is usually the case, something led me down this path. While looking through loads of old scrap boards, I found a most curious arrangement: a board with a normal, unassuming V30 processor, but right next to it was another 40-pin chip, a chip with a HUGE die lid labeled D9008D, dated similarly to everything else in the 1989-1991 range, yet curiously copyrighted ’85, ’86, and ’87. I pulled the chip (soldered in, of course) and it sat on my desk for a year until I decided to open the lid on it. And what did it reveal? A die that most certainly was a floating point data path. This odd chip was an FPU, and an FPU that was directly connected to the V30 CPU. Very interesting article about a very obscure topic.
Arm is widely regarded as the most important semiconductor IP firm. Their IP ships in billions of new chips every year, from phones, cars, and microcontrollers to Amazon servers and even Intel’s latest IPU. Originally a British-owned and headquartered company, it was acquired by SoftBank in 2016. SoftBank proceeded to plow money into Arm Holdings to fund deep pushes into the internet of things, automotive, and servers. Part of that push was also to go hard into China and become the dominant CPU supplier in all segments of that market. As part of this emphasis on the Chinese market, SoftBank succumbed to pressure and formed a joint venture: Arm Holdings, the SoftBank subsidiary, sold a 51% stake in the venture to a consortium of Chinese investors for a paltry $775M. This venture has the exclusive right to license Arm’s IP within China. Within 2 years, the venture went rogue. Recently, it gave a presentation to the industry about rebranding, developing its own IP, and striking its own independently operated path. This is not the first time the Chinese government – through its companies and investors – has gained access to a large amount of silicon IP (both VIA and AMD fell for this too). Not that I care much for Arm here – they were blinded by greed, and will pay the price – but hopefully this opens the eyes of other companies in similar positions.
The Intel 386 SX CPU quickly replaced the 286 CPU in the early 1990s. For a time, it was a very popular CPU, especially for people who wanted to run Microsoft Windows. Yet the two CPUs run at nearly identical speeds. So what was the big deal? The 286 vs 386SX argument could be confusing in 1991, and it’s not much clearer today. Here at OSNews we pride ourselves in pointing you to the most relevant, up-to-date buying advice available on the internet.
This is the Commodore 64 KERNAL, modified to run on the Atari 8-bit line of computers. They’re practically the same machine; why didn’t someone try this 30 years ago? No time like the present.
Remember Framework, the company building a repairable, modular laptop? The first reviews are in, and it seems they’re quite positive – people are wondering why none of the other big OEMs are capable of making a thin, light, and sturdy laptop with this amount of upgradeability and repairability. Linus from Linus Tech Tips made a long, detailed video about the laptop as well, and was so impressed he bought one right away. I have to say – this laptop has me very, very intrigued. It hits all the right buttons, with the only major uncertainty being just how long a relatively small company like this can stay afloat, to ensure a steady stream of future upgrades. It seems anyone can make new modules and new parts for this laptop, though, so hopefully a community of makers springs up around it as well. In any event, I’m hoping to get my hands on a review unit, because we really need to know how well Linux runs on this machine.
Loongson has long been known for their open-source-friendly, MIPS-based chips, but with MIPS now being a dead-end, the Chinese company has begun producing chips using its own “LoongArch” ISA. The first Loongson 3A5000 series hardware was just announced, and thanks to the company apparently using the Phoronix Test Suite and OpenBenchmarking.org, we have some initial numbers. Announced this week, the Loongson 3A5000 is their first LoongArch ISA chip: a quad-core part with clock speeds up to 2.3-2.5 GHz. The Loongson 3A5000 offers a reported 50% performance boost over their prior MIPS-based chips while consuming less power, and now also supports DDR4-3200 memory. The Loongson 3A5000 series is intended for domestic Chinese PCs that don’t rely on foreign IP, and there are also the 3A5000LL processors intended for servers. Performance isn’t even remotely interesting – for now. The Loongson processors will improve by leaps and bounds over the coming years, if only because they will have the backing of the regime. I hope some enterprising people import these to the west, because I’d love to see them in action. Nothing in technology excites me more than odd architectures.
After a month of reverse-engineering, we’re excited to release documentation on the Valhall instruction set, available as a PDF. The findings are summarized in an XML architecture description for machine consumption. In tandem with the documentation, we’ve developed a Valhall assembler and disassembler as a reverse-engineering aid. Valhall is the fourth Arm Mali architecture and the fifth Mali instruction set. It is implemented in the Arm Mali-G78, the most recently released Mali hardware, and Valhall will continue to be implemented in Mali products yet to come. Excellent and important work.
Fifty years ago, IBM introduced the first-ever floppy disk drive, the IBM 23FD, and the first floppy disks. Floppies made punched cards obsolete, and their successors ruled software distribution for the next 20 years. Here’s a look at how and why the floppy disk became an icon. It’s still amazing to me just how quickly they fell out of favour.
The Libre-SOC project, a team of engineers and creative personas aiming to provide a fully open System-on-Chip, has today posted the layout that the team sent for chip fabrication of its OpenPOWER-based processor. Currently being manufactured on TSMC’s 180 nm node, the Libre-SOC processor is a huge achievement in many ways. To get to tape-out, the Libre-SOC team was accompanied by engineering from Chips4Makers and Sorbonne Université, and funded by the NLnet Foundation. Based on IBM’s OpenPOWER instruction set architecture (ISA), the Libre-SOC chip is a monumental achievement for open-source hardware. It’s also the first independent OpenPOWER chip to be manufactured outside IBM in over 12 years. Every component – from hardware design files and documentation to mailing lists and software – is open-sourced and designed to fit with the open-source spirit and ideas. This is an impressive milestone, and I can’t wait until this is ready for more general use. With things like RISC-V and OpenPOWER, there’s a lot of progress being made on truly open source hardware, and that has me very, very excited. This also brings an OpenPOWER laptop closer to being a real thing, and that’s something I’d buy in less than a heartbeat.
Liam Proven posted a good summary of the importance of the PDP and VAX series of computers on his blog. Earlier today, I saw a link on the ClassicCmp.org mailing list to a project to re-implement the DEC VAX CPU on an FPGA. It’s entitled “First new vax in …30 years?” Someone posted it on Hackernews. One of the comments said, roughly, that they didn’t see the significance and could someone “explain it like I’m a Computer Science undergrad.” This is my attempt to reply… Um. Now I feel like I’m 106 instead of “just” 53. OK, so, basically all modern mass-market OSes of any significance derive in some way from 2 historical minicomputer families… and both were from the same company.
Anders Magnusson, writing on the port-vax NetBSD mailing list: Some time ago I ended up in an architectural discussion (RISC vs CISC, etc.) and started to think about the VAX. Even though the VAX is considered the “ultimate CISC”, I wondered if its cleanliness and nice instruction set could still be implemented efficiently enough. Well, the only way to know would be to try to implement it 🙂 I had a 15-year-old demo board with a small low-end FPGA (Xilinx XC3S400), so I just had to learn Verilog and try to implement something. And it just passed EVKAA.EXE. Along with the development of a VAX implementation in an FPGA, discussions arose about possible 64-bit extensions: For userspace, the VAX architecture itself leaves the door open for expanding the word size. The instructions are all defined to use only the part of a register they need, so adding a bunch of ‘Q’ (quadword) instructions is a no-brainer. Argument reference will work as before. The JMP/JSR/RET/… instructions might need a Q counterpart, since they suddenly store/require 8 bytes instead of 4. For the kernel, the hardware structures (SCB, PCB, …) must all be expanded, and memory management changed (though the existing design leaves much to be desired anyway). All this is probably a quite simple update to the architecture. It’s nice to see people still putting work and effort into what is nearly a half-century old, and otherwise obsolete, instruction set.
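The argument in that mailing list post – that widening the word size is cheap because each instruction only touches the part of a register it needs – can be made concrete with a toy sketch. Here's a hedged illustration in Python (the register model is hypothetical, and while ADDL3 is a real VAX instruction, ADDQ3 is one of the proposed ‘Q’ additions, not an existing one):

```python
MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def addl3(regs, src1, src2, dst):
    """Longword add (like VAX ADDL3): only the low 32 bits participate.

    On a hypothetically widened 64-bit VAX, the upper 32 bits of the
    destination register are left untouched, so existing 32-bit code
    keeps working unmodified.
    """
    result = (regs[src1] + regs[src2]) & MASK32
    regs[dst] = (regs[dst] & (MASK64 ^ MASK32)) | result

def addq3(regs, src1, src2, dst):
    """Hypothetical quadword counterpart: same semantics, 64 bits wide."""
    regs[dst] = (regs[src1] + regs[src2]) & MASK64

# Registers modeled as a simple list of 64-bit integers.
regs = [0] * 16
regs[0] = 0x1_0000_0003   # upper half set, lower half = 3
regs[1] = 4
addl3(regs, 0, 1, 0)      # 32-bit add: low half becomes 7, high half kept
addq3(regs, 0, 1, 2)      # 64-bit add over the full register
```

The point of the sketch: widening the registers doesn't disturb the existing longword instructions, which is exactly why the quadword extension is described as a no-brainer for userspace, while control-flow instructions that push addresses (JMP/JSR/RET) need explicit attention.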
For simplicity, let’s say you have a single-CPU system that supports “dynamic frequency scaling”, a feature that allows software to instruct the CPU to run at a lower speed, commonly known as “CPU throttling”. Assume for this scenario that the CPU has been throttled to half-speed for whatever reason – could be thermal, could be energy efficiency, could be due to workload. Finally, let’s say that there’s a program that is CPU-intensive, calculating the Mandelbrot set or something. The question is: what percentage CPU usage should performance monitoring tools report? Should it report 100%, or 50%? This is like asking which side of the bed is the front, and which side is the back – you can make valid arguments either way, and nobody is wrong or right.
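To make the ambiguity concrete, here's a minimal sketch in Python (a hypothetical helper, not taken from any real monitoring tool) showing the two conventions side by side: the time-based view, where a fully busy throttled CPU reports 100%, and the frequency-normalized view, where the same CPU reports 50%.

```python
def cpu_usage(busy_seconds, interval_seconds, cur_freq_hz, max_freq_hz,
              normalize=False):
    """Report CPU usage (%) for one sampling interval.

    normalize=False: time-based convention - a CPU that was never idle
    reports 100%, regardless of its current clock speed.
    normalize=True: frequency-normalized convention - usage is scaled by
    cur_freq/max_freq, so a fully busy half-speed CPU reports 50%.
    """
    usage = busy_seconds / interval_seconds
    if normalize:
        usage *= cur_freq_hz / max_freq_hz
    return 100.0 * usage

# A CPU throttled to half speed, busy for the entire 1-second interval:
time_based = cpu_usage(1.0, 1.0, 1.5e9, 3.0e9)                  # 100.0
normalized = cpu_usage(1.0, 1.0, 1.5e9, 3.0e9, normalize=True)  # 50.0
```

Neither number is wrong; they simply answer different questions – “how often was the CPU not idle?” versus “how much of the machine's peak capacity was actually used?”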