The touchscreen infotainment systems in new cars are a distracting mess

When I’m in charge of a car company, we’re going to have one strict rule about interior design: make it so it doesn’t cause you to crash the car.

You’d think this would already be in effect everywhere, but no. Ever since the arrival of the iPhone, car designers have aspired to replicate that sleek, glassy aesthetic within the cabin. And it never works, because you tend to look at a phone while you use it. In a car, you have this other thing you should be looking at, out there, beyond the high-resolution panoramic screen that separates your face from the splattering june bugs.

If a designer came to me with a bunch of screens, touch pads, or voice-activated haptic-palm-pad gesture controls, I’d trigger a trapdoor that caused the offender to plummet down into the driver’s seat of a Cadillac fitted with the first version of the CUE system—which incorporated a motion sensor that would actually change the screen as your finger approached it. And I’d trigger my trapdoor by turning a knob. I wouldn’t even have to look at it.

I couldn’t agree more. One of the things I dread about ever replacing my 2009 Volvo S80 is the crappy touchscreen that gets added to every car these days, often of dubious quality and with no regard for user interface design or driver safety. For instance, I don’t want to take my eyes off the road just to adjust the temperature of the climate control – there should be a big, easy-to-find knob within arm’s reach.

This just seems extremely unsafe to me.

Subway history: how OS/2 powered the NYC subway for decades

The role of OS/2 in the NYC subway system is more of a conduit. It helps connect the various parts that people use with the parts they don’t. Waldhauer notes, “There are no user-facing applications for OS/2 anywhere in the system. OS/2 is mainly used as the interface between a sophisticated mainframe database and the simple computers used in subway and bus equipment for everyday use. As such, the OS/2 computers are just about everywhere in the system.”

At this point, we’re talking about an OS designed in the late 80s, released in the early 90s, as part of a difficult relationship between two tech giants. The MTA had to ignore most of this because it had already made its decision and changing course would cost a lot of money.

It’s sad that OS/2 – in its current form available as ArcaOS 5.0 – has a relatively steep entry price, because it’s an incredibly fun and unique operating system to play around with. I’d love to set up a VM just to mess around with it, but at $129, I really can’t justify that.

R.T. Russell’s Z80 BBC BASIC is now open source

More news from the CP/Mish front:

As part of the work I’ve been doing with cpmish I’ve been trying to track down the copyright holders of some of the more classic pieces of CP/M software and asking them to license it in a way that allows redistribution. One of the people I contacted was R.T. Russell, the author of the classic Z80 BBC BASIC, and he very kindly sent me the source and agreed to allow it to be distributed under the terms of the zlib license. So it’s now open source!

I’ve made the 37-year-old source build and added it to the cpmish repository; it works fine and is shipping with the cpmish disk images.

Ada/SPARK on Genode

The Genode OS Framework is written in C++, but has support (to one degree or another) for writing components in several other languages. Perhaps the foremost of these is Ada/SPARK, thanks in part to active development and support by Componolit, which maintains the Ada/SPARK toolset for Genode (SPARK is a subset of Ada designed for formal verification).

On Genodians.org, there have been three recent articles exploring the use of Ada/SPARK on Genode, each approaching the subject from a different angle.

First, in “C++ and SPARK as a Continuum”, Genode co-founder Norman Feske shows how to create hybrid C++/SPARK components. There are a few restrictions, but the results fit well with the Genode component philosophy.

By regarding C++ and SPARK as a continuum rather than a black-and-white decision, we can use SPARK at places where we regard formal verification as most valuable while not restricting Genode components to be entirely static. It gives us Genode developers the chance to slowly embrace the application of formal methods and recognize their benefit in practice.

Second, Martin Stein takes the first steps toward converting (a fork of) the in-house “base-hw” kernel to Ada in “Spunky: A Kernel Using Ada – Part 1: RPC”. This is just the beginning of this project, so stay tuned.

What should I say? Thanks to the almost pedantic need for correctness of the Ada compiler and the seemingly endless chain of complaints it kept throwing at me, the final image worked out of the box and put a big smile on my face.

Third, Johannes Kliemann of Componolit dives into the deep end of the pool in “SPARK as an Extremum: Components in Pure SPARK”, which describes the arduous journey that led them to create an API for writing components entirely in SPARK.

With the realization that generated bindings are not feasible and both binding and API need to be created by hand, previous API limitations such as functions that are not allowed in SPARK could be removed. This API should not resemble any characteristics of any language or platform that implements it. The goal was to create a pure SPARK API for asynchronous verified components. The result is the Componolit Ada Interface, an interface collection that provides component startup, shutdown and platform interaction.

As you can see, even though the Genode core has become pretty mature, there is still a lot of interesting research and experimentation being done in the eternal quest for more trustworthy computing.

Windows 10 build 18917 begins splitting the Shell from the OS

For those who don’t know, Windows Core OS is supposed to be a new version of Windows that can adapt more easily to any kind of screen, thanks in part to a new infrastructure for the Shell, which separates it from the system itself. This means that Microsoft can create different Windows experiences for different form factors such as Lenovo’s foldable ThinkPad, while using the same core components as a base.

Yesterday, Microsoft released Windows 10 build 18917 to the Fast Ring, and while it included some welcome improvements, perhaps the most interesting change went unnoticed. Twitter user Albacore has discovered that with this build, the company has started work on separating the Shell from the rest of Windows. There’s now a Shell Update Agent, which is meant to be able to update the Shell on demand.

Windows and a possible new shell are like multiplying by 0.5 over and over – you never quite get to zero. There have been so many rumours and leaks for so long now, one has to wonder if it will ever actually happen.

SwiftUI and Catalyst: Apple executes its invisible transition strategy

And then there’s SwiftUI, which may be a harder concept for regular users to grasp, but it’s a huge step on Apple’s part. This is Apple’s ultimate long game—an entirely new way to design and build apps across all of Apple’s platforms, based on the Swift language (introduced five years ago as yet another part of Apple’s long game).

In the shorter term, iOS app developers will be able to reach the Mac via Catalyst. But in the longer term, Apple is creating a new, unified development approach to all of Apple’s devices, based in Swift and SwiftUI. Viewed from this perspective, Catalyst feels more like a transitional technology than the future of Apple’s platforms.

Apple’s own SwiftUI page provides more details. This is the future of application development across all of Apple’s platforms, so if you have a vested interest in the Apple world, you’d do well to get yourself acquainted with it.

Google defends Chrome ad blocking changes

For a while now, Google has been working on changing the way Chrome extensions work. Among other changes, the Web Request API will be replaced by the Declarative Net Request API, which is much stricter about the request data extensions can see. However, current ad blockers rely on the Web Request API, and the replacement limits what these extensions can do.
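The practical difference between the two APIs is easy to see in code. With the blocking Web Request API, an extension registers a listener whose own code runs on every single request and decides whether to cancel it; with the Declarative Net Request API, the extension hands Chrome a list of rules up front and Chrome applies them itself, so the extension never sees the requests at all. The sketch below illustrates both models – the call shapes follow the documented extension APIs, but treat it as illustrative rather than production code, and ads.example.com is just a placeholder domain.

```typescript
// Extension API global; a real project would use the @types/chrome typings.
declare const chrome: any;

// Old model: blocking webRequest (needs the "webRequest" and
// "webRequestBlocking" permissions). The listener runs for every request
// and can apply arbitrary logic, e.g. matching against large filter lists.
chrome.webRequest.onBeforeRequest.addListener(
  (details: { url: string }) => {
    // Cancel the request if it matches our placeholder ad host.
    return { cancel: details.url.includes("ads.example.com") };
  },
  { urls: ["<all_urls>"] },
  ["blocking"]
);

// New model: declarativeNetRequest (needs the "declarativeNetRequest"
// permission). The extension registers static rules and Chrome evaluates
// them on its own, without handing any request data to the extension.
chrome.declarativeNetRequest.updateDynamicRules({
  removeRuleIds: [1], // drop any previous version of this rule first
  addRules: [
    {
      id: 1,
      priority: 1,
      action: { type: "block" },
      condition: {
        urlFilter: "||ads.example.com^",
        resourceTypes: ["script", "image", "xmlhttprequest"],
      },
    },
  ],
});
```

The second model is what the privacy and performance argument rests on, but it is also exactly what worries ad blocker developers: static rules, plus the caps Chrome places on how many rules an extension may register, shut out the more dynamic filtering techniques their extensions rely on today.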

Google has written a blog post explaining their reasoning. It concludes:

This has been a controversial change since the Web Request API is used by many popular extensions, including ad blockers. We are not preventing the development of ad blockers or stopping users from blocking ads. Instead, we want to help developers, including content blockers, write extensions in a way that protects users’ privacy.

You can read more about the Declarative Net Request API and how it compares to the Web Request API here.

We understand that these changes will require developers to update the way in which their extensions operate. However, we think it is the right choice to enable users to limit the sensitive data they share with third-parties while giving them the ability to curate their own browsing experience. We are continuing to iterate on many aspects of the Manifest V3 design, and are working with the developer community to find solutions that both solve the use cases extensions have today and keep our users safe and in control.

I don’t doubt that Google’s Chrome engineers are making these changes because they genuinely believe they make the browser better and safer. I’m more concerned about the bean counters, the managers, and Google’s omnipresent ad sales people, who will be all too eager to abuse Chrome’s popularity to make ad blocking harder.

AMD Zen 2 microarchitecture analysis: Ryzen 3000 and EPYC Rome

We have been teased with AMD’s next generation processor products for over a year. The new chiplet design has been heralded as a significant breakthrough in driving performance and scalability, especially as it becomes increasingly difficult to create large silicon with high frequencies on smaller and smaller process nodes. AMD is expected to deploy its chiplet paradigm across its processor line, through Ryzen and EPYC, with those chiplets each having eight next-generation Zen 2 cores. Today AMD went into more detail about the Zen 2 core, providing justification for the +15% clock-for-clock performance increase over the previous generation that the company presented at Computex last week.

The 16c/32t Ryzen 9 3950X looks quite attainable at $750 – a price that is sure to come down after launch.

CERN’s Microsoft Alternatives project

CERN has started a project to replace all of the closed source, proprietary software that it uses with open alternatives.

Given the collaborative nature of CERN and its wide community, a high number of licenses are required to deliver services to everyone, and when traditional business models on a per-user basis are applied, the costs per product can be huge and become unaffordable in the long term.

A prime example is that CERN has enjoyed special conditions for the use of Microsoft products for the last 20 years, by virtue of its status as an “academic institution”. However, recently, the company has decided to revoke CERN’s academic status, a measure that took effect at the end of the previous contract in March 2019, replaced by a new contract based on user numbers, increasing the license costs by more than a factor of ten. Although CERN has negotiated a ramp-up profile over ten years to give the necessary time to adapt, such costs are not sustainable.

I always find it strange when scientific institutions funded by public money get locked in by proprietary software vendors, to the point where they are so reliant on them it becomes virtually impossible to opt for alternatives. Good on CERN – although a bit late – for trying to address this issue.

CP/Mish: an open source CP/M reimplementation

CP/Mish is an open source sort-of-CP/M distribution for the 8080 and Z80 architectures (although for technical reasons currently it only works on the Z80).

It contains no actual Digital Research code. Instead, it’s a collection of third party modules which replicate it, all with proper open source licenses, integrated with a build system that should make it easy to work with.

[…]

CP/Mish is not CP/M, but it’s enough like CP/M to run CP/M programs and do CP/M things. And, if you want the real CP/M, CP/Mish uses the standard interfaces so you can just drop in a Digital Research BDOS and CCP and it’ll work.

Some companies bet on CP/M, some bet on DOS. We know who won, and who lost. Still, CP/M inspired a lot of DOS, so anybody with experience with DOS should feel right at home on CP/M.

Apple unveils new Mac Pro

The all-new Mac Pro is an absolute powerhouse with up to 28-core Intel Xeon processors, up to 1.5TB of ECC RAM, up to 4TB of SSD storage, up to AMD Radeon Pro Vega II Duo graphics with 64GB of HBM2 memory, and eight PCIe expansion slots for maximum performance, expansion, and configurability.

The new design includes a stainless steel frame with smooth handles and an aluminum housing that lifts off for 360-degree access to the entire system. The housing also features a unique lattice pattern, which has already been referred to as a cheese grater, to maximize airflow and quiet operation.

I love this machine. Not because I need it, will buy it, or even understand the kind of professional workflows people use these for – but because I’ve always had a soft spot for high-performance, no-compromises professional workstations. Whether it be the Sun or SGI UNIX workstations of the ’90s, the PowerMac and Mac Pro machines of the early 2000s, or the less flashy but just as stunning powerhouses that HP, for instance, makes with its Z line of workstations, such as the Z8.

This new Mac Pro fits that professional workstation bill more than probably any PowerMac or Mac Pro before it, and I love it for it. While not nearly as extravagant, it reminds me of SGI’s most powerful MIPS workstation, the SGI Tezro, which had a list price of tens of thousands of dollars. In that light, I’m not even remotely surprised that this is an expensive machine – anybody who has spent even a modicum of time in the world of professional workstations knows how expensive these machines are, and why.

In short, if you are appalled by the price, this machine is not for you.

Apple also unveiled a new professional display, and while the specifications look impressive, the stand that’s sold separately for 999 dollars has already become a meme. I’ll leave the display talk to those who know more about the kinds of demands professionals place upon displays.

KDE Usability and Productivity: are we there yet?

KDE’s Usability & Productivity initiative is now almost two years old, and I’ve been blogging weekly progress for a year and a half. So I thought it would be a good time to take stock of the situation: how far we’ve come, and what’s left to do. Let’s dive right in!

This initiative has been a lot of fun to follow. If you always update to the latest KDE release – for instance by using KDE Neon – you’ll see the weekly fixes and polishes highlighted in the blog posts appear on your machine very quickly.

Apple announces iOS 13, iPadOS, macOS Catalina, and so much more

I wasn’t available for about a week because I was emigrating from the Netherlands to Sweden, so I’ve clearly got some catching up to do when it comes to Apple’s WWDC keynote. Apple detailed iOS 13 for the iPhone, the renamed iPadOS for the iPad, macOS Catalina for the Mac, updates for watchOS and tvOS, and so much more.

I’m not going into great detail on everything here, since you’ve most likely already delved deep into these new releases if you’re interested. I do wish to mention that I’m insanely happy with mouse and multi-window support for the iPad – it turns my iPad Pro into a very useful and powerful device, and I can’t wait for the first keyboards with built-in trackpads to come onto the scene.

I’ll keep the other big announcement – the new Mac Pro – for a separate item and discussion thread.

KDE Privacy Sprint, 2019 edition

From the 22nd to 26th of March, members of the KDE Privacy team met up in Leipzig, Germany, for our Spring 2019 sprint.

During the sprint, we floated a lot of different ideas that sparked plenty of discussions. The notion of privacy encompasses a wide range of topics, technologies and methods, so it is often difficult to decide what to focus on. However, all the aspects we worked on are important. We ended up tackling a variety of issues, and we are confident that our contributions will improve data protection for all users of KDE software.

Quite a few ideas became reality for upcoming KDE releases. Good work.

Microsoft’s Universal Windows Platform app dream is dead and buried

Microsoft had a dream with Windows 8 that involved universal Windows apps that would span across phones, tablets, PCs, and even Xbox consoles. The plan was that app developers could write a single app for all of these devices, and it would magically span across them all. This dream really started to fall apart after Windows Phone failed, but it’s well and truly over now.

Microsoft has spent years pushing developers to create special apps for the company’s Universal Windows Platform (UWP), and today, it’s putting the final nail in the UWP coffin. Microsoft is finally allowing game developers to bring full native Win32 games to the Microsoft Store, meaning the many games that developers publish on popular stores like Steam don’t have to be rebuilt for UWP.

The concept of UWP was sound, but on Windows it had to compete with Win32, and on mobile, Windows Phone was an abject failure. There just wasn’t any developer uptake.

Google to restrict modern ad blocking Chrome extensions to enterprise users

Google is essentially saying that Chrome will still have the capability to block unwanted content, but this will be restricted to only paid, enterprise users of Chrome. This is likely to allow enterprise customers to develop in-house Chrome extensions, not for ad blocking usage.

For the rest of us, Google hasn’t budged on their changes to content blockers, meaning that ad blockers will need to switch to a less effective, rules-based system, called “declarativeNetRequest.”

I’m glad I already switched to Firefox – a browser that is not tied to a platform vendor (like Safari) or run by an ad company (like Chrome) – and I suggest you do the same.

Arm’s new Cortex-A77 CPU micro-architecture: evolving performance

It’s a hardware day today, and since AnandTech is the most authoritative source on stuff like this, we’ve got more from them. Arm announced its next big micro-architecture, the Cortex-A77, which will find its way into flagship smartphones soon.

Overall the Cortex-A77 announcement today isn’t quite as big of a change as what we saw last year with the A76, nor is it as big a change as today’s new announcement of Arm’s new Valhall GPU architecture and G77 GPU IP.

However, what Arm managed to achieve with the A77 is a continued execution of their roadmap, which is extremely important in the competitive landscape. The A76 delivered on all of Arm’s promises and ended up being an extremely performant core, all while remaining astonishingly efficient as well as having a clear density lead over the competition. Arm’s major clients are still heavily focused on having the best PPA in their products, and Arm delivers in this regard.

The one big surprise about the A77 is that its floating point performance boost of 30-35% is quite a lot higher than I had expected of the core, and in the mobile space, web-browsing is the killer-app that happens to be floating point heavy, so I’m looking forward to seeing how future SoCs with the A77 will be able to perform.

As linked above, the company also announced its next-generation mobile GPU architecture.

AMD teases first Navi GPU products: RX 5700 Series launches in July

More AMD news – this time on the graphics front, where the company is still catching up to NVIDIA.

While the bulk of this morning’s AMD Computex keynote has been on AMD’s 3rd generation Ryzen CPUs and their underlying Zen 2 architecture, the company also took a moment to briefly touch upon its highly anticipated Navi GPU architecture and associated family of products. AMD didn’t go too deep here, but they have given us just enough to be tantalized ahead of a full reveal in the not too distant future. The first Navi cards will be the Radeon RX 5700 series, which are launching in July and on an architectural level will offer 25% better performance per clock per core and 50% better power efficiency than AMD’s current-generation Vega architecture. The products will also be AMD’s first video cards using faster GDDR6 memory. Meanwhile AMD isn’t offering much in the way of concrete details on performance, but they are showing it off versus NVIDIA’s GeForce RTX 2070 in the AMD-favorable game Strange Brigade.

Not that many details just yet, so it’s safe to assume AMD is not yet ready to truly take on NVIDIA. That being said, like with Zen and Ryzen, give AMD a few generations, and NVIDIA might finally be facing real competition.

AMD Ryzen 3000 announced: five CPUs, 12 cores for $499, up to 4.6 GHz, PCIe 4.0

Today at Computex, AMD CEO Dr. Lisa Su is announcing the raft of processors it will be launching on its new Zen 2 chiplet-based microarchitecture. Among other things, AMD is unveiling its new Ryzen 9 product tier, which it is using for its 12-core Ryzen 9 3900X processor, and which runs at 4.6 GHz boost. All of the five processors will be PCIe 4.0 enabled, and while they are being accompanied by the new X570 chipset launch, they still use the same AM4 socket, meaning some AMD 300 and 400-series motherboards can still be used. We have all the details inside.

If the first few waves of Zen-based processors put AMD back on the map, this is the wave that will propel the company beyond Intel on all fronts – single-core performance, multicore performance, and price – and in every segment, from workstations to gaming. Intel will probably be trailing AMD on all these fronts until at least 2022.

AMD’s turnaround over the past few years is nothing short of stunning, and I’m quite sure my next machine will be rocking team red once again.