Samsung is porting Tizen to RISC-V

In case you missed it at the 2024 Samsung Developer Conference today, our partners at Samsung Visual Display discussed the work they have been doing to port the Tizen operating system to RISC-V. Tizen is an open-source operating system (OS) used in many Samsung smart TVs, and it makes sense that they would look to the fast-growing, global open-standard RISC-V to develop future systems. The presentation showed the results of efforts at both companies to expand the capabilities of the already robust Tizen approach. At the event they also demonstrated a TV running on RISC-V, using a SiFive Performance P470-based core.

↫ John Ronco

The announcement is sparse on details, and there isn’t much more to add than this, but the reality is that of course Samsung was going to port Tizen to RISC-V. The growing architecture is bound to compete with the industry standard ARM in a variety of market segments, and it makes perfect sense to have your TV and other (what we used to call) embedded operating systems ready to go.

Windows 11 version 24H2 is now available for download

Windows 11 2024 Update, also known as version 24H2, is now publicly available. Microsoft announced the rollout alongside the new AI-powered features that are coming soon to Windows Insiders with Copilot+ PCs and Copilot upgrades.

Unlike recent Windows 11 updates, version 24H2 is a “full operating system swap,” so updating to it will take more time than usual. What is going as usual is the way the update is being offered to users. Microsoft is gradually rolling out the update to “seekers” with Windows 11 versions 22H2 and 23H2. That means you need to go to the Settings app and manually request the update.

↫ Taras Buria at Neowin

I’ve said it a few times before, but I’ve completely lost track of how Windows releases and updates work at this point. I thought this version and its features had been available for ages already, but apparently I was wrong, and it’s only being released now. For now, you can get it by opting in through Windows Update, while the update will be pushed to everyone later on. I really wish Microsoft would move to a simpler, more straightforward release model and cadence, but alas.

Anyway, this version brings all the AI/ML Copilot stuff, Wi-Fi 7 support, improvements to File Explorer and the system tray, the addition of the sudo command, and more. The changes to Explorer are kind of hilarious to me, as Microsoft seems to have finally figured out labels are a good thing – the weird copy/cut/paste buttons in the context menu have labels now – but this enhanced context menu still has its own context menu. Explorer now also comes with support for more compression formats, which is a welcome change in 2007. To gain access to the new sudo command, go to Settings > System > For developers and enable the option.

For the rest, this isn’t a very impactful release, and will do little to convince the much larger Windows 10 userbase to switch to Windows 11, something that’s going to be a real problem for Microsoft in the coming year.

Nobody knows what happened within the MMC Association in 1998

In 1999, some members from the MMC Association decided to split and create SD Association. But nobody seems to exactly know why.

↫ sdomi’s webpage

I don’t even know how to summarise any of this research, because it’s not only a lot of information, it’s also deeply bureaucratic and boring – it takes a certain kind of person to enjoy this sort of stuff, and I happen to fit the bill. This is a great read.

FreeBSD to invest in laptop support

FreeBSD is going to take its desktop use quite a bit more seriously going forward.

FreeBSD has long been a top choice for IT professionals and organizations focused on servers and networking, and it is known for its unmatched stability, performance, and security. However, as technology evolves, FreeBSD faces a significant challenge: supporting modern laptops. To address this, the FreeBSD Foundation and Quantum Leap Research have committed $750,000 to improve laptop support, a strategic investment that will be pivotal in FreeBSD’s future.

↫ FreeBSD Foundation blog

So, what are they going to spend this big bag of money on? Well, exactly the kind of things you’d expect. They want to improve and broaden support for various wireless chipsets, add support for modern power-saving processor states, and make sure laptop-specific features like touchpad gestures, specialty buttons, and so on work properly. On top of that, they want to invest in better graphics driver support for Intel and AMD, as well as make it more seamless to switch between various audio devices, which is especially crucial on laptops, where people might reasonably be expected to use headphones.

In addition, while not specifically related to laptops, FreeBSD also intends to invest in support for heterogeneous cores in its scheduler and improvements to the bhyve hypervisor. Virtualisation is, of course, not just something for large desktops and servers, but also something laptop users might turn to for certain tasks and workloads.

The FreeBSD project will be working not just with Quantum Leap Research, but also with various hardware makers to help bring FreeBSD’s laptop support to a more modern, plug-and-play state. Additionally, the cash injection mentioned above is not a hard ceiling; additional contributions from both individuals and larger organisations are obviously welcome, and of course, if you can contribute code, bug reports, documentation, and so on, you’re also more than welcome to jump in.

IBM PC 5150 model numbers

Recently I came across a minor mystery—the model numbers of the original IBM PC. For such a pivotal product, there is remarkably little detailed original information from the early days.

↫ Michal Necasek

Count me surprised. When I think IBM, I think meticulously documented and detailed bureaucracy, where every screw, nut, and bolt is numbered, documented, and tracked – so much so, in fact, that this all-American company even managed to impress the Germans. You’d expect IBM, of all companies, to have overly detailed lists of every IBM PC it ever designed, manufactured, and sold, but as it turns out, it’s actually quite hard to assemble a complete list of the early IBM PCs the company sold.

The biggest problem is the models from before 1983, since before that year the IBM PC does not appear in IBM’s detailed archive of announcements. As such, Michal Necasek had to dig into random bits of IBM documentation to assemble references to those earlier models, and while he certainly didn’t find every single one of them, it’s a great start, and others can surely pick up the search from here.

Arch Linux and Valve deepen ties with direct collaboration

When Valve took its second major crack at making Steam machines happen, in the form of the Steam Deck, one of the big surprises was the company’s choice to base the Linux operating system the Steam Deck uses on Arch Linux, instead of the Debian base it was using before. It seems this choice is not only benefiting Valve, but also Arch.

We are excited to announce that Arch Linux is entering into a direct collaboration with Valve. Valve is generously providing backing for two critical projects that will have a huge impact on our distribution: a build service infrastructure and a secure signing enclave. By supporting work on a freelance basis for these topics, Valve enables us to work on them without being limited solely by the free time of our volunteers.

↫ Levente Polyak

This is great news for Arch, but of course, also for Linux in general. The work distributions do to improve their user experience tends to be picked up by other distributions, and it’s clear that Valve’s contributions have been vast. With these collaborations, Valve is also showing it’s in it for the long term, and not just interested in taking from the community, but also in giving, which is good news for the large number of people now using Linux for gaming.

The Arch team highlights that these projects will follow the regular administrative and decision-making processes within the distribution, so we’re not looking at parallel efforts forced upon everyone else without a say.

California’s new law forces digital stores to admit you’re just licensing content, not buying it

California Governor Gavin Newsom has signed a law (AB 2426) to combat “disappearing” purchases of digital games, movies, music, and ebooks. The legislation will force digital storefronts to tell customers they’re just getting a license to use the digital media, rather than suggesting they actually own it.

When the law comes into effect next year, it will ban digital storefronts from using terms like “buy” or “purchase,” unless they inform customers that they’re not getting unrestricted access to whatever they’re buying. Storefronts will have to tell customers they’re getting a license that can be revoked as well as provide a list of all the restrictions that come along with it. Companies that break the rule could be fined for false advertising.

↫ Emma Roth at The Verge

A step in the right direction, but a lot more is definitely needed. This law in particular seems to leave a lot of wiggle room for companies to keep using the “purchase” term while hiding the disclosure somewhere in the very, very small fine print. I would much rather a law like this just straight up ban the use of the term “purchase” and similar terms when all you’re getting is a license. Why allow them to keep lying about the nature of the transaction in exchange for some fine print somewhere?

The software industry in particular has been enjoying a free ride when it comes to consumer protection laws, and the kind of malpractice, lack of accountability, and laughable quality control it gets away with would have any other industry shut down in weeks for severe negligence. We’re taking baby steps, but it seems we’re finally arriving at a point where basic consumer protection laws and rights are being applied to software, too.

Several decades too late, but at least it’s something.

COSMIC alpha 2 released

System76, the premier Linux computer manufacturer and creator of the COSMIC desktop environment, has updated COSMIC’s Alpha release to Alpha 2. The latest release includes more Settings pages, the bulk of functionality for COSMIC Files, highly requested window management features, and considerable infrastructure work for screen reader support, as well as some notable bug fixes.

↫ system76’s blog

The pace of development for COSMIC remains solid, even after the first alpha release. This second alpha keeps adding things considered basic for any desktop environment, such as settings panels for power and battery, sound, displays, and many others. It also brings window management support for focus-follows-cursor and cursor-follows-focus, which will surely please the very specific, small slice of people who swear by those. Also, you can now disable the super key.

A major new feature that I’m personally very happy about is the “adjust density” feature. COSMIC will allow you to adjust the spacing between the various user interface elements, so you can choose to squeeze more information onto your screen – the lack of density being one of my major complaints about modern UI design in macOS, Windows, and GNOME. Being able to adjust this to your liking is incredibly welcome, especially combined with COSMIC’s ability to change from ‘rounded’ UI elements to ‘square’ UI elements.

The file manager has also been vastly, vastly improved, tons of bugs were fixed, and much, much more. It seems COSMIC is on the right path, and I can’t wait to try out the first final result once it lands.

Tcl/Tk 9.0 released

Tcl 9.0 and Tk 9.0 – usually lumped together as Tcl/Tk – have been released. Tcl 9.0 brings 64-bit compatibility so it can address data values larger than 2 GB, better Unicode support, support for mounting ZIP files as file systems, and much, much more. Tk 9.0 gets support for scalable vector graphics, much better platform integration with things like system trays, gestures, and so on, and much more.

Notice

Just want to let y’all know that my family and I have been hit hard by bronchitis these past two weeks, and my own recovery in particular is going quite slowly (our kids are healthy again, and my wife is recovering quite well!). As such, I haven’t been able to do much OSNews work.

I hope things will finally clear up a bit over the weekend so that I can resume normal service come Monday. Enjoy your weekend, y’all!

Eliminating memory safety vulnerabilities at the source

The push towards memory-safe programming languages is strong, and for good reason. However, especially for bigger projects with a lot of code that would potentially need to be rewritten or replaced, you might question whether all the effort is even worth it, particularly if all the main contributors would also need to be retrained. Well, it turns out that merely focusing on writing new code in a memory-safe language will drastically reduce the number of memory safety issues in a project as a whole.

Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages.

This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective.

↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

In this blog post, Google highlights that even if you only write new code in a memory-safe language, while only applying bug fixes to old code, the number of memory safety issues will decrease rapidly, even as the total amount of code written in unsafe languages keeps growing. This is because vulnerabilities decay exponentially – in other words, the older the code, the fewer vulnerabilities it’ll have.
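To make that mechanism a bit more concrete, here’s a tiny back-of-the-envelope simulation. Every number in it is invented purely for illustration – it is not Google’s model or data – but the shape of the result, a steep drop in the memory-safety share even though the unsafe code never shrinks, is the effect the post describes.

```python
# Toy model of the "Safe Coding" effect described above (illustrative numbers,
# not Google's actual data): all new code is memory-safe, old unsafe code is
# only maintained, and memory-safety vulnerabilities decay as that code matures.

UNSAFE_MLOC = 50.0              # memory-unsafe code at year 0 (millions of lines)
NEW_SAFE_MLOC_PER_YEAR = 10.0   # memory-safe code added every year
MEM_VULN_DENSITY = 20.0         # memory-safety vulns per MLOC of *fresh* unsafe code
OTHER_VULN_DENSITY = 5.0        # non-memory-safety vulns per MLOC of any code
DECAY = 0.4                     # fraction of remaining memory-safety vulns fixed per year

mem_vulns = UNSAFE_MLOC * MEM_VULN_DENSITY
safe_mloc = 0.0

for year in range(7):
    total_mloc = UNSAFE_MLOC + safe_mloc
    other_vulns = total_mloc * OTHER_VULN_DENSITY
    share = 100.0 * mem_vulns / (mem_vulns + other_vulns)
    print(f"year {year}: {share:4.1f}% of vulnerabilities are memory-safety issues")
    mem_vulns *= (1.0 - DECAY)           # old unsafe code matures, its vulns decay
    safe_mloc += NEW_SAFE_MLOC_PER_YEAR  # all new code is memory-safe
```

With these made-up parameters the memory-safety share falls from roughly 80% to under 10% in six years, even though not a single line of unsafe code was rewritten – it simply stopped being new.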

In Android, for instance, using this approach, the percentage of memory safety vulnerabilities dropped from 76% to 24% over 6 years, which is a great result and something quite tangible.

Despite the majority of code still being unsafe (but, crucially, getting progressively older), we’re seeing a large and continued decline in memory safety vulnerabilities. The results align with what we simulated above, and are even better, potentially as a result of our parallel efforts to improve the safety of our memory unsafe code. We first reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping.

↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

What this shows is that a large project, like, say, the Linux kernel, for no particular reason whatsoever, doesn’t need to replace all of its code with, say, Rust, again, for no particular reason whatsoever, to reap the benefits of a modern, memory-safe language. Even by focusing on memory-safe languages only for new code, you will still exponentially reduce the number of memory safety vulnerabilities. This is not a new discovery, as it’s something observed and confirmed many times before, and it makes intuitive sense, too; older code has had more time to mature.

What happened to the Japanese PC platforms?

The other day a friend asked me a pretty interesting question: what happened to all those companies who made those Japanese computer platforms that were never released outside Japan? I thought it’d be worth expanding that answer into a full-size post.

↫ Misty De Meo

Japan had a number of computer makers that sold platforms that looked and felt like western PCs, but were actually quite different hardware-wise, and incompatible with the IBM PC. None of these exist anymore today, and the reason is simple: Windows 95. The Japanese platforms that were compatible enough with the IBM PC to get a Windows 95 port turned into commodities with little to distinguish them from regular IBM PCs, and the odd platform that didn’t use an x86 chip at all – like the X68000 – didn’t get a Windows port and simply died off.

The one platform mentioned in this article that I had never heard of was the FM Towns, made by Fujitsu, which had its own graphical operating system called Towns OS. The FM Towns machines were notable and unique at the time in that Towns OS was the first operating system to boot from CD-ROM, and it just so happens that Joe Groff published an article earlier this year detailing this boot process, including a custom bootable image he made.

Here in the west we mostly tend to remember the PC-98 and X68000 platforms for their gaming catalogs and stunning designs, but that’s like only remembering the IBM PC for its own gaming catalog. These machines weren’t just glorified game consoles – they were full-fledged desktop computers used for the same boring work stuff we used the IBM PC for, and it truly makes me sad that I can’t read a single character of Japanese, so a unique operating system like Towns OS will always remain a curiosity for me.

Microsoft deprecates Windows Server Update Services, suggests cloud services instead

As part of our vision for simplified Windows management from the cloud, Microsoft has announced deprecation of Windows Server Update Services (WSUS). Specifically, this means that we are no longer investing in new capabilities, nor are we accepting new feature requests for WSUS. However, we are preserving current functionality and will continue to publish updates through the WSUS channel. We will also support any content already published through the WSUS channel.

↫ Nir Froimovici

What an odd feature to deprecate. Anyone with a large enough fleet of machines probably makes use of Windows Server Update Services, as it adds some much-needed centralised control to the downloading and deployment of Windows updates, so you can do localised partial rollouts for testing – which, as the CrowdStrike debacle showed us once more, is quite important. WSUS also happens to be a tool that is set up and run locally, instead of in the cloud, and that’s where we get to the real reason WSUS is being deprecated.

Microsoft is advising IT managers who use WSUS to switch to Microsoft’s alternatives, like Windows Autopatch, Microsoft Intune, and Azure Update Manager. These all happen to run in the cloud, giving up that control WSUS provided by running locally, and they’re not free either – they’re subscription services, of course. I mean, technically WSUS isn’t free either as it’s part of Windows Server, but these cloud services come on top of the cost of Windows Server itself.

Nobody escapes the relentless march of subscription costs.

Disable Sequoia’s monthly screen recording permission prompt

The widely reported “foo is requesting to bypass the system private window picker and directly access your screen and audio” prompt in Sequoia (which Apple has moved from daily to weekly to now monthly) can be disabled by quitting the app, setting the system date far into the future, opening and using the affected app to trigger the nag, clicking “Allow For One Month”, then restoring the correct date.

↫ tinyapps.org blog

Or, and this is a bit of a radical idea, you could use an operating system that doesn’t infantilise its users.

Qualcomm wants to buy Intel

On Friday afternoon, The Wall Street Journal reported Intel had been approached by fellow chip giant Qualcomm about a possible takeover. While any deal is described as “far from certain,” according to the paper’s unnamed sources, it would represent a tremendous fall for a company that had been the most valuable chip company in the world, based largely on its x86 processor technology that for years had triumphed over Qualcomm’s Arm chips outside of the phone space.

↫ Richard Lawler and Sean Hollister at The Verge

Either Qualcomm is only interested in buying certain parts of Intel’s business, or we’re dealing with someone trying to mess with stock prices for personal gain. The idea of Qualcomm acquiring Intel seems entirely outlandish to me, and that’s not even taking into account that regulators will probably have a thing or two to say about this. The one thing such a crazy deal would have going for it is that it would create a pretty strong and powerful all-American chip giant, which is a PR avenue the companies might explore if this is really serious.

One of the most valuable assets Intel has is the x86 architecture and the associated patents and licensing deals, and the immense market power that comes with those. Perhaps Qualcomm is interested in designing x86 chips, or, more likely, perhaps they’re interested in all that sweet, sweet licensing money they could extract by allowing more companies to design and sell x86 processors. The x86 market currently consists almost exclusively of Intel and AMD, a situation which may be leaving a lot of licensing money on the table.

Pondering aside, I highly doubt this is anything other than an overblown, misinterpreted story.

Slowly booting full Linux on the Intel 4004 for fun, art, and absolutely no profit

Can you run Linux on the Intel 4004, the first commercially produced microprocessor, released to the world in 1971? Well, Dmitry Grinberg, the genius engineer who got Linux to run on all kinds of incredibly underpowered hardware, sought to answer this very important question. In short, yes, you can run Linux on the 4004, but much as with other extremely limited and barebones chips, you have to get… Creative. Very creative.

Of course, Linux cannot and will not boot on a 4004 directly. There is no C compiler targeting the 4004, nor could one be created due to the limitations of the architecture. The amount of ROM and RAM that is addressable is also simply too low. So, same as before, I would have to resort to emulation. My initial goal was to fit into 4KB of code, as that is what an unmodified unassisted 4004 can address. 4KB of code is not much at all to emulate a complete system. After studying the options, it became clear that MIPS R3000 would be the winner here. Every other architecture I considered would be harder to emulate in some way. Some architectures had arbitrarily-shifted operands all the time (ARM), some have shitty addressing modes necessitating that they would be slow (RISCV), some would need more than 4KB to even decode instructions (x86), and some were just too complex to emulate in so little space (PPC). … so … MIPS again… OK!

↫ Dmitry Grinberg

This is just one very small aspect of this massive undertaking, and the article and videos accompanying his success are incredibly detailed and definitely not for the faint of heart. The amount of skill, knowledge, creativity, and persistence on display here is stunning, and many of us can only dream of being able to do stuff like this. I absolutely love it.
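To give a rough sense of why a fixed-width instruction set like the MIPS R3000 is attractive when every byte of interpreter code counts, here is a deliberately simplified decode sketch. It is purely illustrative and has nothing to do with Grinberg’s actual emulator; the opcode and function values are the standard MIPS ones, everything else is made up for the example.

```python
# Minimal illustration (not Grinberg's code) of why MIPS R3000 decodes cheaply:
# every instruction is exactly 32 bits wide, and the fields always sit at fixed
# bit positions, so a handful of shifts and masks identifies the operation.

def decode(instr: int) -> str:
    opcode = (instr >> 26) & 0x3F   # bits 31..26
    rs     = (instr >> 21) & 0x1F   # bits 25..21
    rt     = (instr >> 16) & 0x1F   # bits 20..16
    rd     = (instr >> 11) & 0x1F   # bits 15..11
    funct  =  instr        & 0x3F   # bits  5..0 (only meaningful for opcode 0)
    imm    =  instr        & 0xFFFF # bits 15..0 (I-type immediate)

    if opcode == 0x00 and funct == 0x21:   # SPECIAL / ADDU
        return f"addu r{rd}, r{rs}, r{rt}"
    if opcode == 0x09:                     # ADDIU
        return f"addiu r{rt}, r{rs}, {imm:#x}"
    if opcode == 0x23:                     # LW
        return f"lw r{rt}, {imm:#x}(r{rs})"
    return f"unhandled opcode {opcode:#04x}"

# Example: 0x02329821 encodes "addu r19, r17, r18" (rs=17, rt=18, rd=19).
print(decode(0x02329821))
```

Compare that with x86, where you can’t even tell how long an instruction is before chewing through its prefixes – exactly why, as the quote notes, an x86 decoder alone would blow past the 4KB budget.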

Of course, the Linux kernel had to be slimmed down considerably, as a lot of the stuff currently in the kernel is of absolutely no use on such an old system. Boot time is still measured in days, but it helped a lot. Grinberg also turned the whole setup into what is effectively an art piece you can hang on the wall, where you can have it run and, well, do things – not much, of course, but he did include a small program that draws the Mandelbrot set on the VFD and serial port, which is a neat trick.

He plans on offering the whole thing as a kit, but a lot of it depends on getting enough of the old chips to offer a complete, ready-to-assemble kit in the first place.

Why Apple uses JPEG XL in the iPhone 16 and what it means for your photos

The iPhone 16 family has arrived and includes many new features, some of which Apple has played very close to its vest. One such improvement is the inclusion of JPEG XL file types, which promise improved image quality compared to standard JPEG files while delivering relatively smaller file sizes.

[…]

Overall, JPEG XL addresses many of JPEG’s shortcomings. The 30-year-old format is not very efficient, only offers eight-bit color depth, doesn’t support HDR, doesn’t do alpha transparency, doesn’t support animations, doesn’t support multiple layers, includes compression artifacts, and exhibits banding and visual noise. JPEG XL tackles these issues, and unlike WebP and AVIF formats, which each have some noteworthy benefits too, JPEG XL has been built from the ground up with still images in mind.

↫ Jeremy Gray at PetaPixel

Excellent news, and it will hopefully mean others will follow – something that tends to happen when Apple finally supports the new thing.

Nintendo and The Pokémon Company file patent lawsuit against maker of hit game Palworld

Nintendo, together with The Pokémon Company, filed a patent infringement lawsuit in the Tokyo District Court against Pocketpair, Inc. on September 18, 2024.

This lawsuit seeks an injunction against infringement and compensation for damages on the grounds that Palworld, a game developed and released by the Defendant, infringes multiple patent rights.

↫ Nintendo press release

Since the release of Palworld, which bears a striking resemblance to the Pokémon franchise, everybody’s been kind of expecting a reaction from both Nintendo and The Pokémon Company, and here it is. What’s odd is that it’s not a trademark, trade dress, or copyright lawsuit, but a patent one, which is not what you’d expect when looking at how similar the Palworld creatures look to Pokémon, to the point where some people even suggest the 3D models were simply lifted wholesale from the latest Nintendo Switch Pokémon games.

There’s no mention of which patents Pocketpair supposedly infringes upon, and in a statement, the company claims it, too, has no idea which patents are supposedly in play. I have to admit I never even stopped to think game patents were a thing at all, but now that I’ve spent more than two seconds pondering the concept, of course they exist.

This lawsuit will be quite interesting to follow, because the games industry is one of the few technology sectors out there where copying each other’s ideas, concepts, mechanics, and styles is not only normal, it’s entirely expected and encouraged. New ideas spread through the games industry like wildfire, and if some new mechanic is a hit with players, it’ll be integrated into other games within a few months, and games coming out a year later are expected to have the hit new mechanics from last year.

It’s a great example of how beneficial it is to have ideas spread freely, and how awesome it is to see great games take existing mechanics and apply interesting twists, or use them in entirely different genres than the ones they originated in. Demon’s Souls and the Dark Souls series are a great example of games that not only established a whole new genre other games quickly capitalised on, but also introduced the gaming world to a whole slew of new and unique mechanics that are now being applied in all kinds of new and interesting ways.

Lawsuits like this one definitely pose a threat to this, so I hope that either this fails spectacularly in court, or that the patents in question are so weirdly specific as to be utterly without merit in going after any other game.