General Development Archive
Why does Space Station 14 crash with ANGLE on ARM64? 6 hours later… So. I’ve been continuing work on getting ARM64 builds out for Space Station 14. The thing I was working on yesterday was launcher builds, specifically a single download that supports both ARM64 and x64. I’d already gotten the game client itself running natively on ARM64, and it worked perfectly fine in my dev environment. I wrote all the new launcher code, am pretty sure I got it right. Zip it up, test it on ARM64, aaand… The game client crashes on Windows ARM64. Both in my VM and on Julian’s real Snapdragon X laptop. ↫ PJB at A stream of consciousness Debugging stories can be great fun to read, and this one is a prime example. Trust me, you’ll have no idea what the hell is going on here until you reach the very end, and it’s absolutely wild. Very few people are ever going to run into this exact same set of highly unlikely circumstances, but of course, with a platform as popular as Windows, someone was eventually bound to. Sidenote: the game in question looks quite interesting.
Jussi Pakkanen, creator of the Meson build system, has some words about modules in C++. If C++ modules can not show a 5× compilation time speedup (preferably 10×) on multiple existing open source code bases, modules should be killed and taken out of the standard. Without this speedup pouring any more resources into modules is just feeding the sunk cost fallacy. That seems like a harsh thing to say for such a massive undertaking that promises to make things so much better. It is not something that you can just belt out and then mic drop yourself out. So let’s examine the whole thing in unnecessarily deep detail. You might want to grab a cup of $beverage before continuing, this is going to take a while. ↫ Jussi Pakkanen I’m not a programmer, so I’m leaving this one for the smarter people among us to debate.
The future of robotics is being shaped by smart, agile, and efficient software systems. As automation finds new use cases across sectors, the demand for high-performance solutions continues to rise. AI in robotics plays a critical role in building intelligent machines that adapt, learn, and function with precision. Let’s explore the core features that set powerful robotics software apart from the rest.

One of the most vital aspects of AI robotics technology is its capacity to process data in real time. For machines that interact with the physical world, timely decision-making is non-negotiable. High-performance software can handle vast data streams from sensors, cameras, and external inputs without delays. Real-time analysis supports smooth motion, reliable responses, and increased operational safety.

Precision in Motion Control

Accurate motion is the backbone of robotics systems across industrial, medical, and defense sectors. Advanced robotics software ensures precise execution of tasks, whether the movement is repetitive or unpredictable. Positioning, torque control, and speed management must all work in sync to produce flawless motion. Additionally, the software should support adaptive algorithms that adjust for external forces or disruptions. This is especially important in collaborative robotics, where machines and humans share close workspaces. Precision also reduces wear and tear, ensuring longer equipment life. These features are essential for systems that require exact and repeatable motion across long hours of operation.

Modular Software Architecture

By organizing functions into independent modules, developers can isolate and refine specific capabilities without overhauling the entire system. This approach also supports smoother deployment of new features or updates. A key advantage of modular software is that it simplifies the process of replacing, upgrading, or adjusting parts of the system without breaking functionality elsewhere.
Effective Power Control for Long-Term Performance

Energy efficiency plays a critical role in robotics software performance, especially for mobile or field-deployed systems. The software must optimize power distribution across various hardware components without sacrificing functionality. This extends operational time and reduces battery strain in portable units. Moreover, high-performance systems often include tools for monitoring energy consumption in real time. These tools help predict battery life and alert operators before power dips impact functionality. Some software platforms can dynamically adjust motion routines or processing loads to preserve energy. By managing power wisely, the system remains productive longer, even under demanding workloads.

Cross-Industry Adaptability and Customization

High-performance robotics software from reputable companies must be versatile enough to meet the diverse needs of different industries. Be it in manufacturing, healthcare, agriculture, or logistics, the software should adapt to specific workflows and operational requirements. It achieves this by allowing easy customization, ensuring that robots can perform industry-specific tasks with precision. The ability to integrate with a variety of sensors, equipment, and third-party systems further enhances adaptability. This flexibility enables businesses across sectors to adopt robotic solutions without the need for major overhauls of their existing infrastructure.

Safeguard Information and Maintain System Integrity

With machines connected to networks and external systems, protecting data and access points is vital. High-performance platforms use encryption, access control, and secure protocols to prevent unauthorized interventions. Regular system audits, validation tools, and real-time alerts form part of a comprehensive defense strategy. These safeguards help maintain trust, especially when robots perform sensitive or mission-critical tasks.
Developing high-performance robotics software means delivering more than speed and automation. As AI robotics technology continues to advance, software will remain the decisive factor between average and exceptional performance. These capabilities, combined with a modular structure, define the core of modern robotics systems.
If you’re interested in developing for and programming on MS-DOS and other variants of the venerable operating system, SuperIlu has collected the various tools and applications they use and like for this very task. In case you’re wondering who SuperIlu is – they are the developer of things like DOStodon, a Mastodon client for DOS, DOjS, and much more. This is my short list of interesting resources for MS-DOS development. This is neither meant to be unbiased nor exhaustive, it is just a list of software/tools I know and/or use. The focus is on free and open source software. ↫ SuperIlu at GitHub None of the items on the list are abandonware, so there’s no risk of relying on things that are no longer being developed. With most of the items also being free and open source software, you can further be assured you’re safe from the odd rugpull. If you’re into DOS development, this is a treasure trove.
It’s a bit of a silly post, because syntax is the least interesting detail about the language, but, still, I can’t stop thinking how Zig gets this detail just right for the class of curly-braced languages, and, well, now you’ll have to think about that too. On the first glance, Zig looks almost exactly like Rust, because Zig borrows from Rust liberally. And I think that Rust has great syntax, considering all the semantics it needs to express (see “Rust’s Ugly Syntax”). But Zig improves on that, mostly by leveraging simpler language semantics, but also through some purely syntactical tasteful decisions. ↫ Alex Kladov Y’all know full well I know very little about programming, so there’s not much interesting stuff I can add here. The only slightly related frame of reference I have is how some languages – as in, the ones we speak – have a pleasing grammar or spelling, and how even when you can’t actually speak a language, some of them intrinsically look attractive and pleasing when you see them in written form. I mean, you can’t look at Scottish Gaelic and not notice it just looks pleasing:

Dh’ éirich mi moch air mhaduinn an-dé
‘S gun ghearr mi’n ear-thalmhainn do bhrìgh mo sgéil
An dùil gu ‘m faicinn fhéin rùn mo chléibh
Och òin gu ‘m faca ‘s a cùl rium féin.

↫ Mo Shùil Ad Dhèidh by Donald MacNicol I have no idea if programmers can look at programming languages the same way, but I’ve often been told there’s more overlap between programming languages and regular language than many people think. As such, it wouldn’t surprise me if some programming languages look really pleasing to programmers, even if they can’t use them because they haven’t really learned them yet.
Claude Code has considerably changed my relationship to writing and maintaining code at scale. I still write code at the same level of quality, but I feel like I have a new freedom of expression which is hard to fully articulate. Claude Code has decoupled myself from writing every line of code, I still consider myself fully responsible for everything I ship to Puzzmo, but the ability to instantly create a whole scene instead of going line by line, word by word is incredibly powerful. ↫ Orta Therox Oh sweet Summer child. As a former translator, I can tell you that’s how it starts. As time goes on, your clients or your manager will demand more and more code from you. You will stop checking every line to meet the deadlines. Maybe you just stop checking the boilerplate at first, but it won’t stay that way. As pressure to be more “productive” mounts, you’ll start checking fewer and fewer lines. Before you know it, your client or manager will just give you entire autogenerated swaths of code, and your job will be to just go over it, making sure it kind of works. Before long, you realise there are fewer and fewer of you. Younger and less-skilled “developers” can quickly go over autogenerated code just as well as you do – but they’re way cheaper. You see the quality of the code you sign off on deteriorate rapidly, but you have no time, and not enough pay, to rewrite the autogenerated code. It works, kind of, and that will have to be enough. The autogenerated codebases you’re supposed to be checking and fixing are so large now, you’re no longer even really checking anything anymore. Quick, cursory glances, that’s all you have time for and can afford. Documentation and commenting code went out the window a long time ago, and every line of code scrolling across your screen is more tech debt you don’t care about, because it’s not your code anyway. And then it hits you. There’s no skill here. There’s no art here. You’re no longer a programmer. There’s no career prospects. 
Scrolling past shitty autogenerated code day in, day out, without the time or pay to wrangle it into something to be proud of, is the end of the line for you. Speak up about it, and you’ll be replaced by someone cheaper. The first time I was given a massive pile of autotranslated text to revise, without enough time and pay to ensure I was delivering a quality product, I quit and left the translation industry instantly. Like programming, translating is part skill, part art, and I didn’t get two university degrees in language and translation just to deliver barely passable trash. I took pride in my work, and I wasn’t going to let anyone put my name under a garbage product. Programmers, you’re next. Will you have the stones to stand by your art?
Late last year, we talked about Bismuth, a virtual machine being developed by Eniko Fox, one of the developers of the awesome game Kitsune Tails. Part of an operating systems development side project, Bismuth is a VM (think Java Virtual Machine, not VMware) on top of Fox’s custom kernel, designed specifically to run programs in a sandbox. The first article detailed the origins of Bismuth, and the second article delved into memory safety, sandboxing, and more. We’re a few months down the line now, and Fox recently published another article in the series, this time explaining how a hello world program works in Bismuth. This is the third in a series of posts about a virtual machine I’m developing as a hobby project called Bismuth. I’ve talked a lot about Bismuth, mostly on social media, but I don’t think I’ve done a good job at communicating how you go from some code to a program in this VM. In this post I aim to rectify that by walking you through the entire life cycle of a hello world Bismuth program, from the highest level to the lowest. ↫ Eniko Fox There’s a ton of detail here, and at the end you’ll have a pretty solid grip on how Bismuth works.
Did you know that in 2023, a single malicious email could trick an AI-powered personal assistant into spamming thousands of users from a victim’s inbox? This isn’t science fiction, it’s a real-world exploit from the OWASP Top 10 for LLM Applications, proving Large Language Models can be played like a fiddle if you skimp on security. Since 2022, LLMs powering chatbots, code generators, and virtual sidekicks have been reshaping our daily grind. But here’s the kicker. Their meteoric rise made us almost completely forget about security. This guide hands you the keys to outsmart the biggest LLM security traps, straight from the OWASP Top 10, a battle-tested playbook forged by 500 AI experts.

Understanding the OWASP Top 10 for LLM Applications

The OWASP Top 10 for LLM Applications, updated in 2025, guides you through AI-specific security risks. Over 500 experts from AI firms, cloud providers, and universities identified 43 vulnerabilities, then narrowed them to the 10 most severe through voting and public review. This guide helps developers coding apps, data scientists training models, and security teams protecting systems. It addresses LLM-unique threats like prompt injection and unbounded consumption, overlooked by traditional security. With 70+ percent of organizations using LLMs in 2025, this list is your defense against attackers.

Prompt Injection: Stop Hackers from Hijacking Your AI

Prompt injection is the LLM equivalent of SQL injection: a manipulated input that alters the system’s behavior. Attackers can craft inputs that hijack the context of an AI conversation and trigger actions the developer never intended. Here’s an example: A resume contains hidden instructions that make the AI declare the candidate as “excellent.” When an HR bot summarizes it, the manipulation passes through unnoticed. Bad news is you can’t completely prevent it. It exploits the very nature of LLMs and how they work.
You can however enforce strict privilege boundaries, introduce user confirmations for sensitive actions, and isolate external content from internal prompts. Basically, treat your LLM like an untrusted user.

Insecure Output Handling: Don’t Let AI Outputs Run Wild

LLMs don’t just respond in text, they can also write SQL, HTML, and JavaScript. If that output flows unfiltered into a backend system or browser, you’ve got a serious vulnerability. For example, a user prompt leads an LLM to generate JavaScript that, when rendered, exfiltrates session tokens from the browser. To mitigate it, sanitize all output. Use encoding techniques. Never allow LLM output to directly invoke privileged system calls or script execution. Apply zero-trust principles.

Training Data Poisoning: Clean Your Data

Because LLMs learn from vast datasets, poisoning can occur when attackers insert malicious, biased, or false data into these datasets during pre-training or fine-tuning. For example, a competitor floods public forums with biased content targeting a niche domain. That poisoned data gets scraped and used in model training, subtly altering the model’s behavior in the competitor’s favor. To combat this, vet your data sources, enforce sandboxing during data collection, and apply anomaly detection. Maintain a “Machine Learning Bill of Materials” (ML-BOM) to audit data provenance.

Model Denial of Service: Prevent Overloads

Hackers love overwhelming your LLM with complex inputs, spiking cloud bills or crashing systems. It’s a denial-of-service attack that hits your budget or uptime hard. Imagine recursive prompts grinding a chatbot to a halt, racking up $10,000 in cloud fees overnight. In 2025, 60% of LLM attacks exploit unchecked resource use. The solution is to set strict input size limits, throttle API requests, and monitor for anomalous usage. Cap resource usage per session or user to avoid runaway costs or exhaustion.
Supply Chain Vulnerabilities: Secure Your Sources

In traditional software, the supply chain refers to libraries and dependencies. In LLMs, it includes models, datasets, plugins, and infrastructure components, many of which come from third parties. An example would be if a plugin from an unverified source gets installed to extend chatbot functionality. Hidden in its dependencies is code that exfiltrates user queries to a remote server. The solution? Apply rigorous review to models, datasets, and plugins. Use signed components, vulnerability scanning, and SBOM tools. Audit vendor privacy policies, especially if they retrain on your users’ data.

Sensitive Information Disclosure: Protect Secrets

Your LLM might blab PII or trade secrets without a second thought, which means one loose output can break privacy laws or spill your company’s playbook. In 2023, a prompt tricked an LLM into coughing up health records from training data, landing a firm in hot water. Don’t let your AI be a gossip. For example, a user queries a model and receives PII (personally identifiable information) from another user’s past interaction, accidentally retained during training. The solution would be to sanitize training data, restrict what gets logged, and don’t feed sensitive information into models. Implement clear user policies and opt-out clauses. Fine-tune models with minimal privilege scope.

Insecure Plugin Design: Tighten Plugin Security

Sloppy plugins are like open windows for hackers, letting malicious inputs waltz in. They can swipe data or jack up privileges in a snap. A 2024 case saw a plugin eat unverified URLs, letting attackers inject code like it’s happy hour. A good example would be a weather plugin accepting a free-form URL. A malicious user redirects it to their own domain, planting instructions that manipulate the system or exfiltrate data. The solution is to test plugins with SAST/DAST tools to root out flaws. Slap on OAuth2 authentication to keep things legit.
Restrict plugin permissions to the bare minimum. Don’t let your AI’s sidekick turn traitor. Lock down those plugins before they burn you.

Excessive Agency: Control AI Actions

Your LLM or plugins with too much freedom can act like rogue agents, making moves without oversight. Unchecked, they’ll delete data or approve sketchy deals. Here’s an example. A customer service bot has access to delete accounts. A hallucinated or malicious prompt causes it to wipe legitimate user records. Mitigation? Limit plugin capabilities. Use minimal permission scopes. Implement human-in-the-loop approval for destructive or financial operations. Build narrow, purpose-built tools—not generic ones that “do everything.” Apply least-privilege access to keep your AI on a short leash.
Accessibility is something that doesn’t get nearly enough attention, especially considering that not only will we all need accessibility features eventually as we grow older, but also that a lot of accessibility features are just helpful even if you don’t technically need them. Given these facts, it’s a shame that accessibility is usually an afterthought, doubly so on open source desktops, a problem we recently talked about. But what if you don’t just need to use a few applications as, say, a blind person, but also actually program as a blind person? Acidic Light, accessibility engineer at KDE e.V., has published a blog post about how screen readers actually work, and what it’s like to program while blind, and the conclusions are not exactly great. I truly feel that, based on my experience with KDE and my experience actually delving into the weeds with AccessKit in a custom UI system, that accessibility programming just isn’t accessible. Unless you happen to already understand the way each platform works, trying to find resources on how to actually let a screen reader know your UI exists is just painful. It’s going to involve reading code other people have already written. It’s going to involve hours, if not days, if not weeks of research and painful debugging. You likely won’t be able to ask many people for help, because they’ll know as much as you do. ↫ Acidic Light If the people who know most what is needed to make a program accessible have so many problems actually making programs accessible, because the tooling, documentation, and institutional knowledge just isn’t there, what hope do other programmers have to make their code accessible? If a blind programmer can’t scratch their own itch, so to speak, we’re never going to reach a point where accessibility becomes a given.
I’m very happy awareness of accessibility is growing, but I feel like this isn’t the first time we’ve seen an increase in accessibility awareness only for it to eventually fizzle out without meaningful improvements for those that need it the most. I really hope it sticks this time.
I generally don’t pay attention to the releases of programming languages unless they’re notable for some reason or another, and I think this one qualifies. Rust is celebrating its ten year anniversary with a brand new release, Rust 1.87.0. This release adds anonymous pipes to the standard library, inline assembly can now jump to labeled blocks in Rust code, and support for the i586 Windows target has been removed. Considering Windows 7 was the last Windows version to support i586, I’d say this is fair. You can update to the new version using the rustup command, or wait until your operating system adds it to its repository if you’re using a modern operating system.
The team that makes Cockpit, the popular server dashboard software, decided to see if they could improve their PR review processes by adding “AI” into the mix. They decided to test both sourcery.ai and GitHub Copilot PR reviews, and their conclusions are damning. About half of the AI reviews were noise, a quarter bikeshedding. The rest consisted of about 50% useful little hints and 50% outright wrong comments. Last week we reviewed all our experiences in the team and eventually decided to switch off sourcery.ai again. Instead, we will explicitly ask for Copilot reviews for PRs where the human deems it potentially useful. This outcome reflects my personal experience with using GitHub Copilot in vim for about 1.5 years – it’s a poisoned gift. Most often it just figured out the correct sequence of ), ], and } to close, or automatically generating debug print statements – for that “typing helper” work it was actually quite nice. But for anything more nontrivial, I found it took me more time to validate the code and fix the numerous big and subtle errors than it saved me. ↫ Martin Pitt “AI” companies and other proponents of “AI” keep telling us that these tools will save us time and make things easier, but every time someone actually sits down and does the work of testing “AI” tools out in the field, the end results are almost always the same: they just don’t deliver the time savings and other advantages we’re being promised, and more often than not, they just create more work for people instead of less. Add in the financial costs of using and running these tools, as well as the energy they consume, and the conclusion is clear.
When the lack of effectiveness of “AI” tools out in the real world is brought up, proponents inevitably resort to “yes it sucks now, but just you wait on the next version!” Then that next version comes, people test it out in the field again, and it’s still useless, and those same proponents again resort to “yes it sucks now, but just you wait on the next version!”, like a broken record. We’re several years into the hype, and that mythical “next version” still isn’t here. We’re several years into the “AI” hype, and I still have seen no evidence it’s not a dead end and a massive con.
This is a follow-up to the Samsung NX mini (M7MU) firmware reverse-engineering series. This part is about the proprietary LZSS compression used for the code sections in the firmware of Samsung NX mini, NX3000/NX3300 and Galaxy K Zoom. The post is documenting the step-by-step discovery process, in order to show how an unknown compression algorithm can be analyzed. The discovery process was supported by Igor Skochinsky and Tedd Sterr, and by writing the ideas out on encode.su. ↫ Georg Lukas It’s not quite the weekend yet, but here’s some light reading ahead of time.
To argue that Objective-C resembles a metaphysically divine language, or even a good language, is like saying Shakespeare is best appreciated in pig latin. Objective-C is, at best, polarizing. Ridiculed for its unrelenting verbosity and peculiar square brackets, it is used only for building Mac and iPhone apps and would have faded into obscurity in the early 1990s had it not been for an unlikely quirk of history. Nevertheless, in my time working as a software engineer in San Francisco in the early 2010s, I repeatedly found myself at dive bars in SoMa or in the comments of HackerNews defending its most cumbersome design choices. ↫ Gabriel Nicholas at Wired I’ll just step back and let y’all handle this one.
One thing I love about Python is how it comes with its very own built-in zen. In moments of tribulations, when I am wrestling with crooked code and tangled thoughts, I often find solace in its timeless wisdom. ↫ Susam Pal I can’t program and know nothing about Python, but this still made me laugh.
Software gets more complicated. All of this complexity is there for a reason. But what happened to specializing? When a house is being built, tons of people are involved: architects, civil engineers, plumbers, electricians, bricklayers, interior designers, roofers, surveyors, pavers, you name it. You don’t expect a single person, or even a whole single company, to be able to do all of those. ↫ Vitor M. de Sousa Pereira I’ve always found that software development gets a ton of special treatment and leeway in quality expectations, and this has allowed the kind of stuff the linked article is writing about to become the norm. Corporations can demand so much from developers and programmers to the point where expecting quality is wholly unreasonable, because there’s basically no consequences for delivering a shit product. Bugs, crashes, security issues, lack of documentation, horrid localisation – it’s all par for the course in software, yet we would not tolerate any of that in almost any other type of product. While I’m sure some of this can be attributed to developers themselves, most of it seems to stem from incompetent managers imposing impossible deadlines downwards and setting unrealistic expectations upwards – you know, kick down, lick up – creating a perfect storm of incompetence. We all know it, we all experience it every day, and we all hate it – but we’ve just accepted it. As consumers, as developers, as regulatory bodies. It’s too late to fix this now. Software development will forever exist as a sort of no man’s land of quality expectations, free from regulations, warranties, and consumer protections. Imposing them now, after the fact, is never going to be accepted by the industry and won’t ever make it through any lawmaking process of any country. We all suffer from it, both as users of software and as makers of it.
To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage. By porting the current codebase, we expect to be able to preview a native implementation of tsc capable of command-line typechecking by mid-2025, with a feature-complete solution for project builds and a language service by the end of the year. ↫ Anders Hejlsberg It seems Microsoft is porting TypeScript to Go, and will eventually offer both “TypeScript (JS)” and “TypeScript (native)” alongside one another during a transition period. TypeScript 6.x will be the JavaScript-based one and will continue to be developed until TypeScript 7.0, the Go one, is mature enough. During the 6.x release cycle, however, there will be breaking changes and deprecations in preparation for 7.0. Those are some serious performance improvements, but I’m sure quite a few projects are going to run into issues during the transition period. I hope for them that the 6.x branch remains maintained for long enough to reasonably get everyone on board the new Go version.
Bjarne Stroustrup, creator of C++, has issued a call for the C++ community to defend the programming language, which has been shunned by cybersecurity agencies and technical experts in recent years for its memory safety shortcomings. C and C++ are built around manual memory management, which can result in memory safety errors, such as out of bounds reads and writes, though both languages can be written and combined with tools and libraries to help minimize that risk. These sorts of bugs, when they do crop up, represent the majority of vulnerabilities in large codebases. ↫ Thomas Claburn at The Register I mean, it makes sense to me that those responsible for new code to use programming languages that more or less remove the most common class of vulnerabilities. With memory-safe languages like Rust having been around for quite a while now, it’s almost wilful negligence to write new code where security is a priority in anything but such memory-safe languages. Of course, this doesn’t mean you delete any and all existing code – it just means you really need to start writing any new code in safer languages. After all, research shows that even when you only write new code in memory-safe languages, the reduction in vulnerabilities is massive. This reminds me a lot of those old videos of people responding to then-new laws mandating the use of seat belts in cars. A lot of people didn’t want to put them on, saying things to the tune of “I don’t need one because I’m a good driver”. Even if you are a good driver – which statistically you aren’t – everyone else on the road isn’t. When we see those old videos now, they feel quaint, archaic, and dumb – of course you wear a seat belt, you’d be an irresponsible idiot not to! – but only a few decades ago, those arguments made perfect sense to people. 
It won’t be long before the same will apply to people doggedly refusing to use memory-safe languages or libraries/extensions that introduce such safety to existing languages, and Bjarne Stroustrup seems to understand that. Are you really smarter than Bjarne Stroustrup?
I’m sure we can all have a calm, rational discussion about this, so here it goes: zlib-rs, the Rust re-implementation of the zlib library, is now faster than its C counterparts in both decompression and compression. We’ve released version 0.4.2 of zlib-rs, featuring a number of substantial performance improvements. We are now (to our knowledge) the fastest api-compatible zlib implementation for decompression, and beat the competition in the most important compression cases too. ↫ Folkert de Vries As someone who isn’t a programmer, looking at all the controversies and fallout around anything related to Rust is both fascinating and worrying. Fascinating because Rust clearly brings a whole slew of improvements over established and older languages, and worrying because the backlash from the establishment has been wildly irrational and bordering on the childish, complete with temper tantrums and taking their ball and going home. It shouldn’t surprise me that people get attached to programming languages the same way people get attached to operating systems, but surprisingly, it still does. If Rust not only provides certain valuable benefits like memory safety, but can also be used to create implementations that are faster than those created in, say, C, it’s really only going to be a matter of time before it simply becomes an untenable position to block Rust from, say, the Linux kernel. Progress has a tendency to find a way, especially the more substantial the benefits get, and as studies show, even only writing new code in memory-safe languages provides substantial benefits. In other words, more and more projects will simply switch over to Rust for new code where it makes sense, whether Rust haters want it or not. There will be enough non-Rust code to write and maintain, though, so I don’t think people will be out of a job any time soon because they refuse to learn Rust, but to me as an outsider, the Rust hate seems to grow more and more irrational by the day.
Cassette is a GUI application framework written in C11, with a UI inspired by the cassette-futurism aesthetic. Built for modern POSIX systems, it’s made out of three libraries: CGUI, CCFG and COBJ. Cassette is free and open-source software, licensed under the LGPL-3.0. ↫ Cassette GitHub page Upon first reading this description, you might wonder what a “cassette-futurism aesthetic” really is, but once you take a look at the screenshots of what Cassette can do, you immediately understand what it means. It’s still in the alpha stage and there’s a lot still to do, but what it has now is already something quite unique I don’t think the major toolkits really cater to or can even pull off. There’s an example application that’s focused on showing some system stats, and that’s exactly the kind of stuff this seems a great fit for: good-looking, small widget-like applications showing glanceable information.
Now, if you have been following the development of EndBASIC, this is not surprising. The defining characteristic of the EndBASIC console is that it’s hybrid as the video shows. What’s newsworthy, however, is that the EndBASIC console can now run directly on a framebuffer exposed by the kernel. No X11 nor Wayland in the picture (pun intended). But how? The answer lies in NetBSD’s flexible wscons framework, and this article dives into what it takes to render graphics on a standard Unix system. I’ve found this exercise exciting because, in the old days, graphics were trivial (mode 13h, anyone?) and, for many years now, computers use framebuffer-backed textual consoles. The kernel is obviously rendering “graphics” by drawing individual letters; so why can’t you, a user of the system, do so too? ↫ Julio Merino This opens up a lot of interesting use cases and fun hacks for developers to implement in their CLI applications. All the code in the article is – as usual – way over my head, but will be trivial for quite a few of you. The mentioned EndBASIC project, created by the author, Julio Merino, is fascinating too: EndBASIC is an interpreter for a BASIC-like language and is inspired by Amstrad’s Locomotive BASIC 1.1 and Microsoft’s QuickBASIC 4.5. Like the former, EndBASIC intends to provide an interactive environment that seamlessly merges coding with immediate visual feedback. Like the latter, EndBASIC offers higher-level programming constructs and strong typing. EndBASIC’s primary goal is to offer a simplified and restricted DOS-like environment to learn the foundations of programming and computing, and focuses on features that quickly reward the learner. These include a built-in text editor, commands to manipulate the screen, commands to interact with shared files, and even commands to interact with the hardware of a Raspberry Pi. 
↫ EndBASIC website Being able to run this on a machine without having to load either X or Wayland is a huge boon, and makes it accessible fast on quite a lot of hardware on which a full X or Wayland setup would be cumbersome or slow.