General Development Archive

A twenty-five-year-old curl bug

When we announced the security flaw CVE-2024-11053 on December 11, 2024 together with the release of curl 8.11.1 we fixed a security bug that was introduced in a curl release 9039 days ago. That is close to twenty-five years. The previous record holder was CVE-2022-35252 at 8729 days. ↫ Daniel Stenberg It’s really quite fascinating to see details like this about such a widespread and widely used tool as curl. The bug in question was a logic error, which led Stenberg to detail how even a modern, memory-safe language like Rust would not have prevented this particular issue. Still, about 40% of all security issues in curl stem from not using a memory-safe language, or about 50% of all high/critical severity ones. I understand that jumping on every bandwagon and rewriting everything in a memory-safe language is a lot harder than it sounds, but I also feel like it’s getting harder and harder to keep justifying using old languages like C. I really don’t know why people get so incredibly upset at the cold, hard data about this. Anyway, the issue that sparked this post is fixed in curl 8.11.1.
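For illustration, here is a contrived sketch (in C++ rather than curl's C, and emphatically not curl's actual code) of the kind of logic error a memory-safe language can't catch: credentials for one host being re-sent to another after a redirect, roughly the shape of CVE-2024-11053. Everything compiles cleanly and no memory is ever touched out of bounds; the program is simply wrong.

```cpp
// Contrived illustration, NOT curl's actual code: a credential-selection
// logic error of the kind memory safety cannot catch. Nothing here reads
// or writes out of bounds; the program just sends the wrong secret.
#include <iostream>
#include <optional>
#include <string>

struct Credentials {
    std::string user;
    std::string password;
};

// Hypothetical .netrc lookup: returns credentials for a host, if any.
std::optional<Credentials> netrc_lookup(const std::string& host) {
    if (host == "example.com") return Credentials{"alice", "hunter2"};
    return std::nullopt;
}

// Decide what to send after a redirect. The logic bug: when the new host
// has no entry, we fall back to the ORIGINAL host's credentials, leaking
// a password to a third party. A Rust compiler would accept the same
// logic just as happily.
Credentials credentials_after_redirect(const Credentials& original,
                                       const std::string& redirect_host) {
    if (auto c = netrc_lookup(redirect_host)) return *c;
    return original;  // bug: should send no credentials at all
}

int main() {
    Credentials orig = *netrc_lookup("example.com");
    Credentials sent = credentials_after_redirect(orig, "evil.invalid");
    std::cout << "sending to evil.invalid: " << sent.user << " / "
              << sent.password << "\n";  // leaked!
}
```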

If not React, then what?

Rejecting an engrained practice of bullshitting does not come easily. Frameworkism preaches that the way to improve user experiences is to adopt more (or different) tooling from the framework’s ecosystem. This provides adherents with something to do that looks plausibly like engineering, except it isn’t. It can even become a totalising commitment; solutions to user problems outside the framework’s expanded cinematic universe are unavailable to the frameworkist. Non-idiomatic patterns that unlock significant wins for users are bugs to be squashed. And without data or evidence to counterbalance bullshit artists’ assertions, who’s to say they’re wrong? Orthodoxy unmoored from measurements of user outcomes predictably spins into abstruse absurdities. Heresy, eventually, is perceived to carry heavy sanctions. It’s all nonsense. ↫ Alex Russell I’m not a developer, but every application built with frameworks like React that I’ve ever used has tended to be an absolute trainwreck when it comes to performance, usability, consistency, and platform integration. When someone claims to have an application available for a platform I use, but it’s using React or Electron or whatever, they’re lying in my eyes – what they really have is a website running in a window frame, which may or may not even be a native window frame. Developing using these tools indicates to me a lack of care, a lack of respect for the users of your product. I am militantly native. I’d rather use a less functional application than a Chrome web application cosplaying as a real application, and I will most likely not even consider using your service if all you have is a website-in-a-box. If you don’t respect me, I see no need to respect you. If you want an application on a specific platform, use that platform’s native tools and APIs to build it. Anything else tells me all I need to know about how much you truly care about the product you’re building.

Introduction to Bismuth VM

This is the first post in what will hopefully become a series of posts about a virtual machine I’m developing as a hobby project called Bismuth. This post will touch on some of the design fundamentals and goals, with future posts going into more detail on each. But to explain how I got here I first have to tell you about Bismuth, the kernel. ↫ Eniko Fox It’s not every day that a developer of an awesome video game details a project they’re working on that also happens to be excellent material for OSNews. Eniko Fox, one of the developers of the recently released Kitsune Tails, has also been working on an operating system and virtual machine in her spare time, and has recently been detailing the experience in, well, more detail. This one here is the first article in the series, and a few days ago she published the second part, about memory safety in the VM. The first article goes into the origins of the project, as well as the design goals for the virtual machine. It started out as an operating systems development side project, but once it was time to develop things like the MMU and virtual memory mapping, Fox started wondering if programs couldn’t simply run inside a virtual machine atop the kernel instead. This is how the actual Bismuth virtual machine was conceived. Fox wants the virtual machine to care about memory safety, and that’s what the second article goes into. Since the VM is written in C, which is anything but memory-safe, she’s opting for implementing a form of sandboxing – which also happens to be the point in the development story where my limited knowledge starts to fail me and things get a little too complicated for me. I can’t even internalise how links work in Markdown, after all (square or regular brackets first? Also, Markdown sucks as a writing tool, but that’s a story for another time). For those of you more capable than me – so basically most of you – Fox’s series is a great one to follow along with as she further develops the Bismuth VM.
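To make the sandboxing idea a little more concrete, here's a minimal sketch of the general technique (my own illustration, not Bismuth's actual code, which is C): guest programs only ever see indices into a host-owned buffer, and every load and store is bounds-checked, so a misbehaving guest traps instead of stomping on host memory.

```cpp
// Minimal sketch of VM memory sandboxing (illustrative only): guest
// "addresses" are just indices into a host-owned buffer, validated on
// every access.
#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint32_t GUEST_MEM_SIZE = 64 * 1024;  // the guest's whole world

struct VM {
    std::vector<uint8_t> mem = std::vector<uint8_t>(GUEST_MEM_SIZE);
    bool trapped = false;  // set when the guest faults

    // Every guest load/store is bounds-checked: an out-of-range address
    // traps the guest instead of reading or corrupting host memory.
    bool load8(uint32_t addr, uint8_t& out) {
        if (addr >= GUEST_MEM_SIZE) { trapped = true; return false; }
        out = mem[addr];
        return true;
    }
    bool store8(uint32_t addr, uint8_t value) {
        if (addr >= GUEST_MEM_SIZE) { trapped = true; return false; }
        mem[addr] = value;
        return true;
    }
};

int main() {
    VM vm;
    vm.store8(0x10, 42);
    uint8_t v = 0;
    if (vm.load8(0x10, v)) std::printf("guest[0x10] = %u\n", (unsigned)v);
    vm.store8(GUEST_MEM_SIZE + 5, 1);  // wild write by the guest: traps
    std::printf("trapped = %s\n", vm.trapped ? "true" : "false");
}
```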

FLTK 1.4.0 brings Wayland support

FLTK 1.4.0 has been released. This new version of the Fast Light Toolkit contains some major improvements, such as Wayland support on both Linux and FreeBSD. X11 and Wayland are both supported by default, and applications using FLTK will launch using Wayland if available, and otherwise fall back to starting with X11. This new release also brings HiDPI support on Linux and Windows, and improves said support on macOS. Those are the headline features, but there are more changes here, of course, as well as the usual round of bugfixes. Right after the release of 1.4.0, a quick bugfix release, version 1.4.0-1, was published to address an issue in 1.4.0 – a build error in a single test program on Windows, when using Visual Studio. Not exactly a major bug, but great to see the team fix it so rapidly.
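The nice part is that backend selection requires nothing from application code. The canonical FLTK hello world below (adapted from FLTK's own documentation) contains no Wayland- or X11-specific calls; built against FLTK 1.4, it should come up on Wayland when a compositor is available and on X11 otherwise.

```cpp
// Canonical FLTK hello world: no backend-specific code anywhere.
// Under FLTK 1.4 the same binary picks Wayland or X11 at runtime.
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Box.H>

int main(int argc, char** argv) {
    Fl_Window* window = new Fl_Window(340, 180, "Hello, FLTK 1.4");
    Fl_Box* box = new Fl_Box(20, 40, 300, 100, "Hello, World!");
    box->labelfont(FL_BOLD + FL_ITALIC);
    box->labelsize(36);
    window->end();
    window->show(argc, argv);
    return Fl::run();
}
```

If I'm reading the release notes right, FLTK 1.4 also documents an FLTK_BACKEND environment variable for forcing one backend or the other, which should make testing both paths easy.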

Startup’s “AI” tool spams GitHub repositories with bogus commits, without consent

Update: that was quick! GitHub banned the “AI” company’s account. Only GitHub gets to spam AI on GitHub, thank you very much. Most of the time, products with “AI” features just elicit sighs, especially when the product category in question really doesn’t need to have anything to do with “AI” in any way, shape, or form. More often than not, though, such features are at least optional and easily ignorable, and we can always simply choose not to buy or use said products in the first place. I mean, over the last few days I’ve migrated my Pixel 8 Pro from stock Google Android to GrapheneOS as the final part of my platform transition away from big tech, and Google’s insistence on shoving “AI” into everything certainly helped in spurring this along. But what are you supposed to do if an “AI” product forces itself upon you? What if you can’t run away from it? What if, one day, you open your GitHub repository and see a bunch of useless PRs from an “AI” bot who claims to help you fix issues, without you asking it to do so? Well, that’s what’s happening to a bunch of GitHub users who were unpleasantly surprised to see garbage, useless merge requests from a random startup testing out some “AI” tool that attempts to automatically ‘fix’ open issues on GitHub. The proposed ‘fixes’ are accompanied by a disclaimer: Disclaimer: The commit was created by Latta AI and you should never copy paste this code before you check the correctness of generated code. Solution might not be complete, you should use this code as an inspiration only. This issue was tried to solve for free by Latta AI – https://latta.ai/ourmission If you no longer want Latta AI to attempt solving issues on your repository, you can block this account. ↫ Example of a public open issue with the “AI” spam Let me remind you: this tool, called “Latta AI”, is doing all of this unprompted, without consent, and the commits generally seem bogus and useless, too, in that they don’t actually fix any of the issues. To make matters worse, your GitHub repository will then automatically appear as part of its marketing – again without any consent or permission from the owners of the GitHub projects in question. Clicking through to the GitHub repositories listed on the front page will reveal a lot about how developers are responding: they’re not amused. Every link I clicked on had Latta AI’s commit and comment marked as spam, abuse, or just outright deleted. We’re talking public open issues here, so it’s not like developers aren’t looking for input and possible fixes from third parties – they just want that input and those possible fixes to come from real humans, not some jank code generator that’s making us destroy the planet even faster. This is what the future of “AI” really looks like. It’s going to make spam even easier to make, even more pervasive, and even cheaper, and it’s going to infest everything. Nothing will be safe from these monkeys on typewriters, and considering what the spread of misinformation by human-powered troll farms can do, I don’t think we’re remotely ready for what “AI” is going to mean for our society. I can assure you lying about brown people eating cats and dogs will be remembered as quaint before this nonsense is over.

Moving a game project from C to the Odin language

Some months ago, I got really fed up with C. Like, I don’t hate C. Hating programming languages is silly. But it was way too much effort to do simple things like lists/hashmaps and other simple data structures and such. I decided to try this language called Odin, which is one of these “Better C” languages. And I ended up liking it so much that I moved my game Artificial Rage from C to Odin. Since Odin has support for Raylib too (like everything really), it was very easy to move things around. Here’s how it all went.. Well, what I remember the very least. ↫ Akseli Lahtinen You programmers might’ve thought you escaped the wrath of Monday on OSNews, but after putting the IT administrators to work in my previous post, it’s now time for you to get to work. If you have a C codebase and want to move it to something else, in this case Odin, Lahtinen’s article will send you on your way. As someone who barely knows how to write HTML, it’s difficult for me to say anything meaningful about the technical details, but I feel like there’s a lot of useful, first-hand info here.

What’s new in POSIX 2024 – XCU

As of the previous release of POSIX, the Austin Group gained more control over the specification, having it be more working group oriented, and they got to work making the POSIX specification more modern. POSIX 2024 is the first release that bears the fruits of this labor, and as such, the changes made to it are particularly interesting, as they will define the direction of the specification going forwards. This is what this article is about! Well, mostly. POSIX is composed of a couple of sections. Notably XBD (Base Definitions, which talk about things like what a file is, how regular expressions work, etc), XSH (System Interfaces, the C API that defines POSIX’s internals), and XCU (which defines the shell command language, and the standard utilities available for the system). There’s also XRAT, which explains the rationale of the authors, but it’s less relevant for our purposes today. XBD and XRAT are both interesting as context for XSH and XCU, but those are the real meat of the specification. This article will focus on the XCU section, in particular the utilities part of that section. If you’re more interested in the XSH section, there’s an excellent summary page by sortix’s Jonas Termansen that you can read here. ↫ im tosti The weekend isn’t over yet, so here’s some more light reading.

Go Plan9 memo, speeding up calculations 450%

I want to take advantage of Go’s concurrency and parallelism for some of my upcoming projects, allowing for some serious number crunching capabilities. But what if I wanted EVEN MORE POWER?!? Enter SIMD, Single Instruction Multiple Data. SIMD instructions allow for parallel number crunching capabilities right down at the hardware level. Many programming languages either have compiler optimizations that use SIMD or libraries that offer SIMD support. However, (as far as I can tell) Go’s compiler does not utilize SIMD, and I could not find a general purpose SIMD package that I liked. I just want a package that offers a thin abstraction layer over arithmetic and bitwise SIMD operations. So like any good programmer I decided to slightly reinvent the wheel and write my very own SIMD package. How hard could it be? After doing some preliminary research I discovered that Go uses its own internal assembly language called Plan9. I consider it more of an assembly format than its own language. Plan9 uses the target platform’s instructions and registers with slight modifications to their names and usage. This means that x86 Plan9 is different than, say, ARM Plan9. Overall, pretty weird stuff. I am not sure why the Go team went down this route. Maybe it simplifies the compiler by having this bespoke assembly format? ↫ Jacob Ray Pehringer Another case of light reading for the weekend. Even as a non-programmer I learned some interesting things from this one, and it created some appreciation for Go, even if I don’t fully grasp things like this. On top of that, at least a few of you will think this has to do with Plan9 the operating system, which I find a mildly entertaining ruse to subject you to.
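For those unfamiliar with SIMD itself, the idea is easiest to show with C++ intrinsics, where a single instruction operates on several values at once. This sketch uses x86-64 SSE2 (not Go or Plan9 assembly, to be clear, but it's the kind of instruction the assembly stubs behind such a Go package would ultimately emit):

```cpp
// What SIMD buys you, sketched with x86-64 SSE2 intrinsics. One PADDD
// instruction adds four 32-bit integer lanes at once.
#include <emmintrin.h>  // SSE2, baseline on all x86-64 CPUs
#include <cstdio>

// Add two int32 arrays; len is assumed to be a multiple of 4 for brevity.
void add_i32(const int* a, const int* b, int* dst, int len) {
    for (int i = 0; i < len; i += 4) {
        __m128i va = _mm_loadu_si128(reinterpret_cast<const __m128i*>(a + i));
        __m128i vb = _mm_loadu_si128(reinterpret_cast<const __m128i*>(b + i));
        __m128i vs = _mm_add_epi32(va, vb);  // 4 additions in 1 instruction
        _mm_storeu_si128(reinterpret_cast<__m128i*>(dst + i), vs);
    }
}

int main() {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int d[8];
    add_i32(a, b, d, 8);
    for (int x : d) std::printf("%d ", x);
    std::printf("\n");
}
```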

Unlocking Potential Through the Impact of Dedicated Development Teams on Material Digitization

In today’s world, everything is turning digital: manufacturing, retail, and agriculture. The global digital transformation market is set to reach a worth of $1,009.8 billion by 2025, according to a report from Grand View Research, and this is one of the many reasons why technology has become the go-to method for streamlining operations, creating efficiency, and unlocking new possibilities. Development teams, specialized groups of tech talent, are at the heart of this transformation, moving material digitisation forward. Their influence is felt across many industries, redefining how firms approach innovation, sustainability, and customer interaction.

The Role of Dedicated Development Teams in Material Digitization

The consistency, expertise, and focus that dedicated development teams bring often provide the impetus needed to tackle the complexities of material digitisation in depth. It is not all about coding; it is about teams made up of project managers, analysts, engineers, and designers who integrate digital technologies into material handling and processing.

Why a Dedicated Team?

Choosing a dedicated team model for digitisation projects offers several advantages.

Driving Innovation and Efficiency

Dedicated development teams have been making revolutionary contributions to material digitisation. They digitise conventional materials and, in the process, create completely new avenues for innovation and efficiency in handling them.

Case Studies of Success

Navigating Challenges Together

Of course, material digitisation comes with its problems. Data security, integration with existing systems, and guaranteeing true-to-life digital representations of materials are difficulties facing most dedicated development teams. Partnering with an IT outstaffing company can add the skills and teamwork that help overcome these setbacks.

Overcoming Data Security Concerns

Among the most critical issues in any digitisation project is data security. Dedicated teams address this with solid protective measures, including encryption and secure access controls for digital materials. Regular security audits and updates are also needed to locate weaknesses before emerging threats can exploit them. By prioritizing data security, organizations earn user trust and ensure their services comply with regulatory standards.

Seamless Integration With Existing Systems

Dedicated teams likewise work on integrating digital materials seamlessly into existing systems so that they can be put to practical use. In most cases, this demands bespoke API development or middleware solutions that keep data flowing smoothly across platforms. Rigorous testing and validation are required to establish that all systems communicate effectively and that data integrity is not compromised. Done well, integration means increased productivity and a better ability for users to put digital resources to work.

The Multifaceted Benefits of Material Digitization

The impact of dedicated development teams on material digitisation reaches well beyond operational efficiency, driving it toward sustainability and personalisation.

Sustainability Through Digitization

By digitizing materials, companies can reduce waste and optimize resources. For example, digital inventory systems prevent overproduction and excess inventory through efficient demand forecasting. This helps not only the environment but also the company’s bottom line. Real-time data analytics also enable organizations to make more informed decisions and respond promptly to changes in markets and industries. Sustainable practices, in turn, help companies remain competitive in their respective industries.

Enhancing Customer Engagement

Material digitisation also opens up new opportunities around customer experiences. Immersive experiences built on VR and AR let customers try out a product virtually before buying it. Not only does this improve the buying experience, it also helps develop a better brand relationship. Moreover, personalized experiences can be built around user preferences, making customers feel unique and understood. Businesses can thus build customer loyalty and drive repeat purchases by offering memorable and unique interactions.

The Road Ahead: Collaborating for a Digitized Future

Material digitisation is an ongoing journey full of potential and challenges. As companies continue their exploration, the role of dedicated development teams will only become more important. Specialized teams are not simple service providers but strategic partners in innovation that help businesses navigate the complexities of the digital landscape.

A Collaborative Ecosystem

The digitisation of materials needs an ecosystem approach in which businesses, developers, and even end users work together. Encouraging open communication, feedback, and co-innovation leads to more practical digitisation solutions. Partnerships across different sectors let stakeholders draw on diverse experience and insight for continuous improvement. This collaborative approach accelerates the development of new technologies and ensures solutions that fit real user needs.

Staying Ahead of the Curve

Staying afloat in a continuously changing digital world is only possible with continuous learning and adaptation. Development teams should continually explore new technologies, methodologies, and practices to ensure that the digitisation of materials not only meets current needs but also addresses future trends and opportunities. This lets teams be proactive in introducing innovative solutions that maximize efficiency and improve the user experience. With a culture of continuous improvement, organizations can lead their industries and be prepared for whatever complications arise from the ever-changing digital landscape.

Conclusion

The influence of dedicated development teams on material digitization runs deep and wide. With their expertise, innovation, and future-oriented perspective, they are helping industries across the value chain unlock new potential, efficiency, and sustainability while making the customer experience more engaging. This collaboration between teams and businesses will no doubt form a cornerstone of the digital transformation journey, shaping the way we interact with materials in our everyday lives.

“Lost” 1983 programming language bought on eBay

A YouTube channel has resurrected a programming language that hadn’t been seen since the 1980s — in a testament to both the enduring power of our technology, and of the communities that care about it. But best of all, Simpson uploaded the language to the Internet Archive, along with all his support materials, inviting his viewers to write their own programs (and saying he hoped his upstairs neighbor would’ve approved). And in our email interview, Simpson said since then it’s already been downloaded over 1,000 times — “which is pretty amazing for something so old.” ↫ David Cassel It’s great that this lost programming language, MicroText for the Commodore 64, was rediscovered, but I’m a bit confused as to how “lost” this language really was. I mean, it was “discovered” in a properly listed eBay listing, which feels like cheating to me. When I think of stories of discoveries of long-lost software, games, or media, it usually involves things like finding it in a shed after years of searching, or someone at a company going through that box of old hard drives and discovering the game they worked on 32 years ago. I don’t know, something about this whole story feels off to me, and it’s ringing some alarm bells I can’t quite place. Regardless, it’s cool to have MicroText readily available on the web now, so that people can rediscover it and create awesome new things with it. Perhaps there are old ideas to be relearned here.

Tcl/Tk 9.0 released

Tcl 9.0 and Tk 9.0 – usually lumped together as Tcl/Tk – have been released. Tcl 9.0 brings 64-bit compatibility so it can address data values larger than 2 GB, better Unicode support, support for mounting ZIP files as file systems, and much, much more. Tk 9.0 gains support for scalable vector graphics, much better platform integration with things like system trays, gestures, and so on, and much more.

The Mouse programming language on CP/M

Mouse is an interpreted stack orientated language designed by Peter Grogono around 1975. It was designed to be a small but powerful language for microcomputers, similar to Forth, but much simpler. One obvious difference to Forth is that Mouse interprets a stream of characters in which most commands are only a single character, and it relies more on variables rather than rearranging the stack as much. The version for CP/M on the Walnut Creek CD is quite small at only 2k. ↫ Lawrence Woodman (2020) Even with very little to no programming experience, I can tell that code in this language looks a lot smaller and more compact than other code I’ve seen. I’ll have to leave it to the actual programmers and developers among the OSNews audience to provide more valuable insight, but I feel like there’s definitely something here that’ll interest some of you.
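To give a flavour of what interpreting a stream of mostly single-character commands looks like, here's a toy, Mouse-flavoured interpreter loop (my own much-simplified illustration, not Grogono's actual Mouse semantics): digits accumulate a number that's pushed on the stack, operators pop and push, and `!` prints.

```cpp
// Toy Mouse-flavoured interpreter (illustrative only, not real Mouse):
// digits build a number pushed on the stack; '+', '-', '*' pop two
// operands and push the result; '!' pops and prints. This is why such
// interpreters can stay tiny.
#include <cctype>
#include <cstddef>
#include <cstdio>
#include <stack>
#include <string>

void run(const std::string& program) {
    std::stack<long> s;
    auto pop = [&s]() { long v = s.top(); s.pop(); return v; };
    for (std::size_t i = 0; i < program.size(); ++i) {
        char c = program[i];
        if (std::isdigit(static_cast<unsigned char>(c))) {
            long n = 0;
            while (i < program.size() &&
                   std::isdigit(static_cast<unsigned char>(program[i]))) {
                n = n * 10 + (program[i] - '0');
                ++i;
            }
            --i;  // step back one; the for loop advances again
            s.push(n);
        } else if (c == '+') { long b = pop(), a = pop(); s.push(a + b); }
        else if (c == '-')   { long b = pop(), a = pop(); s.push(a - b); }
        else if (c == '*')   { long b = pop(), a = pop(); s.push(a * b); }
        else if (c == '!')   { std::printf("%ld\n", pop()); }
        // everything else (whitespace, unknown characters) is skipped
    }
}

int main() { run("12 34 + 2 * !"); }  // prints 92: (12 + 34) * 2
```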

They don’t make ’em like that any more: Borland Turbo Pascal 7

All in all, it was much easier to program for Windows using Turbo Pascal 7 than with anything else. Not only did it provide a programming model that matched the way the Windows user interface worked, the application itself had a Windows graphical interface – many Windows programming tools at that time actually ran under MSDOS, and were entirely text-based. TP 7 also had fully-graphical tools for designing the user interface elements, like menus and icons. Laying out a menu using a definition file with an obscure format, using Windows Notepad, was never an agreeable experience. Microsoft did produce graphical tools for this kind of operation, but Turbo Pascal combined them into a seamless IDE. All I had to do to build and run my programs was to hit the F7 key. I could even set breakpoints for the debugger, just by clicking a line of code. As I said, common enough today, but revolutionary for early Windows programming. ↫ Kevin Boone Even as a mere child who didn’t even know what programming was, I was aware of Turbo Pascal. It was a name that you just encountered all over the place as a DOS and Windows 3.x user, even if you didn’t know what it was. The author of this article, Kevin Boone, even claims Turbo Pascal “contributed to the widespread uptake, and eventual domination, of Microsoft Windows on desktop PCs”, which is not something I can verify because I was far too young, but I wouldn’t be surprised if it holds water. This article made me wonder if Pascal is easy to learn, and if someone wanting to learn programming can do worse than start with a Windows 3.x virtual machine and Turbo Pascal. Sure, it’s probably not very relevant today, but it might serve as a good, solid base to work from? I have no idea.

Developing a cryptographically secure bootloader for RISC-V in Rust

It seems to be bootloader season, because we’ve got another one – this time, a research project with very limited application for most people. SentinelBoot is a cryptographically secure bootloader aimed at enhancing boot flow safety of RISC-V through memory-safe principles, predominantly leveraging the Rust programming language with its ownership, borrowing, and lifetime constraints. Additionally, SentinelBoot employs public-key cryptography to verify the integrity of a booted kernel (digital signature), by the use of the RISC-V Vector Cryptography extension, establishing secure boot functionality. SentinelBoot achieves these objectives with a 20.1% hashing overhead (approximately 0.27s additional runtime) when compared to an example U-Boot binary (mainline at time of development), and produces a resulting binary one-tenth the size of an example U-Boot binary with half the memory footprint. ↫ Lawrence Hunter SentinelBoot is a project undertaken at the University of Manchester, and its goal is probably clear from the description: to develop a more secure bootloader for RISC-V devices. An additional element is that they looked specifically at devices that receive updates over-the-air, like smartphones. In addition, scenarios where an attacker has physical access to the device in question were not considered, for obvious reasons – in such cases, the attacker can just replace the bootloader altogether anyway, and no amount of fancy Rust code is going to save you there. The details of the implementation as described in the article are definitely a little bit over my head, but the gist seems to be that the project’s been able to achieve a much more secure boot process without giving up much in performance. This being a research project with an intentionally limited scope does mean it’s not something that’ll immediately benefit all of us, but it’s these kinds of projects that can really push the state of the art and test the viability of new ideas.
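Stripped of all the hard parts, the secure boot decision itself is small: verify the kernel image against a key baked into the bootloader, and refuse to jump to it otherwise. A heavily simplified, runnable sketch of that fail-closed flow (my own illustration with a stand-in checksum where the real project uses vector-crypto-accelerated public-key signature verification, and emphatically not SentinelBoot's actual Rust code):

```cpp
// Fail-closed boot decision, illustrative only. The "signature" here is
// a trivial XOR checksum purely so the sketch runs; a real bootloader
// would verify a digital signature against a baked-in public key.
#include <cstdint>
#include <cstdio>
#include <vector>

bool signature_valid(const std::vector<uint8_t>& image, uint8_t sig) {
    uint8_t x = 0;
    for (uint8_t b : image) x ^= b;  // stand-in for hash + verify
    return x == sig;
}

void boot(const std::vector<uint8_t>& image, uint8_t sig) {
    if (!signature_valid(image, sig)) {
        std::puts("signature invalid: refusing to boot");  // fail closed
        return;  // a real bootloader would halt here
    }
    std::puts("signature OK: jumping to kernel entry point");
}

int main() {
    std::vector<uint8_t> kernel = {0xDE, 0xAD, 0xBE, 0xEF};
    boot(kernel, 0xDE ^ 0xAD ^ 0xBE ^ 0xEF);  // boots
    kernel[0] ^= 1;                           // tampered image
    boot(kernel, 0xDE ^ 0xAD ^ 0xBE ^ 0xEF);  // refused
}
```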

“Why I prefer rST to Markdown”

This is my second book written with Sphinx, after the new Learn TLA+. Sphinx uses a peculiar markup called reStructuredText (rST), which has a steeper learning curve than Markdown. I only switched to it after writing a couple of books in Markdown and deciding I needed something better. So I want to talk about why rST was that something. ↫ Hillel Wayne I’ve never liked Markdown – I find it quite arbitrary and unpleasant to look at, and the fact that there are countless variants that all differ a tiny bit doesn’t help – so even though I don’t actually use Markdown for anything, I always have a passing interest in possible alternatives, if only to see what other, different, and unique ideas are out there when it comes to relatively simple markup languages. Now, I’m quite sure reStructuredText isn’t for me either, since I feel like it’s far more powerful than Markdown, and serves a different, more complex purpose. That being said, I figured I’d highlight it here since it may be interesting to those of you who work on documentation for your software projects or similar endeavours.

The impact of AI on computer science education

Yesterday I highlighted a study that found that AI and ML, and the expectations around them, are actually causing people to work harder and more, instead of less. Today, I have another study for you, this time focusing on a more long-term issue: when you use something like ChatGPT to troubleshoot and fix a bug, are you actually learning anything? A professor at MIT divided a group of students into three, and gave them a programming task in a language they did not know (FORTRAN). One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components. Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade. Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed. ↫ Esther Shein at ACM I find this an interesting result, but at the same time, not a very surprising one. It reminds me a lot of when I went to high school: I was part of the first generation whose math and algebra courses were built around using a graphing calculator. Despite being able to solve and graph complex equations with ease thanks to our TI-83, we were, of course, still told to include our “work”, the steps taken to get from the question to the answer, instead of only writing down the answer itself. Since I was quite good “at computers”, and even managed to do some very limited programming on the TI-83, it was an absolute breeze for me to hit some buttons and get the right answers – but since I knew, and know, absolutely nothing about math, I couldn’t for the life of me explain how I got to the answers. Using ChatGPT to fix your programming problem feels like a very similar thing. Sure, ChatGPT can spit out a workable solution for you, but since you aren’t aware of the steps between problem and solution, you aren’t actually learning anything. By using ChatGPT, you’re not actually learning how to program or how to improve your skills – you’re just hitting the right buttons on a graphing calculator and writing down what’s on the screen, without understanding why or how. I can totally see how using ChatGPT for boring boilerplate code you’ve written a million times over, or to point you in the right direction while still coming up with your own solution to a problem, can be a good and helpful thing. I’m just worried about a degradation in skill level and code quality, and how society will, at some point, pay the price for that.

GitHub is starting to feel like legacy software

The corporate branding, the new “AI-powered developer platform” slogan, makes it clear that what I think of as “GitHub”—the traditional website, what are to me the core features—simply isn’t Microsoft’s priority at this point in time. I know many talented people at GitHub who care, but the company’s priorities just don’t seem to value what I value about the service. This isn’t an anti-AI statement so much as a recognition that the tool I still need to use every day is past its prime. Copilot isn’t navigating the website for me, replacing my need to use the website as it exists today. I’ve had tools hit this phase of decline and turn it around, but I’m not optimistic. It’s still plenty usable now, and probably will be for some years to come, but I’ll want to know what other options I have now rather than when things get worse than this. ↫ Misty De Meo Apparently, GitHub is in the middle of a long, drawn-out process of rewriting its frontend in React. De Meo was trying to use a particular feature of GitHub – the blame view, which also works through the command line but is apparently much harder to parse there – and realised the browser’s search feature just couldn’t find the line of code they absolutely knew for sure was there. Only after scrolling for a while would the browser’s search feature suddenly find the line of code. I’d heard rumblings that GitHub’s in the middle of shipping a frontend rewrite in React, and I realized this must be it. The problem wasn’t that the line I wanted wasn’t on the page—it’s that the whole document wasn’t being rendered at once, so my browser’s builtin search bar just couldn’t find it. On a hunch, I tried disabling JavaScript entirely in the browser, and suddenly it started working again. GitHub is able to send a fully server-side rendered version of the page, which actually works like it should, but doesn’t do so unless JavaScript is completely unavailable. ↫ Misty De Meo Seems like a classic case of people being told to develop something in too little time, with the wrong tools, while management is breathing down their necks and pulling engineers away to work on buzzwords like “AI”.

An overview of the Starlark language

Starlark is a small programming language, designed as a simple dialect of Python and intended primarily for embedded use in applications. Some people might say it’s a bit like Lua with Python syntax, but I think there are many interesting bits to discuss. The language is now open source and used in many other applications and companies. As I led the design and implementation of Starlark, I’d like to write a bit more about it. ↫ Laurent Le Brun I’m sure there are a few among you who will like this.

Modernizing the AntennaPod code structure

AntennaPod has been around for a long time – the first bit of code was published in 2011. Since then, the app has grown massively and had several main developers. The beauty of open source is that so many people can contribute and make a great app together. But sometimes having many people work on a project can lead to different ways of thinking about how to structure the project. Because of this, AntennaPod gradually grew to have a number of weird code constructs. Our latest release, version 3.4, fixes this. ↫ ByteHamster The AntennaPod team had an incredible task ahead of itself, and while it took them a few years, they pulled it off. The code structure graphs from before and after the restructuring illustrate better than words ever could what they achieved. They changed 10,000 lines of source code in 62 pull requests for this restructuring alone, while still adding major new features in the meantime. Pretty incredible.

Did GitHub Copilot really increase my productivity?

Yuxuan Shui, the developer behind the X11 compositor picom (a fork of Compton) published a blog post detailing their experiences with using GitHub Copilot for a year. I had free access to GitHub Copilot for about a year, I used it, got used to it, and slowly started to take it for granted, until one day it was taken away. I had to re-adapt to a life without Copilot, but it also gave me a chance to look back at how I used Copilot, and reflect – had Copilot actually been helpful to me? Copilot definitely feels a little bit magical when it works. It’s like it plucked code straight from my brain and put it on the screen for me to accept. Without it, I find myself getting grumpy a lot more often when I need to write boilerplate code – “Ugh, Copilot would have done it for me!”, and now I have to type it all out myself. That being said, the answer to my question above is a very definite “no, I am more productive without it”. Let me explain. ↫ Yuxuan Shui The two main reasons why Shui eventually realised Copilot was slowing them down were its unpredictability, and its slowness. It’s very difficult to understand when, exactly, Copilot will get things right, which is not a great thing to have to deal with when you’re writing code. They also found Copilot incredibly slow, with its suggestions often taking 2-3 seconds or longer to appear – much slower than the suggestions from the clangd language server they use. Of course, everybody’s situation will be different, and I have a suspicion that if you’re writing code in incredibly popular languages, say, Python or JavaScript, you’re going to get more accurate and possibly faster suggestions from Copilot. As Shui notes, it probably also doesn’t help that they’re writing an independent X11 compositor, something very few people are doing, meaning Copilot hasn’t been trained on it, which in turn means the tool probably has no clue what’s going on when Shui is writing their code. As an aside, my opinion on GitHub Copilot is clear – it’s quite possibly the largest case of copyright infringement in human history, and in its current incarnation it should not be allowed to continue to operate. As I wrote over a year ago: If Microsoft or whoever else wants to train a coding “AI” or whatever, they should either be using code they own the copyright to, get explicit permission from the rightsholders for “AI” training use (difficult for code from larger projects), or properly comply with the terms of the licenses and automatically add the terms and copyright notices during autocomplete and/or properly apply copyleft to the newly generated code. Anything else is a massive copyright violation and a direct assault on open source. Let me put it this way – the code to various versions of Windows has leaked numerous times. What if we train an “AI” on that leaked code and let everyone use it? Do you honestly think Microsoft would not sue you into the stone age? ↫ Thom Holwerda It’s curious that as far as I know, Copilot has not been trained on Microsoft’s own closed-source code, say, to Windows or Office, while at the same time the company claims Copilot is not copyright infringement or a massive open source license violation machine. If what Copilot does is truly fair use, as Microsoft claims, why won’t Microsoft use its own closed-source code for training? We all know the answer. Deeply questionable legality aside, do any of you use Copilot? Has it had any material impact on your programming work? Is its use allowed by your employer, or do you only use it for personal projects at home?