It seems like a number of Debian ports are going to face difficult times over the coming months. Debian developer Julian Andres Klode has sent a message to the Debian mailing lists that APT will very soon start requiring Rust.
I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem.
In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.
↫ Julian Andres Klode
The problem for a lot of architectures that Debian supports, in one way or another, is that Rust and its toolchain simply aren’t available for them. As such, Julian Andres Klode states, rather directly, that these architectures have about six months to get themselves a full Rust toolchain, or sunset their Debian ports. The Debian PA-RISC (hppa) and Alpha ports, for instance, do not have a Rust toolchain port, and most likely won’t be getting one either, especially not within six months.
The reasoning for moving towards a hard Rust dependency for APT is the same as it is in every other similar case: Debian’s and APT’s developers want to be able to make use of modern tools and technologies, even if that means dead architectures get left behind. As much as I am a massive fan of retro-architectures like PA-RISC, I really don’t want otherwise modern Linux distributions to eschew modern tools and technologies just because they’re not available for an architecture that died in 2005. I own and use the last and most powerful PA-RISC workstation running HP-UX as a retro platform, so I definitely care – but I really don’t expect Debian or Fedora or whatever to waste any resources on supporting them if that means holding the distributions back for everyone else using it on actually modern platforms.
If there’s a large enough community of people around such architectures, they’ll keep the Linux train running. If not, well, that’s life.

The Ubuntu and Debian guys seem to have lost their collective minds rewriting core utils, but luckily there are more conservative options like Net/OpenBSD and Slackware.
What’s the problem with improving stuff?
Nothing if it really does improve anything. Rewriting in Rust is like Hollywood rebooting successful movies
Cleaning stuff, decreasing attack surface, making accessible to new developers…
The problem is they act as if there are no other alternatives. And abandoning / removing existing working ports is an acceptable choice.
So, now we have Ubuntu, Debian, and soon git itself abandoning every platform where LLVM does not exist, plus those without proper Rust support.
Linux was supposed to be the OS “that runs everywhere”, not “the one that is most convenient for some of the maintainers”
sukru,
I would welcome having more rust implementations, for this reason and more, but it is being worked on…
https://thenewstack.io/rust-support-is-being-built-into-the-gnu-gcc-compiler/
I feel there are several points muddling this though. Whether it’s init systems, sound daemons, display servers, etc, there’s always been controversy over change and maintainers have always been the ones who decide what to support. This isn’t anything new for linux or FOSS. There are those who are really passionate about legacy architectures, which is fine, but it doesn’t negate the economic tradeoffs involved in supporting such a minuscule user base. Theo de Raadt from OpenBSD comes to mind…
https://undeadly.org/cgi?action=article;sid=20140114072427
His opinion to keep supporting old legacy hardware isn’t wrong but when a project is footing the bill to maintain and run such niche legacy architectures, I think it’s fair to debate whether the resources are being well spent. Clearly some maintainers feel it is, but others don’t.
Projects should be pragmatic about where they spend resources. To be perfectly honest, supporting legacy hardware doesn’t really benefit me much. Sure, it’s nice to see that it works, but I’d rather see modern distros improve support for modern commodity hardware. Yes, I’m talking about ARM hardware. There’s way too much off-the-shelf hardware that can’t install common Linux distros like Debian, and to me this feels like a far bigger problem for FOSS.
Alfman,
Yes, that would be nice. But the current pace is putting the cart before the horse.
For a resource like apt, though, the calculus is different. Apt goes beyond Debian and is supported by many other operating systems, where it might be the only way of running Linux on a specific platform.
They are sometimes cross compatible. I could download Debian packages on a non-Debian system. Of course it’s not always supported, but it also gives a good “backup” when things are not quite working out of the box.
sukru,
I understand why someone downstream may not like it, but in principle I don’t see the calculus as being particularly “different”. People have posed similar complaints about systemd being too invasive, or gnome/kde officially ending support for X11, dropping 32bit support, etc. Clearly people have different interests & priorities, and nothing I say is meant to dismiss their preferences, but ultimately it’s the maintainers who make the decision for a project.
I do appreciate your point of view that apt’s evolution to rust won’t please everyone. You are right. However this is the standard deal we always get with FOSS: play along with the maintainers or create a new fork if you don’t like the changes. Other examples of this happening are MATE, libav, libreoffice, and many more. There will be debate over what’s best, but the point I want to get across is that this is par for the course with FOSS software and it’s probable that this rust conversion controversy will reach a lot more projects over time.
bubi,
We could continue using unsafe languages like C/C++ indefinitely, but after so many decades of software having memory faults, vulnerabilities, breaches etc, I do think it’s time we start plotting paths for a better future for reliable software. Memory safety is a good example of something the industry has managed to improve significantly with new languages. Things like ASLR and memory tagging are bandaids that have limitations and don’t truly address the underlying cause of memory faults like safe languages do.
I admit not everyone agrees with the decision to switch, but I am afraid that many of them are sweeping objectively real problems with memory safety under the rug. This movement to safe languages will take a long time, but I do think we’ll be better off for it long term.
Why not code in Erlang? It is a “safe” language too.
Alfman, I agree with you.
Personally, I’m not giving a capital F on support of legacy hardware. What I want is something more robust, more modern, and I don’t really care if it’s written in Rust or anything else, but now that we know there is a possibility to skip that whole class of memory out-of-bounds issues, I want that.
I also want a new C, a new C++, a Rust supporting more archs (Cranelift, GCC) and a more mature Zig. It’s a bit sad that they are still not here 🙂
Kochise,
Yes, it looks interesting. Most languages newer than C are safer than C, haha. I guess this is worth a debate on its own, but garbage collected languages tend to be shunned for low level work, so proposing Erlang as a replacement for C in low level code would be controversial.
Since I lack experience with Erlang, I’m not sure how well it stacks up against other garbage collected languages that I like, including dlang and c#. Someone else favored Ada, which actually started adding rust-like safety extensions. So that might be a fruitful choice, although many of these languages don’t have critical mass.
An article that went over all these modern alternatives with a pros/cons write-up would be very interesting to discuss.
Kochise
> Why not coding in Erlang ? It is a “safe” language too.
But it is terribly slow if you do anything other than communicating (over the network or with other Erlang processes).
Have you ever heard the expression: “Don’t fix it if it isn’t broken”?
Reimplementations for absolutely no reason will create more security flaws, not fewer.
Carewolf,
That is logical of course, but the caveat being many would say that C code IS broken. It’s not just hypothetical; C has been notoriously plagued with faults since the beginning. The “many eyes make all bugs shallow” idea is a cozy feeling, but it has proven fallible even for mainstream code bases with lots of eyes. Hackers, including the NSA, are exploiting these faults. Though known exploits get fixed, as complexity goes up it becomes harder to know for sure whether any given code base is vulnerable or not. This is why safe languages that automate verification are useful.
That’s a fair point and it deserves to be addressed directly: you are right that sticking with C avoids teething problems with new code written in a safer language. But there is what I’ll call “the fallacy of short term solutions”: by always taking the path that is easiest in the short term, every single time, we virtually guarantee we get stuck in a local maximum that keeps us from solving long term problems. Every generation could use this justification to keep using C indefinitely, but then the harm aggregates decade after decade and ends up being far greater in the long run. And this is exactly what we are seeing with C code bases.
Developers are very stubborn and resist change, myself included. The way I see this playing out isn’t so much convincing stubborn programmers to accept the merits of new languages. Instead, what I expect is that old generations retire and age out of the industry and then those who are more amenable to new solutions will replace them.
It’s not just the memory faults, C has many other shortcomings too. Honestly, as a long time C programmer, I think it’s time to move on. I concede there are difficulties, but we should at least be planning it. With the benefit of hindsight, the world has learned how to make programming languages better than C/C++. There’s no reason to lock the software industry down to legacy languages just because the world decided to use them half a century ago. We deserve better languages now.
The last time I tried was 2 years ago, and my hppa C8000 ran Debian sid almost perfectly. I couldn’t get hardware acceleration on the Radeon, some nasty old bug, but everything else ran great.
I wish we lived in a society where we would start seeing energy as the endless resource it is (at least for a few billion years) and materials as finite (which they are). But we seem to be very keen on condemning good hardware to the landfill (and doing our best to exhaust our resources in a few hundred years) rather than going the extra mile to keep as much of it functional for as long as possible.
Funny that I can do 99% of what I need to do with a computer from my nextstep 3.3 box, including printing to a new postscript printer and email. Banking I can do from my phone.
So, to add to my list of wishes: if the web would not suck so much and progress would slow down a bit so we could toss fewer things in the trash, that would be great, really.
Indeed luckily there are more conservative options.
I’m getting a confusing message from your post. If your hppa machine is such a good fit for your current needs, how come you haven’t used it in 2 years ?
Most negative feedback to this kind of news feels so reactionary for the sake of it, with so many theoretical arguments thrown around that it’s always hard to tell whether one is indeed sincere.
I’m generally sad to see anything hit the landfills, indeed. But hinting at open source projects being a noticeable driver for it when they deprecate architectures that have mainly ceased to see any real world usage for 15+ years, compared to what really contributes to that problem… I have a hard time seeing this as reasonable.
In any case, if the devs themselves don’t feel like maintaining the tools on their old foundations, I don’t think basic users like myself have any moral right to jeer at them. When Debian shut down the ppc port, I was sad, sure, but unless I invest time in maintaining it myself, I’m not entitled to have others do it in their free time when it doesn’t suit them any longer (nor on paid time from their employers, for whom there’s not much to gain).
I use it – HP-UX.
I’m not upset that developers are dumping geriatric platforms. It does indeed take time to maintain, and energy. I’m just sad at the whole “waste of resources” culture that gets us into this state where a web browser rendering about:blank consumes gigabytes of memory, and where a phone that is perfectly capable of recording and reproducing 4K video is also doomed to the landfill.
This is much bigger than any single developer or Linux distribution. It is just a result of Big Tech concentration and customers, with their incomes squeezed, coming to expect software for free and paying for the hardware. We should be using hardware as long as possible and paying for good, efficient software instead of generating trash and getting “free software”.
And I also hate that developers work for free on the things they love and get paid to deliver rubbish at Big Tech.
Shiunbird,
We’re not really in tune with software optimization like we used to be. In the past limited hardware made optimization a necessity. Now after decades of hardware improvements, software devs have become gluttonous.
That said though, I don’t think this is the main culprit with hardware being doomed to the landfill. Planned obsolescence is a bigger problem IMHO. Most phones and computer specs are overkill, but sometimes hardware is thrown out over software support issues. Unfortunately this is economically beneficial for manufacturers, so they have no incentive to fix it. Even a random old phone from ten years ago has tons of potential for a second life in DIY projects. Not only are the hardware specs “good enough”, they’re often orders of magnitude better than needed.
For example, when I worked on a canbus project, it would have been awesome to build it out of an old used phone that will become waste. But they’re so proprietary and locked down that I ended up buying a raspberry pi with fewer built in features, worse specs, no display, no battery. I’m not disparaging RPI but it’s a damn shame that we can’t make full use of the commodity hardware that so many of us already own.
Another example is an offgrid battery monitoring project. I’ve ended up spending lots of money to buy new hardware because the commodity hardware being thrown out is too locked down to allow reprogramming. Imagine if reprogramming cell phones was as easy as it is with x86; we’d legit see a whole cottage industry around reusing our cell phones. So many of us recognize e-waste is a huge problem, yet I see absolutely no progress being made. Corporations are rolling in the dough and have no incentive to improve, and moreover regulators let them.
Agreed. We were better off when different parties were responsible for the hardware and software. Everything goes to shit when the hardware gets locked down to the manufacturer’s software.
@Alfman
Agreed. We were better off when different parties were responsible for the hardware and software. Everything goes to shit when the hardware gets locked down to the manufacturer’s software.
—
So much this. I remember MSFT going out of their way to say that Windows 95 was supported on a 386 with 4MB of RAM. Yes, it would run, but anything beyond a single instance of notepad would get the system to swap like there’s no tomorrow.
Or picking up a copy of Flight Simulator and saying that it was officially supported on NT 4, 95, 98, Me, 2000 and XP even though running it on NT 4 would condemn you to software rendering and I think even no force feedback (the buzz of the time).
I daily drive a Librem 5 but yesterday picked up a used iPod touch (last gen) just to facetime with my grandfather (because of Skype going away). I see no reason why the thing couldn’t run the latest iOS, but hey…
He could get the latest Skype release running on his 2007 iMac running Linux, or his 2015 iMac running macOS, or his android phone, and now… seriously, can’t stand Teams. =(
I digress. The problem here is the oligopoly. Just as the Oil Company can’t own the roads and the gas stations, the hardware company should not own the software company (and the warehouses, delivery services, internet service providers, colo)….
“Planned obsolescence is a bigger problem IMHO”
I completely agree. EOL devices need to be open-sourced. It is going to take a seismic change, on the order of a Civil Rights Act, to curb the runaway waste caused by corporations maximizing short-term value at the expense of future generations. Mandating this would create a level playing field; otherwise the first mover would go out of business and we’d be back to square one.
Iapx432,
I partially agree, since it would be difficult to define what was abandoned on purpose and what was abandoned for technical reasons. And opening the source code might not always be feasible (or even legal).
Though…
There have been very egregious ones,
https://www.bbc.com/news/technology-51768574
Sonos asked people to purposefully “brick” their perfectly working smart speakers to get an upgrade discount.
That was vile and cruel, and of course wasteful too.
(And they offer no “line in” option. If your WiFi is down, the $2,000 speaker setup is a brick now).
I saw a q6600 quad core for sale and looked into how well my PC of that era stacks up against modern CPUs. It’s outperformed by a Raspberry pi using almost no power comparatively. Your argument that energy is free only holds up when that energy is cleanly generated. The reality is coal/nuclear isn’t so free, and it’s the baseload for the power grid. There is a real but obscured cost of air pollution and nuclear waste from keeping less power efficient devices in use.
dark2,
There’s no denying ARM SBCs beat old and even new x86 desktop computers on efficiency. However I still wouldn’t recommend a low end ARM SBC on the desktop. I probably had overhyped expectations when I bought one, but even the RPI5 lags at desktop performance. I actually did replace a very old desktop from the 2000s, which had died, with an RPI5 for my in-laws who needed a basic computer to do web stuff. The mediocre performance probably has more to do with the state of drivers and hardware acceleration than the CPU itself. The RPI was really disappointing at OpenGL.
But whatever the reason, an old x86 computer (with upgraded SSD of course) could still come out ahead on performance.
You do make a valid point about energy consumption, although it depends not only on comparing the carbon emissions between an old and new product, but also the carbon emissions and environmental impact of disposing/manufacturing/packaging/shipping resources around the world, etc. I’ve read that it’s environmentally cleaner to run an old car till it gives up the ghost than to preemptively replace it with a new car, even though the new car will be more efficient. I searched in vain for numbers that cover the impact of replacing old computers, but I didn’t find much detail. Most articles I found just talk about manufacturer’s EOL issues. If you do find a source that evaluates the environmental impacts in detail, then I’d be interested in reading what they say!
We should be moving to clean energy.
But even if you consider that we can recycle 99% of our components for raw materials in the future, if we keep wasting that 1%, one day we will run out of raw materials. Then what do we do?
I don’t think it makes much sense to think 20 years ahead and neglect 300 years ahead. What are we going to do with all the wasted GPUs in AI?
Shiunbird,
Yes, clearly! Although somehow the world has become stupid and clean energy is considered politically divisive. Too often green energy initiatives get canceled and rolled back with the political tides, undoing everything. This not only makes stretch goals unattainable, we’re failing to make much progress even on low hanging fruit.
There’s an unimaginable wealth of raw resources in the core of the earth, but we don’t have a good way of tapping those with the machines we have. Here’s an idea for a sci-fi movie: the world runs out of resources and we use our nuclear arsenal to blow a huge hole into the earth, creating an artificial volcano to access more minerals from the core, haha.
Quoting Homer Simpson, um… “nuclear weapons, the cause of, and solution to, all of life’s problems.”
https://www.youtube.com/watch?v=SXyrYMxa-VI
Thinking “20 years ahead”, wow you are an optimist!
Assuming we had practically infinite energy, we could recycle everything back into raw materials. The worst case scenario (i.e. one of the least energy efficient ways to do it) is we melt/vaporize everything down and separate/collect raw materials that way, including from landfills. It ought to at least be physically possible, we could do it in a lab setting, but it’s hard to envision this being economically viable. Scaling it up would require such a large amount of power that it doesn’t seem practical. Maybe a huge concentrated solar collector could do it. This isn’t a new idea, but even with “free” solar energy every plant is still likely to cost billions of dollars.
“Sahara (2005) – Solar Tower Fight Scene” (I was unable to find a better clip of the solar plant)
https://www.youtube.com/watch?v=LdjLrwJcIQs
Maybe this is in our future, but probably only as a last resort as we use up cheaper sources first.
@Alfman
> somehow the world has become stupid and clean energy is considered politically divisive
By “the world”, I assume that you mean the US.
https://arstechnica.com/science/2025/10/theres-a-global-boom-in-solar-except-in-the-united-states/
I would say it is more common than people realize.
For example, the “anti nuclear” movement in Europe is a very strong political force. They even shut down perfectly working nuclear reactors in many countries (including Germany, but France somehow avoided this terrible fate)
https://www.foronuclear.org/en/updates/in-depth/germanys-nuclear-shutdown-mistake-rising-prices-increased-emissions-and-economic-recession/
But they had to import natural gas and coal to make up for the deficiency. My personal conspiracy theory is this was fueled by Putin and other “old” energy sellers to avoid actual clean energy solutions (again, nuclear is literally the safest choice we have, even compared to solar — though it depends on how you count falls during rooftop installations).
In response, they also ask people not to run air conditioning, a strange phenomenon that causes more deaths from heatstroke than we have from gun violence in the USA, suicides included. Nothing to be proud of.
sukru,
I don’t have A/C here and summers get extremely uncomfortable. Would be an unfortunate way to go, haha.
Here in the US many grid operators are operating with very little margin, and excess demand can result in brownouts and blackouts. When my parents were in California, they were affected by the rolling blackouts. My understanding is that power plants could generate more power, but the grid could not safely deliver it, and hot power lines were starting fires, and therefore they had to cycle between those who could get power. Texas had a couple grid collapses in recent years where people died (albeit through completely different circumstances). Here in NY during a heat wave my UPSes started beeping as the house voltage dropped below 90V (normally 120V). Every house with A/C would have been running it and the grid wasn’t able to keep up voltage. Imagine people trying to charge their electric cars on top of that. I am honestly surprised the grid stayed up.
As for being asked to turn off heavy appliances, it’s kind of a tragedy of the commons. People feel entitled to ignore those instructions, but their greed creates an existential crisis for the grid, which will be forced to shut down if enough people don’t listen.
Solar installations have significantly helped reduce grid load during the day, but it’s created a new “duck curve” problem…
https://en.wikipedia.org/wiki/Duck_curve
Solar panels tend to generate most electricity when people aren’t home. Then when they return home after work, the grid demand goes up while solar generation goes down. Solving this necessitates not just solar panels, but energy storage. This is very expensive and increases the competition for lithium batteries needed by portable electronics and electric cars. Stationary energy applications will probably need to shift away from lithium towards other chemistries where density is less important. Sodium batteries are a promising alternative because the materials are so plentiful, but it’s hard to see the appeal of alternatives when they’re even more expensive than lithium.
“BLUETTI Na SODIUM ION Battery Power Station | Did They Just KILL Lithium?”
https://www.youtube.com/watch?v=OoZ_g_MShTw
We need more economies of scale to solve this.
Alfman,
As for solar, California is making it extremely difficult to get. Long story short, politicians prefer the profits of large utility companies over the little guy, over preventing brownouts, or even over the environment. Basically the same story everywhere.
Anyway,
As for A/C, I got my current room an LG “inverter” window unit. I bought two of them (one for the living room) at $300 each last year. They cost about $10 – $40 per month to operate depending on how much is needed.
Doing a whole house upgrade is usually a “5 digits” project, but making your home office livable might be a worthwhile upgrade.
Yes, and that is why Solar + Nuclear is the best solution for our environment and also the long term economy.
For remote places, we can have “micro-grids”. They don’t need to be connected to national grid (sorry, but it really is not economical). Yet they can have Solar + Battery + Gas backup.
“But what about snowy mountains?”
They actually can generate more solar power due to fewer obstructions and the highly reflective properties of snow.
sukru,
Yes, I’ve heard that homeowner regulations create impediments in some states. At least the government laws are ostensibly about safety. But these days homeowner rights are also being lost at an alarming rate via another scam that now includes a majority of new properties sold in the US: involuntary homeowner contracts with HOA corporations. These ought to be illegal. Bah.
Nuclear is greener than burning coal for sure. Although my understanding is that nuclear does not scale up and down fast enough to respond to dynamic grid conditions, where coal/natural gas plants still have the advantage. Hydroelectric turbines are “green” if you can get them, but with the droughts out west we’re seeing the prospect of former hydro power plants going offline rather than coming online.
I’ve been hearing about other grid scale energy storage solutions that seem interesting and might one day provide a better solution to the irregularities of renewable sources without requiring rare earth minerals for batteries: gravity storage lifting and descending weights or pumping water up and down a hill, maglev flywheels, etc. Many of these already exist, but only on smaller scales. I can only imagine how large these facilities would need to be to scale up to service the grid, possibly exclusively, when no renewables are generating electricity.
If you are so concerned about energy, why are you using such an energy inefficient machine. Those HP-PA machines are so old, that they literally have almost no power saving/throttling support in hardware. A modern ARM machine like a Pi will run circles around it in performance at a much much lower power consumption.
We really need to have a safe but backward compatible version of C++ so that we can have the safety of Rust without all the downsides of incompatibilities.
Unopposed0108,
We already have several options, the latest popular one is Fil-C:
https://github.com/pizlonator/fil-c
sukru,
Interesting link, it’s good to learn about these projects. However I wouldn’t equate this to what rust does. From your link…
While many garbage collected languages do offer improved memory safety, this is often shunned for low level development. I suspect Fil-C would face the exact same criticism in replacing C. We have seen efforts to catch memory faults when they happen; Fil-C may offer benefits similar to memory tagging in this regard, but there’s still a gap between making a language “safe” by catching a fault when it happens versus a language that refuses to compile the fault in the first place. In mission critical software like avionics or medical devices, catching the error once it’s happened is clearly inferior to catching it in the compiler.
So the rust approach is objectively better for safety. In theory we might add the necessary metadata to C to allow the compiler to enforce C safety at compile time. It would go against what Unopposed0108 is asking for, though, as it requires existing programs to be modified. I can see how keeping C syntax could be more palatable than switching to rust, but it would be a tremendous effort all the same. Moreover, rust’s safe by default approach offers a huge benefit because unsafe code gets confined to “unsafe” sections, which is a huge timesaver for auditing code. It’s hard to see how C can ever reach a “safe by default” point given that 100% of pre-existing C code is unsafe today.
Perhaps this all can and should happen anyway, but it’s not an easy/automatic fix that just works. I honestly think the only way we will conceivably get there, using existing C code, is to use AI to fill in the missing metadata.
Alfman,
There are always trade-offs. Proponents of a technology usually ignore this, and want to sell a product based on positive features.
(This is not too different from other products. “Hey, we have a service called Netflix, you can watch as much as you want.” “But what happens to content if I’m in the middle of a season and you discontinue it?” “Sorry, please ask us how awesome the next season of Bandersnatch is.” “But you removed that too.” “Oops, our bad.”)
Anyway, basically we have three high level options.
Keep everything in C or C++: this gives us maximum portability and code re-use, but also brings occasional memory related bugs.
Migrate to Fil-C, CheckedC, or similar: this gives us similar portability, at the expense of occasional performance degradation.
Migrate to Rust: this gives us excellent memory safety, at the expense of very high engineering effort, and it will bring back many previously fixed bugs and performance issues.
There is no free lunch, and this is even true for the best tools out there.
sukru,
To me it’s not just about performance. IMHO a new C-like language or feature that doesn’t address the root causes of memory faults is not worth migrating to. The industry needs to move forward by solving faults at their source.
Assuming we’d all be willing to embrace garbage collection (many will not be), then there are tons of mature languages that do this already. C# is a really nice language that doesn’t merely address most of C’s notorious shortcomings, but has added a lot of powerful features over C and even C++.
You didn’t comment on the AI, but I think that’s a realistic option. Today’s generative code LLMs have a tough job because they essentially translate English specs to C/python/whatever, and English makes for a very poor specification language. However C code is also a specification language, one which specifies how a program works in great detail. Using C code as a specification to write a program in a new language should be much easier to automate given that language constructs can often be mapped directly onto each other. I concede C lacks the safety features & metadata that rust uses, so those need to be generated somehow, but it seems like training an AI to fill in these blanks would be feasible with minimal human effort compared to re-writing the software from scratch.
Alfman,
It is more nuanced than that, but Fil-C definitely needs better marketing. They can start with the terrible name choice. The “up to 4x overhead” and “garbage collection” framing isn’t helping either.
At its core Fil-C is just a different ABI for standard C.
It is not a new language, but it changes the LLVM compiler to adhere to the new runtime rules. It also comes with some standard library updates.
And oftentimes has zero or little overhead.
Say, you have the code piece:
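The original snippet didn’t survive here; a minimal sketch of the kind of code presumably meant (the array size, names, and bounds are my own assumptions):

```c
#include <assert.h>

/* Illustrative sketch: the array size and loop bounds are compile-time
 * constants, so a checked compiler can prove every access through p is
 * in range and emit no extra runtime checks at all. */
int sum_fixed(void) {
    int buf[16];
    int *p = buf;
    int sum = 0;
    for (int i = 0; i < 16; i++)
        p[i] = i;
    for (int i = 0; i < 16; i++)
        sum += p[i];
    return sum; /* 0 + 1 + ... + 15 = 120 */
}
```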
Here the compiler knows the exact range of memory addressed by p, and also the loop boundaries. Fil-C will generate more or less the same code as a regular LLVM C compiler.
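For the hoisted-check case, another sketch (names are assumptions), this time with a length known only at run time:

```c
#include <stddef.h>

/* Illustrative sketch: n is only known at run time, but the access
 * pattern is a plain linear walk, so the two checks -- that p is a
 * valid pointer and that p + n stays inside its allocation -- can be
 * hoisted to before the loop instead of repeated on every iteration. */
long sum_range(const int *p, size_t n) {
    long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += p[i]; /* no per-iteration check needed after hoisting */
    return sum;
}
```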
Here the compiler will include only two checks, both before the loop. It can optimize out the memory-access overhead for each iteration.
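And a sketch of the worst case being described (again with invented names): a byte-by-byte copy where the compiler cannot bound how far the pointers travel.

```c
/* Illustrative sketch of the worst case: the copy walks byte by byte
 * until a sentinel, so nothing bounds the pointer arithmetic at compile
 * time. A checked runtime must then validate every single load and
 * store, which is where figures like "up to 4x overhead" come from. */
char *copy_until_nul(char *dst, const char *src) {
    char *d = dst;
    while ((*d++ = *src++) != '\0') /* each *src read and *d write is checked */
        ;
    return dst;
}
```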
This is the worst case, and where the “4x overhead” comes from: the Fil-C runtime has to check each and every byte-sized pointer access (on read and write).
This is also the kind of code modern developers avoid: not only do we have libraries that optimize this (much better), it is also extremely unsafe, and it is where most buffer overruns come from.
As long as the code itself is of high enough quality, Fil-C will behave very similarly to the native C runtime.
Garbage collection?
That is for the “hidden pointer tags”. Each pointer comes with an associated tag, which is stored elsewhere and managed by the GC. Regular pointers keep (almost) exactly the same behavior, including sizeof(void *) = sizeof(int)
(As for LLM-based code translation… this is one area they are really bad at. Yes, you would think the opposite would be true. But then we would not need compilers, or ABIs like Fil-C, would we?)
sukru,
I get that there’s a lot of interest in finding solutions that keep the C language as is. But TBH I am strongly in favor of compile-time verification. All of these run-time solutions work better as stop-gap measures than as permanent solutions.
C programs in particular notoriously lack metadata about object lifetimes and even object boundaries, which is alarmingly reckless. The onus is on the developer to write correct code by enforcing restrictions that aren’t present in the language. Trouble is that humans also struggle at inferring the correct behavior especially when it comes to using poorly documented C APIs that function as black boxes. For example…
https://learn.microsoft.com/en-us/windows/win32/secauthn/acquirecredentialshandle–schannel
What is the lifetime of the pAuthData structure?
Does it need to remain valid throughout the existence of the returned phCredential handle until FreeCredentialsHandle is called? Or can it be freed right away (ie on the stack)?
The documentation doesn’t explicitly say, and at the source code layer it’s not clear whether AcquireCredentialsHandle will continue to reference the data structure later, when the acquired credential actually needs to be used, or whether it copies all the data it needs. Both are plausible and logical. Since it’s not documented, you somehow have to guess the original developer’s intention; I’ve seen 3rd-party sources disagree with each other. The point of the example is that the lack of specificity in C is directly responsible for ambiguity over whether or not a memory fault exists. Better documentation would help, but Rust fixes this by making object lifetimes explicit, so there is no confusion. Solutions that keep using C code unmodified cannot fix this, because they don’t know the correct lifetimes either. The best they can do is halt when a fault happens, and even that is still bad. That’s why I am a proponent of the compile-time correctness that Rust offers over solutions that continue using C code with mitigations.
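To make the ambiguity concrete, here is a hypothetical C API in the spirit of that example (every name below is invented, not Microsoft’s actual implementation): two implementations with the identical signature, only one of which lets the caller free the data immediately.

```c
/* Hypothetical API sketch: nothing in the C signature says whether the
 * handle copies auth_data or keeps pointing at it, and both variants
 * below type-check identically. */
struct auth_data { int key; };

struct cred_handle {
    struct auth_data owned;      /* used by the copying variant  */
    const struct auth_data *ref; /* what use_handle dereferences */
};

/* Variant A: copies everything it needs; the caller may free or
 * discard auth_data immediately after this returns. */
void acquire_copy(struct cred_handle *h, const struct auth_data *auth_data) {
    h->owned = *auth_data;
    h->ref = &h->owned;
}

/* Variant B: retains the pointer; auth_data must now outlive the
 * handle, or any later use_handle call is a use-after-free. */
void acquire_retain(struct cred_handle *h, const struct auth_data *auth_data) {
    h->ref = auth_data;
}

int use_handle(const struct cred_handle *h) { return h->ref->key; }
```

With variant A, a stack-allocated auth_data can safely go out of scope; with variant B it cannot, and nothing in the header tells you which one you got. Rust encodes exactly this distinction in the type system: a handle holding a borrowed reference cannot outlive the data it borrows.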
A lot of these LLMs are too generic in nature and aren’t optimized for the specific scenarios where we end up wanting to use them. I think a special purpose built LLM would actually do well at code translation without all of distractions that a general purpose LLM needs to make it general purpose. Purely specialized LLMs have more promise in technical areas.
AI isn’t really necessary just to translate the syntax either. Code translators using classical techniques with abstract syntax trees work fairly well. Heck, LISP makes it perfectly natural to work with its own code as a data structure. Once you have this AST you don’t really need AI to translate the language syntax, since it’s already a fairly mechanical task. What’s missing from C, though, is “intention metadata”.
Alfman,
C has the perfect abstraction for our current machine architectures (except the memory hierarchy). It is essentially a portable assembly language, and that is why it stays ubiquitous.
As for logic errors, like the lifetime of an operating system handle, no compiler can fix these. Handles can be shared across processes, and sometimes even persisted to disk or transferred across networks.
Trying to solve something fundamentally unsolvable by nature causes eventual rifts, where the abstractions either have to leak, or they will have to prevent people from doing perfectly valid tasks.
Yes, and we can achieve that with current C, with no or minor modifications.
(Even Python was able to move to a “type enforced” version without breaking compatibility much)
The issue is there seems to be at least 3 different approaches, which will need to be reconciled somehow.
(All in C and/or C++ domains, without other languages like Rust)
1 – Improving the existing C runtime, like Fil-C. The same exact valid programs run with a different ABI.
2 – Extending the language with optional stronger sections with annotations and strict checks (CheckedC, new C++ safety profiles)
3 – Shrinking the surface of the language to ban undefined behavior (Safe C++)
Over time, I expect these different solutions to converge into one official, shared consensus, where C/C++ will continue to operate as the next 40 years’ systems language, and we will be able to run existing code while automatically patching any safety issues.
Alfman,
I think you answer yourself here.
Functional languages, like Lisp are perfect candidates for machine manipulation without breaking anything.
Imperative languages, which is basically anything useful today, will break in subtle and very hard-to-find ways (or just blow up) during machine translation.
I have done many translations, manually and automatically. It is never an easy task. LLMs might help, but in my practice even the best coding ones make it more difficult. However, I have benefited from asking “what does this code do, and how does it process the data?”
sukru,
I think it’s a matter of training the AI for the task at hand. Being exposed to mountains of code in general doesn’t make one good at translating it. However, one that has been exposed to before-and-after translations will be able to do a much better job.
C is technically a subset of assembly language, but I don’t hold this against it. Programming languages generally share the same kinds of primitives for things like variables, structures, if statements, functions, and so on. These aren’t really unique to C. To the degree that C has fewer abstractions than other languages, I wouldn’t say that’s necessarily a good thing either. Useful abstractions are good: they save time, reduce bugs, help optimization, improve portability, etc.
I think the most important thing for a low-level language is NOT the absence of high-level features, but rather the flexibility to use only the features you need while not being forced to incur the costs of features you don’t want. I admit that C/C++ does a good job here. For example, C++ has higher-level features like polymorphism and exceptions that incur run-time overhead, but they’re optional, and the project can decide whether to use them or not. But other languages do this too. There is nothing technically special about C that hasn’t been improved upon by other languages; what really distinguishes C is its popularity, due to the rapid growth of Unix and C being the de facto standard in the early years.
If C had been introduced later, getting anyone to use it would have been an uphill battle, because we’d already be using better languages for everything. Let’s not beat around the bush: C has tons of faults and problematic designs because its authors lacked the experience that most CS grads would have today. What makes the C language important is not its intrinsic qualities, but rather the role it has played as a de facto standard. This is a role it continues to play; however, it is becoming ever more clear that it is holding the industry back from modern features while plaguing us with faults that other languages have solved.
Or https://github.com/checkedc/checkedc
Kochise,
Thanks, I’ll check that out, too.
Kochise,
Coming back after a cursory peek at their design.
They seem to have a different approach than Fil-C: a set of language extensions and “opt in” pieces of secure code, while the legacy code stays untouched.
I might be missing things, of course. But this seems to push the language more toward a C++-like syntax. Which I like, of course, but not everyone might be on board.
Thinking back, C++ itself had similar ideas, with more explicit memory safety. I think it was one of the big names (Herb Sutter, or Stroustrup?) who emphasized that the language had to evolve or risk being left behind.
In any case, it is good to see all these efforts. Rust might have given a good kick to the hornet’s nest, and they are finally bringing our beloved languages into modern times.
The architectures at risk here are all unofficial ports that fall out of sync with mainline Debian anyway. It is probably not the end of the world if they lag for a while or even just ship an older version of APT. But of course, this is not the last we have seen of Rust.
Longer term, the solution is to port Rust to more architectures. The easiest way to do that is to incorporate it into GCC. And good news, this is already being done in multiple ways.
https://github.com/rust-lang/rustc_codegen_gcc
https://github.com/Rust-GCC/gccrs
If you had the specific goal of making sure either of the above options could successfully build APT, I doubt it would be too much work. Neither is far enough along to rely on generally but I think they could be used to target a specific use case. I mean, it is not improbable that the rustc_codegen_gcc will be good enough to compile the Rust in APT when it first ships (even without special effort). I think it can already compile the Rust in the Linux kernel.
I expect that the success or failure of these Debian ports will continue to depend more on developer interest than anything else. Leveraging more Rust code in Debian is not the death sentence the current wave of articles says it is.
Mainline GCC also supports Rust nowadays; you can build Rust support on the latest development branches.
rustc also basically supports most scalar archs that LLVM targets.
Other than a few ancient archs that have few users, rustc works on most mainline arch/systems.
It’s a good call to start moving userland stuff over from C to Rust. It really avoids a lot of headaches in the long run to have a systems language that is memory and thread safe from the ground up. For large system codebases, at least.
I was enjoying reading this debate when something struck me as more than a curiosity. Through much of it we could if we choose substitute various words, Rust, APT, gcc, X11, Wayland and it’s the same fundamental debate and arguments that surface over and over again. From my perspective, it’s almost depressing.
Some people think the future is to be determined while others think it’s living in the past. Nothing can or should last forever. I don’t want to live in a world that was the same 10 years, or 20, 30, 40, or 50+. You can’t have progress without the progress part. I’m not saying everyone and everything should live on the bleeding edge at all times under any circumstances. That’d be just as stupid as trying to live in the past as long as possible.
IMHO I think it’s perfectly acceptable for Debian (or any other distribution) to stop supporting old platforms provided the following is met:
1) The old versions’ ISO images are available for download (unsupported, of course)
2) The old versions’ source code is available for download (either in the distribution ISOs or as a separate big tar.gz)