Redox, the rapidly improving general-purpose operating system written in Rust, has amended its contribution policy to explicitly ban code regurgitated by “AI”.
Redox OS does not accept contributions generated by LLMs (Large Language Models), sometimes also referred to as “AI”. This policy is not open to discussion, any content submitted that is clearly labelled as LLM-generated (including issues, merge requests, and merge request descriptions) will be immediately closed, and any attempt to bypass this policy will result in a ban from the project.
↫ Redox’ contribution policy
Excellent news.

No-one is as blind as the people who don’t want to see. (And this goes both ways.)
Bullshit. Choosing to ban “AI” from one’s own project is an informed and sensible choice.
It’s no more being blind than choosing not to eat a Spam sandwich because you don’t like how Spam (the meat product, not the junk mail) is made. Spam fans will swear to you that it’s the same levels of protein, fat, and bonemeal as any other meat product while in a superior and more efficient package, and if you’d only hold your nose and swallow it you’d see how great it is. It doesn’t matter that it smells like three day old roadkill, it doesn’t matter that the texture is like mayonnaise with sand in it, it doesn’t matter that it gives you indigestion when you eat it; clearly you are just being willfully blind to the benefits of heavily processed animal carcasses made with questionable food quality standards. Stop hating Spam, it’s going to change the world! Yesterday I used Spam to spackle a hole in my wall, that’s how good it is! Never mind the smell and the maggots dropping out of the hole I thought I patched, there’s nothing to see here, stop hating on my precious Spam!
Now, doesn’t that sound absolutely silly and unhinged? That’s how “AI” evangelists sound to people with two brain cells to rub together.
If your goal was to make the argument for AI look reasonable in comparison, you succeeded.
Xanady Asem,
Exactly.
People have one or two bad experiences, and assume everyone else has the same issues using modern tools.
I think it’s more an utter lack of understanding than a bad experience.
E.g. there is zero chance the organization I work for will go back to non-AI workflows. The productivity gain for chip design, for example, is just something that would be idiotic to give up just to meet arbitrary expectations of random uninformed dogmatism.
At the end of the day AI is just a tool. And there are lots of different tiers/spectrums/fields to what is “AI.”
Some of these ramblings make me think that if the web and open source projects had been widespread in the 70s or 80s, a few of the “tech enthusiasts” (zealots really) of the age would be ranting against using languages, compilers and preprocessors as “unholy” for software development.
Xanady Asem,
We have all been there. I remember my own fiery discussions on the merits of C++ over Java, or how using Windows was a very wrong thing to do. I was a Linux Zealot.
With age, one realizes there are other choices and priorities in life.
And we learn to live and let live.
You completely missed my point. Those who rave about how “AI” is the only way anything computer related should be done are delusional and simply wrong, and they come off as evangelical psychopaths whose lives are so wrapped up in their beloved “AI” they can’t function normally in society. This kind of thing is documented, studied, and downright scary. It’s worse than hard drugs.
I’ll say it again, try to read it with an objective mind and not through the translation of your favorite hallucinatory LLM. Choosing to ban “AI” in one’s project is just that: A choice. It should not be ridiculed, it should not be laughed at, and it should be considered just as valid as your choice to use “AI”. When you start going down the road of telling people they aren’t allowed to block “AI”, you become the asshole, and nobody likes assholes.
Who are “those people” in the second sentence?
I’ve never once heard anyone ever say, “AI is the only way,” etc. It’s true that various types of AI are highly capable in different ways. That is without debate. We have tests for that. But some absolute statement? Who other than Fox News and the present admin talks like that?
Yes, some people are having problems with AI. Humans during their entire existence have had a multitude of problems with all kinds of things. Some people have problems with most things. Alcohol kills thousands of people in the US every year in accidents. I can go buy whiskey right now at my supermarket. It’s an educated guess that alcohol is more dangerous for more people than AI at this time in their personal lives. It’s just normalized. My logic holds as well as yours, it’s just more rational. What percentage of the population exactly has the problems you are talking about? We know some people are having problems with AI. This is important. What are the facts?
I like your statement, “your favorite hallucinatory LLM.” We know that. We know that you have to be careful and check the work. Why are you being overly emotional and accusatory?
Of course using or not using AI is a choice. Who said it wasn’t? But, if you work at a company and their choice is you have to use AI… and you don’t like that, you know what to do.
I would suggest that your messages here would appeal to more people if you were less accusatory, more accurate, and rational.
Who’s being what in this discussion?
Morgan,
My point stands: we all agree easily that SPAM is a terrible source of protein, close to soylent green.
But there are situations when it serves its purpose. And I don’t even need to talk about true survival situations. Two weeks in Nigeria are usually enough.
So yes, setting a rule like `No spam ever, no matter what` makes you appear blind. And the `bullshit` and `asshole` don’t really help that impression.
Good luck and have a nice day!
Except clearly, not everyone agrees easily that “AI” is a terrible way to do things. Otherwise this wouldn’t be such a heated discussion every time it comes up.
No, it’s a personal choice and I’m so fucking tired of being attacked and belittled for daring to not use the tools others choose to. To be clear: I don’t give a shit if Xanady Asem and Sukru and Alfman and you decide to use these broken, hallucinatory tools in your work and leisure; that’s your choice. However, I have an opinion about the tools themselves, and the massive amounts of clean water and other vital resources being diverted from people who need them so some rich white guys can play with their shiny new toys. So don’t give me that “survivor situation” hypocritical bullshit. It’s the same thing whenever I reveal that I’ve never drank alcohol; as a child I saw how it ruined my birth father’s life and culminated in his suicide, and I made the choice then to never drink. All my life I’ve been belittled and made fun of for not being “man enough” to drink. Fuck that noise, it’s a personal choice and anyone who puts me down for it can go fuck themselves with a razor blade.
As for the “asshole” comment, that wasn’t directed at you, it was for anyone who jokes about trying to use “AI” to get around a developer’s guardrails that are in place to block “AI”. Again, it’s the developer’s choice to not allow “AI” into their project, and calling anyone who tries to circumvent that in retaliation an asshole is being charitable on my part.
You appear tense! Maybe have a drink or a smoke or whatever tickles your fancy? Life is short my friend …
@Andreas Reichel:
Clearly you have chosen to mock me instead of having a serious discussion. To say something like that after reading about my personal choice about drinking shows exactly the kind of person you are.
Morgan,
I am not cheering for AI and my own use has been limited to learning what it’s capable of, since I think it’s important to know. However I don’t understand why my position is being deemed unreasonable or controversial? Like in the recent article about wikipedia translations – I’m really not against human professionals doing the work, but as a realist the problem is the money isn’t there to seriously do that. Do you think it makes me bad or wrong to acknowledge it? Yes there’s a lot of problems with AI, but to get the best outcome in the corporate race to the bottom we need to be more strategic in planning for the future. Pretending fundamental transformations are not happening in the industry is not a good plan. People may not want change, but it’s happening anyway and we have to do the best with the hand we’re dealt. This is what most of my posts have been about.
Hi Andreas,
if you have the time, I would like to ask you something, since (I guess) you like the support of AI during the development process.
I get some tips from AI, but I do not like to use AI for the architecture of my software. Some colleagues, now much more productive than me, use AI massively for software development, which is fine. However, I realize they feed a kind of specification text as input and take the output from the LLM, and they can hardly make changes to the final code when adjustments here and there are needed, which of course I can do.
May I ask, Andreas, how do you manage your relationship with the AI so you keep control over the details of the final product?
That’s easy: you just dump these problems on someone else. That’s where “soft skills” are valuable: if you can cobble together a half-working proof of concept, impress the management and pocket the bonus before the whole thing collapses, then you win, massively.
And the only thing you need is to not care about what would happen to your creation after you drop it in someone else’s lap.
The plan has a flaw, though: with more and more developers who could support your garbage simply refusing to touch it, the competition for them grows more and more fierce.
And, worst of all, they may even be viewed as more competent than you in spite of the fact that they couldn’t produce tons of toxic slop!
That’s why it’s important to pressure them as much as possible and ensure that they couldn’t refuse to participate in that game.
Greetings @Bemcl.
Yes, I have been using LLMs/AI for the last 2 years and I am “all in” since Claude Opus 4.6 which became a game changer for me. Before that, LLMs were just better API lookup tools (especially when I was not familiar w/ the API). But Opus 4.6 really gets stuff done. Faster and better than I can program it myself.
On the architecture: That is where YOU have to shine. Domain knowledge, put into a product story, which you will feed into the prompt. Also, I only let the LLMs work on well-encapsulated modules. Never the whole big project. I follow the Unix philosophy that every program solves exactly one problem well. And only I decide how to plug the modules together.
This also matches the token limitations of the LLMs/AI. Your goal must be to get a solution within the token maximum or else all your effort will not yield a solution. So small, controllable units and steps are the key.
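To make that concrete, here is a rough sketch in Rust (the names `Tokenizer` and `WhitespaceTokenizer` are made up for illustration, not taken from any real project): the trait is the whole surface I would hand to the LLM, while the glue that plugs implementations together stays human-written.

```rust
// Illustrative sketch only: a small, well-encapsulated module boundary.
// The trait below is the entire surface an assistant would be asked to
// implement; the integration stays in human hands.
pub trait Tokenizer {
    /// Split the input into tokens; no I/O, no global state.
    fn tokenize(&self, input: &str) -> Vec<String>;
}

pub struct WhitespaceTokenizer;

impl Tokenizer for WhitespaceTokenizer {
    fn tokenize(&self, input: &str) -> Vec<String> {
        input.split_whitespace().map(|s| s.to_string()).collect()
    }
}

fn main() {
    // Human-written glue: decide which implementation to plug in.
    let tok = WhitespaceTokenizer;
    println!("{:?}", tok.tokenize("small controllable units"));
}
```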
As a Civil Engineer, I may offer a picture: when building a house, you will need an Architect, a stone mason, an electrician and a plumber. The Architect will need to understand what the mason and the plumber need for executing their work. Pro Tip: a knowledgeable Architect always starts the design with the staircase before thinking of anything else.
Become the Architect! Plumbing and masonry is more or less a solved problem now.
The policy seems to be written with the implied acknowledgement that they cannot prevent LLM-generated code from being submitted and accepted; it just bans anyone from clearly labeling it as such.
Alfman,
It is also so obvious that LLMs happen to be the “solution” to this as well.
“Please make sure the Pull Request does not look like it comes from an AI, remove cheery voice, remove smileys, cut the tests by half, no human writes that many tests, unless they are psychopaths. And add in 2-3 small grammar mistakes just in case”
This is exactly what is wrong with AI zealots.
Why is it not enough for an individual project to say no to AI? Why can’t you just say “no worries, I’ll contribute to a different project”, or even “no worries, I’ll fork this project and do my own thing”. Why must the response be “I’ll show them, I’ll just sneak my AI code in regardless of their policy”?
Redox is a passion project, it’s not beholden to deadlines or corporate profits. Why would anyone even want to use AI to contribute to a passion project?
(I believe personal insults are a sign of being unable to articulate a point) But anyway, I think you completely misunderstood. I not only have zero desire to contribute to this particular project, I also have no interest in using it either.
I’m just presenting a friendly observation:
+ “Your front door is open”
– “No! How can you say that!” is probably not the right response.
Apologies, was not my intention at all, and I didn’t mean to imply that you specifically are a zealot. It was supposed to be a more general statement, so where I said “why can’t you …” I should have said “why can’t people …”. In the same way that you are saying generally that people will find workarounds, I am asking generally why anyone would want to, where is the joy in this behaviour.
djhayman,
No worries then
My thinking is:
“The first job of an engineer is to think how a system could break down”
So that is why I will offer “advice” in that way.
When you make a policy like this it’s not because you have a foolproof way of always detecting bad actors who would knowingly violate the policy. It’s so you can direct good actors to follow the policy, easily filter out sloppy bad actors as well as bots that are signposting themselves as such, and have a simple recourse (banning according to policy) in case someone is found out to have been violating it. You can probably fly under the radar somewhat effectively if you really want to, but anyone who does that *is* just being an asshole.
@Book Squirrel,
in my experience, when you announce policies that you can’t enforce you won’t be taken very seriously. Drive a car in Manila, Jakarta or Lagos and you will understand what I mean.
@Andreas Reichel
Well I don’t have a driver’s license but I’ve sat as a passenger in many cars, and on the whole I have watched many obvious transgressions of traffic law, both from the driver in my car and from drivers in other cars. The vast majority were never caught and fined because no enforcement was around.
From this I have to conclude that there’s not actually that much enforcement in the countries I’ve lived in. Most transgressions are seemingly never caught. This is Denmark and Sweden, statistically two of the safest countries you can drive in. Most people still choose to be good actors who try to follow the rules most of the time. And sure, bad actors who consistently don’t will probably be caught sooner or later by the relatively limited enforcement we do have.
Honestly, a code project like Redox will probably be more able to consistently enforce this no AI policy than traffic police can consistently enforce traffic laws. But I doubt they’ll have to most of the time, so long as people read the policy before contributing.
IMHO you all are reading too much into this. It might just be a way to try to shield themselves from any potential licensing issues down the road. If you don’t see how that could be an issue, you’re not deep enough yet in LLM world.
l3v1,
This is why FOSS advocates should be calling for LLMs that adhere to FOSS licenses. I’ve brought it up a few times and although I think it logically addresses the concern in good faith, I get the distinct impression many against AI don’t actually want problems to be addressed. Citing problems to fit the narrative is the goal, not fixing them.
Of course everyone’s entitled to have whatever opinion they want about AI, I’m not changing anyone’s mind on that. However I believe AI is here to stay whether we like it or not. By not using our voices to help shape AI positively in its nascent years, which is now, we are actually ceding a lot of ground to the future incumbents. With no AI from team FOSS, they’ll grow completely unopposed and I think it could ultimately be a very costly mistake for FOSS in the future.
I totally agree with that. For exactly that reason, I started studying all the ethical implications of the usage (and abuse) of AI in workplaces. This is a very fascinating, underestimated and complex field, btw.
That said, let’s be realistic: companies are not going to allow us to write code in the old ways. The demands in terms of productivity have risen exponentially and are not going to lower anytime soon. So, unless you plan to spend your entire life working 16hrs per day for the same salary (spoiler alert: I’m not), a coding assistant is essential for having a decent work-life balance. I’m not saying it’s right, because of course it’s not. But it’s a fact.
A _conscious_ use of a coding assistant is, btw, beneficial. In many dev fields (mine is one of those, actually) the generated code is more secure, better tested, better documented and more full-featured than the best I could write within the time constraints my company imposes on me. I’m not completely giving up sovereignty, though. Of course I carefully inspect the code, I don’t trust it by design (this is important). But it’s the tip of the iceberg.
Personally and ironically, as a computer engineer, I was finally able to do the job I was educated for, instead of working as a computer scientist as I did for years (because there is typically no budget for an engineer as well, in dev projects).
This reminds me of a flame war I read some months ago on the Haiku forum. A new user noticed that webcams do not work in Haiku, due to a very well-known bug present in the code since… probably the beginning of the project, 20 years ago. He asked Claude to try to fix it, and the agent did: the webcam started working. The user did NOT submit a merge request to the codebase, he just pasted the fix on the forum, clearly saying that the fix was AI-generated. AI-generated code cannot be merged, which is wise, given the licensing nightmare you land in if you allow it. But that specific story ended in just a flame against AI, the user, the fix and how infamous the whole play is. The end of the story? Several months later, I read rumors that someone else had started working on a human-made fix which is “highly inspired by something which has been surfaced recently…”
Yes, I’ve been getting the same impression. People’s reaction to AI seems to me to be very misfocused. There are very scary aspects of the potential use of AI, but arguing that they’re not useful is beginning to sound silly at this point. And copyright may be a gray area (copyright in general is; compared to other fields it’s not that gray), but it’s far from the thing people should be worried about.
I’m actually happy there are projects out there that choose not to use AI, and I want to keep writing some of those myself.
where is RMS when you need him? We need GPL 4.0!
Command and Conquer Generals got open-sourced by EA last year. As of this week it has working Linux builds because of AI. We were able to clean up the code and get the game working with libSDL3 and DXVK for graphics rendering and OpenAL for sound, and it fully builds and runs on Linux. Is it buggy? Yup, and we’re rapidly smashing out the bugs too.
I’ve rebuilt gnome-mplayer as an MPV GUI for the GTK3/MATE desktop and made a drawing tool that’s simpler to use than GIMP and supports shape drawing, unlike GIMP. AI isn’t perfect but it’s a big leap forward in usability for Linux.
I actively encourage everyone to use unchecked AI slop in safety critical systems such as avionics, healthcare or elevators. They want developers to use AI so they will get it.
Actually, it turns out Rust has become a vibe coder’s darling, mostly because the sophisticated static data-flow checks keep AI-generated slop in check (pun intended). Having said that, the small amount of publicly available quality material and the degree of self-discipline needed make such a policy justified for this category of software. When LLMs assist in preparing formally verifiable software this might change.
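To illustrate what I mean by static checks keeping slop in check, here is a tiny made-up example (not from any real project) of the kind of mistake the compiler rejects outright, whether a human or an LLM wrote it:

```rust
// Made-up example: Rust's ownership rules catch this at compile time.
fn main() {
    let data = vec![1, 2, 3];
    let moved = data;             // ownership of the Vec moves to `moved`
    // println!("{:?}", data);    // error[E0382]: borrow of moved value: `data`
    println!("{:?}", moved);      // only the current owner may be used
}
```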
lots of formal verification tools are already interfacing with LLMs.