A lot of open source projects are struggling with what to do about the “AI” bubble, and Fedora is no different. For the whole of this past year, the project has been trying to formulate an official policy on the use of “AI”, and LWN.net’s Joe Brockmeier has just done an amazing job summarising the various positions, opinions, and people influencing this process. His conclusion:
There appears to be a growing tension between what Red Hat and IBM would like to see from Fedora versus what its users and community contributors want from the project. Red Hat and IBM have already come down in favor of AI as part of their product strategies, the only real questions are what to develop and offer to the customers or partners.
The Fedora community, on the other hand, has quite a few people who feel strongly against AI technologies for various ethical, practical, and social reasons. The results, so far, of turning people loose with generative AI tools on unsuspecting open-source projects have not been universally positive. People join communities to collaborate with other people, not to sift through the output of large language models. It is possible that Red Hat will persuade Fedora to formally endorse a policy of accepting AI-assisted content, but it may be at the expense of users and contributors.
↫ Joe Brockmeier at LWN.net
Reading through Brockmeier’s excellent article, the various forces pushing and pulling on Fedora become quite clear, and the fact that we’ve got IBM/Red Hat in favour of “AI” and Fedora’s community of developers and users against it shouldn’t come as a surprise to anyone. Wherever “AI” makes an appearance, it’s almost exclusively a top-down process, with corporate interests pushing “AI” hard on a largely indifferent userbase. It seems Fedora is no different.
The massive rift between IBM/Red Hat on one side and the Fedora community on the other is probably best illustrated by a remark from Graham White, technical lead for the Granite AI agents at IBM. One of the earlier policy proposals referenced “AI” slop, and White was offended by this, stating:
I’ve been working in the industry and building AI models for a shade over 20 years and never come across “AI slop”. This seems derogatory to me and an unnecessary addition to the policy.
↫ Graham White, as quoted by Joe Brockmeier at LWN.net
We regular users are bombarded with “AI” slop every day, and I just can’t understand how disconnected from reality you must be to not only deny it’s a problem, but to deny its existence entirely, when virtually every single Google query will drop you into “AI” muck. If such denial is commonplace within IBM/Red Hat, it’s really no wonder there’s such a big rift between them and Fedora, and it is wholly unsurprising that Fedora is having such a hard time formulating an “AI” policy.
The current version of the proposed policy seems to view “AI” and its use in or by Fedora mildly positively, which certainly has me, as a Fedora/KDE user, on edge. I don’t want “AI” anywhere near my operating system for a whole variety of reasons, and if the upcoming vote on the new policy ends up in favour of it, I might have to consider moving away from Fedora.

I run Fedora on all my machines. If we start getting this AI crap in the distro I will absolutely look to move off Fedora and onto something else. The bubble needs to burst so we can suffer through the fallout and break the fever.
lakerssuperman2,
Red Hat is extremely influential in Linux circles, and its projects & code regularly make their way into every corner of Linux (often at the expense of alternatives). I don’t know how Fedora’s policy will go, but I don’t see a world where Fedora will always be able to reject LLM-generated code from Red Hat and others in the future. Where Red Hat goes, the majority of the Linux ecosystem follows, sometimes kicking and screaming.
I see two reasons why rejecting LLM-generated code is going to be highly impractical:
1) Forked projects that diverge because of rejected LLM patches are going to require more resources and become increasingly incompatible and difficult to maintain.
2) Detecting LLM submissions is bound to produce both false positives and false negatives, to the point where enforcing the policy becomes a fool’s errand.
I actually think FOSS advocates should train their own LLM to address concerns that LLMs are infringing copyrights. A FOSS-friendly LLM would solve this and would no longer be using code without permission. Consider that the GPL explicitly allows code to be used for any purpose, so long as the resulting product retains the GPL license. A GPL-compliant LLM could absolutely be built.
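To illustrate the idea (and only the idea, not any existing tool), here’s a rough, hypothetical sketch of the corpus-filtering step such a project would need: keep only code whose repository metadata declares a GPL-compatible SPDX license, and carry provenance along so the resulting model can honour attribution. The Repo structure, the license list, and build_corpus are all made up for illustration.

# Hypothetical sketch: assembling a GPL-only training corpus for a
# FOSS-friendly code model. Nothing here refers to a real pipeline.
from dataclasses import dataclass
from pathlib import Path

GPL_COMPATIBLE = {"GPL-2.0-only", "GPL-2.0-or-later", "GPL-3.0-only", "GPL-3.0-or-later"}

@dataclass
class Repo:
    name: str
    spdx_license: str        # SPDX identifier from the repo's metadata
    source_files: list[Path]

def build_corpus(repos: list[Repo]) -> list[str]:
    """Keep only GPL-licensed code, and record provenance so the training
    data (and, in spirit, the model's output) can carry attribution."""
    corpus = []
    for repo in repos:
        if repo.spdx_license not in GPL_COMPATIBLE:
            continue                  # drop anything without a clear GPL grant
        for path in repo.source_files:
            header = f"# from {repo.name} ({repo.spdx_license})\n"
            corpus.append(header + path.read_text())
    return corpus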
What will happen boils down to three possibilities:
– IBM backs down and lets the community do whatever it wants.
– IBM doesn’t back down and forces its will, since… well… it can. Whoever holds the root user of the git repository is the one that actually has the last word, which explains a lot about why corporations love to rewrite OSS projects under a variety of excuses. The community forks it, but without IBM/Red Hat money and resources, and with a split mind-share, it will be interesting to see where the fork goes. Fedora would suddenly start being treated by the broader Linux community like the plague.
– IBM doesn’t back down, but lets Fedora go (into its own foundation?). No fork, but no IBM/Red Hat resources either.
Slop, like bullshit and diarrhea, is not derogatory but complimentary, as the description still implies that “ai” output is useful for nurturing something.
I don’t mind ML or LLM stuff, as long as it’s useful. But if you need to advertise some feature as “powered by AI” you can almost be sure it won’t be very useful.
That’s funny. I would actually expect Fedora to want AI-assisted coding more than Red Hat does, since corporations want things predictable and boring. But hey, IBM is weird, and even weirder than when I used to work there back in the day.
Anyhow, personally I don’t see too many issues with AI-assisted coding. I use it in my lab and I’ve managed to introduce lots of new functionality into the apps that I self-host. The quality of the code is OK, and everything works. I think we’re demonizing it out of fear and, frequently, a lack of understanding.
Linus’s stance on AI generated code is quite down to earth: https://www.youtube.com/watch?v=VHHT6W-N0ak
cheemosabe,
Thank you for linking that. I think he’s being pragmatic. Officially banning AI-generated code wouldn’t stop people from using it and submitting it anyway. Then what are you supposed to do about it? The ban would then turn into a witch hunt. IMHO it’s more important to apply quality standards like we’re already doing: users who submit quality patches get promoted, those with sub-standard patches get demoted. Answering whether or not AI was used isn’t that important. I do think AI tools will improve to the point where devs who use them will ultimately be more productive in the long run.
Alfman,
Yes, it is akin to “do not use IDE autocomplete; only hand-written code is allowed”.
How will they “detect” AI written code, if the AI itself can be utilized to avoid detection?
Done! As long as you double-check the output for the usual AI hallucinations, you now have a perfectly passable “human-written” piece of code.
sukru,
I don’t think everyone realizes this, but if you have an algorithm that can distinguish AI-generated code from human code, then you implicitly have an algorithm that can train the AI further. The same algorithm that improves the detection rate also improves the AI’s ability to foil the detector. This is more or less how adversarial training works.
When it comes to generating music/art, “make it look human” is probably a sensible AI training goal, but with code generation it leads to an ironic outcome. If we prioritize “make my output look human” over “make my algorithm work well”, then the model might actually have to increase the bug rate and drop optimizations in order to fool the human-likeness heuristic, because our own limitations become the differentiator.
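To make the adversarial point concrete, here’s a minimal GAN-style sketch in PyTorch, with random vectors standing in for code samples; the model sizes, data, and schedule are purely illustrative, not a real detector or generator. The key part is the last two lines: the generator is updated using nothing but the detector’s own judgment.

# Toy adversarial loop: the same detector that flags "AI-looking" output
# supplies the training signal that teaches the generator to evade it.
import torch
import torch.nn as nn

DIM = 64
gen = nn.Sequential(nn.Linear(DIM, DIM), nn.Tanh())     # stand-in "AI code" generator
det = nn.Sequential(nn.Linear(DIM, 1), nn.Sigmoid())    # human-vs-AI detector
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(det.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    human = torch.randn(32, DIM)                # stand-in for human-written samples
    fake = gen(torch.randn(32, DIM))            # stand-in for generated samples

    # 1. Train the detector: human samples -> 1, generated samples -> 0.
    d_loss = bce(det(human), torch.ones(32, 1)) + bce(det(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. The very same detector now trains the generator to look "human".
    g_loss = bce(det(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()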
Detecting super-human moves is the same way we detect cheaters in chess. We could easily make a chess AI that passes the human test… by shifting the goal from winning to making more human mistakes. In the “chess industry” we’ve accepted that machines have beaten us, but in other industries there are still those who don’t think AI can win. I think it’s only a matter of time before it does, though, and eventually people are going to have to accept it. The big question is whether society is prepared for these changes, especially when it involves people losing their livelihoods. I am afraid we’re going to be in for a world of hurt on our current path, at least in countries that are regressing on social safety nets.
Alfman,
Yes, that is exactly how adversarial training works.
(And that is why many “responsible” companies don’t publicly share their detection algorithms. We can take their claims any way we like)
That being said, it is still possible for the output to look “human” and still be correct.
“Hey, write a few easily detectable issues, like out of sync doc comments, and wrong naming, or some edge exceptions. Make sure algorithms still work. Give me 5 different versions”
Or something similar would work.
The main thing is that the “real human” (which is us) still needs to be able to understand the code, since if there is an actual algorithmic bug, the AI not only won’t catch it, it will usually insist that its solution is correct.
Alfman,
On this second point.
We as a human society have always invented “make-believe” work for less productive members of our species: pet psychic, elevator operator (though that could be considered a service job), or politician.
We will still be able to find “work” as long as our “total production” > “total consumption” of resources.
This is a concern if you entirely remove the human from the loop.
There are AI tools that will assist with:
1 – Generating end user feature requirements
2 – Getting those requirements into a GitHub issue format
3 – Looking at a GitHub issue and writing code
4 – Looking at a piece of code and writing unit tests
5 – Automatically reviewing code during PR submission
If you let all the steps be done by AI, you’d get real slop. But if you intervene in at least several places (and definitely in the final code review), you’d still retain ownership, roughly along the lines of the sketch below.
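As a rough illustration of that shape (entirely hypothetical, not a real toolchain), here’s a small Python sketch where every ai_* function is a placeholder for whatever assistant a team actually uses, and a person explicitly gates the output before it moves on:

# Human-in-the-loop sketch: AI drafts each artifact, a person reviews
# at the points that matter. The ai_* functions are placeholders.
def ai_draft_requirements(request): return f"[requirements for: {request}]"
def ai_format_github_issue(reqs):   return f"[issue: {reqs}]"
def ai_write_code(issue):           return f"[code implementing {issue}]"
def ai_write_unit_tests(code):      return f"[tests for {code}]"
def ai_review_pr(code, tests):      return f"[review of {code} and {tests}]"

def human_review(label, artifact):
    """Block until a person has read and (optionally) edited the artifact."""
    print(f"--- review {label} ---\n{artifact}")
    edited = input("Press enter to accept, or type a replacement: ")
    return edited or artifact

def pipeline(feature_request):
    requirements = ai_draft_requirements(feature_request)   # step 1
    issue = ai_format_github_issue(requirements)            # step 2
    issue = human_review("issue", issue)                    # human gate
    code = ai_write_code(issue)                             # step 3
    tests = ai_write_unit_tests(code)                       # step 4
    notes = ai_review_pr(code, tests)                       # step 5
    return human_review("final PR", code + "\n" + notes)    # mandatory final review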