On the Fedora forums, there’s a long-running thread about a proposal for Fedora to build a variant of the distribution aimed specifically at “AI”. The “problem” identified in the proposal is that setting up the various parts a developer in the “AI” space needs is currently quite difficult on Fedora, and as such, a bunch of technical steps need to be taken to make this easier. Setting aside the “AI” part of the proposal and the ensuing discussion, it’s actually a very interesting read, going deep into the weeds on consequential questions like building an LTS kernel on Fedora, support for out-of-tree kernel modules, and a lot more.
To spoil the ending: the proposal has already been approved unanimously by the Fedora Council, meaning the efforts laid out in the proposal will be undertaken. This means that, depending on progress, we’ll see a Fedora “AI” Desktop or whatever it’s going to be called somewhere in the timeframe from Fedora 45 to Fedora 47. As a Fedora user on all my machines, I’m obviously not too happy about this, since I’d much rather the scarce resources of a project like Fedora go towards things not as ethically bankrupt, environmentally destructive, and artistically deficient as “AI”, but in the end it’s a project owned and controlled by IBM, so it’s not exactly unexpected.
What really surprised me in this entire discussion is a post by Fedora Project Leader Jef Spaleta, responding to worries people in the thread were having about such a big “AI” undertaking under the Fedora branding causing serious reputational damage to Fedora as a whole. These concerns are clearly valid, as people really fucking hate “AI”, doubly so in the open source community whose work especially “AI” coding tools are built on without any form of consent. As such, Fedora undertaking a big “AI” desktop project is bound to have a negative impact on Fedora’s image. Just look at what aggressively pushing Copilot has done to Windows 11’s already shit reputation.
Spaleta, however, just doesn’t care. Literally.
As the Fedora Project Leader, I am absolutely not concerned about the reputational damage to this project that comes with setting up an entirely new output attractive to developers who want to make use of Ai tools.
↫ Jef Spaleta
I’ve been looking at this line on and off for a few days now, and I just can’t wrap my head around how the leader of an open source project built on and relying on the free labour of thousands of contributors says he doesn’t care about reputational damage to the project he’s leading. Effective and capable open source contributors are not exactly a commodity, and a lot of the decisions they make about what projects to donate their time to are based on vibes and personal convictions – you can’t really pay them to look the other way. Saying you don’t care about reputational damage to your huge open source project seems rather shortsighted, but of course, I don’t lead a huge open source project so what do I know?
In the linked thread alone, one long-time Fedora contributor, Fernando Mancera, already decided to leave the project on the spot, and I have a sneaking suspicion he won’t be the last. “AI” is a deeply tainted hype on many levels, and the more you try to chase this dragon, the more capable people you’ll end up chasing away.

This seems completely reasonable to me. (Emphasis added.)
runciblebatleth,
This is what I’ve been saying too. Many don’t like it, but AI is here to stay. Despite all the cons we can talk about here, the labor-cost savings provide an enormous incentive for big businesses, and honestly I don’t see a likely scenario that steers us away from it. So long as they want AI, that’s all it takes for it to succeed. Even a recession would likely result in human layoffs and position AI for more growth in the future.
I think the interests of FOSS/privacy/responsible/etc advocates are best served not by staying clear of AI, but by putting forward ethical AI of our own that sets a good example and helps to ensure we have more options in the future. Otherwise I think we may inadvertently make the future of AI worse as our actions today cede ground to some future proprietary corporate AI monopolies. Not only does that fail to stop AI, but it weakens FOSS in the future when AI keeps becoming more important.
Have you read the whole discussion? I did. They are not explicitly against AI, but concerned about declaring a Fedora AI desktop an official Fedora objective. A project leader who does not care about the image of the project is… something.
All the work could have happened without calling it Fedora AI, but Red Hat wants to sell the AI fantasy to everyone. The tone from the Fedora Project Leader throughout the thread is also off-putting: “I am the most important person in the room”. At least I don’t use Fedora.
sssv78,
Jef Spaleta did speak somewhat to that point.
I do not think leaders should dismiss or ignore the community; that’s a problem. I currently don’t have the data to say which way the community is leaning, however I still feel the words runciblebatleth highlighted have truth to them. My position on AI generally hasn’t been to deny there are cons. Not everyone follows my endless rants 🙂 but know that I am very sympathetic to the view that AI will make things worse for many workers, especially entry-level labor, which is the low-hanging fruit for AI. My position is not that the cons aren’t real or aren’t significant, and I don’t even deny there’s a lot of AI garbage. If people want to criticize that, fine, but we need to see the forest for the trees; cherry-picking the worst examples of AI is not a great indicator of where things are headed. The AI that will last is the AI that serves corporate interests; for better or worse, corporations don’t really give a damn about our concerns and will proceed with their own interests in mind. It seems very unlikely to me that they would change so much as to make AI less attractive to them in the future.
I didn’t realize Lennart Poettering was there 🙂
I get the sentiment, but ask yourself this: in a world where AI underpins the future, would you rather the FOSS community be at the table to help shape AI favorably, or let proprietary companies take the reins without any challengers?
Thom Holwerda,
Some nuance is required here. Many FOSS licenses do in fact give consent for downstream projects to use the code without asking; that’s the whole point of licenses like the GPL, which requires downstream projects to be licensed under the GPL as well. There’s a debate over whether LLM outputs are “transformative”, which would put them outside the scope of copyright. But even for those who take the position that LLM outputs are not transformative and are subject to copyright, there’s nothing inherently anti-FOSS about LLMs. The problem is not that copyleft code can’t (or shouldn’t) be used to train LLMs, but that LLM outputs should themselves be copyleft as well.
In fact the GPL licenses (v2 and v3 alike) prohibit us from adding new AI restrictions…
https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
I’m highlighting a sentence here, but it’s not that long; read it in full to see that you really can use GPL code to train an LLM in good faith if your LLM’s output also respects the GPL. I really believe this would be a stronger case for advancing the cause of FOSS than arguing that AI can’t comply with FOSS. That isn’t only untrue, but I believe there’s a very serious risk of it backfiring by creating a void of FOSS-compliant AI.
Alfman,
That is actually a philosophical question, but I’m sure the courts would want to step in.
After all, how we humans train our young is not very different from how we train LLMs: give them a bunch of material, have them repeat it many times, and let them make mistakes. Give them feedback on those mistakes. The only difference is scale and speed.
But many are not ready for those discussions.
I have to agree that it’s damn near impossible to have any reasonable or productive conversation on the subject. Normies aren’t that interested, as usual, and partisans on both sides are living in different versions of la-la land. The heavy stakes that are attached to the issue for many individuals and for humanity in general foster some strong emotions, making objective discussion much more difficult. Despite understanding this and feeling a general empathy for these people, it is frustrating. I do believe LLM technology is world altering, and when married with other technologies, potentially world devastating or even ending. There are some things that really need to be worked out, and soon, and as far as I can tell no one is working on them.
Baylan Tano,
LLMs are just pretty impressive high-level languages for computers.
They are not magic, but they might seem like it. They just expose the innate capabilities of computers, and can do no more than what is already possible one way or another.
(Well… they solve the unstructured general data parsing problem)
They are very good amplifiers, though, and yes, in the wrong hands (ill-intentioned, or just plain stupid) they can be pretty dangerous.
I’m not sure that LLMs trained on any GPL code can be made GPL compliant without a blanket licence covering all emitted code. It is tempting to think of an LLM like a database, but that’s really not what it is. The compiled models are very dense, and I don’t see how existing models could track individual attributions accurately. Another option would be compiling new models and ensuring that only public domain and liberally licensed code made it into the model. Existing LLMs could perhaps help speed that up. I know people are already doing ‘ethically sourced’ LLMs, but I’m not sure how competitive they are with the front runners. Anyone can get up tomorrow and train an LLM on the Gutenberg library, but what you’d have once you were done might not be very useful in the modern world.
Baylan Tano,
I don’t think anyone has shown interest in discussing it before. The simplest approach would be to ensure the output and input use compatible licenses. A single-license LLM would be easy. A multi-license LLM could be possible, but like you mention, it would require attribution data that existing LLMs may lack. Conceivably, someone who sets out to train LLMs with license attribution could do it though.
I am curious to see what an LLM generated from purely public domain/liberal FOSS sources would be capable of.
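Picking up on that last thought: curating such a corpus could start with something as crude as matching license phrases in file headers. Below is a minimal, hypothetical sketch; the marker list, `is_permissively_licensed`, and `filter_corpus` are illustrative assumptions, not a real tool, and a serious pipeline would use a proper license detector such as the ScanCode toolkit.

```python
# Hypothetical sketch: reduce a code corpus to permissively licensed files
# before training, by matching well-known license phrases in file headers.
# The phrase list and header window size are illustrative assumptions.

PERMISSIVE_MARKERS = [
    "mit license",
    "bsd 2-clause",
    "bsd 3-clause",
    "apache license, version 2.0",
    "public domain",
]

def is_permissively_licensed(source: str, header_chars: int = 2000) -> bool:
    """Return True if the file's header mentions a known permissive license."""
    header = source[:header_chars].lower()
    return any(marker in header for marker in PERMISSIVE_MARKERS)

def filter_corpus(files: dict[str, str]) -> dict[str, str]:
    """Keep only files whose headers carry a permissive license marker."""
    return {path: src for path, src in files.items()
            if is_permissively_licensed(src)}
```

Header matching like this misses files with no header and mislabels vendored code, which is exactly why attribution metadata would have to be carried all the way through training if a multi-license model were ever to honor the licenses of its inputs.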
I see your point, although if you’re open to discussing this deeper…
An LLM’s strong suit is interacting with humans, and it turns out LLMs do this very well. However, LLMs aren’t as strong in deep logical domains like programming. Other AI techniques don’t get much attention, but we should be discussing them, because they are far more promising at logic tasks.
LLMs are trained and optimized to mimic what a human would do. This is valuable to, say, an employer, but it doesn’t necessarily mean they can do better work. For that you need NNs that can evolve without human training, and for logic problems this already exists. Take chess or go bots: these may have started out copying humans, but that only gets them so far. Eventually they became much better than humans through well-tuned fitness functions, self-play reinforcement learning, and adversarial learning techniques. The results blow humans away.
LLMs are still going to be part of the solution because they are so good at interacting with humans, and that is a very valuable asset. However, I see other AI techniques beating both LLMs and humans at logic problems, including programming. The future of AI is hybrid.
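To make the imitation-versus-fitness distinction concrete, here is a deliberately tiny toy (nothing from the thread; the OneMax task and every name below are illustrative): a (1+1) hill climber that improves a candidate purely against a fitness function, with no human examples involved at all.

```python
import random

# Toy illustration of optimizing against a fitness function instead of
# imitating human examples. A (1+1) hill climber solves OneMax: maximize
# the number of 1 bits, with no training data whatsoever.

def one_max(bits: list[int]) -> int:
    """Fitness: the count of 1 bits in the candidate."""
    return sum(bits)

def hill_climb(n_bits: int = 32, steps: int = 5000, seed: int = 0) -> list[int]:
    """Start from a random bitstring; flip one random bit per step and
    keep the mutant only when fitness does not decrease."""
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(steps):
        candidate = current[:]
        i = rng.randrange(n_bits)
        candidate[i] ^= 1  # mutate one bit
        if one_max(candidate) >= one_max(current):
            current = candidate
    return current
```

Chess and go engines are of course vastly more sophisticated (self-play, adversarial training), but the core loop of propose, score, and keep the better candidate is the same idea.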
He said he is not concerned, not that he doesn’t care. The difference is subtle, but important. He said that it doesn’t seem AI use is an issue for their reputation, while your framing would imply that he knows it’s important, but chooses to ignore the issue.
He is right, of course.
I think the FLOSS community should avoid its past error of just saying “no, no” and instead try to understand the potential and the reasons behind it.
This whole AI thing reminds me of the secure boot thing: Microsoft is evil, they want to cut us out of PCs. But somebody looked beyond that, saw and understood the reasons, and worked with it, not against it. And now Linux works fine on secure boot systems, and it improves the security posture of Linux systems as well.
So Thom’s position, also in the previous article about Ubuntu talking about AI, is a little short-sighted IMHO: AI will remain, and all these agentic and AI things can also be done in an ethical way. Open source has always been the bright star of innovation, so why miss this opportunity to create the best environment for running AI agents on your PC?
People can already run LLMs on their PCs and everything, so why not cover those users?
Nobody will force you to use it (sort of 😉 https://www.theregister.com/ai-and-ml/2026/05/07/chrome-silently-installs-a-4-gb-local-llm-on-your-computer/5230893 )
Plus, nobody said there will only be Fedora AI.
I appreciate the pragmatism of the project. This also reflects one of Fedora’s core values – “First” – offering a constantly updated distro with the most up-to-date but stable components, so it doesn’t surprise me that this discussion is happening in this community.
andreamtp,
Yes that was a close call. The end result was less than ideal (Microsoft still keeps the keys), but at least Linux had a say. The PC could have been fully locked down by manufacturers.
I think that is the crucial part. From what I can see, the community is divided. Some are arguing against AI citing reasons like potential job loss (which is valid) or environmental concerns (which is less so). Others are pretty happy they can now delegate tasks they find boring.
I’m not sure reconciliation between the opposites would be quick or easy.
sukru,
I agree, things could be far worse today. Has Microsoft given any long-term commitment not to make things worse in the future? Having to trust in Microsoft’s altruism on secure boot is awkward, to say the least. Last I heard, Microsoft has dropped the OEM requirements that obliged manufacturers to give owners control over secure boot on x86, though I’ve only experienced that on one model. Microsoft originally established this OEM requirement to placate Linux users complaining about being locked out of secure boot, so the backtrack is a bit concerning. Hypothetically, if Microsoft changed OEM requirements to make secure boot stricter, or ceased its key-signing program, the old secure boot concerns might crop up again.
Yeah, people take AI issues quite personally, and I also don’t think reconciliation is possible, because it has more to do with one’s personal philosophy than with strictly logical matters. People gravitate towards the arguments they like while rejecting those they don’t; agreement probably isn’t in the cards… However, I believe we’re all going to find ourselves facing the same future, with AI being a prevalent part of it.
Alfman,
Isn’t that what they already did with Windows on ARM? Are there any Windows PCs with ARM motherboards that can be adapted to open source?
But for one reason or another, x86 has seemed mostly reliable in this department. And they opened up Surface ARM too, which is a net positive.
https://github.com/dwhinham/linux-surface-pro-11
But here Qualcomm is being the bad guy, and locking down basically 50% of the hardware (the touchscreen does not work… on a tablet form factor device)
sukru,
Yes, I recall the ARM secure boot restrictions you are referring to. As far as I know, the Linux distros that can boot on Microsoft’s ARM devices do so using the same secure boot key-signing program that lets Linux distros function under secure boot on x86.
I don’t have any information on whether anyone is able to boot ARM hardware designed for Windows without being signed through Microsoft’s key. I haven’t stayed on top of this market, but if you find out more, let me know.
You’re preaching to the choir 🙂
It is worth noting that the outcome of the secure boot controversy did not happen in a vacuum. The hue and cry that was raised assuredly influenced that outcome. Using that outcome as an example of people making much ado about nothing seems disingenuous to me, and not very convincing.
Baylan Tano,
Yes, I know, I was there.
Microsoft has (had?) a tendency to listen to backlash. They have been stubborn many times, but more often than not, backlash worked… eventually.
And there would always be manufacturers that don’t follow Microsoft. They are not a monopoly even though they are pretty close to one.
Now the situation is a bit more dire. Companies like “Open” AI want to use government to block their competitors, and open source is a major one. It won’t matter if you have the “keys”, if the models you want to use locally are artificially restricted by non-market sources.
Call me crazy, but as long as all of this AI fluff lands in a different “spin” and not the mainline distro, who cares? (Or even as an AI integration or tool selection at install time.) Being Linux also means all of the AI-related fluff can be removed easily.
As far as AI goes, I have a love-hate relationship with it, and have only used it once, after a diagnosis of ALS, when the bot did help me locate resources for support I would never have been able to find myself. But I was using it as a glorified search tool with a more conversational style.
I would also like to add that there is a subtle but ultimately significant difference between “Not concerned” and “Doesn’t care.”
I would very much care if a stadium-sized meteor smashes into my city. However, that is also not something I am concerned about.
I think his lack of concern regarding Fedora’s reputational hit for introducing changes that enable easier AI development isn’t because he doesn’t care about Fedora’s reputation. Rather, he’s just dismissing the risk of reputational harm. He doesn’t think it will negatively affect Fedora’s reputation.
The sentence could be parsed either way, but I agree that the context supports this interpretation. Reading his other posts, the guy seems like anything but a reckless individual.
Only time will tell if he is correct. So far, he has been wrong in at least one case: the developer who left. Presuming this developer actually made worthwhile contributions, Fedora will indeed suffer a loss here due to reputational damage. If more developers follow, it could be a thing. If many do, it could be a crisis. Many tech people are very passionate about this issue, both for and against. I’m not sure it’s wise to underestimate that.