IBM owns Red Hat, which in turn runs Fedora, the popular desktop Linux distribution. Sadly, shit rolls downhill, so we’re starting to see some worrying signs that Fedora is going to be used as a means to push “AI”. Case in point, this article in Fedora Magazine:
Generative AI systems are changing the way people interact with computers. MCP (model context protocol) is a way that enables generative AI systems to run commands and use tools to enable live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system for understanding your system and getting help troubleshooting it!
↫ Máirín Duffy and Brian Smith at Fedora Magazine
This “linux-mcp-server” tool is developed by IBM’s Red Hat, and of course, IBM has a vested interest in further increasing the size of the “AI” bubble. As such, it makes sense from their perspective to start pushing “AI” services and tools all the way down to the Fedora community, ending up with articles like this one. What’s sad is that even in this article, which surely uses the best possible examples, it’s hard to see how any of it could possibly be any faster than doing the example tasks without the “help” of an “AI”.
In the first example, the “AI” is supposed to figure out why the computer is having Wi-Fi connection issues, and while it does figure that out, the solutions it presents are really dumb and utterly wrong. Most notably, even though this is an article about running these tools on a Fedora system, written for Fedora Magazine, the “AI” stubbornly insists on using apt for every solution, which is a basic, stupid mistake that doesn’t exactly instill confidence in any of its other findings being accurate.
The second example involves asking the “AI” to explain how much disk space the system is using, and why. The “prompt” (the human-created “question” the “AI” is supposed to “answer”) is bonkers long – a 117-word monstrosity, formatted into several individual questions – and the output is so verbose and takes such a scattershot approach that following up on everything is going to take a huge amount of time. In that same time frame, it would’ve been not only much faster, but also much more user-friendly to just open Filelight (installed by default as part of KDE), which creates a nice diagram that instantly shows you what is taking up space, and why.
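For those without a graphical tool like Filelight at hand, the same question – what is taking up space, and why – also has a one-line answer on any Linux box. This is my own sketch using standard GNU coreutils, not something from the Fedora Magazine article; adjust the path to whatever you want to inspect:

```shell
# List the largest top-level directories under $HOME, biggest first.
# -x stays on one filesystem, --max-depth=1 keeps it to one level,
# sort -rh orders the human-readable sizes largest-first.
du -xh --max-depth=1 "$HOME" 2>/dev/null | sort -rh | head -n 10
```

No 117-word prompt required, and you can rerun it on any subdirectory that looks suspicious.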
The third example is about creating an update readiness report for upgrading from Fedora 42 to Fedora 43, and its “prompt” is even longer at 190 words. Writing that up, with all those individual questions, must’ve taken more time than just doing a simple dry run of a dnf system upgrade, which gets you like 90% of the way there. Here, too, the “AI” blurts out so much information, much of it entirely useless, that going through it all takes more time than manually checking a dnf dry run and peeking at your disk space usage.
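For the record, that “90% of the way there” dry run is only a couple of commands on a stock Fedora install. This is a sketch based on Fedora’s documented dnf system-upgrade workflow, not something from the article – double-check the plugin package name and release number against the current Fedora docs before running it:

```shell
# Check free space first: is there room for the new packages?
df -h /

# The system-upgrade plugin ships on recent Fedora releases;
# install it if it's missing.
sudo dnf install dnf-plugin-system-upgrade

# Download-only test of the upgrade to Fedora 43. Nothing is applied
# until you explicitly run: sudo dnf system-upgrade reboot
sudo dnf system-upgrade download --releasever=43
```

Any dependency conflicts or broken packages show up right here, in the download step, without touching the running system.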
All this effort to set all of this up, and so much effort to carefully craft complex “prompts”, only to end up with clearly wrong information, and way too much superfluous information that just ends up distracting you from the task you set out to accomplish. Is this really the kind of future of computing we’re supposed to be rooting for? Is this the kind of stuff Fedora’s new “AI” policy is supposed to enable?
If so, I’m afraid the disconnect between Fedora’s leadership and whatever its users actually use Fedora for is far, far wider than I imagined.

LLM command prompts haven’t caught on with me, but honestly I see people using them to good effect. Personally I do have a difficult time trusting the output…my instinct is to verify everything. But as a productivity aid it’s getting harder to make the case that they don’t work. I watched someone use an LLM to create a fairly simple DIY embedded system from scratch without any experience. It’s a program I could have written easily, but he wasn’t about to pay me, and LLMs can make it possible for inexperienced people to do more (whether we believe they should or not).
Does this step on my turf as a professional software engineer? Maybe not yet, IMHO complex jobs still benefit from professional experience, but for the low-hanging fruit, yeah, LLMs are advancing quickly. I’ve been very concerned with AI taking over especially entry-level jobs. While many people have argued this won’t happen because AI isn’t good enough, I think advocates of this view are going to have to keep moving the goalposts. We don’t have to acknowledge that AI is creeping into these roles in order for it to happen. AI doesn’t need to do 100% of the job on day #1; it starts at 10% and goes up from there.
Some people are hoping for a collapse, and there may well be one, but I think it will turn out like the dot-com bubble…massive consolidation with a few dominant winners growing out of the collapse. This isn’t really the vision for the future that I want, but realistically I think it’s very improbable that we’ll see employers (and even governments) standing up for workers. They are greedy and their profit motives strongly favor cutting headcount in favor of AI. Employees are their greatest cost by far, and this will continue to present opportunities for AI even post-collapse. AI consolidation=yes, going away=doubtful.
Such is the world.
Code generators have been around for a long time. Who writes assembly when they can use a compiler? The plus side is, Moore’s law is dead, and resources are finite. Maybe we’ll get the highly optimized code which runs on minimal resources we’ve all been pining for.
I keep thinking about a point someone made. “At one time, we were going to cook all our food in a microwave.” That didn’t happen. LOL
People forget computers are tools, and the tool is supposed to work for the person using it. This was forgotten when IT became a profession and people needed the complexity to generate an income for themselves. Billable hours is the altar tech worships at. It’s more convoluted complexity on top of convoluted complexity for the sake of convoluted complexity.
I’m pessimistic about AI because the people in charge are absolute idiots. They aren’t capable of solving problems, and SV isn’t either. They aren’t solving hard problems. It’s cheap tricks. They’ve managed to make computers unreliable. Reproducibility is the entire reason for computers! LOL
There are things LLMs do well, like speech processing. I’ve seen it used to good effect, and it was a productivity booster.
Code is also an area where LLMs would do well. Programming languages are very regimented. They’re designed, so they’re easy for primitive machines to process.
I’m betting on another AI winter. The current model is unsustainable, and it’s only popular because it centralizes power in a few companies which have the money to buy the equipment needed. It’s the same reason blockchain got popular. It created haves and have-nots. It centralized an inherently decentralized system and brought it under control of some rich elites.
Plus, this is probably a pump and dump scheme. People know it’s going to crater, but they’re going to get cash out of it.
AI is probably going to be with us, but the future is resource-light models which can be trained on consumer devices like phones. Phones are by far the dominant form factor, and it makes sense to target them for development.
This is pretty normal for Fedora. LOL Originally, Fedora was supposed to be a place where RH shook out bugs for the next RHEL, like CentOS is today, but the community didn’t want that and took it in a different direction.
This is a tool RH introduced in RHEL 10, I think, but it didn’t go through the normal Fedora -> CentOS -> RHEL pipeline like everything else.