IBM owns Red Hat, which in turn runs Fedora, the popular desktop Linux distribution. Sadly, shit rolls downhill, so we’re starting to see some worrying signs that Fedora is going to be used as a means to push “AI”. Case in point, this article in Fedora Magazine:
Generative AI systems are changing the way people interact with computers. MCP (model context protocol) is a way that enables generative AI systems to run commands and use tools to enable live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system for understanding your system and getting help troubleshooting it!
↫ Máirín Duffy and Brian Smith at Fedora Magazine
This “linux-mcp-server” tool is developed by IBM’s Red Hat, and of course, IBM has a vested interest in further increasing the size of the “AI” bubble. As such, it makes sense from their perspective to start pushing “AI” services and tools all the way down to the Fedora community, ending up with articles like this one. What’s sad is that even in this article, which surely uses the best possible examples, it’s hard to see how any of it could possibly be any faster than doing the example tasks without the “help” of an “AI”.
In the first example, the “AI” is supposed to figure out why the computer is having Wi-Fi connection issues, and while it does figure that out, the solutions it presents are really dumb and utterly wrong. Most notably, even though this is an article about running these tools on a Fedora system, written for Fedora Magazine, the “AI” stubbornly insists on using apt for every solution, which is a basic, stupid mistake that doesn’t exactly instill confidence in any of its other findings being accurate.
The second example involves asking the “AI” to explain how much disk space the system is using, and why. The “prompt” (the human-created “question” the “AI” is supposed to “answer”) is bonkers long – a 117-word monstrosity, formatted into several individual questions – and the output is so verbose and takes such a scattershot approach that following up on everything is going to take a huge amount of time. Within that same time frame, it would’ve been not only much faster, but also much more user-friendly to just open Filelight (installed by default as part of KDE), which creates a nice diagram that instantly shows you what is taking up space, and why.
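For comparison, here’s a rough command-line equivalent of that check – my own sketch, not something from the Fedora Magazine article – which answers the same “what’s eating my disk, and why?” question in a couple of seconds:

# Overall usage per mounted filesystem
df -h

# The 15 biggest directories directly under /, staying on this filesystem
sudo du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15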
The third example is about creating an update readiness report for upgrading from Fedora 42 to Fedora 43, and its “prompt” is even longer at 190 words. Writing that up, with all those individual questions, must’ve taken more time than just… doing a simple dry run of a dnf system upgrade, which gets you like 90% of the way there. Here, too, the “AI” blurts out so much information, much of it entirely useless, that going through it all takes more time than just manually checking a dnf dry run and peeking at your disk space usage.
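For reference, that manual check is roughly the following – again my own sketch, assuming the dnf system-upgrade plugin is installed and that --assumeno acts as a “resolve the transaction, then answer no” switch:

# Resolve the Fedora 43 transaction without committing to anything
sudo dnf system-upgrade download --releasever=43 --assumeno

# And a quick peek at available disk space
df -h /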
All this effort to set all of this up, and so much effort to carefully craft complex “prompts”, only to end up with clearly wrong information, and way too much superfluous information that just ends up distracting you from the task you set out to accomplish. Is this really the kind of future of computing we’re supposed to be rooting for? Is this the kind of stuff Fedora’s new “AI” policy is supposed to enable?
If so, I’m afraid the disconnect between Fedora’s leadership and whatever its users actually use Fedora for is far, far wider than I imagined.

LLM command prompts haven’t caught on with me, but honestly I see people using them to good effect. Personally I do have a difficult time trusting the output…my instinct is to verify everything. But as a productivity aid it’s getting harder to make the case that they don’t work. I watched someone use an LLM to create a fairly simple DIY embedded system from scratch without any experience. It’s a program I could have written easily, but he wasn’t about to pay me, and LLMs can make it possible for inexperienced people to do more (whether we believe they should or not).
Does this step on my turf as a professional software engineer? Maybe not yet; IMHO complex jobs still benefit from professional experience, but for the low-hanging fruit, yeah, LLMs are advancing quickly. I’ve been very concerned with AI taking over especially entry-level jobs. While many people have argued this won’t happen because AI isn’t good enough, I think advocates of this view are going to have to keep moving the goalposts. We don’t have to acknowledge that AI is creeping into these roles in order for it to happen. AI doesn’t need to do 100% of the job on day #1, but it starts at 10% and goes up from there.
Some people are hoping for a collapse, and there may well be one, but I think it will turn out like the dot-com bubble…massive consolidation with a few dominant winners growing out of the collapse. This isn’t really the vision for the future that I want, but realistically I think it’s very improbable that we’ll see employers (and even governments) standing up for workers. They are greedy and their profit motives strongly favor cutting headcount in favor of AI. Employees are their greatest cost by far, and this will continue to present opportunities for AI even post-collapse. AI consolidation=yes, going away=doubtful.
Such is the world.
Code generators have been around for a long time. Who writes assembly when they can use a compiler? The plus side is, Moore’s law is dead, and resources are finite. Maybe we’ll get the highly optimized code which runs on minimal resources we’ve all been pining for.
I keep thinking about a point someone made. “At one time, we were going to cook all our food in a microwave.” That didn’t happen. LOL
People forget computers are tools, and the tool is supposed to work for the person using it. This was forgotten when IT became a profession and people needed the complexity to generate an income for themselves. Billable hours is the altar tech worships at. It’s more convoluted complexity on top of convoluted complexity for the sake of convoluted complexity.
I’m pessimistic about AI because the people in charge are absolute idiots. They aren’t capable of solving problems, and SV isn’t either. They aren’t solving hard problems. It’s cheap tricks. They’ve managed to make computers unreliable. Reproducibility is the entire reason for computers! LOL
There are things LLMs do well, like speech processing. I’ve seen it used to good effect, and it was a productivity booster.
Code is also an area where LLMs would do well. Programming languages are very regimented. They’re designed, so they’re easy for even primitive machines to process.
I’m betting on another AI winter. The current model is unsustainable, and it’s only popular because it centralizes power in a few companies which have the money to buy the equipment needed. It’s the same reason blockchain got popular. It created haves and have-nots. It centralized an inherently decentralized system and brought it under control of some rich elites.
Plus, this is probably a pump and dump scheme. People know it’s going to crater, but they’re going to get cash out of it.
AI is probably going to be with us, but the future is resource-light models which can be trained on consumer devices like phones. Phones are by far the dominant form factor, and it makes sense to target them for development.
Flatland_Spider,
Fortunately we have local llms and open source AI
A setup of ollama + open-webui on a semi-decent machine is only roughly 6 months behind the latest commercial offerings, including the UI elements (like uploading all your PDFs, writing code, or interacting with images, and so on)
https://ollama.com/
https://github.com/open-webui/open-webui
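A minimal sketch of that setup (assuming Docker or Podman is available; the model name is just an example):

# Install ollama and pull a local model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1

# Run Open WebUI in a container and point your browser at http://localhost:3000
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main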
They might go bust, but we will forever have these tools
Flatland_Spider,
I feel blockchain is a solution looking for a problem. AI is in a different boat, though, because it could end up delivering on automation goals that many employers have been dreaming of but that were always technologically out of reach. Now that these goals are becoming reachable, I think many/most corporations are eager to replace employees with AI to improve profits. This transformation is underway and I don’t believe these changes will be reversible.
We can look back at history and compare AI to older inventions like automobiles, which did create new jobs. Most other industries behaved like this as well. However, all of those jobs were still held exclusively by humans; the situation is becoming different today. Skilled humans can still be expected to program the robots and the like, but the ratio simply won’t make up for hundreds of millions of workers being displaced; it won’t even be close.
I find it ironic that AI and automation could provide the foundations for paradise in principle, but only if we solve the economic inequalities first. Otherwise, a few oligopolies that own all the means of production could end up creating a dystopia where the working class becomes impoverished and the owners get to treat them all as expendable.
Alfman,
I have no idea about the IBM / RedHat implementation. But I had experimented with Claude Code on the command line.
Basically it allows “talking” to my git repo. Yes, it makes mistakes, and is expensive, but ultimately helpful for boring tasks (like large refactoring, or explaining unknown code libraries)
The key is: always verifying commands and edits before applying them. I would never trust it blindly to run commands, which could easily include rm -rf /
I think there are alternatives:
* Google has “gemini cli”
* Perplexity has “pplx”
* For open source there is “ollama”, completely local (unless you ask otherwise). Obviously best for privacy
Yes, people act as if Amazon, eBay, or even Yahoo! did not “survive” the dot-com bust.
And they will be subject to market “corrections”. AI is not there to reduce headcount (in many cases), but to increase the productivity of your average worker.
I’ve tried to speed up my development with AI, to see if the “if you can’t beat them, join them” strategy works. Two things: yes, it can speed up my development, but there is a risk: muscle memory atrophy, or in this case more like keeping your brain well trained. There are two solutions to this: hope AI can solve everything with enough time, stop coding and just be a reviewer (also not as fun), and hope you understand what to check for. Or, more likely: keep doing some coding at least.
So the real issue is similar to: don’t snack too much and exercise.
I think it will be similar to what Edsger Dijkstra (EWD) said about high level languages: when high level languages were introduced people thought it would make programming easier, but the demands and requirements went up and the compiler/computer automated the easy parts and the programmer was left doing the hard parts, thus making their life harder.
About the jobs: as the human race, we are going to make a lot more software with LLMs. I think it also means the price of software goes down, but demand for software will also go up. So maybe it will be fine.
There will be a lot of verification work, to make sure it adheres to all kinds of regulations, policies, etc.
Lennie,
This seems like a truism to me, however the bigger question may be whether it even matters if we lose skills that are becoming obsolete. Of course I don’t mean to sweep all the nuance under the rug, including Idiocracy outcomes.
That’s an interesting take I don’t believe I’d heard before. Assembly isn’t hard. I’ve done it professionally a few times but the abstractions are so low that it’s not an efficient use of time so that work quickly died off. Higher abstractions made sense. For now there is still room to debate the role of LLMs in software development, but long term I wonder if the same fate that happened to assembly programmers could be coming for conventional software engineers?
Maybe until someone comes up with an AI classifier to do the verification work too.
To be perfectly honest though IMHO humans suck at code verification. Although we’re talking about AI writing the code and humans verifying it, ironically I think today’s human developers could really benefit from AI verification tools.
“This seems like a truism to me, however the bigger question may be whether it even matters if we lose skills that are becoming obsolete.” – it’s on my mind, but I think you’ll need it for being able to do design and code review, etc. You need to have ‘taste’.
I do think it’s a possible future: that LLMs still do all the relatively simple stuff like compilers have done and humans will be stuck with making sense of a huge code base that can’t fit in the context window of the LLM. So humans will have to deal with the complexity of figuring out how to modularize/abstract the things people and/or LLMs skipped over in the past to try and reduce some of the complexity.
It’s true humans and LLMs have different failure modes. There already are LLM code review tools, which help make suggestions on pull requests on issue trackers. Some can also make a suggested fix ready to merge.
Linus Torvalds recently made a good point: compilers have made us 1000 times more productive, while people talk about LLMs making us 10 or 100 times more productive.
This is pretty normal for Fedora. LOL Originally, Fedora was supposed to be a place where RH shook out bugs for the next RHEL, like CentOS is today, but the community didn’t want that and took it in a different direction.
This is a tool RH introduced in RHEL 10, I think, but it didn’t go through the normal Fedora -> CentOS -> RHEL pipeline like everything else.
I went through the AI’s proposed fixes for the wifi. There isn’t a single correct piece of advice…
* all the module parameters are made up
* the kernel module it recommends to modprobe doesn’t exist (it’s only ath12k, not ath12k_pci)
* the modprobe.conf recommendation isn’t correctly formatted and is missing module names (see the sketch after this list for what a valid entry looks like)
* it conflated resetting the PCI device (echo 1 > /sys/pci/ …/ reset) with passing a parameter to an unspecified kernel module
* and the final straw: recommending apt on Fedora…
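For reference, a valid modprobe.d entry is a single “options <module> <parameter>=<value>” line – the example below is mine, with a deliberately made-up parameter name, just to show the format:

# /etc/modprobe.d/ath12k.conf (hypothetical example; “some_param” is not a real parameter)
options ath12k some_param=1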
Absolutely none of the proposed fixes are workable. This is completely ridiculous, and will waste more time than going on a forum and asking.
And if anyone’s wondering, yes, I commented the exact same thing on the original blogpost.
No idea about Fedora’s tool, but I have used LLMs successfully to troubleshoot tricky issues on multiple Linux distros, including Fedora.
The black and white thinking in the AI space is hilarious. You have fedora putting out something ridiculous and calling it amazing, and then you have Thom convinced it is always wrong and never useful. Shades of grey are hard for many to see.
that_guy,
I get your sentiment. However, it would be a long, challenging path to convince people the AI tools will be useful, especially when this very particular example falls so short (see the comment above from Serafean). It is very easy to paint everything the same based on a faulty anecdote.
Let’s be real, asking those questions to a small local model running on a laptop was always a bad idea. Even large models can fail when asked such questions. What were they thinking?
kurkosdr,
Keep in mind that what makes large models so huge is that they contain data on nearly everything, most of which is irrelevant to a task-specific AI. So the efficacy of a small model for a specific task depends on whether the small model is simply a large model on a diet, or whether the small model is actually specialized for the task. In this case I don’t really know what they did, but in principle a specialized model trained for a specific task could produce more effective results despite being small. While most of the headlines keep focusing on large models covering broad topics, smaller, more specialized models might prove to be very useful at this sort of thing. Not saying Red Hat got it right here, but that doesn’t mean nobody ever will.
Yes and no, LLMs are not just next-word predictors. It has been shown that training an existing model with more data about coding also makes it better at math. This means ‘logical thinking’ improved in the model. There are many overlaps between fields; some are obvious, others are not.
Lennie,
While I’d agree it could be tough to actually comb through mountains of raw data and determine what data could be helpful, it’s still pretty clear that the vast majority of topics and data aggregated within these huge models are not needed for specific applications (*). In a way it’s kind of similar to software bloat over decades causing software to grow by several orders of magnitude and becoming normal. We can use unoptimized generic LLMs even for specialized applications, and hypothetically, if we ever reach a point where terabytes of capacity and petaflops of compute power are normalized, then I suppose nobody will care about the LLM’s bloat. However, for today, I still think it’s safe to say it would be extremely advantageous to have smaller but more specialized LLMs that don’t need as many resources.
* Note the opposite is true too. A lot of these LLMs are likely lacking data that could be very helpful for specific applications. For example, I see a lot of people testing out LLMs by generating games… but AFAIK these LLMs haven’t been widely trained on actual gameplay mechanics, which creates a knowledge gap.
“While I’d agree it could be tough to actually comb through mountains of raw data and determine what data could be helpful, it’s still pretty clear that the vast majority of topics and data aggregated within these huge models are not needed for specific applications”
The LLM is a terrible solution to the problem, but it’s the best at doing this right now if you want to make something generic. It’s a bit like any software project: make it very generic and it’s gonna be slow, bloated software. Which is, I guess, why frameworks often add bloat, because they are generic.
Lennie,
The big LLMs today are trained to be a jack of all trades, perhaps even to a fault. I think specially trained LLMs could end up performing better at specific tasks and being more efficient at it. I could see there being a market for specialized LLMs for different types of work. After all, this is clearly already the case with humans; I don’t see a reason LLMs should/could not become specialized in the field where they are to be used.
Believers in AGI would probably want AI to move in the exact opposite direction: “one AI to be everything to everyone”. But that’s further out. Also, perhaps we ought to be questioning whether the AI doing the work should be all that intelligent. They need to be intelligent enough to do the job, but not so intelligent that they could develop an existential crisis 🙂
“What is my purpose”
“You pass butter”
https://youtu.be/X7HmltUWXgs?t=32
Yes, specialized LLMs are better (or at the very least more efficient), and reinforcement learning (RL) is even better if you need ‘narrow AI’. The best Go player in the world was, or is, an RL system: AlphaGo Zero (the best after that was AlphaGo). What is so interesting about AlphaGo Zero, and why it is such a big deal to me, is this: in the past, a program playing chess (when Deep Blue beat Kasparov) or Go (AlphaGo) against a human was computer software made by humans and based on human knowledge. AlphaGo Zero is a huge deal because it’s an RL system that learned to play the game of Go by playing against itself; no human knowledge was used – it’s an environment that has the board as numbers, applies the rules, and plays both sides of the board, probably as two instances. Personally I think this is a bigger deal than LLMs themselves. An LLM just takes in all existing human knowledge and doesn’t really do better than humans. An RL self-play system can take in no human knowledge and outperform all humans.
I wanted to add:
LLMs are very much like parts of the left brain: they know language, and they resemble the Interpreter part of the (left) brain.
We can see that because the hallucinations of split-brain humans in particular and of LLMs are so similar: they are confidently wrong.
Info about split brain patients and comparison to LLMs:
https://www.youtube.com/watch?v=wfYbgdo8e-8
https://sebastianpdw.medium.com/llms-and-split-brain-experiments-e81e41262836
Lennie,
Yes, depending on the task, I agree reinforcement and adversarial learning AI techniques are important to having an AI develop skills and become better than humans. Even though LLMs get criticized for it, the ability to statistically predict what a human would say/do actually aligns very well with things like Turing tests and other tasks that involve communicating with humans, or creating art/music for humans. But obviously human mimicry is just one of many AI goals and LLMs are just one technique.
Older LLMs that merely output text worked as a black box; it was hard to gain insight into what they were actually “thinking”. I don’t know if you’ve played around with them, but there are newer reasoning LLMs where you can turn on the output of the reasoning stream in English. It feels like looking at a subconsciousness and seeing how it rationalizes its output. You can watch the thought process more directly and it’s really eye opening to see how an LLM deliberates a point before outputting the answer.
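If anyone wants to try this locally, a quick sketch (assuming ollama and one of the distilled DeepSeek-R1 models, which print their deliberation inside <think>…</think> blocks before the final answer):

# Pull and chat with a small reasoning model; the chain of thought appears first, then the answer
ollama run deepseek-r1:7b "Why might a laptop lose Wi-Fi after resuming from suspend?"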
“You can watch the thought process more directly and it’s really eye opening to see how an LLM deliberates a point before outputting the answer.”
Sadly, reasoning LLMs don’t give real thinking (this has been shown in research). It’s a bit like this (an exaggeration to make a point): when thinking through a problem with a colleague at the same desk, you might talk about the problem at hand, verbalize and write down your thinking process, but at the same time not talk about how attractive the colleague looks, how you like how they smell, or your feelings towards them while you talk to them. So it’s a limited look into the thinking of the LLM.
Olmo 3 is probably the state of the art here, but it turns out the tracing is not really tracing; at the moment it’s still more a reconstruction after the fact than a trace, though I think they are working on it.
Lennie,
There’s so much AI research that it’s not clear to me what research you are referring to.
To me calling out AI for not having “real thinking” is a truism: AI can’t qualify as real by definition. Yet this distinction, however true it is, falls short of describing the empirical qualities of AI and doesn’t do a good job at quantifying an AI’s ability to mimic humans. This is the whole point of the Turing test, which wisely decided not to presume the mechanics needed for intelligence, but instead went with black box testing methods that measure fitness in a more scientifically rigorous blind testing setting.
Lots of papers on Chain-of-Thought not being reasoning.
As I understand it, most AI researchers would say the Turing Test was never meant to be a good test; it just means you can simulate well enough to pass for a human. It’s not proof of real intelligence.
Lennie,
That’s why I’m asking if you were referencing something specific.
I mention it because I feel there’s a double standard in the way we judge AI versus the way we judge humans. Conducting tests with black boxes is crucial to the integrity of our conclusions because it keeps them based on empirical results rather than our prejudices and assumptions about what we expect the mechanics of intelligence to look like.
LLMs are great at association, and so some people want to rule that out as a form of intelligence. But most of our human behaviors are also driven by association. Even our academic lives consist of “monkey see, monkey do”. If we seriously want to make the case that AI that does this isn’t intelligent, then I’m afraid it’s only fair to similarly discredit the intelligence in most human activities as well.
I think we can all agree that the best of humanity is still more intelligent than modern LLMs. However, when we start looking at average populations, the picture becomes a lot blurrier, because most humans, even those who do well academically, are using monkey-see-monkey-do intelligence. While it is very interesting to look at LLM intelligence in the abstract, Turing tests help add context to where their intelligence fits relative to typical humans.
“RHEL/systemd Focused: Optimized for Red Hat Enterprise Linux systems”
Let me guess, the training data of LLMs includes a lot of apt-get, etc., so they need a way to make it easier for an LLM to work on their systems?
In case you do want to talk about the Turing Test, there is one unusual experiment and it’s getting pretty real, literally in the past few days/hours:
https://www.youtube.com/watch?v=wZ0osmPlSaY
Recently in 3D and it’s getting impressive:
https://www.youtube.com/watch?v=LBEtS-sXqgY
https://www.youtube.com/watch?v=iOEgjIMYa1o
https://www.youtube.com/watch?v=g7Ak6VpEIvs
https://www.youtube.com/watch?v=9FPBF-Uz-jY
Lennie,
Yeah, I’ve seen different YouTube streamers using AI companions. It doesn’t bother me so much. They’re not really trying to hide the fact; it’s just a gimmick for the channel where the AI acts more like a co-host.
I don’t know if we’re at the point where AI can do it undetectably (i.e. without viewers suspecting AI), but eventually it will get there. Incidentally, I strongly dislike channels where the entire video is just an overlay speaking on top of videos made by others. It amazes me that such low-effort reaction content has become so popular, with millions of views, with nothing original, just reacting to others. Meanwhile, the original creators who are putting in real work don’t get nearly the same recognition.
I’m not so sure I’d feel much sympathy if the “react” creators end up getting replaced by AI. Am I a bad person for thinking that? Haha.
Something like the AI vtuber ‘Neuro-sama’ is the closest thing to Pinocchio. People have been watching for 3 years now, seeing this child-like AI improve over time. It is still the same LLM at the center (with other specialized AI, probably RL, for walking in 3D, playing games, etc.), with enough understanding of the world that it can recognize itself and so on, but with problems with object permanence just like a child (playing hide and seek in 3D: it turns around – “you can’t see me”).
Yes, the channels on YouTube that don’t show it’s AI are very common now; lots of humans can’t distinguish them as easily anymore, especially if it’s just audio.
I see a bunch of channels just using NotebookLM: dump some information in NotebookLM and generate the audio needed for a video. But they aren’t popular; maybe YouTube can recognize them and just gives them low views?
A bunch of channels do something else: they impersonate a somewhat famous person in a field. Most just do audio, some even with crappy video. My dad can’t distinguish them from the real deal, or doesn’t care (he has an ‘all news is fake anyway’ attitude).