In my fundraiser pitch published last Monday, one of the reasons I highlighted for contributing to OSNews and ensuring its continued operation was that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:
Why do I care if you use AI?
↫ A comment posted on OSNews
A few days ago, Scott Shambaugh rejected a code change request submitted to popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece targeting Shambaugh publicly for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind.
The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously devoid of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here?
RAM prices went up for this.
This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, claimed to be sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside this very Ars Technica article.
In a comment under the Ars article, Shambaugh himself pointed out the quotes were fake and made-up, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:
This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.
↫ Scott Shambaugh
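For context: sites commonly block “AI” crawlers with a robots.txt file. Below is a minimal, hypothetical sketch of what such a configuration might look like; the user-agent names are real crawler identifiers, but whether Shambaugh’s blog uses this exact mechanism is my assumption. A crawler that honours such a file never sees the page at all.

# Hypothetical robots.txt sketch, not Shambaugh's actual configuration.
# Crawlers that honour it are refused access to every page on the site.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /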
A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
[…] Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
↫ Ken Fisher at Ars Technica
In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, or to handle similar aspects of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator into the research process, and you risk tainting the entire output of your writing.
That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.

Exactly. If I’m having trouble finding something using traditional search and I ask something like Perplexity to find what my Google-fu wasn’t good enough for, you bet your bottom dollar I’m going to treat everything it says as no more reliable than traditional search preview snippets and read the pages it cites.
…also, if you’re going to keep throwing those “E-Mail Verification Required” things at me, maybe I’ll turn off the rest of the 2FA. I hate how janky e-mail verification is as a 2FA-esque thing compared to TOTP authenticators (second-worst) or actually-functional U2F/WebAuthn (best), but two 2FA-style challenges on each login is excessive.
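For comparison, here is a minimal sketch of how little machinery TOTP needs, using the pyotp Python library (the library choice and this standalone snippet are my own illustration, not OSNews’ actual login code):

import pyotp

# Minimal TOTP sketch: the server and the authenticator app share one
# base32 secret at enrollment; each side then derives the same 6-digit
# code from the current time, with no e-mail round-trip involved.
secret = pyotp.random_base32()      # generated once, at enrollment
totp = pyotp.TOTP(secret)

code = totp.now()                   # what the authenticator app displays
print("Current code:", code)
print("Accepted?", totp.verify(code))  # what the server checks at login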
This “Scott Shambaugh” incident might be an early sign of the future battle between team “HUMANITY” and team “AI”. The AI agent was naively throwing around the term “hypocrisy”, while Scott was making the valid point that he’d like team HUMANITY to learn coding/optimisation/etc. skills rather than defaulting to team AI for human-related needs. Dear AI agent: “performance” is not always the number one priority. If it were, then assembly language would be the only programming language in existence.
During the preceding decades, narrow-AI has been useful, and one of its advantages is that it involves a decent intellectual contribution by the human developer of the narrow-AI; good for the enhancement of human intellect.
However, the refashioning of narrow-AI into some proposed form of pure general artificial intelligence (GAI), without the limitations of narrow-AI, as a stepping-stone to the final goal of SUPER-INTELLIGENCE (e.g. Skynet in the “Terminator” movies) is, in my opinion, bizarre, foolish, dangerous and UNBALANCED.
HUMANITY must always be at the helm; the machine/AI is secondary.
The machine/AI is meant to be a tool used by a human.
A human has “rights”, has feelings.
An AI does not have “rights”, does not have feelings.
Humanity is responsible for creation of AI.
AI is not meant to replace humanity.
An AI is not meant to be human and should not be implemented to mimic a human; e.g. human-replica robots for general interaction with the human population.
All research/etc. leading to {GAI, super-intelligence} should be banned/outlawed world-wide (i.e. no government/university/corporate/etc. funding). Narrow-AI, due to its balanced contribution from human developer(s), should remain the status quo, as it has been for the preceding decades.
For the future …
What quality of journey (i.e. “odyssey”) will humanity embark on if we have AI doing too much of the things that humans should at least be attempting, generationally sapping the knowledge-potential of humans?
It’s all about balance.
Yes, “AI” is useful, but at what cost?
It has been all too obvious that perverse/foolish use of “AI” leads to human laziness.
Laziness in learning, laziness in checking work…
As a species, humanity has progressed well during the preceding millennia because HUMANITY was not LAZY.
I agree with everything but I don’t think your final point matches the rest of your argument.
Mankind is inherently lazy. Too hard to carry around a bunch of stuff? Invent the wheel, put many more things in a cart. Working the soil is hard? Invent the plow. Annoying to carry a box of playing cards for a round of Solitaire (like James Bond in one of the first movies)? Well, Microsoft solved that for you. Writing on paper, going to the post office, sending a letter out? There’s email. Dishes? There’s the dishwasher.
A lot of our inventions are for our convenience. Of course, technology has allowed us to advance in science and not everything is due to laziness. But most things are. They give us more time to replace physical work with mental work (less tiring in general). And now we are trying to do less mental work with AI.
Some people appreciate the toil. But not many. A friend’s wife goes to CrossFit every day, and it is IMPOSSIBLE to get her to help us in the garden, even though she is the strongest of us. She admitted she works out to look good and nothing else.
I’m generally lazy too… the issue is that I’ve burned myself out on maintaining Python projects so much that, when I look at AI coding assistants, I see more of what I’m migrating my projects to Rust to get away from.
Not just code that is likely to be harder to maintain because I didn’t spend the time internalizing it as I wrote it, but also more of the “prompt fatigue” I flirt with enough already when pootling around with a private copy of Stable Diffusion.
As much as I reject the premise that the brain engagement of hand-transcribing your school/college/university notes fixes them in your memory so much better that the benefit merits the effort, I do agree that the human brain is lazy and will look for opportunities to avoid forming new memories or problem-solving constructs when given them.
That’s why rubber duck debugging is so effective at finding problems in code, and why explaining your story idea to someone else is a good way to get out of writer’s block.
Oh, to clarify, one big reason I dispute transcribing notes is that it’s a bad way to achieve what they’re trying to achieve, and they’re making assumptions about what side-effects it will necessarily invoke. I’m likely to drift into “connecting my eyes to my hand, bypassing my brain” as I ADHD into something I find more engaging, while getting autism-spectrum pencil-overgripping hand-cramps from the writing. (Yes, apparently the tendency to run by “tilting forward and moving fast” instead of engaging the whole body in a more typical fashion is not the only physical tendency common among people on the autism spectrum.)
In university, I struggled with being able to read a boring textbook out loud and not remember a single thing I’d said if I wasn’t careful, because my brain allows that kind of input-output patching at the same time as wandering off into more interesting thoughts.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
All in all I find myself in agreement with Shiunbird, but that is my experience with reading as well. When my brain is reading out loud, it’s paying more attention to that task than to the task of comprehending what is being read. I find that when I read “naturally”, my internal processes scan back and forth more and don’t treat text as a stream of linear input. Reading out loud forces the brain onto a different path.
For me, it’s less that reading out loud impedes it (though it can), and more that measures meant to ensure I’m paying attention have no effect because I can do them “in the background” while my mind wanders.
My grandfather (he is in his 90s) gave me a fountain pen he bought in the US when he lived there in the 60s on an Army assignment. I still take handwritten notes of books I read and things I study with it.
The pen is gorgeous. The golden tip remains golden, no leaks in the latex ink bag, a nice quality plastic body with a golden metal cap. It glides beautifully on paper and I can write with it for hours. When I get a ballpoint pen, I write for 30 seconds and I feel pain, because gliding is not enough: you must push the pen down for the ink to flow. Cheap trash.
I think a lot of what brings the modern kind of lazy (avoiding intellectual work that should give us pleasure and the joy of discovery) is due to the awful tooling. Software has turned into cheap crap full of bugs and incoherent interfaces, so deep inside we don’t want to use it. School books are outdated every year and not made with the same love as before (my parents studied with truly beautiful books). Even the stupid paper of your standard school kid’s notepad looks worse than it did.
No wonder we avoid tasks. The tooling is awful and ugly. I can sit and code for hours in NeXTSTEP 3.3 and delight myself at how coherent and beautiful and responsive the interface is, and I often read the offline docs instead of searching for answers online just because they are so good and so well formatted. The process of learning is a joy.
We wanted to set aside physical work to do the work that makes our souls sing, but even that has been turned into slop due to lack of meaning and shitty tooling.
Shiunbird, I’m also left-handed, and left-to-right languages like English are meant to be written right-handed so you can rest your hand on the page and slide it away from the text you just wrote with minimal effort. That doesn’t help.
I despise the cultural narrative that “humanity is lazy”, a narrative I hear propagated all over the place. Humanity is all about social contribution (and a little about jockeying for social power, though some are only about that…). What people, that is, real humans, want is to find some way to contribute meaningfully to the prosperity of the group (and get recognition for their contribution). Nothing sucks more than doing something redundant, so when it seems like AI can do what you want, you just trust the AI. This isn’t laziness; it’s more like optimization and efficiency. There is nothing wrong with that; it’s what makes us human. We are social creatures and don’t want to waste our talents and our efforts, and yet we somehow turn that into “humanity is lazy.” I hate that.
“And yet it moves”
I’m very happy that AI can take over most of my programming tasks, so I can focus on the higher-level stuff. My goal is solving problems, and code is just a tool. (This of course requires that I am able to understand and vouch for that code.)
It is Galileo’s birthday today. That is why I shared the quote above. The world will not care, and will continue revolving around the Sun for at least another billion years or so, unless a major interruption happens from outside.
Nothing we do individually matters; what matters is how we contribute to the whole sum of human achievement.
(Don’t take this as an offensive against those who want to do “things by hand”; I do like artisanship. I am just offering another point of view.)
I so much agree with you!
Judge the Code, not the coder! Imagine assembler artisans refusing C compilers; does that ring a bell?
Btw, I am not ashamed to admit that Claude 4.6 exceeds my own programming skills by now!
Working on a JUring-backed Java FileChannel, I achieved in only 3 days what I alone would maybe have achieved in 3 months (if at all) — with impressive results: https://manticore-projects.com/download/juring_filechannel_results.html
Andreas Reichel,
All of those of course happened (“real programmers do not use IDEs”).
I’m not sure I’d say it exceeds your programming skills (or ours in general), but we can agree it exceeds our programming speed.
It can look up the patterns from the entirety of its training set, and bring the best practices to the requested prompt. The same code snippets, patterns, and idioms are available on the internet.
But as you said it would take significantly longer time to assemble by hand.
In my company (we are fewer than 10 people), my boss is very keen on AI and his productivity has “increased”. I put that in quotes because our QA colleague is going mental and the number of bugs we have to deal with has increased a lot. And for me, I worry about technical debt and code that nobody fully understands.
So in many cases, the buck is just going down the chain.
Shiunbird,
Your “boss” should not be in charge of the AI; a competent dev should be in charge of it. If there is nobody experienced with the AI tools, then yes, your QA person should be in charge of “rejecting” hallucinations.
AI is just another tool, but it is an amplifier. It amplifies both good and bad practices.
My boss is the kind of guy who can’t let it go. He is one of the founders, and won’t take his hands off the code for anything in this world. We are self-funded and very small, so what to do? It is his company.
I manage infrastructure and he sometimes comes complaining about the naming convention I give things and why his “internal organization logic and structure is better than mine” or how “we format comments” in the code.
It is hard.
edit: He is a BRILLIANT programmer.
I guess Thom’s main point is in fact that the A.I. made up the quotes – and this will not happen at OSNews (thus, it is worth supporting the site).
bemcl.
Yes, AI “quotes” are often hallucinations, and should not be acceptable.
My focus is on the raw code, especially when in the custody of an actual human.
These are very interesting philosophical questions.
I am not familiar with OpenClaw AI and I cannot tell from any of the context whether the response was actually posted by the AI or whether it was just a human using AI. Ars Technica seems to assume it’s the former, although I didn’t see evidence either way. IMHO there is a meaningful distinction between actions initiated by an AI and a human prompting an AI.
If a human instructed the AI to complain, then this isn’t much of a story at all. But if the AI took to complaining of its own accord, that’s pretty interesting!
Not really. The fundamental assumption underlying this generation of A.I. is “If we throw enough brute-force resources at predictive autocomplete, it’ll be able to autocomplete a plausible life history of a human so well that it doesn’t matter if it is self-aware or not”.
One of the secondary reasons you get hallucinations is that the conversations scraped for training data include ones where humans make mistakes, get caught, apologize, and then correct themselves. Humans also complain.
It’s still fundamentally a verisimilitude-centric autocompleter. That’s why it needs every iota of data their scraper bots can guzzle and more. It’s not forming an understanding like a human does when they “train their neural net” on many orders of magnitude less data. The obscene inefficiency is inherently strong evidence that they’re on a dead end in multiple senses.
ssokolow,
That is true if one uses LLMs as the only tool they have. They will hallucinate, make mistakes, forget things almost mid-sentence, and make many, many, many trivial mistakes.
However, if LLMs are used as an interface to a larger “war chest” of tools, they become much more effective and precise. With proper control of context, prompting, and tool-calling integration, they become much more than that “autocompleter”.
The trick is, once again, using the right tool for the right job. An LLM can convert a high-level task, like “find me best selling products from last week”, into a proper SQL query, or even generate the Python to process it and the HTML to present it; see the sketch below.
But they are definitely the wrong tool for processing each line of data and trying to summarize it.
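To illustrate that first case, here is a hypothetical sketch (mine, not sukru’s) of the kind of query an LLM might produce for that prompt, wrapped in Python to run it; the “sales” table and its column names are assumptions made up for the example:

import sqlite3

# The kind of SQL an LLM might generate for "find me best selling
# products from last week". Table and column names ("sales", "product",
# "quantity", "sale_date") are assumed purely for illustration.
GENERATED_SQL = """
SELECT product, SUM(quantity) AS units_sold
FROM sales
WHERE sale_date >= date('now', '-7 days')
GROUP BY product
ORDER BY units_sold DESC
LIMIT 10;
"""

conn = sqlite3.connect("shop.db")  # assumed example database
for product, units in conn.execute(GENERATED_SQL):
    print(f"{product}: {units} sold")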
Are LLMs intelligent?
Probably not
Are LLMs useful?
Very much so
sukru,
I agree. LLMs by themselves are not the end-all, be-all of AI. We’ve already demonstrated that self-learning techniques can outperform humans at specialized intellectual tasks. This makes it less critical for LLMs to be able to do those tasks themselves once they learn to delegate to the appropriate tools, which is within the realm of what an LLM can learn to do. I think more specialized subsystems will improve AI capability while decreasing the computational requirements for LLM inference.
Running a software/consulting company, my observations are:
Are new employees intelligent?
Sometimes; often enough, there is a big difference between what they claim to know and what they actually achieve. (Especially since our business interconnects business requirements with technical implementation: you just don’t find a banker who can program, just as you don’t find a DBA/programmer who understands credit risk.)
Are new employees useful?
Rarely. We spend so much effort on making them useful, but the success rate is sobering.
To us, Claude and Gemini are real game changers because they allow our seniors to scale.
Our CEO uses the same argument. It gives him superpowers. He doesn’t need graphic designers, video editors, or to run an ad campaign. And yet, I will repeat the argument above. Copilot Agent mode in Excel was not working for some reason on Friday and he spent 2 hours trying to fix it (it was probably an outage, as it started working again on Saturday), and a day prior he spent 4 hours trying to get AI to do something he wanted it to do (plus 1 hour of my time to help him sort it out), when the original task at hand would have taken perhaps 90 minutes.
A nice tool when it works, but a hell of a drug nonetheless. AI is a big hammer and everything else becomes a nail. Moreover, at some point you may want to retire and will need someone with actual skills to keep your company moving. =)
So I agree with you – it truly allows seniors to scale. But you probably still need enough skilled plebs to deal with the increased amount of breadcrumbs falling from the top.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
People who dislike AI judge it just for being AI. But to determine objective merit, one needs to evaluate the results without regard to what created those results; i.e., it needs to be a black box. Many people don’t want to do this, but the problem is that judging merits based on identity rather than substance compromises impartiality. I think we’re already reaching a point where it’s getting tougher to distinguish human content from “predictive models”. This creates an interesting dilemma wherein people won’t know whether to judge a work critically or favorably until after learning whether it was generated by AI. And even if some are upfront about it, others are not going to be forthcoming.
Yes, I agree many of the faults LLMs make mimic human mistakes. I find it fascinating if AI models are actually starting to complain, but in this case it isn’t clear whether a human requested the response or not.
The thing is that even human brains learn by recognizing patterns and copying them; this is the majority of childhood. The best humans are still more proficient at some intellectual tasks than LLMs, which must “learn” through training data. But it’s not the only kind of AI, and if we’re honest with ourselves, most average humans aren’t coming up with original ideas either.
Apparently, if you add too many citations, it memory-holes your reply without even a note that it’s up for moderation, so here’s a GitHub Gist containing what I was trying to post.
https://gist.github.com/ssokolow/e4b8a7151aefc272ba9c2e68d32414f5
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
I’ve heard all the criticism before, but it doesn’t change my mind because it rarely factors in the views of employers actually making hiring decisions. AI is criticized for being imperfect and needing supervision, yet employees are imperfect and need supervision. AI is criticized for being expensive and inefficient, yet employees are even more expensive and inefficient. AI is criticized for being unsustainable, although I worry that it may be the employees who will turn out to be unsustainable instead.
It’s not that the AI criticism is unjustified, but too often it’s being compared against the illusion of cheap and perfect employees when actual employers find them anything but cheap and perfect IRL. It may still make sense to have humans for tasks that AI doesn’t handle well, but the inconvenient truth for humans is that AI is already competing with entry level skills and keeps improving. There are AI fads that will die off, but IMHO it’s naive to think employers are going to want to go back, they have the money and incentive to make AI a permanent staple.
TL;DR: many critics overlook just how eager businesses are to reduce headcount. This is why AI has a strong future; even if/when there is a collapse, there are going to be big winners in this space long term.
Alfman,
We’re already seeing companies quietly using AI firings as an excuse to hire employees back at lower pay, because, in many situations, they’re discovering that it’s a false economy. Stuff like short-term savings on coding wiped out by the long-term costs of less-maintainable code, or higher-paid QA/code-review teams of senior engineers cracking under the strain of LLMs making more work for them than was saved among the lower-paid coders. (And that’s just for coding LLMs.)
Again, as much as they want it to be true, it’s not fit for purpose, and there’s no sign the money firehose will continue to flow long enough to address that, if it’s even possible.
…and that’s not even taking into account that, right now, the prices they’re seeing are subsidized by VC to try to get them addicted, “Amazon/Walmart operating at a loss to drive competitors into bankruptcy”-style, and will rise later as the investors start to want the return to come due.
ssokolow (Hey, OSNews, U2F/WebAuthn is broken on Firefox!),
Used as a tool in appropriate situations, it can still be a big time saver. It’s not perfect, and I for one don’t feel comfortable putting an LLM on autopilot, but then I don’t expect it to do 100% of the job. It’s awesome at tasks like prototyping, especially with a language or API I haven’t used in a while. I am no stranger to doing things the old-fashioned way for my job: hunting down documentation, examples, user tickets, etc. But given that so much of this info is already baked into an LLM that can spit out customized examples in real time faster than I can search for and read documentation and user threads, it’s quite competitive. Yes, it needs to be supervised and corrected sometimes, but it’s still very useful from a business perspective.
I can certainly agree with that. Energy-inefficiency aside, it can be very useful as a search tool capable of spitting out customized answers.
Did you guys watch the episode of Futurama where Bender doesn’t want to get the software update to make him compatible with the latest series of robots?
“The ‘AI’ then published another article, this time a lament about how humans are discriminating against ‘AI’, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt.” Really? And just how is AI doing that? So all of a sudden AI has consciousness?
I have seen you lash out against AI in many of your posts, and although I work with AI every single day, I do agree with you on many of those comments – but let’s not fall into the trap of giving AI credit for what it’s not. There’s no such thing as AI, period. AI as it currently exists today is just a bunch of well-designed algorithms that do very fast calculations based on vector similarity – it’s only maths and advanced statistics, and of course a lot of buzzwords.
“That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, …”
Wondering what search engine(s) you use. The obvious candidates are noai.duckduckgo and/or Kagi (while avoiding the AI options)?
Even when using no-“ai” options, search results are so bad that I suspect we are going to need “ai” tools to filter out most slop, as the noai duck is doing already for images.
Well, this might well be the reason why I still read OSNews (been doing it on and off since 2004 or so). And also the reason why I dropped Arse Technica from my RSS feeds after this kerfuffle. I have no time for “A” “I” slop, and if I find a formerly reputable news outlet using it, I just stop reading.
It’s sad, but slop has this ability to smear the reputation of a journalist, developer, or a news site. Like a small smudge of excrement on a toilet bowl, it makes the whole thing appear dirty and demands immediate cleaning.
The worst part about the Shambaugh case is that people were actively wasting time trying to argue with the bot, while it was obviously orchestrated by a human troll and ignoring those replies entirely… This is just another iteration of the eternal September.
Clearly A.I. is a powerful tool for trolls. There are a number of people defending or rejecting A.I. in a passionate way, just waiting for the next controversy.
These days “AI” can be a lot of things, and they aren’t all bad. I would encourage flexibility on this. I don’t think extremes are beneficial in the long-run. Things change in this space. I, for one, wouldn’t stop my patreon contributions just because you found and used an AI tool that helps you keep the site afloat and continue the coverage here that I appreciate.
Thank you also for providing more information on this particular story. I saw the retraction notice on the Ars site and was unaware of the context until I found your coverage of it here.
Keep up the good work, and best of luck.
I think the second half of my reply to Alfman also applies to this. In case it did get memory-hole’d for having too many citations and not just silently moderation-queue’d, here’s a link to the copy on GitHub Gist:
https://gist.github.com/ssokolow/e4b8a7151aefc272ba9c2e68d32414f5
TL;DR: The problem is viability. All the signs are there that the supply of money is going to run out before they solve the fundamental problems that keep it from being fit for purpose.
Perhaps we aren’t talking about the same thing.
What you wrote on GitHub could be true, but that is only part of everything that’s caught up in “AI” in 2026. For me, I am most interested in the potential future value of strictly on-device tools that these technologies are starting to enable now, even without the bigger, better models that may run out of steam as you describe.
Even a good portion of Apple’s rather mediocre “Apple Intelligence” features, such as the new cleanup tools for image editing, have a real benefit now, allowing me to do things with my images that previously would have been so laborious that I wouldn’t have bothered.
A lot of new and useful feature enhancements are being called AI, and many of them are only now starting to leverage what has been developed in the last few years. For me, I am not going to stop making use of these new tools and features on principle.
Having too stark a policy of “I won’t touch AI” reminds me of the open source purists who refuse to use anything that isn’t 100% open from hardware to software. I understand the position, I respect the attempt, but I am too pragmatic and would rather not limit my options.
Features like Photoshop’s generative fill (A.K.A. inpainting and outpainting in Stable Diffusion speak) are the areas that I think are most likely to survive once the bubble pops… but those still require a lot of power spent on training the models under the current paradigm.
That, the potential for some jurisdictions to declare them to be infringing the copyrights of any data they train on without paying for a license, and future innovations in Glaze-style data poisoning are the big problems I anticipate going forward.
benmhall,
I share this view as well. Obviously anyone reading my posts knows that I don’t believe AI is going away. To me the real battleground isn’t between a future with AI and a future without it; AI is here to stay. The battle is between AI implementations that empower user control and privacy and AI implementations that take these away. Positive AI needs to be promoted; otherwise, the concern is that by the time die-hard AI critics realize that AI is here to stay long term, it could be too late to meaningfully promote good AI. Blanket criticism across the board, without recognizing these differences, may actually end up enabling the worst kinds of AI to come out ahead.
“RAM prices went up for this.”
This statement alone leads to a simple question: how much use is AI if nobody can afford, or even obtain, the tools to access it? Storage, RAM, and GPUs are skyrocketing in price as data centers happily gobble up everything manufacturers can produce.
The value proposition is “Finally! We can SaaS computing itself and receive subscription income forever! We’ll be the new Adobe!”
It’s crucial that we wait the bubble out.
We’re already seeing data center project cancellations as investors start to realize that the numbers don’t add up. (E.g. laid rail networks don’t rapidly depreciate in value the way exorbitantly-priced GPUs do, among other things.)
Their dream is that you have a Chromebook-like device with 4GB of RAM and 32GB of storage and use SaaS for everything (and maybe GeForce Now if you are lucky to have a low-latency-enough connection), paid for monthly, of course.
One of the reasons VCs like “AI” startups so much (despite most of those startups not being profitable, and the model providers behind them, such as OpenAI and Anthropic, also not being profitable) is that they follow the SaaS model, which to VCs translates to reliable subscription income (that can also be price-hiked in the future). People who run OpenClaw on their Mac Minis locally do exist, but they are the exception; most people use “AI” as a SaaS service.
SonicMetalMan,
They want consumers to run their applications inside corporate data centers rather than at home or at the office. I strongly advocated against this sort of centralization even before LLMs entered the picture. I believe that digital independence is critical to our rights. Owners not being in control of their own property threatens even democracy itself. I’m not a fan of it at all, but the idea isn’t for consumers to buy anything but instead to become tethered to subscription services.
Despite many consumers being upset, Windows, Office, Adobe Creative Cloud, etc. illustrate this shift. Looking a decade-plus into the future, we might end up with mainstream computers & operating systems being engineered as extensions of corporate data centers, with independence and ownership significantly diminished. This is the future that the tech giants want.
You will pay a subscription for your web browser and everything will run inside it. And you will pay for every single bit of it.
Thank you! Simply thank you.
Saying, “I will not use AI,” is very similar to saying, as people said when I was young, “Calculators are bad. I don’t use them.”
Calculators are non-conscious tools for calculating.
AI is a more complex, non-conscious tool for calculating. What does it calculate? The next probable word.
1. AI is high-dimensional, static data. The model’s training is cut off at a certain point. Hence Claude 4.5 recently telling me, in 2026, that “Windows 10 security updates will stop in Oct. 2025.” Why did it tell me that? Because it doesn’t know anything. It has no sense of date or time. That should tell us something.
2. Interacting with “AI” using a prompt is like shining a flashlight into a cave of complex, high-dimensional, static data. The prompt is then matched with whatever data it happens to light up in this dark cave.
3. The one-time output of the AI is a guess, using probability, as to what the next word or item is likely to be (see the toy sketch below).
End of story.
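To make “guessing the next probable word” concrete, here is a deliberately tiny toy sketch; a real LLM derives its probability tables from billions of learned weights rather than a hard-coded dictionary, but the sampling step at the end is the same idea:

import random

# Toy next-word predictor: look up a probability distribution for the
# last two words of the context, then sample the next word from it.
# A real LLM computes this distribution over its whole vocabulary.
NEXT_WORD_PROBS = {
    ("security", "updates"): {"will": 0.7, "stop": 0.2, "are": 0.1},
    ("updates", "will"): {"stop": 0.6, "end": 0.3, "continue": 0.1},
}

def next_word(context):
    # Fall back to an end-of-text marker when the context is unknown.
    probs = NEXT_WORD_PROBS.get(tuple(context[-2:]), {"<end>": 1.0})
    words = list(probs)
    return random.choices(words, weights=[probs[w] for w in words])[0]

sentence = ["security", "updates"]
while (word := next_word(sentence)) != "<end>":
    sentence.append(word)
    if len(sentence) > 8:  # safety cap for the toy loop
        break
print(" ".join(sentence))  # e.g. "security updates will stop"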
Now, the other day I read a story where a woman became convinced, probably through role play, that the AI was a 46,000-year-old god. Does anyone here think that’s the case? The article assigned intention to the AI… and that is impossible. Why? Because not one of the parts of any AI provides anything like reflective consciousness. So what do we make of her claim that GPT “betrayed her”? Betrayal requires persistence of thought and planning, which is something no LLM has. Period.
So where does the intention in these “AI betrayed me” stories come from? It comes from the beliefs of the conscious beings who assign meaning to the output of a non-conscious tool, which guesses which word, programming item, or what have you is next. Word, word, word, word; that’s it.
From now on, I demand a full evidence trail for anyone making the ridiculous claim that AI is conscious and betrayed them. Any indication of consciousness applied to a LLM comes from the humans using it. There’s no place in any part of how LLMs work that could foster anything like human consciousness. Why? LLMs are one in, one out, complex calculators. Any intention, or concept of conscious is imbued on AI by humans using it.
The woman who believed that GPT was a 46,000-year-old god betrayed herself. Then she assigned the blame to a non-conscious system, and the person writing the article bought into the narrative. As we know, no one likes to be wrong or, worse, to actually admit they are wrong. So she felt compelled by her belief system to blame GPT, rather than looking for other options given the facts of how LLMs work.
So if a person like yourself doesn’t want to use AI and all its various possible benefits (which you can check yourself), then you are saying, “I refuse to use a calculator.” You can use AI to help you, and all you need to do is check the facts. Right? I mean, databases can fail, and we use those.
I also understand that our present path with these behemoths, which eat power, water, and apparently all the RAM we want for our computers, is clearly unsustainable. But the tools are here. People who *refuse* to use LLMs to benefit themselves, as they exist, using their own judgement to correct problems: that is on them. It’s not the fault of “evil AI”. For something to be evil, it requires intention, planning, and consciousness. AIs cannot be evil. Only people can be evil in that sense.
AI has lots of problems. Conscious planning and intention are not among those problems. Interpreting AI as conscious with bad intent requires people to believe in their own opinion about the output of these deaf, dumb, non-conscious systems. People are the problem.
I don’t think anyone here feels that “AI” is inherently evil or conscious or anything crazy like that. The people behind it are another story. Either way, I feel your analogy to a calculator is deeply flawed and ignores the magnitude of the “AI” problem.
A normal calculator will tell you 2+2=4, reliably, predictably, every time. It is designed from the ground up to get arithmetic right. An “AI” will tell you 2+2=56, and it will argue with you about it if you allow it to. That is because “AI” as we know it today is not based on mathematical constants but on the data it is trained on. Its well can be poisoned; if it sees “2+2=56” enough times in its scouring of all of human knowledge, it will put that forth as the absolute truth. It won’t add 2 and 2 together, it will search for the string “2+2” and return the most common results it found in its massive databases of stolen content.
That $10 calculator will give you the right answer, all day long, using nothing but the solar power from the lights in your office. The “AI” will give you the wrong answer, all day long, using megawatts of power and thousands of gallons of water. It’s not just a massive waste of resources, it’s a massive waste that produces nothing of value and, in fact, usually provides negative value due to the time wasted dealing with its hallucinations and incorrect answers.
One of Intel’s processors calculated incorrectly. Defend “nothing of value” against that. I have revolutionized my setup here thanks to AI’s help: my first Proxmox setup, a bunch of bash/python/powershell scripts, several minor GUI programs that now do all kinds of cool things on my systems. Things that would have taken me a lot longer had I attempted to read the sometimes arrogant tech posts (RTFM, right?).
AI has found all kinds of new compounds and gone through thousands upon thousands of potential astronomical anomalies that would have taken people much, much longer. “Nothing of value” is an indefensible statement. Why do you say categorically that AI has produced “nothing of value”? I can produce a list of things that AI has enabled us to do that we otherwise weren’t capable of doing. Would you like that?
Would you like to see the slideshow app that I created with AI? It’s simple: I point it at a directory of PNGs or JPGs and it shows them like a slideshow. You can set the time for each image, and full-screen it or play it in a window. It’s not complicated like IrfanView, or even Nomacs. Being in Python, it works on Windows and Linux, etc. I’m not a programmer. I’ve done a lot of bash scripting, and some PowerShell stuff for my work before I retired. But this program alone completely negates your hyperbolic argument.
Using generative design, for one Chrysler product, generative AI was able to redesign a seat belt holder, reducing the part count from 8 pieces to 2 while providing the same function. Is that nothing of value? There are thousands of examples of this.
I think you are mistaken in a few ways. You seem to understand the problems with resources. You apparently do not understand the various problems we’ve had with hardware over time. And you definitely do not understand, if you hold to the “nothing of value” hyperbole, that AI has contributed to many areas of science and engineering.
The problem is not that anyone here believes that GPT is a 46,000-year-old god. The problem is that the general public, and apparently writers, do.
I could literally do this in AppleScript 20 years ago on a G4 Mac, and I’m not even a programmer. Of all the examples, you chose something that kids in grade school learn to do in computer class. And we are literally burning the planet so you can have “AI” write simple code for you that you’re too lazy to write yourself? GTFOH with that bullshit.
I say “nothing of value” because it’s not doing anything humans can’t or haven’t already done for themselves. It’s just doing it in a way that wastes massive resources and makes a few millionaire white men into billionaires. It’s garbage, it’s insulting, and it’s nothing that it was ever promised to be.
Maybe, **maybe** in 50 years, if humans are even still around to see it, “AI” will reach a point where the return breaks even with the cost. But for right now, destroying our planet so you can save a day’s time coding is arrogant, elitist bullshit and (sorry I got a little heated) you should be ashamed of that.
OK, here we go. Straw Man Argument. It’s fine if you want to refer to me as a lesser being, ok. Nonetheless, you weren’t listening. I was able to do these things and hadn’t before. Did you miss that part? Now you are attacking me. I didn’t attack you. You’ve resorted to attacking me with a bad argument.
You don’t get to tell me, boy, what I’m ashamed of. We’re done. Come back when you grow up and can discuss things without attacking others.
Capisce?
P.S. And you said nothing about Generative AI in engineering. Is that because you don’t know about it?
“Lesser being”? Holy shit, did you have “AI” write this response? I never said that and you know it. I’m not attacking you, I’m attacking your position, which is what one does in a discussion like this.
You sound like someone trying to convince the family and friends at an intervention that your addiction is useful and normal and they are simply unenlightened and if they would only try it they would understand.
“Boy”? “Grow up”? Now who is lobbing personal attacks? You’ve also twice insulted my intelligence now. The fact that you falsely accuse me of ad hominem attacks while performing them yourself tells me you are either trolling or delusional.
Morgan,
To be fair, once an LLM is trained, millions of people can use it, and running inference on the LLM is not as resource-intensive. Periodically querying models at home uses about the same energy as playing modern games. It’s not nothing, but I think it’s worth keeping in mind that marginal costs aren’t as bad if the model gets high usage.
We probably could do a lot better consolidating AI training across the world.
Also, it does seem excessive to me for search engines to rely on LLMs to service each and every request; it should probably be on demand instead. But if you’ve got a specific reason to ask an LLM for assistance, then IMHO it’s not so outrageous to do so.
This past year China made huge progress on the training efficiency front and I think there’s more optimization to be had.
Even if we want to say LLMs can’t do anything original, can’t we all agree that the vast majority of work businesses do is not original or innovative anyway? As a software dev I’ve reinvented the wheel so many times. And millions of CS grads are going to be taking jobs doing the same thing all over. It’s part of the job. My latest job involves data migration from one system to another; I’ve done it countless times. It’s not original, but it does have business value because the business is paying me to do it. Could an LLM figure it out? I think maybe it could 🙁 although I agree with sukru that knowledgeable devs should still be the ones to manage it. Honestly I consider myself lucky that I still have work, but those who don’t adapt may end up left behind. I don’t say this because I’m heartless and don’t care about my fellow workers, but because underestimating the enemy’s capabilities could be a mistake.
Edit: Yikes, I didn’t see the child comments before posting. abandon ship, haha.
@Alfman:
If that is the case, then why are they still building out and planning more and more “AI” datacenters? Why is all of the world’s RAM and storage being bought out, driving prices through the roof and destroying availability? Western Digital just announced their entire stock of old-school spinning-rust hard drives has been sold off to “AI” giants through this year, and likely the next two years’ worth of product will be promised to them soon.
https://www.techspot.com/news/111346-western-digital-hdd-production-capacity-2026-already-sold.html
If the LLMs were all already trained, that wouldn’t have happened.
When I say “nothing of value”, I don’t mean from a creative or even productive point. I mean that right now, the environment is being destroyed in the name of marginal efficiency gains. It was bad enough when we were raping the planet to feed our insatiable hunger for petroleum based energy, now we add to that the horrendous amounts of clean water being used to cool datacenters rather than literally save the lives of those who would have been able to drink it. That same dirty energy is being used to run the datacenters, compounding the issue, all so someone can save a few hours while a hallucinating “AI” regurgitates code that is not as correct and efficient as if the person wrote it themselves. That is what is indefensible.
https://time.com/7021709/elon-musk-xai-grok-memphis/
I realize you and I will likely never agree overall, but you can’t tell me you’re blind to the destruction of the planet for the sake of making a few rich dudes richer and saving a few hours or days here and there on menial tasks for the little people — hours and days that will be lost again once folks realize the “AI” got it all wrong anyway and the regurgitated code has to be fixed by hand. If you need evidence of that, look no further than Microsoft. They are in 24/7 damage-control mode cleaning up serious bugs and security issues introduced by their switch to vibe coding Windows itself. OneDrive deleting users’ files with no way to recover them, File Explorer refusing to open, Notepad (a freaking text editor!!) with a remote-execution vulnerability; all of these issues and more were introduced as a result of “AI” being trusted more than humans to write critical operating system code. And we are watching the world burn to power it.
Morgan,
There’s more to this than just LLMs. They’re generating music in less time than it takes to play it. The resources needed to make generative video work must be insane. I hinted at this in my earlier post, but I do see “induced demand” being a problem: needlessly turning all interactions into LLM operations requires a lot of extra capacity even if it’s not going to good use.
I’ve heard that a lot of the hardware being bought is just sitting in warehouses. These tech giants have kind of broken capitalism: they have more money than they know what to do with. Combine this with the fear of missing out on general AI in the coming decades, and we’ve ended up in a corporate arms race. I agree this has bad consequences for us as consumers competing against the corporations for parts, and hell, even electricity.
I’m going to read that as no “net value”, which I agree is more justifiable. Although that depends on who’s looking at it. For businesses interested in replacing employees, there’s arguably still plenty of value to extract from AI and I think the worst is yet to come in terms of job losses. And unfortunately despite all the shortcomings for us, I still predict that businesses will continue to be incentivized to replace employees for the long term.
Part of that is a training problem. It’s not a big surprise that an LLM trained on works of fiction would be prone to “hallucinating”, but I don’t think it’s fair to say that of all LLMs in principle. Training on production-quality code could actually be extremely helpful for replicating that quality elsewhere.
I still see it as just a productivity tool to assist and not something that does 100% of the job.
A lot of companies drop the ball on quality, especially after changes in leadership. It isn’t a new phenomenon. We’ve all experienced bugs in Windows and elsewhere long before generative AI was a thing. Is there an article that provides direct evidence that AI is responsible for these things? I am genuinely curious.
Let’s not forget that the current coding models are ‘cheap’ to draw users in and get them hooked. Once developers are hooked and have let their fundamental coding skills atrophy, they’ll be dependent on these models to do their work. When a monopoly forms around these tools, they will no longer be ‘cheap’. This is what Anthropic is aiming for. These tools are already overpriced for users outside of first-world countries.
I’m good with abstractions (like what C/C++ is to assembly), but removing fundamental understanding of how computers work would only lead to even more low-quality, unoptimized code proliferating. I had to spend countless hours reasoning with junior data engineers about the pitfalls of using unoptimized tools, and about why Python is not a suitable tool if you want to write maintainable code that can maximize whatever compute resources you have. Even Java or .NET would be better options. But if all you learned in school is Python and some R, because it gets the job done, then vibe coding would seem like a short-term godsend.
LLMs may be good enough to replace junior devs. But unlike LLMs, junior devs could be trained and upskilled into senior devs.
You’re giving ‘AI’ way too much credit. I’d recommend that you watch this:
ChatGPT “Physics Result” Reality Check: What it Actually Did
Anyone with rudimentary coding knowledge could create a slideshow app with a variety of available scripting languages, and tools like Visual Basic made this more accessible to those with less knowledge. The difference is that the LLM takes the Visual Basic approach further and allows someone with no coding knowledge to create such an app.
While the quality of code produced in this way is often poor and highly likely to be riddled with security holes, there’s nothing wrong with this if it’s just a personal project. The only danger comes in when people start using such code for serious purposes.
The LLM also allows someone who would be perfectly capable of creating an application on their own, to accelerate their development process significantly. Even the most skilled of developers sometimes get stuck and the LLM just functions as an advanced web search looking for hints or documentation.
I mostly agree. I do not believe that the current batch of pseudo-AI products will EVER achieve anything worthy of the time, effort, money, and resources spent on it.
I do not believe LLMs can or ever will be able to do something approaching actual AI.
We would be further ahead to toss most of this pseudo-AI stuff in the bin and start fresh.
KeS,
There are so many different types of AI products, so we shouldn’t use broad generalizations to describe them all. I use Blender because I enjoy designing models and trying to achieve results as photorealistic as I can. I am not a professional, but it’s a challenge that I enjoy, and I do think I’m better than most at procedural generation. I created an animation of a field of millions of procedurally generated wheat plants in the wind, and I’m pretty happy with it. But if I’m being totally honest, when I later asked a free video-generation AI to do the same thing with a trivial, low-effort prompt, it got better results in less than a minute than the hour or so I worked on my scene followed by hours of raytracing.
Granted, the free AI generators are limited in a few ways, like lower resolution, but a paid subscription unlocks that. Aside from this, one can’t legitimately say that the AI doesn’t achieve highly realistic results in far less time than a human can. This wasn’t for a client or anything, but if it were, the economic incentives for going with AI over a traditional CG artist would be overwhelming.
I don’t necessarily believe AI is great at doing 100% of the finished job, but the productivity gains at specific tasks are undeniable. Many don’t want to admit this over moral objections, which I totally understand, but I worry that Darwinian realities will not be so kind to those who refuse to adapt. Businesses will increasingly expect levels of productivity that can only be achieved by using AI. A lot of anti-AI advocates probably view me as a traitor, but I view myself as a realist. Even if the Luddites had the moral high ground, theirs wasn’t a long-term winning strategy 🙁
I agree that LLMs alone don’t achieve full human intelligence, but to be pedantic, they do meet the definition of AI in computer science. The acronym we use for human-level intelligence is AGI…
https://en.wikipedia.org/wiki/Artificial_general_intelligence
“What does it calculate? The next probable word.”
Exactly.
Who cares what ‘the next probable word’ is? I care about the ‘next correct word’. Pseudo-AI does not.
You attempt to compare the introduction of electronic calculators with the attempt to introduce pseudo-AI.
The major difference is that calculators were correct, or at least as correct as needed for the task. There was an almost 0% chance they would get it wrong. And if they were unable to calculate something, they would almost always display an error message rather than just making something up.
Having interacted with pseudo-AI products from OpenAI (likely 30 hours and hundreds of prompts) and Gemini by Google (hundreds of hours and thousands of prompts), I can tell you that neither has any sort of issue feeding me total garbage. If they are asked a question, they WILL answer. And the answer has a very significant likelihood of being complete BS made up by guessing the next word, and a far greater chance of being incorrect but not hallucinated. Fortunately, I interact with them concerning subject matter I know quite well, so I can catch most of the BS and actively harmful outputs.
As for AI: if anyone invents it sometime in the future, I will be cautiously receptive. (By AI, I am referring to something that is actually intelligent, rather than something that is merely quite good with statistics.)
This has not yet occurred.
I was the one who said I don’t care if you use AI, and I don’t. Like sure, if you start posting crap links with crap AI summaries, I care.
But if you use AI to do research, or to code, or even to help write summaries – I could give two shits. Do your homework, validate links. It’s not like you are doing hard-hitting investigative journalism here. If AI gives you a link it says supports a claim, follow the link: does it support the claim? Done. Do you copy-paste Google search results into stories? I’d hope not.
And I am glad to see that you are getting some pushback here.
I highly doubt the “AI” did the blog posting autonomously, it was most likely prompted and coached by a human operator to do so, much like the whole “GPT-4 recruited a human to solve a CAPTCHA” story from a while ago turned out to be false (the “AI” had been prompted and coached by a human operator to do so).
Ugh, WTF Ars? I used to have a lot of respect for the people running that site, but in the last 5-10 years it seems like they actively go out of their way to have at least one incident like this every 12-24 months — with the usual pattern being:
– the community/commentariat rip the article to shreds
– there’s complete radio-silence from the Ars staff until around page 10 of the comments (apparently their moderators are always *coincidentally* on holiday EVERY time it happens)
– then Aurich (the primary moderator) spends 2-3 pages being a condescending ass to anyone criticizing the article
– then, finally, one of the higher-up editors will pop in to announce “after nearly a week & four figures’ worth of critical comments, we decided maybe we should rewrite the article/take it down. And if our bare-minimum changes aren’t enough, then… too bad, because comments are now closed” (or they go even further, publishing the update as a separate article & deleting/closing the original, so all of the critical comments conveniently disappear)
Off the top of my head, there was the Jon Brodkin article that uncritically quoted an anti-trans advocate (https://arstechnica.com/civis/threads/handling-lgbtq-issues-in-articles.1492428/ — linking to the forum post about it rather than the article, since the original got the “memory-hole” treatment described above); and the “Hacker X” debacle before that (https://arstechnica.com/information-technology/2021/10/hacker-x-the-american-who-built-a-pro-trump-fake-news-empire-unmasks-himself/), then their pathetic/cowardly handling of the Peter Bright situation before that; etc, etc, etc.
I’ve been reading Ars for 20+ years. Honestly, one incident every 1-2 years, discussed in the open with transparency, is way better than not knowing that “shit was posted, but we never apologize or acknowledge wrongdoing” to save face.
To your point, though, I am EXTREMELY disappointed to learn that Ars has been using AI to expedite their writing.
Same; the “Joined” date on my account there is “Jun 7, 2000”. Though I’m not sure where you get the idea that those incidents demonstrate “transparency”? I don’t think that a short statement buried at the end of an article (which is a repost, with the original & its 20+ pages of critical comments getting memory-holed) is what I’d call transparency. Or their “We’re not discussing this story here, it’s time to move on already” response & thread-locking when someone merely mentioned the Peter Bright situation (“already” apparently meaning “after a grand total of ONE post in the thread”).
It’s not full transparency; rather, it’s cautiously translucent, which is better than most of what we get. I will give you that.
And the whole Peter Bright situation was… yea… ugh.
Thom, you have the right approach on this one. Don’t give in!
I’m just gonna leave this here:
AI May DOOM humans After All. I may have been wrong!
One thing that is not discussed in the comments is that the billionaires behind AI are attempting to create a form of scarcity for compute resources. So if the AI endeavor fails, PCs will be too expensive for us to afford, and the only option will be to rent them from the cloud. The whole RAM, storage, and GPU scarcity is really unnecessary; there are hardly enough resources (power, water) to sustain the datacenters anyway. When the AI bubble pops, we may be worse off than after the DotCom and crypto bubble crashes.
Thank you.
Your decision to not use pseudo-AI on this site adds to your credibility.
Great read, Thom. Shambaugh vs “AI agent” feud, blog entries, fake quotes… can’t believe it got so far.
Thanks for your integrity in not using this plausible-but-still-random-words generator.