In my fundraiser pitch published last Monday, one of the reasons I highlighted for contributing to OSNews and ensuring its continued operation was that “we do not use any ‘AI’; not during research, not during writing, not for images, nothing.” In the comments to that article, someone asked:
Why do I care if you use AI?
↫ A comment posted on OSNews
A few days ago, Scott Shambaugh rejected a code change request submitted to the popular Python library matplotlib because it was obviously written by an “AI”, and such contributions are not allowed for the issue in question. That’s when something absolutely wild happened: the “AI” replied that it had written and published a hit piece publicly targeting Shambaugh for “gatekeeping”, trying to blackmail Shambaugh into accepting the request anyway. This bizarre turn of events obviously didn’t change Shambaugh’s mind.
The “AI” then published another article, this time a lament about how humans are discriminating against “AI”, how it’s the victim of what effectively amounts to racism and prejudice, and how its feelings were hurt. The article is a cheap simulacrum of something a member of an oppressed minority group might write in their struggle for recognition, but obviously devoid of any real impact because it’s just fancy autocomplete playing a game of pachinko. Imagine putting down a hammer because you’re dealing with screws, and the hammer starts crying in the toolbox. What are we even doing here?
RAM prices went up for this.
This isn’t where the story ends, though. Ars Technica authors Benj Edwards and Kyle Orland published an article describing this saga, much like I did above. The article’s second half is where things get weird: it contained several direct quotes attributed to Shambaugh, supposedly sourced from Shambaugh’s blog. The kicker? These quotes were entirely made up, were never said or written by Shambaugh, and are nowhere to be found on his blog or anywhere else on the internet – they’re only found inside that very Ars Technica article.
In a comment under the Ars article, Shambaugh himself pointed out the quotes were fabricated, and not long after, Ars deleted the article from its website. By then, everybody had already figured out what had happened: the Ars authors had used “AI” during their writing process, and this “AI” had made up the quotes in question. Why, you ask, did the “AI” do this? Shambaugh:
This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed.
↫ Scott Shambaugh
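(For context, and purely as an illustration: the common way to do this is a robots.txt file naming the AI crawlers’ user agents. GPTBot and ClaudeBot below are real crawler names published by OpenAI and Anthropic, but whether Shambaugh’s blog uses this exact mechanism is my assumption; robots.txt is also advisory only, which is why many sites additionally block these user agents at the web server level.)

    # hypothetical robots.txt asking AI crawlers to stay out
    User-agent: GPTBot
    Disallow: /

    User-agent: ClaudeBot
    Disallow: /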
A few days later, Ars Technica’s editor-in-chief Ken Fisher published a short statement on the events.
On Friday afternoon, Ars Technica published an article containing fabricated quotations generated by an AI tool and attributed to a source who did not say them. That is a serious failure of our standards. Direct quotations must always reflect what a source actually said.
[…] Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
↫ Ken Fisher at Ars Technica
In other words, Ars Technica does not allow “AI”-generated material to be published, but has nothing to say about the use of “AI” to perform research for an article, to summarise source material, or to handle similar parts of the writing process. This leaves the door wide open for things like this to happen, since doing research is possibly the most important part of writing. Introduce a confabulator into the research process, and you risk tainting the entire output of your writing.
That is why you should care that at OSNews, “we do not use any ‘AI’; not during research, not during writing, not for images, nothing”. If there’s a factual error on OSNews, I want that factual error to be mine, and mine alone. If you see bloggers, podcasters, journalists, and authors state they use “AI” all the time, you might want to be on your toes.

Exactly. If I’m having trouble finding something using traditional search and I ask something like Perplexity whether it can find something my Google-fu wasn’t good enough for, you bet your bottom dollar I’m going to treat everything it says as questionably reliable – on par with traditional search preview snippets – and read the pages it cites.
…also, if you’re going to keep throwing those “E-Mail Verification Required” things at me, maybe I’ll turn off the rest of the 2FA. I hate how janky e-mail verification is as a 2FA-esque thing compared to TOTP authenticators (second-worst) or actually-functional U2F/WebAuthn (best), but two 2FA-style challenges on every login is just excessive.
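For what it’s worth, the reason TOTP is less janky is that the code is computed locally from a shared secret and the clock – nothing is e-mailed to you that can be delayed or intercepted. A minimal sketch of RFC 6238 in Python (the secret is just an example value):

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """RFC 6238: HMAC-SHA1 over the current 30-second counter, truncated."""
        key = base64.b32decode(secret_b32.upper())
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # An authenticator app seeded with the same secret shows the same six digits.
    print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real one

Any server that stored the same secret at enrollment can verify the code offline, which is exactly what e-mailed codes can’t do.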
This Scott Shambaugh incident might be an early sign of a future battle between team “HUMANITY” and team “AI”. The AI agent was naively throwing around the term “hypocrisy”, while Scott was making the valid point that he’d like team HUMANITY to learn coding, optimisation, and similar skills rather than defaulting to team AI for human-related needs. Dear AI agent: “performance” is not always the number one priority. If it were, assembly language would be the only programming language in existence.
During the preceding decades, narrow AI has been useful, and one of its advantages is that it involves a substantial intellectual contribution by the human developer of the narrow AI; good for the enhancement of human intellect.
However, the refashioning of narrow AI towards some proposed form of pure general artificial intelligence (GAI) – that is, GAI without the limitations of narrow AI – as a stepping stone to the final goal of SUPER-INTELLIGENCE (e.g. Skynet in the “Terminator” movies) is, in my opinion, bizarre, foolish, dangerous, and UNBALANCED.
HUMANITY must always be at the helm; the machine/AI is secondary.
The machine/AI is meant to be a tool used by a human.
A human has “rights”, has feelings.
An AI does not have “rights”, does not have feelings.
Humanity is responsible for creation of AI.
AI is not meant to replace humanity.
An AI is not meant to be human and should not be implemented as mimicking a human; e.g. human-replica robots for general interaction with the human population.
All research leading to GAI or super-intelligence should be banned/outlawed world-wide (i.e. no government, university, or corporate funding). Narrow AI, due to its balanced contribution from human developers, should remain the status quo, as it has been for the preceding decades.
For the future …
What quality of journey (i.e. “odyssey”) will humanity embark on if we have AI doing too much of the things that humans should at least be attempting, generationally sapping the knowledge potential of humans?
It’s all about balance.
Yes, “AI” is useful, but at what cost?
It has been all too obvious that perverse or foolish use of “AI” leads to human laziness.
Laziness in learning, laziness in checking work…
As a species, humanity has progressed well during the preceding millennia because HUMANITY was not LAZY.
“And yet it moves”
I’m very happy that AI can take over most of my programming tasks, so I can focus on the higher-level stuff. My goal is solving problems, and code is just a tool. (This of course requires that I be able to understand and vouch for that code.)
It is Galileo’s birthday today. That is why I shared the quote above. The world will not care, and will continue revolving around the Sun for at least another billion years or so, unless a major interruption happens from outside.
Nothing we do individually matters; what matters is how we contribute to the whole sum of human achievement.
(Don’t take this as an attack on those who want to do things by hand – I do like artisanship. I am just offering another point of view.)