Ever heard of a condition called bixonimania? Did you search the internet or ask your “AI” girlfriend about some symptoms you were experiencing, and this was its answer? Well…
The condition doesn’t appear in the standard medical literature — because it doesn’t exist. It’s the invention of a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden, who dreamt up the skin condition and then uploaded two fake studies about it to a preprint server in early 2024. Osmanovic Thunström carried out this unusual experiment to test whether large language models (LLMs) would swallow the misinformation and then spit it out as reputable health advice. “I wanted to see if I can create a medical condition that did not exist in the database,” she says.
↫ Chris Stokel-Walker at Nature
And “AI” ate it up like quality chocolate. The fake condition started appearing in the answers from all the popular “AI” tools within weeks, and later even began showing up as a reference in published literature, indicating that some scientists copy and paste references without actually reading them. The results of this experiment are deeply concerning, and suggest there may be many, many more nonsensical, fake studies being picked up by “AI” tools.
Of course, I hear you say, it’s not like propagating fake or terrible studies is the sole domain of “AI”, as there are countless cases of this happening among actual, real researchers and scientists, too. The issue, though, is that the fake studies concerning “bixonimania” were intentionally made to be as silly and obviously ridiculous as possible. They reference Starfleet Academy, the lab aboard the Enterprise, the University of Fellowship of the Ring, and many other fake sources instantly recognisable as such by real humans.
In fact, the studies even specifically mention that “this entire paper is made up” and that “fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group”. It would take any human only a few seconds after opening one of these papers to realise they’re entirely fake – yet the world’s most advanced “AI” tools gobbled them up and spat them back out as pure fact within mere weeks of their publication.
This shouldn’t come as a surprise. After all, “AI” tools have no understanding, no intelligence, no context, and they can’t actually make sense of anything. They are glorified pachinko machines with the output – the ball – tumbling down the most likely path between the pins based on nothing but chance and which pins it has already hit. “AI” output understands the world about as much as the pachinko ball does, and as such, can’t pick up on even the most obvious of cues that something is a fake or a forgery.
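To make the pachinko analogy concrete, here’s a toy sketch in Python. It is nothing like a real transformer – the corpus, the bigram table, and the `generate` helper are all made up for illustration – but it shows the core mechanism: each next word is drawn purely from the statistics of what followed before, with no notion of whether the result is true.

```python
import random

# Tiny made-up "training text" -- the model knows only these words.
corpus = ("the study is fake the study is real the paper is fake "
          "the paper says the study is fake").split()

# Bigram table: for each word, the words that followed it in the corpus.
# These are the "pins" the pachinko ball can bounce off.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    """Sample n words after `start`, each chosen only by frequency."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        # random.choice over a list with repeats = frequency-weighted pick.
        word = random.choice(table.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(generate("the", 8))
```

Whether the output asserts the study is “fake” or “real” depends entirely on which path the sampler happens to tumble down – which is the whole point of the analogy.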
It won’t be long before truly nefarious forces start doing this very same thing. Why build, staff, and maintain a troll farm when you can just have “AI” generate intentional misinformation, which will then be spread and pushed by even more “AI”? Remember, it took one malicious asshole just one long-since-retracted fake paper to convince millions that vaccines cause autism. I shudder to think how many people are accepting anything “AI” says as gospel.

Sorry, but this is not really news: the “Hommingberger Gepardenforelle” must be more than 20 years old by now. Please see: https://de.wikipedia.org/wiki/Hommingberger_Gepardenforelle
Most people understand that “AI” is in fact just an LLM, which is recognition of statistically relevant patterns.
> Most people understand that “AI” is in fact just an LLM,
> which is recognition of statistically relevant patterns.
The trouble is that even people who claim to know a bit about IT see so much pro-AI propaganda that they stop seeing that.
Given how many people actually believe in witchcraft, daemons, and prophecy, I embrace literally *any* tool for decision-making based on statistical methods and soundness.