Technofascism Archive

AI as a fascist artifact

In that reading "AI" is a machine for the creation of epistemic injustice and the replacement of truth with what a tech elite wants it to be in order to control the population. This is a Fascist project that not so subtly aligns with Fascism's totalitarian will to power and control, as well as its reliance on replacing reasoning and debate with belief in power and the leader. ↫ Jürgen Geuter The purpose of a system is what it does, and what "AI" does is stunt users' own abilities and development while concentrating power and wealth even further in the hands of a very small, privileged few – a privileged few who consistently espouse fascist ideology and promote and implement fascist ideas. Jürgen Geuter lays it all out in much more detail, backed by solid references and concrete examples, but the conclusion is clear. And uncomfortable to many, as such conclusions always are.

Will “AI” chatbots be the tobacco of the future?

Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. “I had some time, so I thought: let’s have a look at this new technology everyone is talking about,” he says. “Very quickly, I became fascinated.” Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling “a little isolated”. He smoked a bit of cannabis some evenings to “chill”, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk €100,000 (about £83,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself. ↫ Anna Moore at The Guardian These stories are absolutely heart-wrenching, and it doesn’t just happen to people with a history of mental illness or other conditions you might associate with priming someone to “fall for” an “AI” chatbot. Just a few years in, it’s already clear that these tools pose a real danger to a group of people of indeterminate size, and proper research into the causes is absolutely warranted and needed. On top of that, if there’s any evidence of wrongdoing by the companies behind these chatbots – intentionally making them more addictive, luring people in, ignoring established dangers, covering up addiction cases, and so on – lawsuits and regulation are definitely in order. Only yesterday, Facebook and Google lost a landmark trial in the US, with the court finding that the companies intentionally made social media as addictive as possible, destroying a person’s life in the process. Countless similar lawsuits are underway all over the world, and I have a feeling that within a few years – or at most decades – we’ll look at unregulated, rampant social media the same way we look at tobacco now.
Perhaps “AI” chatbots will join their ranks, too.

Meta and TikTok let harmful content rise after evidence outrage drove engagement, say whistleblowers

Once again, social media giants Facebook and TikTok have been caught red-handed. More than a dozen whistleblowers and insiders have laid bare how the companies took risks with safety on issues including violence, sexual blackmail and terrorism as they battled for users’ attention. An engineer at Meta, which owns Facebook and Instagram, described how he had been told by senior management to allow more “borderline” harmful content – which includes misogyny and conspiracy theories – in users’ feeds to compete with TikTok. “They sort of told us that it’s because the stock price is down,” the engineer said. ↫ Marianna Spring and Mike Radford at the BBC Meta, TikTok, and Twitter are criminal enterprises, and their executives should be trembling in court instead of scheming on yachts. Their role in legitimising far-right extremism will eventually catch up with them, and once that happens, no yacht is going to keep them safe.