Clown car Archive
An extensive study by the European Broadcasting Union and the BBC highlights just how deeply inaccurate and untrustworthy “AI” news results really are. “AI” sucks even at its most basic function. It’s incredible how much money is being pumped into this scam, and how many people are wholeheartedly defending these bullshit generators as if their lives depended on it. If these tools can’t even summarise a text – a basic skill you learn in early primary school – how on earth are they supposed to perform more complex tasks like coding, making medical assessments, or distinguishing between a chips bag and a gun? Maybe we deserve it.
If you’re eating a bag of chips in an area where “AI” software is being used to monitor people’s behaviour, you might want to reconsider. Some high school kid in the US was hanging out with his friends when, all of a sudden, he was swarmed by police officers with guns drawn. Held at gunpoint, he was told to lie down, after which he was detained. Obviously, this is a rather unpleasant experience, to say the least, especially considering the kid in question is a person of colour. In the US. Anyway, the “AI” software used by the police department to monitor citizens’ behaviour mistook an empty chips bag in his pocket for a gun. US police officers, who only receive a few weeks of training, didn’t question what the computer told them and pointed guns at a teenager.

In a statement, Omnilert expressed regret over the incident, acknowledging that the image “closely resembled a gun being held.” The company called it a “false positive,” but defended the system’s response, stating it “functioned as intended: to prioritize safety and awareness through rapid human verification.” ↫ Alexa Dikos and Rebecca Pryor at FOX45 News

I’ve been warning that the implementation of “AI” was going to lead to people dying, and while this poor kid got lucky this time, you know it’s only a matter of time before people start getting shot by US police because they’re too stupid to question their computer overlords. Add in the fact that “AI” is well-known to be deeply racist, and we have a very deadly cocktail of failures.
I can exclusively reveal today Anthropic’s spending on Amazon Web Services for the entirety of 2024, and for every month in 2025 up until September, and that Anthropic’s spend on compute far exceeds that previously reported. Furthermore, I can confirm that through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue. ↫ Ed Zitron

These numbers do not even include what the company spends on Google’s services. Going through all the numbers and reporting, Zitron explains that the more “successful” Anthropic becomes, the bigger the gap grows between income from paying customers and its spending on Amazon and Google services. It’s simply unsustainable, and the longer we keep this scam going, the worse the consequences will be when the bubble pops. Sadly, nobody will go to jail when all hell breaks loose.
OpenAI has recently announced deals worth $600 billion with Nvidia, AMD, and Oracle. OpenAI is able to spend hundreds of billions of dollars they do not have because those companies are paying that same money back to OpenAI via investment. The infinite money glitch means that stocks keep going higher as more circular revenue cycles between the same players. ↫ Sasha Yanshin

The scam is so brazen, so public, so obvious. The foxes aren’t just in the hen house – they bought the whole goddamn hen house.
Framework, the maker of repairable laptops, is embroiled in a controversy, as the company and its CEO are openly supporting people with, well, questionable views.

If you know a little bit about PR in the social media space, you might note that, right out of the gate, a project by a vocal white nationalist known for splitting communities by their mere presence is not a great highlight choice for an overtly non-left-right-political company like Framework. Does it get worse from here? Sadly, it does. ↫ Arya Bread Crumbs

The questionable views we’re talking about here are… Let’s just say we’re not talking about milquetoast stuff like “we should be a bit stricter with immigration” or “lower taxes on the rich”, but views that are far, far outside of the mainstream in most places in the world. Framework has stated in no uncertain terms that it supports and embraces people like this. That’s a choice the company is entirely free to make, but it also means that I, and many with me, are entirely free to choose not to buy or promote Framework’s products. I still sincerely hope that all of this is just a massive breakdown of PR and common sense at Framework and its CEO, but since they’ve already doubled down, I’m not holding my breath. This whole thing is going to haunt them, especially since I’m fairly sure a huge chunk of their community and users – who are buying into hardware that is, in truth, overpriced – are not even remotely aligned with such extremist views. I care deeply about Framework’s mission, but I don’t give a single rat’s ass about Framework itself. There are countless alternatives to Framework, some of which I’ve even reviewed here (like the MNT Reform or the NovaCustom V54), and if you, too, feel a deep sense of the ick when it comes to supporting extremist views like these, I urge you to consider them.
Every single “vibe coding is the future,” “the power of AI,” and “AI job loss” story written perpetuates a myth that will only lead to more regular people getting hurt when the bubble bursts. Every article written about OpenAI or NVIDIA or Oracle that doesn’t explicitly state that the money doesn’t exist, that the revenues are impossible, that one of the companies involved burns billions of dollars and has no path to profitability, is an act of irresponsible make believe and mythos. ↫ Edward Zitron

The numbers are clear. People aren’t paying for “AI”, and those who do are using up way more resources than they’re actually paying for. The profits required to make all of this work just aren’t realistic in any way, shape, or form. The money being pumped around doesn’t even exist. It’s a scam of such utterly massive proportions that it’s easier for many of us to just assume it can’t possibly be one. Too big to fail? Too many promises to be a scam. It’s going to be a bloodbath, but as usual when the finance and tech bros scam entire sectors, it’s us normal folk who will be left to foot the bill. Let’s blame immigrants some more while we implement harsh austerity measures to bail out the billionaire class. Again.
We’re all being told that “AI” is revolutionizing programming. Whether the marketing comes from Cursor, Copilot, Claude, Google, or the countless other players in this area, it all emphasizes the massive productivity and speed gains programmers who use “AI” tools will achieve. The relentless marketing is clearly influencing managers and programmers alike, with the former forcing “AI” down their subordinates’ throats, and the latter claiming to see absolutely bizarre productivity gains. The impact of the marketing is real – people are being fired, programmers are expected to be ridiculously more productive without commensurate pay raises, and anyone questioning this new corporate gospel will probably end up on the chopping block next. It’s like the industry has become a nunnery, and all the nuns are meowing like cats.

The reality seems to be, though, that none of these “AI” programming tools are making anyone more productive. Up until recently, Mike Judge truly believed “AI” was making him a much more productive programmer – until he ran the numbers on his own work and realised he was not one bit more productive at all. His point: if the marketing is true, and programmers are indeed becoming vastly more productive, where’s the evidence?

And yet, despite the most widespread adoption one could imagine, these tools don’t work. My argument: If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware? We should be seeing apps of all shapes and sizes, video games, new websites, mobile apps, software-as-a-service apps — we should be drowning in choice. We should be in the middle of an indie software revolution. We should be seeing 10,000 Tetris clones on Steam. ↫ Mike Judge

He proceeded to collect tons of data about new software releases on the iOS App Store, the Play Store, Steam, GitHub, and so on, as well as the number of domain registrations, and the numbers paint a very different picture from the exuberant marketing. Every single metric is flat. There’s no spike in new games, new applications, new repositories, or new domain registrations. It’s all proceeding as if “AI” had had zero effect on productivity. This whole thing is bullshit.

So if you’re a developer feeling pressured to adopt these tools — by your manager, your peers, or the general industry hysteria — trust your gut. If these tools feel clunky, if they’re slowing you down, if you’re confused how other people can be so productive, you’re not broken. The data backs up what you’re experiencing. You’re not falling behind by sticking with what you know works. If you’re feeling brave, show your manager these charts and ask them what they think about it. If you take away anything from this it should be that (A) developers aren’t shipping anything more than they were before (that’s the only metric that matters), and (B) if someone — whether it’s your CEO, your tech lead, or some Reddit dork — claims they’re now a 10xer because of AI, that’s almost assuredly untrue, demand they show receipts or shut the fuck up. ↫ Mike Judge

Extraordinary claims require extraordinary evidence, and the evidence just isn’t there. The corporate world has an endless list of productivity metrics – some more reliable than others – and I have the sneaking suspicion we’re only being fed marketing instead of facts because none of those metrics show any impact from “AI” whatsoever; if they did, we know the “AI” pushers wouldn’t shut the fuck up about it.
Show me more than meowing nuns, and I’ll believe the hype is real.
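Judge’s charts are also easy to sanity-check yourself. Here’s a minimal sketch of the same idea, assuming GitHub’s public search API (which reports a total_count for repositories created in a given date range; the dates are illustrative, and unauthenticated requests are heavily rate-limited):

```
# Count new GitHub repositories created in the same month across years.
# If "AI" tools made developers dramatically more productive, these
# numbers should show an obvious spike somewhere along the way.
for month in 2021-06 2022-06 2023-06 2024-06; do
  curl -s "https://api.github.com/search/repositories?q=created:${month}-01..${month}-30" |
    grep -m1 '"total_count"' |
    sed "s/^/${month}: /"
done
```

It’s far cruder than Judge’s methodology, but it looks for the supposed productivity explosion in the one place it can’t hide: the volume of stuff actually shipped.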
A little over a year ago, DC District Court Judge Amit Mehta ruled that Google is a monopolist and violated US antitrust law. Today, Mehta ruled that while Google violated the law, there won’t be any real punishment for the search giant. Google doesn’t have to divest Chrome or Android, it can keep paying third parties to preload its services and products, and it can keep paying Apple $20 billion a year to be the default search engine on iOS.

Mehta declined to grant some of the more ambitious proposals from the Justice Department to remedy Google’s behavior and restore competition to the market. Besides letting Google keep Chrome, he’ll also let the company continue to pay distribution partners for preloading or placement of its search or AI products. But he did order Google to share some valuable search information with rivals that could help jumpstart their ability to compete, and bar the search giant from making exclusive deals to distribute its search or AI assistant products in ways that might cut off distribution for rivals. ↫ Lauren Feiner at The Verge

Mehta granted Google a massive win here, further underlining that as long as you’re wealthy, a corporation, or better yet, both, you are free to break the law and engage in criminal behaviour. The only thing you’ll get is some mild negative press and a gentle pat on the wrist, and you can be on your merry way to continue your illegal behaviour. None of it is surprising, except perhaps the brazenness of the class justice on display. The course of this antitrust case mirrors that of the antitrust case against Microsoft over 25 years ago. Microsoft, too, had a long, documented, and proven history of illegal behaviour, but like Google today, got away with a similar gentle pat on the wrist. It’s likely that the antitrust cases currently running against Apple and Amazon will end in similar gentle pats on the wrist, further solidifying that you can break the law all you want, as long as you’re rich. Thank god the real criminal scum is behind bars.
The following chart shows how the Adobe Reader installer has grown in size over the years. When possible, 64-bit versions of installers were used. ↫ Alexander Gromnitsky Disk space is cheap, sure, but this is insanity.
We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — robots.txt files.

The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling. ↫ The CloudFlare Blog

Never forget they destroyed Aaron Swartz’s life – literally – for downloading a few JSTOR articles.
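For context, the opt-out mechanism being ignored here couldn’t be simpler. A site that doesn’t want Perplexity crawling it publishes something like the following in its robots.txt – a minimal sketch, assuming “PerplexityBot” as the user agent token for Perplexity’s declared crawler:

```
# Ask Perplexity's declared crawler to stay away from the entire site
User-agent: PerplexityBot
Disallow: /
```

The whole robots.txt system is voluntary, which is exactly the point: it only works if crawlers fetch it and honor it, and Cloudflare’s claim is that Perplexity does neither once it’s been blocked.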
“Why is my burger blue?” I asked, innocently. “Oh! We’re making all of our food blue, all the best restaurants are doing it now,” the waiter explained. But I didn’t want my burger to be blue. ↫ Luna Winters

“Blue” food isn’t food.
Clifton Sellers attended a Zoom meeting last month where robots outnumbered humans. He counted six people on the call including himself, Sellers recounted in an interview. The 10 others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe and summarize the meeting. ↫ Lisa Bonos and Danielle Abril at The Bezos Post Management strongly encourages – mandates – that everyone use “AI” to improve productivity, but then gets all uppity when people actually do. Welcome to “finding out”.
London-based Builder.ai, once valued at $1.5 billion and backed by Microsoft and Qatar’s sovereign wealth fund, has filed for bankruptcy after reports that its “AI-powered” app development platform was actually operated by Indian engineers – said to be around 700 of them – pretending to be artificial intelligence.

The startup, which raised over $445 million from investors including Microsoft and the Qatar Investment Authority, promised to make software development “as easy as ordering pizza” through its AI assistant “Natasha”. However, as per the reports, the company’s technology was largely smoke and mirrors: human developers in India manually wrote code based on customer requests while the company marketed their work as AI-generated output. ↫ The Times of India

I hope those 700 engineers manage to get something out of this, but I doubt it. I wouldn’t be surprised if they were unaware they were part of the “AI” scam.
And the “copilot” branding. A real copilot? That’s a peer. That’s a certified operator who can fly the bird if you pass out from bad Taco Bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he’s shouting over your shoulder, “Let me code that bit real quick, I saw it in a Slashdot comment!” At that point, you’re not working with a copilot. You’re playing Russian roulette with a loaded dependency graph. You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit. ↫ Jj at Blogmobly

The world has no clue yet that we’re about to enter a period of incredible decline in software quality. “AI” is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.
Who doesn’t love a bug bounty program? Fix some bugs, get some money – you scratch my back, I pay you for it. The CycloneDX Rust (Cargo) Plugin decided to run one, funded by the Bug Resilience Program run by the Sovereign Tech Fund. That is, until “AI” killed it.

We received almost entirely AI slop reports that are irrelevant to our tool. It’s a library and most reporters didn’t even bother to read the rules or even look at what the intended purpose of the tool is/was. This caused a lot of extra work which is why we decided to abandon the program. Thanks AI. ↫ Lars Francke

On a slightly related note, I had to search the web today because I’m having some issues getting OpenIndiana to boot properly on my mini PC. For whatever reason, starting LightDM fails when booting the live USB, and LightDM’s log is giving some helpful error messages. So, I searched for "failed to get list of logind seats" openindiana, and Google’s automatic “AI Overview” ‘feature’, which takes up everything above the fold and is thus impossible to miss, confidently told me to check the status of the logind service… With systemctl. We’ve automated stupidity.
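For those keeping score at home: OpenIndiana is an illumos distribution, where services are managed by SMF, and systemctl doesn’t exist at all. A minimal sketch of what actually checking that failing LightDM service looks like (assuming the SMF instance name contains “lightdm”; the exact FMRI may differ):

```
# OpenIndiana/illumos uses SMF, not systemd, so there is no systemctl.
svcs -xv lightdm          # explain why the service is down or in maintenance
tail "$(svcs -L lightdm)" # read the service's SMF log file
svcadm clear lightdm      # clear the maintenance state after fixing the cause
```

The “AI Overview” answer wasn’t just unhelpful; it was advice for an entirely different operating system.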
Let’s check in on TrueNAS, which apparently employs “AI” to handle customer service tickets. Kyle Kingsbury had to have dealings with TrueNAS’ customer support, and it was a complete trashfire of irrelevance and obviously wrong answers, spiraling all the way into utter lies. The “AI” couldn’t generate its way out of a paper bag, and for a paying customer who is entitled to support, that’s not a great experience. Kingsbury concludes:

I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives. Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers. ↫ Kyle Kingsbury

This time, it’s just about an upgrade process for a NAS, and the worst possible outcome “AI”-generated bullshit could lead to is a few lost files. Potentially disastrous on a personal level for the customer involved, but not exactly a massive problem. However, once we’re talking support for medical devices, medication, dangerous power tools, and worse, this could – and trust me, will – lead to injury and death. TrueNAS, for its part, contacted Kingsbury after his blog post blew up, and assured him that “their support process does not normally incorporate LLMs”, and that they would investigate internally what, exactly, happened. I hope the popularity of Kingsbury’s post has jolted whoever is responsible for customer service at TrueNAS into realising that farming out customer service to text generators is a surefire way to damage your reputation.
You want more “AI”? No? Well, too damn bad, here’s “AI” in your file manager.

With AI actions in File Explorer, you can interact more deeply with your files by right-clicking to quickly take actions like editing images or summarizing documents. Like with Click to Do, AI actions in File Explorer allow you to stay in your flow while leveraging the power of AI to take advantage of editing tools in apps or Copilot functionality without having to open your file. AI actions in File Explorer are easily accessible – to try out AI actions in File Explorer, just right-click on a file and you will see a new AI actions entry on the context menu that allows you to choose from available options for your file. ↫ Amanda Langowski and Brandon LeBlanc at the Windows Blogs

What, you don’t like it? There, “AI” that reads all your email and sifts through your Google Drive to barf up stunted, soulless replies.

Gmail’s smart replies, which suggest potential replies to your emails, will be able to pull information from your Gmail inbox and from your Google Drive and better match your tone and style, all with help from Gemini, the company announced at I/O. ↫ Jay Peters at The Verge

Ready to submit? No? Your browser now has “AI” integrated and will do your browsing for you.

Starting tomorrow, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. This first version allows you to easily ask Gemini to clarify complex information on any webpage you’re reading or summarize information. In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf. ↫ Josh Woodward

Mercy? You want mercy? You sure give up easily, but we’re not done yet. We destroyed internet search and now we’re replacing it with “AI”, and you will like it.

Announced today at Google I/O, AI Mode is now available to all US users. The focused version of Google Search distills results into AI-generated summaries with links to certain topics. Unlike AI Overviews, which appear above traditional search results, AI Mode is a dedicated interface where you interact almost exclusively with AI. ↫ Ben Schoon at 9To5Google

We’re going to assume control of your phone, too.

The technology powering Gemini Live’s camera and screen sharing is called Project Astra. It’s available as an Android app for trusted testers, and Google today unveiled agentic capabilities for Project Astra, including how it can control your Android phone. ↫ Abner Li at 9To5Google

And just to make sure our “AI” can control your phone, we’ll let it instruct developers how to make applications, too.

That’s precisely the problem Stitch aims to solve – Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes. ↫ Vincent Nallatamby, Arnaud Benard, and Sam El-Husseini

You are not needed. You will be replaced. Submit.
Daniel Stenberg, creator and maintainer of curl, has had enough of the neverending torrent of “AI”-generated security reports the curl project has to deal with.

That’s it. I’ve had it. I’m putting my foot down on this craziness.

1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)

2. We now ban every reporter INSTANTLY who submits reports we deem AI slop.

A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help. ↫ Daniel Stenberg

This is the real impact of “AI”: streams of digital trash real humans have to clean up. While proponents of “AI” keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what “AI” is really doing is creating more work for others to deal with by barfing useless garbage into other people’s backyards. It’s like the digital version of the western world sending its trash to third-world countries to deal with.

The best possible sign that “AI” is a toxic trash heap you wouldn’t want to have anything to do with is the people fighting for team “AI”.

In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances. ↫ Meghan Bobrowsky at the WSJ

Mark Zuckerberg, who built his empire by using people’s photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists’ email accounts because they were about to publish a negative story about him, and who called Facebook users “dumb fucks” for entrusting their personal information to him, is at the forefront of the fight for “AI”. If that isn’t the ultimate proof there’s something deeply wrong and ethically unsound about “AI”, I don’t know what is.
CISA says the U.S. government has extended MITRE’s funding to ensure no continuity issues with the critical Common Vulnerabilities and Exposures (CVE) program. The announcement follows a warning from MITRE Vice President Yosry Barsoum that government funding for the CVE and CWE programs was set to expire today, April 16, potentially leading to widespread disruption across the cybersecurity industry. ↫ Sergiu Gatlan at BleepingComputer Elect clowns, live in a circus.
Elon Musk’s Tesla is waving a red flag, warning that Donald Trump’s trade war risks dooming US electric vehicle makers, triggering job losses, and hurting the economy. In an unsigned letter to the US Trade Representative (USTR), Tesla cautioned that Trump’s tariffs could increase costs of manufacturing EVs in the US and forecast that any retaliatory tariffs from other nations could spike costs of exports. ↫ Ashley Belanger at Ars Technica

Back in 2020, scientists at the University of Twente, in the Netherlands, created the smallest string instrument that can produce tones audible to human ears when amplified. Its strings were a mere micrometer thin – one millionth of a meter – and about half a millimeter to one millimeter long. Using a system of tiny weights and combs to produce tiny vibrations, it can create tones. And yet, this tiny violin still isn’t small enough for Tesla.