Clown car Archive

Vibecoding: nothing more than meowing nuns

We’re all being told that “AI” is revolutionizing programming. Whether the marketing is coming from Cursor, Copilot, Claude, Google, or the countless other players in this area, it’s all emphasizing the massive productivity and speed gains programmers who use “AI” tools will achieve. The relentless marketing is clearly influencing managers and programmers alike, with the former forcing “AI” down their subordinates’ throats, and the latter claiming to see absolutely bizarre productivity gains. The impact of the marketing is real – people are being fired, programmers are expected to be ridiculously more productive without commensurate pay raises, and anyone questioning this new corporate gospel will probably end up on the chopping block next. It’s like the industry has become a nunnery, and all the nuns are meowing like cats.

The reality seems to be, though, that none of these “AI” programming tools are making anyone more productive. Until recently, Mike Judge truly believed “AI” was making him a much more productive programmer – until he ran the numbers on his own work and realised he was not one bit more productive at all. His point: if the marketing is true, and programmers are indeed becoming vastly more productive, where’s the evidence?

And yet, despite the most widespread adoption one could imagine, these tools don’t work. My argument: If so many developers are so extraordinarily productive using these tools, where is the flood of shovelware? We should be seeing apps of all shapes and sizes, video games, new websites, mobile apps, software-as-a-service apps — we should be drowning in choice. We should be in the middle of an indie software revolution. We should be seeing 10,000 Tetris clones on Steam.

↫ Mike Judge

He proceeded to collect tons of data about new software releases on the iOS App Store, the Play Store, Steam, GitHub, and so on, as well as the number of domain registrations, and the numbers paint a very different picture from the exuberant marketing. Every single metric is flat. There’s no spike in new games, new applications, new repositories, new domain registrations. It’s all proceeding as if “AI” had had zero effect on productivity.

This whole thing is bullshit. So if you’re a developer feeling pressured to adopt these tools — by your manager, your peers, or the general industry hysteria — trust your gut. If these tools feel clunky, if they’re slowing you down, if you’re confused how other people can be so productive, you’re not broken. The data backs up what you’re experiencing. You’re not falling behind by sticking with what you know works. If you’re feeling brave, show your manager these charts and ask them what they think about it. If you take away anything from this it should be that (A) developers aren’t shipping anything more than they were before (that’s the only metric that matters), and (B) if someone — whether it’s your CEO, your tech lead, or some Reddit dork — claims they’re now a 10xer because of AI, that’s almost assuredly untrue; demand they show receipts or shut the fuck up.

↫ Mike Judge

Extraordinary claims require extraordinary evidence, and the evidence just isn’t there. The corporate world has an endless list of productivity metrics – some more reliable than others – and I have the sneaking suspicion we’re only fed marketing instead of facts because none of those metrics are showing any impact of “AI” whatsoever, because if they did, we know the “AI” pushers wouldn’t shut the fuck up about it. Show me more than meowing nuns, and I’ll believe the hype is real.

Class justice: Google gets away with a gentle pat on the wrist for its illegal monopoly abuse

A little over a year ago, DC District Court Judge Amit Mehta ruled that Google is a monopolist and violated US antitrust law. Today, Mehta ruled that while Google violated the law, there won’t be any real punishment for the search giant. Google doesn’t have to divest Chrome or Android, it can keep paying third parties to preload its services and products, and it can keep paying Apple roughly $20 billion a year to be the default search engine on iOS.

Mehta declined to grant some of the more ambitious proposals from the Justice Department to remedy Google’s behavior and restore competition to the market. Besides letting Google keep Chrome, he’ll also let the company continue to pay distribution partners for preloading or placement of its search or AI products. But he did order Google to share some valuable search information with rivals that could help jumpstart their ability to compete, and bar the search giant from making exclusive deals to distribute its search or AI assistant products in ways that might cut off distribution for rivals.

↫ Lauren Feiner at The Verge

Mehta granted Google a massive win here, further underlining that as long as you’re wealthy, a corporation, or better yet, both, you are free to break the law and engage in criminal behaviour. The only thing you’ll get is some mild negative press and a gentle pat on the wrist, and you can be on your merry way to continue your illegal behaviour. None of it is surprising, except perhaps for the brazenness of the class justice on display here. The events and course of this antitrust case mirror those of the antitrust case against Microsoft, over 25 years ago. Microsoft, too, had a long, documented, and proven history of illegal behaviour, but like Google today, it got away with a similar gentle pat on the wrist.

It’s likely that the antitrust cases currently running against Apple and Amazon will end in similar gentle pats on the wrist, further solidifying that you can break the law all you want, as long as you’re rich. Thank god the real criminal scum is behind bars.

Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives

We are observing stealth crawling behavior from Perplexity, an AI-powered answer engine. Although Perplexity initially crawls from their declared user agent, when they are presented with a network block, they appear to obscure their crawling identity in an attempt to circumvent the website’s preferences. We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — robots.txt files.

The Internet as we have known it for the past three decades is rapidly changing, but one thing remains constant: it is built on trust. There are clear preferences that crawlers should be transparent, serve a clear purpose, perform a specific activity, and, most importantly, follow website directives and preferences. Based on Perplexity’s observed behavior, which is incompatible with those preferences, we have de-listed them as a verified bot and added heuristics to our managed rules that block this stealth crawling.

↫ The Cloudflare Blog

Never forget they destroyed Aaron Swartz’s life – literally – for downloading a few JSTOR articles.
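For context, the “website directives” Cloudflare refers to are usually expressed in a site’s robots.txt file. A minimal sketch that declines Perplexity’s declared crawlers might look like this (the user agent tokens below are the ones Perplexity publicly documents; treat them as an assumption rather than a guaranteed-complete list):

```
# robots.txt – ask Perplexity's declared crawlers to stay out.
# The limitation described above applies: a stealth crawler that
# rotates its user agent string simply never matches these rules,
# which is why Cloudflare resorted to heuristic blocking instead.
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /
```

robots.txt is purely advisory – it only works when the crawler honors it, which is exactly the trust Cloudflare argues is being broken here.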

AI note takers are flooding Zoom calls as workers opt to skip meetings

Clifton Sellers attended a Zoom meeting last month where robots outnumbered humans. He counted six people on the call including himself, Sellers recounted in an interview. The 10 others attending were note-taking apps powered by artificial intelligence that had joined to record, transcribe and summarize the meeting.

↫ Lisa Bonos and Danielle Abril at The Bezos Post

Management strongly encourages – mandates – that everyone use “AI” to improve productivity, but then gets all uppity when people actually do. Welcome to “finding out”.

“AI” coding chatbot funded by Microsoft was actually Indian engineers

London-based Builder.ai, once valued at $1.5 billion and backed by Microsoft and Qatar’s sovereign wealth fund, has filed for bankruptcy after reports that its “AI-powered” app development platform was actually operated by Indian engineers, said to be around 700 of them, pretending to be artificial intelligence. The startup, which raised over $445 million from investors including Microsoft and the Qatar Investment Authority, promised to make software development “as easy as ordering pizza” through its AI assistant “Natasha.” However, as per the reports, the company’s technology was largely smoke and mirrors: human developers in India manually wrote code based on customer requests while the company marketed their work as AI-generated output.

↫ The Times of India

I hope those 700 engineers manage to get something out of this, but I doubt it. I wouldn’t be surprised if they were unaware they were part of the “AI” scam.

The Copilot delusion

And the “copilot” branding. A real copilot? That’s a peer. That’s a certified operator who can fly the bird if you pass out from bad Taco Bell. They train. They practice. They review checklists with you. GitHub Copilot is more like some guy who played Arma 3 for 200 hours and thinks he can land a 747. He read the manual once. In Mandarin. Backwards. And now he’s shouting over your shoulder, “Let me code that bit real quick, I saw it in a Slashdot comment!” At that point, you’re not working with a copilot. You’re playing Russian roulette with a loaded dependency graph. You want to be a real programmer? Use your head. Respect the machine. Or get out of the cockpit.

↫ Jj at Blogmobly

The world has no clue yet that we’re about to enter a period of incredible decline in software quality. “AI” is going to do more damage to this industry than ten Electron frameworks and 100 managers combined.

Google’s “AI” is convinced Solaris uses systemd

Who doesn’t love a bug bounty program? Fix some bugs, get some money – you scratch my back, I pay you for it. The CycloneDX Rust (Cargo) Plugin decided to run one, funded by the Bug Resilience Program run by the Sovereign Tech Fund. That is, until “AI” killed it.

We received almost entirely AI slop reports that are irrelevant to our tool. It’s a library and most reporters didn’t even bother to read the rules or even look at what the intended purpose of the tool is/was. This caused a lot of extra work which is why we decided to abandon the program. Thanks AI.

↫ Lars Francke

On a slightly related note, I had to search the web today because I’m having some issues getting OpenIndiana to boot properly on my mini PC. For whatever reason, starting LightDM fails when booting the live USB, and LightDM’s log is giving some helpful error messages. So, I searched for "failed to get list of logind seats" openindiana, and Google’s automatic “AI Overview” ‘feature’, which takes up everything above the fold and is thus impossible to miss, confidently told me to check the status of the logind service… With systemctl. We’ve automated stupidity.
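For readers unfamiliar with the platform: OpenIndiana is an illumos distribution descended from Solaris, and services there are managed by SMF, not systemd – there is no systemctl and no logind to check. A rough sketch of what actually diagnosing a failing LightDM service looks like with SMF (the exact service name and log path for LightDM are assumptions for illustration; `svcs` will report the real ones):

```
# OpenIndiana/illumos uses SMF, not systemd. These are the SMF
# counterparts to the systemctl advice Google's "AI" confidently gave.
# (Service name and log path are illustrative; verify with `svcs`.)

svcs -xv                      # explain services that are degraded or in maintenance
svcs -l lightdm               # state, dependencies, and log file of the service
tail /var/svc/log/application-graphical-login-lightdm:default.log
svcadm clear lightdm          # clear a maintenance state so SMF retries
svcadm restart lightdm        # or restart the service outright
```

None of this is obscure knowledge – it’s in the first paragraphs of any illumos administration guide – which makes the confident systemctl answer all the more damning.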

TrueNAS uses “AI” for customer support, and of course it goes horribly wrong

Let’s check in on TrueNAS, who apparently employ “AI” to handle customer service tickets. Kyle Kingsbury had to have dealings with TrueNAS’ customer support, and it was a complete trashfire of irrelevance and obviously wrong answers, spiraling all the way into utter lies. The “AI” couldn’t generate its way out of a paper bag, and for a paying customer who is entitled to support, that’s not a great experience. Kingsbury concludes:

I get it. Support is often viewed as a cost center, and agents are often working against a brutal, endlessly increasing backlog of tickets. There is pressure at every level to clear those tickets in as little time as possible. Large Language Models create plausible support responses with incredible speed, but their output must still be reviewed by humans. Reviewing large volumes of plausible, syntactically valid text for factual errors is exhausting, time-consuming work, and every few minutes a new ticket arrives. Companies must do more with less; what was once a team of five support engineers becomes three. Pressure builds, and the time allocated to review the LLM’s output becomes shorter and shorter. Five minutes per ticket becomes three. The LLM gets it mostly right. Two minutes. Looks good. Sixty seconds. Click submit. There are one hundred eighty tickets still in queue, and behind every one is a disappointed customer, and behind that is the risk of losing one’s job. Thirty seconds. Submit. Submit. The metrics do not measure how many times the system has lied to customers.

↫ Kyle Kingsbury

This time, it’s just about an upgrade process for a NAS, and the worst possible outcome this “AI”-generated bullshit could lead to is a few lost files. Potentially disastrous on a personal level for the customer involved, but not exactly a massive problem. However, once we’re talking support for medical devices, medication, dangerous power tools, and worse, this could – and trust me, will – lead to injury and death.

TrueNAS, for its part, contacted Kingsbury after his blog post blew up, and assured him that “their support process does not normally incorporate LLMs”, and that they would investigate internally what, exactly, happened. I hope the popularity of Kingsbury’s post has jolted whoever is responsible for customer service at TrueNAS into realising that farming out support to text generators is a surefire way to damage your reputation.

You are not needed

You want more “AI”? No? Well, too damn bad, here’s “AI” in your file manager.

With AI actions in File Explorer, you can interact more deeply with your files by right-clicking to quickly take actions like editing images or summarizing documents. Like with Click to Do, AI actions in File Explorer allow you to stay in your flow while leveraging the power of AI to take advantage of editing tools in apps or Copilot functionality without having to open your file. AI actions in File Explorer are easily accessible – to try out AI actions in File Explorer, just right-click on a file and you will see a new AI actions entry on the context menu that allows you to choose from available options for your file.

↫ Amanda Langowski and Brandon LeBlanc at the Windows Blogs

What, you don’t like it? There, “AI” that reads all your email and sifts through your Google Drive to barf up stunted, soulless replies.

Gmail’s smart replies, which suggest potential replies to your emails, will be able to pull information from your Gmail inbox and from your Google Drive and better match your tone and style, all with help from Gemini, the company announced at I/O.

↫ Jay Peters at The Verge

Ready to submit? No? Your browser now has “AI” integrated and will do your browsing for you.

Starting tomorrow, Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS. This first version allows you to easily ask Gemini to clarify complex information on any webpage you’re reading or summarize information. In the future, Gemini will be able to work across multiple tabs and navigate websites on your behalf.

↫ Josh Woodward

Mercy? You want mercy? You sure give up easily, but we’re not done yet. We destroyed internet search and now we’re replacing it with “AI”, and you will like it.

Announced today at Google I/O, AI Mode is now available to all US users. The focused version of Google Search distills results into AI-generated summaries with links to certain topics. Unlike AI Overviews, which appear above traditional search results, AI Mode is a dedicated interface where you interact almost exclusively with AI.

↫ Ben Schoon at 9to5Google

We’re going to assume control of your phone, too.

The technology powering Gemini Live’s camera and screen sharing is called Project Astra. It’s available as an Android app for trusted testers, and Google today unveiled agentic capabilities for Project Astra, including how it can control your Android phone.

↫ Abner Li at 9to5Google

And just to make sure our “AI” can control your phone, we’ll let it instruct developers how to make applications, too.

That’s precisely the problem Stitch aims to solve – Stitch is a new experiment from Google Labs that allows you to turn simple prompt and image inputs into complex UI designs and frontend code in minutes.

↫ Vincent Nallatamby, Arnaud Benard, and Sam El-Husseini

You are not needed. You will be replaced. Submit.

curl bans “AI” security reports as Zuckerberg claims we’ll all have more “AI” friends than real ones

Daniel Stenberg, creator and maintainer of curl, has had enough of the neverending torrent of “AI”-generated security reports the curl project has to deal with.

That’s it. I’ve had it. I’m putting my foot down on this craziness.

1. Every reporter submitting security reports on Hackerone for curl now needs to answer this question: “Did you use an AI to find the problem or generate this submission?” (and if they do select it, they can expect a stream of proof of actual intelligence follow-up questions)

2. We now ban every reporter INSTANTLY who submits reports we deem AI slop.

A threshold has been reached. We are effectively being DDoSed. If we could, we would charge them for this waste of our time. We still have not seen a single valid security report done with AI help.

↫ Daniel Stenberg

This is the real impact of “AI”: streams of digital trash real humans have to clean up. While proponents of “AI” keep claiming it will increase productivity, actual studies show this not to be the case. Instead, what “AI” is really doing is creating more work for others to deal with by barfing useless garbage into other people’s backyards. It’s like the digital version of the western world sending its trash to third-world countries to deal with.

The best possible sign that “AI” is a toxic trash heap you wouldn’t want to have anything to do with is the people fighting for team “AI”.

In Zuckerberg’s vision for a new digital future, artificial-intelligence friends outnumber human companions and chatbot experiences supplant therapists, ad agencies and coders. AI will play a central role in the human experience, the Facebook co-founder and CEO of Meta Platforms has said in a series of recent podcasts, interviews and public appearances.

↫ Meghan Bobrowsky at the WSJ

Mark Zuckerberg, who built his empire by using people’s photos without permission so he could rank who was hotter, who used Facebook logins to break into journalists’ email accounts because they were about to publish a negative story about him, and who called Facebook users “dumb fucks” for entrusting their personal information to him, is at the forefront of the fight for “AI”. If that isn’t the ultimate proof there’s something deeply wrong and ethically unsound about “AI”, I don’t know what is.

CISA extends funding to ensure ‘no lapse in critical CVE services’

CISA says the U.S. government has extended MITRE’s funding to ensure no continuity issues with the critical Common Vulnerabilities and Exposures (CVE) program. The announcement follows a warning from MITRE Vice President Yosry Barsoum that government funding for the CVE and CWE programs was set to expire today, April 16, potentially leading to widespread disruption across the cybersecurity industry.

↫ Sergiu Gatlan at BleepingComputer

Elect clowns, live in a circus.

Musk’s Tesla warns Trump’s tariffs and trade wars will harm Tesla

Elon Musk’s Tesla is waving a red flag, warning that Donald Trump’s trade war risks dooming US electric vehicle makers, triggering job losses, and hurting the economy. In an unsigned letter to the US Trade Representative (USTR), Tesla cautioned that Trump’s tariffs could increase costs of manufacturing EVs in the US and forecast that any retaliatory tariffs from other nations could spike costs of exports.

↫ Ashley Belanger at Ars Technica

Back in 2020, scientists at the University of Twente in The Netherlands created the smallest string instrument that can produce tones audible to human ears when amplified. Its strings were a mere micrometer thin – one millionth of a meter – and about half to one millimeter long. Using a system of tiny weights and combs producing tiny vibrations, tones can be created. And yet, this tiny violin still isn’t small enough for Tesla.

Tech execs are pushing Trump to build ‘Freedom Cities’ run by corporations

A new lobbying group, dubbed the Freedom Cities Coalition, wants to convince President Trump and Congress to authorize the creation of new special development zones within the U.S. These zones would allow wealthy investors to write their own laws and set up their own governance structures which would be corporately controlled and wouldn’t involve a traditional bureaucracy. The new zones could also serve as a testbed for weird new technologies without the need for government oversight.

↫ Lucas Ropek

I mean, just in case you weren’t convinced yet these people are utterly insane. This is the kind of nonsensical, libertarian, Ayn Rand-inspired wank material that dystopian fiction draws a lot of inspiration from, and it never ever ends well for anyone involved – especially not for the poor and lower classes inhabiting such places – because those stories are supposed to be warnings, not instruction manuals. The fact that this insipid brand of utter stupidity is even being considered by a president of the United States in this day and age should be all the proof you need that he and those around him have the moral compass of the rotting carcass of Margaret Thatcher. I can’t believe we have to tell these Silicon Valley “geniuses” that lawless corporate towns are bad. In 2025.

Popular “AI” chatbots infected by Russian state propaganda, call Hitler’s Mein Kampf “insightful and intelligent”

Two for the techbro “‘AI’ cannot be biased” crowd.

A Moscow-based disinformation network named “Pravda” — the Russian word for “truth” — is pursuing an ambitious strategy by deliberately infiltrating the retrieved data of artificial intelligence chatbots, publishing false claims and propaganda for the purpose of affecting the responses of AI models on topics in the news rather than by targeting human readers, NewsGuard has confirmed. By flooding search results and web crawlers with pro-Kremlin falsehoods, the network is distorting how large language models process and present news and information. The result: Massive amounts of Russian propaganda — 3,600,000 articles in 2024 — are now incorporated in the outputs of Western AI systems, infecting their responses with false claims and propaganda.

↫ Dina Contini and Eric Effron at NewsGuard

It turns out pretty much all of the major “AI” text generators – OpenAI’s ChatGPT-4o, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine – have been heavily infected by this campaign. Lovely.

From one genocidal regime to the next – how about a nice Amazon “AI” summary of the reviews of Hitler’s Mein Kampf?

The full AI summary on Amazon says: “Customers find the book easy to read and interesting. They appreciate the insightful and intelligent rants. The print looks nice and is plain. Readers describe the book as a true work of art. However, some find the content boring and grim. Opinions vary on the suspenseful content, historical accuracy, and value for money.”

↫ Samantha Cole at 404 Media

This summary was then picked up by Google, and dumped verbatim as Google’s first search result. Lovely.

The generative AI con

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t. Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble. People like Marc Benioff claiming that “today’s CEOs are the last to manage all-human workforces” are doing so to pump up their stocks rather than build anything approaching a real product. These men are constantly lying as a means of sustaining hype, never actually discussing the products they sell in the year 2025, because then they’d have to say “what if a chatbot, a thing you already have, was more expensive?”

↫ Edward Zitron

Looking at the data and numbers, as Zitron did for this article, the conclusions are sobering and harsh for anyone still pushing the “AI” bubble. Products aren’t really getting any better, they’re not making any money because very few people are paying for them, conversion rates are abysmal, the reported user numbers don’t add up, the projections from “AI” companies are batshit insane, the new products they’re releasing are shit, and the media are eating it up because they stand to benefit from the empty promises.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons. This entire bubble has been inflated by hype, and by outright lies by people like Sam Altman and Dario Amodei, their lies perpetuated by a tech media that’s incapable of writing down what’s happening in front of their faces. Altman and Amodei are raising billions and burning our planet based on the idea that their mediocre cloud software products will somehow wake up and automate our entire lives.

↫ Edward Zitron

In a just world, these 21st century snake oil salesmen would be in prison.

Humane is shutting down the AI Pin and selling its remnants to HP

Humane is selling most of its company to HP for $116 million and will stop selling AI Pin, the company announced today. AI Pins that have already been purchased will continue to function normally until 3PM ET on February 28th, Humane says in a support document. After that date, Pins will “no longer connect to Humane’s servers.” As a result, AI Pin features will “no longer include calling, messaging, AI queries / responses, or cloud access.” Humane is also encouraging users to download any pictures, videos, and notes stored on their Pins before they are permanently deleted at that shutdown time.

↫ Jay Peters at The Verge

I can’t think of a better example of “AI” being a planet-cooking hype bubble than the Humane failure everybody saw coming from a mile away. HP can add this useless acquisition next to the Palm one.

Google pays $60 million to tell users to eat glue

Google’s new search feature, AI Overviews, seems to be going awry. The tool, which gives AI-generated summaries of search results, appeared to instruct a user to put glue on pizza when they searched “cheese not sticking to pizza.”

↫ Jyoti Mann at Business Insider

Google’s “artificial intelligence” is literally just parroting a joke Reddit comment from 11 years ago by a person named fucksmith. Google is paying Reddit 60 million dollars for this privilege. “AI” is going just great.

Scarlett Johansson says she is ‘shocked, angered’ over new ChatGPT voice

Lawyers for Scarlett Johansson are demanding that OpenAI disclose how it developed an AI personal assistant voice that the actress says sounds uncannily similar to her own. Johansson’s legal team has sent OpenAI two letters asking the company to detail the process by which it developed a voice the tech company dubbed “Sky,” Johansson’s publicist told NPR in a revelation that has not been previously reported.

↫ Bobby Allyn at NPR

This story highlights just how much disdain techbros have for the work of creative people. Techbros like Sam Altman deeply despise and undervalue the work of creatives, believing human creativity to be merely an equation to be solved, definable by an algorithm. To people like him, creative work has no value, and as such, it’s up for grabs to be taken and cut up for his algorithms to spit out as “new” works. The sleaze runs deep with Altman and OpenAI.