This is a great post, but obviously it hasn’t convinced me:
The folks waving their arms and yelling about recent models’ capabilities have a point: the thing works. This project finished in three weeks. Compare that to Ringspace, a similarly-sized project that took me about six months of nights and early mornings to complete, while not doing my day job or being Dad to an amazing but demanding toddler. I simply could not have built this project as well or as quickly without help. And as other developers have noted, this is the help that’s showing up.
I’m not entirely onboard with Mike Masnick’s optimistic view of this technology’s democratizing power. I don’t think it’s as easy to separate the tech from its provenance or corporate control. But CertGen, my certificate application, exists now. It didn’t and couldn’t without the help of a tool like Claude Code. Open source in particular needs to reckon with this, because the current situation of demanding developers starve and bleed themselves dry without support isn’t tenable. We need to grapple with this. I’m not yet sure how it all breaks down, and anyone who says they do is lying, foolish, or fanatical.
↫ Michael Taggart
If you disregard that “AI” models are trained on stolen data, that such data was prepared by exploited workers, that “AI” data centers have a hugely negative impact on the environment, that “AI” data centers are distorting the entire computing market, that “AI” models feed the endless firehose of intentional misinformation, that they are wreaking havoc in education, that they increase your reliance on American big tech companies, that you pay “AI” companies for taking your work, that “AI” models are a vital component in the technofascist wet dreams of their creators, that they are the cornerstone of politicians’ dream of ending anonymity, and that they contribute to racist and abusive policing, then yes, sometimes, they produce code that works and isn’t total horseshit.
It’s a deeply depressing reversed “what have the Romans ever done for us?” that makes me sad, more than anything. I’ve seen so many otherwise smart, caring, and genuine people just shove all of these massive downsides aside for the mere novelty, the peer pressure, the occasional sense that their “lines of code” metric is going up.
It’s the digital equivalent of rolling coal.

You are standing in front of a steamroller, Thom. Maybe don’t just stand there; get on it, or at least get out of the way.
The steamroller is coming for me too, but at least I see it for what it is.
Why would anyone want to get on a steamroller that’s targeting people except for the express purpose of immediately punching whoever’s piloting it and stopping it?
(I may have broken the analogy a bit, if so it was fully intentional.)
Let’s suppose that it really works. In that case, these companies are engaging in dirty practices: they are heavily subsidizing their prices, effectively dumping on the labor market.
So if the companies admit they are doing that, shouldn’t that be prosecuted?
The author seems to have successfully finished the “Hello World” of agentic coding, and is trying to make sense of the experience.
Do not be alarmed: if you know what you are doing, it is a good thing, and there is nothing to feel bad about (except trying to pass AI code off as your own). If you have read all the code, tested the software, and had another pair of eyes check that it is up to par, there is technically no difference between using an AI and using any other advanced IDE feature.
@sukru:
Except that a traditional IDE wasn’t created at the expense of actual human lives, nor on the back of the largest instance of plagiarism the world has ever known. Regardless of whether the tools work, they are forged in blood and theft, and no amount of hand-waving or word twisting will change that fact. There is no moral use of them, period, and to argue otherwise is to lie not only to one’s self but to everyone who is preached to about the “good” these tools can achieve.
(Ugh, struck by the nested comment bug on WordPress once more)
Morgan,
I think it is a bit too strong a claim to say LLMs have no moral use. Both practically and theoretically, one needs to look at the argument from both sides.
From the practical side, there is no choice. We can decide to personally not use any AI tools. Great, but that would take us back to the Nokia 3310 and Windows 8.1. It is just a self-enforced limitation, and the world will move forward. Even if we got everyone in the USA and the Western world behind such an idea, our adversaries like China would not stop their progress.
“The only winning move is not to play” becomes precisely inverted here.
From the theoretical side, LLMs are just highly advanced models for understanding human language, and they have been with us for about 10 years now. (Most people do not realize it, but their “autocorrect” and grammar tools have long been powered by models like BERT.)
Their architecture has nothing to do with morality. They are a natural extension of LSTMs: an attention layer was first added to LSTM-based encoder-decoder models to improve their recall over long sequences, and then Google published “Attention Is All You Need”, which kept the attention layer and dropped the LSTM entirely, giving us the Transformer architecture.
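To make the point concrete that this is just math over sequences, here is a minimal sketch of the scaled dot-product attention operation at the heart of the Transformer; names and shapes are illustrative, not tied to any particular library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    # Similarity of every query against every key, scaled for numerical stability.
    scores = Q @ K.T / np.sqrt(d_k)                # (seq_len, seq_len)
    # Softmax per row turns scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mix of the value vectors.
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                        # 4 tokens, 8-dim embeddings
out = scaled_dot_product_attention(x, x, x)        # self-attention: Q = K = V
print(out.shape)                                   # (4, 8)
```

There is nothing in these few lines beyond matrix multiplications and a softmax; the moral questions live in the training data and deployment, not in the architecture itself.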
As for training data, there are many “ethically sourced” options. You can start with public domain sources like Project Gutenberg and freely licensed ones like Wikipedia, or choose something curated like the Common Corpus on Hugging Face: https://huggingface.co/blog/Pclanglais/common-corpus
The only non-ethical thing in all this discourse is people pushing them prematurely into domains they are not designed for. (For example, one should never use an LLM for “therapy”; it causes actual misery.)