This is a great post, but obviously it hasn’t convinced me:
The folks waving their arms and yelling about recent models’ capabilities have a point: the thing works. This project finished in three weeks. Compare that to Ringspace, a similarly-sized project that took me about six months of nights and early mornings to complete, while not doing my day job or being Dad to an amazing, but demanding toddler. I simply could not have built this project as well or as quickly without help. And as other developers have noted, this is the help that’s showing up.
I’m not entirely onboard with Mike Masnick’s optimistic view of this technology’s democratizing power. I don’t think it’s as easy to separate the tech from its provenance or corporate control. But CertGen, my certificate application, exists now. It didn’t and couldn’t without the help of a tool like Claude Code. Open source in particular needs to reckon with this, because the current situation of demanding developers starve and bleed themselves dry without support isn’t tenable. We need to grapple with this. I’m not yet sure how it all breaks down, and anyone who says they do is lying, foolish, or fanatical.
↫ Michael Taggart
If you disregard that “AI” models are trained on stolen data, that such data was prepared by exploited workers, that “AI” data centres have a hugely negative impact on the environment, that “AI” data centres are distorting the entire computing market, that “AI” models feed the endless firehose of intentional misinformation, that they are wreaking havoc in education, that they increase your reliance on American big tech companies, that you pay “AI” companies for taking your work, that “AI” models are a vital component in the technofascist wet dreams of their creators, that they are the cornerstone of politicians’ dream of ending anonymity, and that they contribute to racist and abusive policing, then yes, sometimes, they produce code that works and isn’t total horseshit.
It’s a deeply depressing reversed “what have the Romans ever done for us?” that makes me sad, more than anything. I’ve seen so many otherwise smart, caring, and genuine people just shove all of these massive downsides aside for the mere novelty, the peer pressure, the occasional sense that their “lines of code” metric is going up.
It’s the digital equivalent of rolling coal.

You are standing in front of a steamroller, Thom. Maybe don’t just stand there: get on it, or at least out of the way.
The steamroller is coming for me too, but at least I see it for what it is.
Why would anyone want to get on a steamroller that’s targeting people except for the express purpose of immediately punching whoever’s piloting it and stopping it?
(I may have broken the analogy a bit, if so it was fully intentional.)
Weavers asking the steam engine? Bowmen asking the rifle manufacturer?
Why can’t we accept that LLMs/AI are just tools making work more efficient (at least for some people)? Nobody is targeting anyone.
I am trying (my best) to ride it instead.
Cheers!
I’m still not sure what to make of this whole AI thing. But if the technology is so great, I’d expect its proponents to have more convincing arguments than your variation of “We are the Borg. You will be assimilated”.
You’ve got it wrong: nobody needs to provide any argument here. We use a new tool when it gets the job done for us and move on. You are free and welcome to queue up in front …
If it doesn’t work locally, I’m not using it for work-critical tasks*. Now that the AMD Strix Point NPU is actually usable for LLM acceleration, I might give it a whirl. But NPU-enabled, tool-calling coding agents are still MIA.
*unless it’s for my $$-paying job, then it’s not my problem. I remember Apple’s signing servers being offline for a day, nothing got done.
Serafean,
Isn’t this intentionally tying your own hands, and then complaining you cannot type?
There are local models like Qwen/Qwen3.5-27B that are good for local agentic tasks, including tool calling.
https://www.reddit.com/r/LocalLLaMA/comments/1s7p0u9/running_qwen3527b_locally_as_the_primary_model_in/
(To be honest, the setup they do in that post is convoluted. Just go with ollama, and it prepares almost everything for you.)
sukru,
It’s being able to keep functioning if the backend fails/changes. That’s why I carry cash, that’s why I buy films/series on bluray, why I run my mail server (yeah, this one’s probably over the top…)
I am using ollama + opencode with local glm-4.7 in a sandbox (no access to system or to network). I’m going to give gemma 4 a whirl. (Both on GPU)
The thing is: to run on the amdxdna NPU, the model needs to be adapted for FastFlowLM or Lemonade-server.ai, so the list of available models is quite limited.
Also so far LLMs (local or not) have never helped me solve a task. Only created new hurdles.
Serafean,
Those are nice pieces of software.
That is an interesting choice. But the most important question is: what is your VRAM? And which version of the model are you using?
Depending on the answer to the previous question, this is very likely the “tying your own hands” scenario.
I hear this often, and I’m not sure what the reason is. But maybe there is a mismatch between the tools and the goals.
Hope it gets better over time. For my workflows, LLMs became indispensable.
sukru,
> That is an interesting choice. But the most important question is: what is your VRAM?
The choice was based on some random review of local coding agent setups.
64GB of unified memory. Ryzen 9 HX 370 (framework 13), using the Radeon 890M.
The NPU is reportedly limited to half the system memory.
> I hear this often. And I’m not sure why the reason is.
My personal benchmark is “Generate an example of C++20 coroutines”. I haven’t yet seen one generate a correct example. But it is getting better: now it at least compiles in most cases. Crashes when run. Gemma4 was really close.
Next year I’ll probably start requesting examples of C++26 reflection.
You would think Thom is already experienced with AI changing the translation industry. Feeding the words into a machine and then fixing up the mistakes has to be many times faster than doing 100% of it on your own at this point. I’m not a programmer, but I’ve tried AI when getting stuck on leetcode challenges, and I can easily see that, used properly, it can teach you how to think like a programmer and increase your skill 20x faster than if you just tried to do it solo. Plus, spending 20 seconds letting the AI find small bugs like missing end brackets, instead of hours on your own, makes no sense. If you don’t use AI at all you’ll be left behind like a Luddite.
Let’s suppose that it really works. In that case, these companies are engaging in dirty practices, as they are heavily subsidizing their prices, effectively dumping on the labor market.
So if the companies admit they are doing that, shouldn’t that be prosecuted?
*nod* I recently watched AI Bubble: Nobody will pay for unsubsidised AI | Ed Zitron and The AI boom is a lie: Fake data centres and unused GPUs | Ed Zitron and Ed Zitron really is a breath of fresh air as he reassures you that you’re not crazy.
(The first one touches on how things like Claude Code are using “let people use compute resources that cost us 10x what they’re paying for their subscription” as their “Amazon and Walmart ran at a loss until their competitors went bankrupt and then jacked up the prices” strategy.)
Oh, the whole thing has deep “Enron-y” vibes to it. Dumping much? Maaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaybe someone minor will be prosecuted after the music stops, but absolutely never while the line goes up.
I do not expect a Bankman-Fried moment, nor to see Scam Altman in jail (the name joke I think came from Musk, but I will take it), but maybe some accountants will be blamed for the whole mess and catch 3 years.
Shiunbird,
The Altman situation is a mess: he realized there is no commercial success for “Open” AI and entangled everyone with his enterprise, so the narrative became “if I fall, everyone falls”. Microsoft, nvidia, Oracle, even the US government are tied to them.
But ironically, actual open source AI became viable. Given there are better commercial ones from Google and Anthropic, and very good open source ones that everyone can use, I’m not sure how they can continue burning money with no future.
Well, I am usually stingy as a Scotsman, but I started paying for Claude Max 4 months ago, because for me this cost is a drop against the lake of work it gets done for me. I know 2 similar SMEs who did the same (even before me).
You barely join a screen-sharing session on coding anymore without an LLM module popping up somewhere. I would not be surprised if at least Anthropic was doing well commercially. Also, you saw software company share prices slashed. This did not happen out of nowhere or without reason.
Roughly speaking, Anthropic spends $5 to earn $1, while OpenAI spends $10 to earn $1… thus it “does well”, for some definition of “doing well”.
But at some point they will both need to turn a profit… and that’s when the shit will hit the fan.
Indeed, software share prices go down because the perceived value-add of software companies goes down with AI/LLMs.
However, I am curious about the future subscription prices for the service, because not only will they have to turn an operational profit, they will also have to recoup all the investment.
For example, we are a team of 8. We are paying for Copilot for everyone, and so far it is worth it. We would maybe pay 4-5 times more instead of hiring someone else, but at some point it would make more sense to hire people, especially if we decide to make it mission-critical: one doesn’t want to depend on something that may have wildly fluctuating costs.
And we can’t just pass the costs onto our customers indefinitely. We don’t have all that many to begin with and there’s competition in our field.
I asked Claude itself:
2:1 is a great ratio for a startup. Let’s assume they can slash the cost and double the prices over the next 3 years, and we would be good.
Btw, I always greet the LLM and say thank you and goodbye, just to get my money’s worth 😀
I find it very unlikely that they would not try to settle for stratospheric margins, but I am hoping you are right.
Much better than financial meltdown that’s for sure.
Andreas Reichel,
They might be an exception, as from what I see, companies actually pay for actual usage, not just “pro” tiers. With B2B there is a much better chance of survival.
The author seems to have successfully finished the “Hello World” of agentic coding, and tries to make sense of the experience.
Do not be alarmed; if you know what you are doing, it is a good thing. And there is nothing to feel bad about (except trying to pass off AI code as your own). If you have read all the code, tested the software, and had another pair of eyes check that it is up to par, there is technically no difference between using an AI and any other advanced IDE feature.
@sukru:
Except that traditional IDE wasn’t created at the expense of actual human lives, as well as on the back of the largest instance of plagiarism the world has ever known. Regardless of whether the tools work, they are forged in blood and theft, and no amount of hand-waving or word twisting will change that fact. There is no moral use of them, period, and to argue otherwise is to lie not only to one’s self but to everyone who is preached to about the “good” these tools can achieve.
(Ugh, struck by the nested comment bug on WordPress once more)
Morgan,
I think it is a bit too strong a claim to say LLMs have no moral use. Both practically and theoretically, one needs to look at the argument from both sides.
From the practical side, there is no choice. We can decide to personally not use any AI tools. Great, but that would take us back to the Nokia 3310 and Windows 8.1. That is just a self-imposed limitation, and the world will move forward. Even if we got everyone in the USA and the western world behind such an idea, our adversaries like China would not stop their progress.
“The only winning move is not to play” becomes precisely inverted here.
From the theoretical side, LLMs are just highly advanced models for understanding human language. They have been with us for about 10 years now. (Most people do not realize it, but their “autocorrect” and grammar tools have long been powered by models like BERT.)
Their architecture has nothing to do with morality. They are just a natural extension of LSTMs: they came from the “cross attention” layer added to improve LSTM recall performance. Then Google published “Attention is all you need”, which basically kept the attention layer (the Transformer architecture) and dropped the original LSTM.
As for training data, there are many “ethically sourced” options. You can start with public domain sources like Project Gutenberg, and freely licensed ones like Wikipedia. Or choose something curated like the https://huggingface.co/blog/Pclanglais/common-corpus Hugging Face Common Corpus.
The only non-ethical thing in all this discourse is people pushing them prematurely into domains they are not designed for. (For example, one should never use an LLM for “therapy”; it causes actual misery.)
You say “there are ethically sourced training data” as if that completely excuses the fact that the most prominent and widely used “AI” tools are absolutely not ethically sourced. That’s a logical fallacy and a pretty damning one at that. It’s like saying “electric cars exist so it’s okay that we’re burning the planet to fuel the other 95% of cars on the road”. It’s utter bullshit and you know it.
Or to use your own argument: Those who use only ethically sourced training data will be left behind, while everyone else will continue to use the unethical ones.
Morgan,
Yes, that is a valid concern. But it is a policy issue.
It is up to the governments to act on it.
Morgan,
the whole “wisdom” of the west is “stolen” wisdom from the Arabs: math, medicine, physics, astronomy.
Kaiser Friedrich der Staufer sent many scholars to copy and translate texts from the Middle East. And when they banned book printing, they eventually lost the race.
If you own an ounce of gold and I take it by force, then I am stealing (because I now hold what you own).
If I read a book you wrote, what did I steal from you and what was missing on your end?
Please look up the definition of plagiarism before making yourself appear a fool. No one in their right mind would be against reading books. It’s taking what is read, and regurgitating it word for word as one’s own creation, that is wrong. That’s exactly what the companies behind all the “AI” tools do: They consume all of the data, store it in massive data centers that are driving up the cost of **everything**, then present that data as their own creation when it’s literally just copy/paste, and then they demand to be paid for it. The fact that they do it on such a massive scale and get away with it is horrifying. The fact that you are defending that action by attempting to mislead people is disgusting.
Or did you really think I just meant “reading” is wrong? Please. Grow some fucking integrity.
@Morgan,
thank you for just making my day again.
Nowhere did I write anything about plagiarism and “word for word” replication of content. I do accept your opinion that LLMs may come too close to plagiarism.
But a person with some fucking integrity would at least acknowledge that different opinions on this exist. You may not like me (an attribution I am starting to wear with pride), but you will have a hard time convincing me that LLMs and the companies behind them did anything different from Kaiser Friedrich sending scholars to the middle east.
Have a nice day and maybe we can find a more respectful tone even when we do not share the same opinion.
@Andreas Reichel:
And there it is. I’ve been trying, I really have, to give you the benefit of the doubt that you aren’t just trolling, that there is a language barrier or something like that going on. But when I start a conversation talking about plagiarism being done by “AI” companies:
…and then you act as if I was only talking about “reading” books:
…then you pull out your fake “I don’t know what you’re talking about” nonsense when I call you out on it, you out yourself as a troll and not interested in serious conversation. That’s why I said you need integrity; trolling is the opposite of that.
I’m done, you’re dead to me. I should never have fallen for the troll bait from you once again and just ignored you, but congratulations, you got another sucker.
… and I think I shared it without too much context
This is the heart of the Attention mechanism. It is asking a Query (Q) based on our current context (the Key/Value cache, KV-Cache). Think of it as recent short-term memory: a “soft” hash table.
(The KV-Cache is an inference construct to make it faster. During training there is a slightly different mechanism.)
Anyway, you take this, and add many layers on top of each other, allowing deeper connections between “thoughts”, and train it based on language patterns.
And you get an “LLM”
(Okay, there are a few more components, like converting text pieces to numbers and back, and so forth, but this is the main idea.)
I think these days this entire thing can fit in 100 lines of Python code.
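The “soft hash table” idea above can be sketched in a few lines of plain Python. This is only an illustrative toy (single head, no learned projection matrices, no batching; the input vectors are made up for the example), not how a real Transformer is implemented:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query is scored against every
    # key, the scores are softmax-normalised into weights, and the
    # output is the weighted mix of the values. Unlike a hard hash
    # table lookup, every value contributes a little ("soft" lookup).
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

With a query that matches the first key more closely than the second, the result leans toward the first value but still blends in the second, which is the point of the soft lookup:

```python
attention([[1.0, 0.0]],            # one query
          [[1.0, 0.0], [0.0, 1.0]],  # two keys
          [[10.0], [20.0]])          # two values; result lands between them, nearer 10
```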
Great article. It is a breath of fresh air to read a more balanced take for once.
It is unfortunate that LLMs have arrived in such a messed up period in history.
It’s still so strange to see arguments that this is copyright infringement. Copyright, at best, is a “necessary evil” when used by well-meaning open source developers. The technicality of whether LLMs infringe is considered a stretch even by lawyers. But leaving that aside, let’s just consider the spirit of why copyright is used: people don’t want other people to come along and sell the same product while barely adding anything new. LLMs are a completely different product, a humongous thing that tons of work went into. To say that it’s “transformative” in copyright terms would be an understatement (not that it even needs to be; it contains no lines of code of the original). If people built an encyclopedia of software ideas that you could browse through, that would clearly be a good thing. The fact that code is automatically generated using those ideas as building blocks doesn’t change anything. This just doesn’t make any sense to me. If people were arguing for software patents that would make more sense (but boy, I sure hope they aren’t).
On the “technofascism” side, putting companies like Anthropic, DeepMind and Google in the same bucket as xAI seems very broad to me. But I guess this is founded on years of FUD directed at Google (some founded, but hugely overblown), which was perceived as a too left leaning company, not dissimilar to how they’re trying to discredit Bill Gates nowadays. Maybe there’s an argument to be made that corporations shouldn’t be allowed to become this big, but that’s separate from AI.
I don’t even know why I’m writing these defenses. I too am a little scared of what’s coming with LLMs. I’m just frustrated with finding too little reason on the side that should be arguing for caution.
I don’t know what to think about AI. Of course, I know this is my perception from my experience in my own little work bubble, and I would not generalize my experience.
I only see people praising AI who are not very good at their jobs, because they can finally find solutions to some work problems they are unable to solve otherwise. For them it is even a time saver, even though I watch them writing prompts for hours, trying different prompts until they finally get a result that solves their current task. And they might even feel a sense of accomplishment, because even though “someone else” solved it, they could achieve something they were unable to before.
For myself, I don’t save time with AI; I am way more efficient reading docs and solving the problem myself. And yes, I am still faster than the colleagues using AI tooling heavily. Too often I have wasted time on AI hallucinations or half-baked solutions that required massive refactoring or fixing to be usable in a long-term and safe manner.
And when I use AI, it takes away all the joy I normally have in my IT work: learning new things, hunting the elusive bug, creativity, coming up with new solutions. AI replaces all that with a sense of frustration, of getting working but non-optimal solutions, or fake solutions, and it takes away the fun of genuine achievement and learning new things. Others have different experiences of course, and good for them. I will still use these new tools, as one must apparently, but yeah…