In the world of open source, relicensing is notoriously difficult. It usually requires the unanimous consent of every person who has ever contributed a line of code, a feat nearly impossible for legacy projects. chardet, a Python character encoding detector used by requests and many others, has sat in that tension for years: as a port of Mozilla’s C++ code it was bound to the LGPL, making it a gray area for corporate users and a headache for its most famous consumer.
Recently the maintainers used Claude Code to rewrite the whole codebase and release v7.0.0, relicensing from LGPL to MIT in the process. The original author, a2mark, saw this as a potential GPL violation.
↫ Tuan-Anh Tran
Everything about this feels like a license violation, and in general a really shit thing to do. At the same time, though, the actual legal situation (what lawyers and judges actually care about) is entirely unsettled and incredibly unclear. I’ve been reading a ton of takes on what happened here, and nobody seems to have any conclusive answers, with seemingly valid arguments on both sides.
Intuitively, this feels deeply and wholly wrong. This is the license-washing “AI” seems to be designed for, so that proprietary vendors can take code under copyleft licenses, feed it into their “AI” model, and tell it to regurgitate something that looks just different enough so a new, different license can be applied. Tim takes Jim’s homework. How many individual words does Tim need to change – without adding anything to Jim’s work – before it’s no longer plagiarism?
I would argue that no matter how many synonyms and slight sentence structure changes Tim employs, it’s still a plagiarised work.
However, what it feels like to me is entirely irrelevant when laws are involved, and even those laws are effectively irrelevant when so much money is riding on the answers to questions like these. The companies that desperately want this to be possible and legal are so wealthy, so powerful, and have sucked up to the US government so hard, that whatever they say might very well just become law.
“AI” is the single-greatest coordinated attack on open source in history, and the open source world would do well to realise that.

If something becomes trivial to produce using a tool, there’s no longer any significant effort left to protect from being stolen. Books of logarithm tables were valuable products of human work in the past, but a trivial computer program can generate one in milliseconds today.
And surely there’s nothing creative in the code produced by LLMs. If there is any creativity, it’s by way of verbatim copying, and that’s easy to check for.
I’m a fan of knowledge being unshackled, though I fear how it will be misused. It’s a loss for proprietary code and restrictive licenses, but not for code, and the means of coding, being democratized. Maybe open source wasn’t the final destination, just an imperfect step along the way. Maybe we won’t keep needing to fight over contracts.
For better or worse ideas can still be patented, and patents will still apply.
Thom Holwerda,
I do not agree with this. AI is merely a tool. Of course tools can be used abusively but AI tools themselves are neither pro nor anti-FOSS.
When you read open source licenses, including the GPL, there is nothing in them that prohibits their use for AI. The GPL is not just implicitly compatible with training AI; it goes so far as to explicitly reject all prohibitions on how the code can be used downstream. The only requirement is that derivative works also be GPL-licensed. So the argument should not be that we cannot train AI on open source, because that violates both the text and spirit of FOSS. IMHO a more solid argument would be that derivative works should themselves be FOSS, including AI works. If one truly believes in the virtues of the GPL and FOSS, then this is what supporters should be clamoring for.