Of course, it was only a matter of time before the time-honoured tradition of the demoscene also got infected by “AI”.
For me personally, generative AI ruins much of the fun. I still enjoy creating pixel art and making little animations and demos. My own creative process remains satisfying as an isolated activity. Alas, obvious AI generated imagery – as well as middle-aged men plagiarizing other, sometimes much younger, hobbyist artists – makes me feel disappointed and empty. It’s not as much about effort as it is about the loss of style and personality; soul, if you will. The result is defacement, to echo T. S. Eliot, rather than inspired improvement. Even in more elaborate AI-based works, it’s hard to tell where the prompt ends and the pixelling begins.
↫ Carl Svensson
A wonderful explanation of the rather unique views on originality, stealing, plagiarism, and related topics within the demoscene, which certainly diverge from many other places.

When humans experience art, music, film, etc., we can’t help but mix and extend works we’ve been exposed to over our lives. And this is in line with our human traditions. The demoscene celebrates this: copying ideas, images, samples, etc. is normal and basically expected, as the author says.
People want to say it’s OK for humans to learn from existing works and use them to create new expressions, but that it’s wrong for LLMs to learn from existing works and use them to create new expressions. To avoid a logical inconsistency here we’re ultimately led to the conclusion that automation itself must be the problem: X can be done manually by a person, but if we do the same X using an LLM it’s not allowed. Personally I can’t convince myself of this. I don’t view automation as a problem, but rather a force multiplier. It can make lives better or worse, depending on how it’s used. If we find it is making lives worse, then obviously we need to fix it, but I don’t believe LLMs are going away, so that won’t be the solution.
Alfman,
Unfortunately a nuanced discussion is very difficult to have. Some swear by anything AI does, others want to completely ban it.
Just use it “responsibly”. If you are on a demo scene, clearly identify which parts are your contributions, and which parts were helped by the AI. Maybe you are not good at pixel art and music, but maybe you produced the best rotozoom effects out there.
People can shine with their skills without doing all the legwork themselves.
Best and most pointed conclusion of this discussion. I tip my hat.
Andreas Reichel,
Thanks, wish more people would be more civil about these discussions.
To me, what’s new is the combination of human curiosity and creativity with a tool that can express itself like a master writer and search vast amounts of data from the Internet. It’s like combining horse and man: man+horse is something new. It also depends on how you use it. Treated as a knowledge database, one risks being offered hallucinations presented as facts; but exploring subjects, finding supporting references, and writing a paper becomes fun when you treat it as an assistant to be guided.
I think you’re missing the point here. I have some insight into the “scene” and its traditions, and they have their own logic. Creative works often compete against each other rather than being enjoyed on their own. This means that works are judged by how hard they were to make; a clearly labeled AI image may look great but is ranked lower than a hand-pixeled one made using oldschool tools. The real problem is when people try to cheat by passing off AI images as something they painstakingly made themselves from scratch. You may not agree with these rules, but they are unlikely to change anytime soon.
OlaTheGhost,
I would point out an inconvenient truth here: an AI that is proficient at generating pixel art would not be readily distinguishable from “human art” made with old-school tools. Adversarial training is a very effective AI technique for this kind of thing. “Looking great” needn’t be the training goal; it could be looking authentic.
Anyone is free to define whatever rules they like, however don’t you agree it creates a conundrum when you have rules where you can’t tell if anyone broke them from the entry itself?
Obviously it’s a matter of opinion, but I don’t really object to modern AI that makes modern content look retro with much less work than it took in the past. It could actually have neat applications in retro-gaming.
But it is still usually distinguishable; it just takes more effort. When some ‘artwork’ is ultimately found to be AI ‘enhanced’, everybody in the community feels duped. To me it seems that the use of LLMs in the demoscene has a mostly devastating effect.
If you call this truth ‘inconvenient’, then for me the whole situation can be compared to cheating in chess. Long ago, chess engines (some now incorporating neural networks) surpassed all humans, even professional players. Still, chess played by humans attracts significant attention and a following. Here and there a pro player is found to be cheating by augmenting their play with the help of engines. When such a player is found (or even becomes suspected), the whole community feels duped. Not to mention that significantly more effort is needed to mitigate cheaters.
Nobody in the chess community says that engines are bad. It is also believed that the younger generation of players can now develop their skills much more quickly than in the past.
Could we give the same credit to the LLMs used in the demoscene?
jurmcc,
When it comes to adversarial AI training, the very same tools one would use to distinguish real from fake become part of the training process. You have no choice but to increase the sensitivity of your tools to the point where they give both false positives and false negatives, and you can’t prove anything. Whether we like it or not, the future of AI will not be detectable (assuming detectability is incorporated into AI training).
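As a toy illustration of that arms race, here is a minimal “generator vs. detector” loop in Python. To be clear, every number and update rule here is invented for this sketch; it is not a real GAN, just the shape of the argument: the detector keeps re-placing its decision threshold, the generator keeps drifting toward the real distribution, and the detector ends up at chance level.

```python
import random

random.seed(0)

REAL_MEAN = 1.0   # "real" samples cluster around this value
NOISE = 0.1

def sample(mean):
    # Draw one value from a cluster centered on `mean`.
    return mean + random.uniform(-NOISE, NOISE)

fake_mean = 0.0   # generator parameter: where its fakes cluster
threshold = 0.5   # detector parameter: classify as "real" if x > threshold

for _ in range(200):
    # Detector step: place the threshold between the two clusters.
    threshold = (fake_mean + REAL_MEAN) / 2
    # Generator step: whenever a fake gets caught, drift toward the real cluster.
    if sample(fake_mean) <= threshold:
        fake_mean += 0.1 * (REAL_MEAN - fake_mean)

def detector_accuracy(n=1000):
    # Fraction of samples the final threshold classifies correctly.
    correct = sum(sample(REAL_MEAN) > threshold for _ in range(n))
    correct += sum(sample(fake_mean) <= threshold for _ in range(n))
    return correct / (2 * n)
```

Once the fake cluster overlaps the real one, any threshold produces both false positives and false negatives, which is exactly the conundrum described above: `detector_accuracy()` ends up near 0.5, i.e. a coin flip.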
You’re entitled to this opinion, but I think cutting edge AI techniques would have actually been seen as authentic and innovative for the 80s and 90s demo scene. They were all about subverting expectations, breaking rules, pushing boundaries, etc. Shunning new techniques is a relatively new phenomenon and IMHO it breaks from the traditional values of the scene.
My own take is that AI tech demos should be encouraged to compete in their own categories. By imposing rules to shut them out, you’d just be encouraging them to lie.
LLMs? Honestly I don’t think they would be the most useful or impressive form of AI to use in the demo scene.
Alfman,
I think a major part of this is people fearing for their current jobs. So anything new is approached with disdain instead of wonder.
If this were the old days, people would publish intros visualizing the thought processes of LLMs, or the randomized sampling of diffusion models. Just the second one by itself, the slow “reveal” of diffused images, makes for really interesting visuals.
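To sketch why that reveal is visually compelling, here is a tiny Python toy. Everything in it (the bitmap, the character ramp, the linear blend) is invented for this example; real diffusion sampling runs a learned denoiser over many steps, while this just fades noise into a target image to mimic the effect:

```python
import random

random.seed(1)

# Toy "slow reveal": pure noise gradually blended toward a target bitmap.
TARGET = [
    "..##..",
    ".#..#.",
    ".####.",
    ".#..#.",
]
CHARS = " .:#"  # crude brightness ramp, dark to bright

def frame(step, steps=8):
    # alpha = 0 is pure noise; alpha = 1 is the fully revealed target.
    alpha = step / steps
    rows = []
    for row in TARGET:
        line = ""
        for ch in row:
            target_v = 1.0 if ch == "#" else 0.0
            v = alpha * target_v + (1 - alpha) * random.random()
            line += CHARS[min(int(v * len(CHARS)), len(CHARS) - 1)]
        rows.append(line)
    return rows

for step in range(9):
    print(f"--- step {step} ---")
    print("\n".join(frame(step)))
```

Early frames are static; later frames let the image emerge from the noise, which is the “reveal” effect an intro could animate.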
Again, this is about control, and the solution would be coders having mastery of the mysterious.
sukru,
Yes, on this front, I think we’ve only begun to see the transformation AI will have on jobs. It’s going to be painful for many and I don’t think we’re prepared. The owners who stand to benefit the most don’t have much compassion for the rest of society 🙁
Only speaking for myself here, and I’ve been bullied extensively for this opinion, but I don’t have a problem with automation. I have a problem with automation that is actively harming humans and other living things around the world. Specifically, using current “AI” means spending hundreds of thousands of gallons of clean drinking water to produce a few pixels a little bit faster than a human could, or to automate looking up articles and summarizing them in seconds instead of minutes. We are solving laziness using climate and habitat destruction, and it’s sickening.
If “AI” could ever reach a point where it is truly creative, innovative, and capable of rational thought, AND do so without gobbling up all of the resources left on the planet, I’d likely change my position on it.
On the subject of copying versus learning, which I feel is a different discussion from automation: Humans don’t have the ability to read, memorize, and regurgitate word for word millions of written works in mere seconds, but “AI” does. This isn’t creative expression, it’s plagiarism. We humans do have the ability to copy and plagiarize individual works given enough time and effort, and that is also plagiarism, just on a human scale. In case it isn’t clear, plagiarism is wrong no matter how quickly or massively it’s done.
Humans do have the ability to be inspired by existing works, and to organically create something that is an expression of that inspiration; as an amateur musician I am inspired by the professional musicians I enjoy listening to. But I’m not going to take a song by an established artist, shift it up a half step, rearrange the lyrics, and call it my own creation. But, that’s exactly what “AI” does and it’s the only thing it can do: copy and rearrange. The people who claim “AI” generated works as their own creation are delusional at best, and liars at worst. It’s a neat party trick, but there is no intelligence behind it and I don’t think I will ever understand how someone could believe that their computer has a conscience and can think and feel and emote and create like a human can.
In short, current “AI” technology can’t “learn”, it can only “copy”.
Morgan,
You don’t know how common it is then.
What was the saying: “good artists imitate, great artists steal”
Listen to the Harry Potter theme by John Williams and then Swan Lake by Tchaikovsky.
I agree. A proper attribution is a must.
Believe me, there is plenty of actual intelligence behind it. Artificial, yes. But intelligent.
I agree. This comes from the anthropomorphization of computers, and it is a dangerous road. There are women who have “married” ChatGPT. That is not normal at all.
Morgan,
I understand this. The corporations have so much wealth, and they’re not exactly being responsible. Still, the technology is improving. Nevertheless, big training operations will likely consolidate over time as we learn how to make incremental improvements more efficiently. Last year China achieved major gains in efficiency. Just this past week there was news of Google achieving a sixfold reduction in memory requirements.
https://arstechnica.com/ai/2026/03/google-says-new-turboquant-compression-can-lower-ai-memory-usage-without-sacrificing-quality/
The technology will continue to improve.
I disagree with this. An LLM (a neural network) trained to generate inferences from a work need not contain a copy of that work.
Usually the goal is for LLMs to create inferences rather than verbatim copies. Granted, sometimes this fails and can result in copyright infringement. However, when they work as they’re supposed to and the inferences are not verbatim copies, I wouldn’t consider that infringing.
I’m skeptical. I’m very skeptical about the future of AI, given the crazy use of resources to maintain data centers and so on. What I’m not skeptical about is that the articles on this site are not news. It says OSNews, but when a site uses the word “infected” about a major movement in the computer world, it’s pure bias. I’m not saying it’s a wrong concept; I’m not saying it can’t be defended. I’m just saying it’s not news, it’s opinion, and it’s emotional. So OS opinion?
How people learn has evolved over time, and where LLMs/AI currently are is closer to our less evolved learning selves than to us now. But it’s catching up very quickly, and it wouldn’t surprise me if there’s a point where it blows past us in capability. There’s also the consciousness debate about whether AI can become, or already is, equal-but-different to “life”, particularly human life, in that regard. It doesn’t really matter if AI/SI becomes indistinguishable from humans, and in some areas it already is, because there will always be a segment of us who prefer or take comfort in knowing that something is the product of our own ingenuity and craftsmanship.
Some of us will always be more impressed by what we produce on our own, imperfections and all. No amount of AI stealing, imitating, or plagiarizing, all things we’ve done as a species since our beginning, changes that. I’m not worried AI will stifle humans’ innate curiosity and instinct for exploration, because that’s baked into our DNA. While creativity can have external influences, creativity itself originates internally. If AI is “ruining” anyone’s “fun”, that person wasn’t really committed or connected to the process in the first place.
The irony is that early artists using computers to create images were heavily criticized, often by older generations, as not making “real” art.
It’s a cycle that repeats with every new technology/medium. A new tool appears, some artists explore its potential, while others, more attached to established methods, dismiss it as illegitimate. Over time, the new approach becomes normalized and part of the cultural mainstream.
Then the next medium arrives, and the very people who once embraced the “new” form start arguing that this one isn’t “real” art.
Rinse and repeat.
People tend to appreciate technique as part of what it means to be a real artist. The thought process goes more or less like this:
Consider Picasso: he could paint classical portraits, so he went through the process of “earning the right to be called a painter”. If he threw a bucket of paint on a canvas, people would wonder what message he was trying to convey instead of calling him a fraud. Whereas if I go and throw a bucket of paint on a canvas, I am just a fool wasting both canvas and paint.
The difference in the tone of the debate is that, well, we got more and more advanced tools. A composer could write music quickly on the computer and even record a full orchestra using virtual sampled instruments, but he was still composing it. I have a degree in music and I have used a computer to help me as a tool.
I think what puts people off this time is not that this is a new technology or medium, but that the tool is executing the work itself rather than increasing the “execution potential” of the artist.
Is the music I stream any less “real” because it isn’t performed live by a group of musicians in front of me?
In the same way, AI can be seen as a tool, and as a form of art in its own right, since it’s human-created. In that sense, we may be dealing with a new medium where art is used to generate more art.
I think it was the dictator of Uzbekistan or Kyrgyzstan who put a ban on public music that is not performed by meat-and-flesh musicians. =)
You are pointing at exactly where I think these discussions often derail. A music stream is a record of a performance created by flesh-and-blood musicians, recorded to a digital medium and manipulated by electronic means. Yes, when I record with my friends, we often do rounds in Cubase/Logic/Pro Tools/Ardour, pick the best takes, etc. But we are ALWAYS the ones making the decisions, choosing where to cut, and so on.
With AI, the machine is making the editorial decisions for the artist and executing them. I think this is the point people dislike.
Honestly, I don’t have a full opinion formed yet and I quite appreciate some of the art that has been generated using AI tools. I guess crafting a good prompt is a cool art form in itself and one that requires skill to perfect =)
Shiunbird,
At least in principal I don’t think it needs to be all or nothing. You could use as much or as little AI content as you want and mix it how you please. I don’t think it would satisfy the AI critics though. Even if you point out you only used it to help in specific areas of the task, they will consider all AI offensive. In the end the productivity boost is real, so I think AI tools are here to stay regardless in some form or other.
Look at the content-aware AI tools in Adobe and in mobile photo apps… whether we see it as cheating or not, it’s undeniably a huge time saver. AI can do a good job at what used to be hours of tedious editing work. Is this good or bad for society? I don’t know, but it certainly saves time!
I think the issue is that we often confuse the process we’re familiar with, or prefer, with the only legitimate process.
You see this across generations. Older musicians, for example, may be horrified by a production process involving technology, dismissing the result as “not real” music. It’s a recurring pattern whenever new tools emerge.
At the end of the day, most people consuming art don’t really care how it was made. Their reaction is immediate and subjective: they either like it or they don’t, and they move on.
That’s why many people are already consuming AI-generated art without even realizing it. Their response isn’t based on the tool, but on whether the piece resonates. Just like someone might not connect with a Picasso simply because it doesn’t appeal to them.
We’re in a period of rapid technological change, and AI is likely here to stay. It will become normalized. If anything, it has the potential to democratize art by lowering technical barriers, allowing more people to bring their ideas to life. It could even help established artists reduce friction in realizing their vision.
Time will tell, but this kind of shift has happened before.
I wouldn’t say those views are unique. Most of the original demosceners deride AI in productions and they don’t tend to score well in competition.
Anyway, Revision Demoparty is about to kick off, as it does every year over Easter. There has been an increasing AI presence in the last year or two, so we can gauge whether there’s a shift in this thinking by how well any AI-assisted productions score.
And yet demos often contain algorithmically generated content, and what is AI if not another algorithm?