Seven companies—OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection—have committed to developing technology to clearly watermark AI-generated content. That will help make it safer to share AI-generated text, video, audio, and images without misleading others about the authenticity of that content, the Biden administration hopes. It’s currently unclear how the watermark will work, but it will likely be embedded in the content so that users can trace its origins to the AI tools used to generate it.
And how easy will it be for bad actors to just remove the watermark? If we live in a world where these tools can create new content out of everybody else’s stolen content, what’s stopping anyone from developing a tool to remove these watermarks?
This feels more like lip service than a real solution.
Doesn’t matter. I don’t believe the AI hype should exist at all. What’s to stop other countries from refusing to follow suit? And what if that country’s AI proves superior because it wasn’t watermarked? I don’t believe you can mix politics or political agendas with technology.
spiderdroid,
I agree, watermarks are not foolproof and nobody should think they are. There’s no putting this cat back in the bag.
Nothing ever has to be 100% reliable in order to be considered good enough. Besides, these tech giants can easily combine watermarking with their existing “content ID” systems and thereby detect that a newly uploaded video is actually a transformed version of a previous, AI-watermarked clip.
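Just to illustrate the kind of matching I mean, here is a rough sketch using a simple perceptual hash from the Python imagehash package; the file names and the distance threshold are placeholders of my own, and real content ID systems use far more robust fingerprints.

```python
# Toy "content ID" check: perceptual hashes change little under re-encoding,
# scaling, or mild cropping, so a small Hamming distance suggests a match.
from PIL import Image
import imagehash

known = imagehash.average_hash(Image.open("ai_watermarked_original.png"))
upload = imagehash.average_hash(Image.open("reencoded_upload.jpg"))

# Subtracting two hashes gives the Hamming distance between them.
if known - upload <= 8:   # threshold chosen arbitrarily for this sketch
    print("Upload looks like a transformed copy of known AI-generated content.")
else:
    print("No match against the fingerprint database.")
```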
sj87,
We need to consider the opposite too. Someone can add AI watermarks to legitimate content.
This actually became a problem with NFTs, where imposters mint NFTs of works they didn’t create themselves, even works by classic artists. Going by the NFTs on the blockchain, the real creator might appear to be the imposter.
I assume watermarks generated by AI services use cryptographic signatures to impede fraud, but for any software that generates images on the local machine (and many already do), public-key signatures don’t work, since the watermarking keys can be copied and/or the watermarking step removed entirely.
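To make that asymmetry concrete, here is a minimal signing sketch using the Python cryptography package and Ed25519; the payload and workflow are my own assumptions, not any vendor’s actual scheme.

```python
# Minimal signing sketch (assumed workflow, not a real provider's scheme).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # a hosted service can keep this secret
public_key = private_key.public_key()        # published so anyone can verify

generated_image = b"...bytes of an AI-generated image..."
signature = private_key.sign(generated_image)   # attached as the watermark payload

# Anyone holding only the public key can verify; this call raises
# cryptography.exceptions.InvalidSignature if the content was tampered with.
public_key.verify(signature, generated_image)
print("signature verifies")

# But if the generator runs on the user's own machine, private_key has to ship
# with it, so a determined user can copy the key (to forge signatures) or
# simply skip the signing step altogether.
```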
I believe AI is nothing more than rebadged if/then statements
spiderdroid,
We can deconstruct the most intelligent brains into simple primitives, but I believe you are underestimating the impact that artificial intelligence will have on our world. AI, and especially training, is still relatively expensive today. But as those costs come down, more and more businesses will take an interest. It will start with pilot projects, and they’ll say they’re not replacing jobs. But as health insurance rates go up, politicians raise minimum wages, and so on, humans will just become more expensive next to AI, and in the long run AI will win because businesses care more about their bottom line than they do about employees.
On the financial aspect and motivation, I can fully agree with Alfman!
The genie is out of the bottle.
The open source community has already caught up, in fact much faster than I ever imagined.
Image generation?
Stable Diffusion + Several good methods to post process
“Chat” applications with human like capabilities?
Llama 2 (thank you Facebook, it is weird to say that, though)
Also: Alpaca, Vicuna, Falcon, …
Coding?
Wizard Coder and many others
Voice synthesis?
Too many to list
There are now “model” repositories for almost every need:
https://modelzoo.co/ is one of them.
And they run very well on your laptop; some might need a powerful GPU, but many others can even be used directly on mobile devices (or a Raspberry Pi!).
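As a rough illustration of how low the barrier has become, here is a minimal local image-generation sketch using the Hugging Face diffusers package; the checkpoint name and prompt are just examples, and other models slot in the same way.

```python
# Minimal local text-to-image sketch (example checkpoint and prompt).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # one of many freely downloadable checkpoints
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                    # use "cpu" on machines without a GPU (much slower)

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")              # no cloud service in the loop, no mandated watermark
```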
Good luck trying to push the genie back into the bottle, or the toothpaste back into the tube, or …. any other metaphor for things being too late.
Ironically it was the legacy media together with social media giants who created the mentally ill mania about “fake news” and brainwashed people to react to it with rage. Now they have to spend huge amounts of money so that these hordes of zombies don’t get angry at their creators.
Any normal person with a somewhat functioning brain is able to live in a world filled with “fake content”. We don’t need to be angry about someone posting a fake video. We’ve just started recovering from the Trump derangement era where even pure comedy had to be “fact-checked” and marked “manipulated”.
sj87,
The problem isn’t just that there’s fake news. Lies could be caught by comparing them to recorded evidence. The problem with AI is that once it becomes good enough to trick fact checkers, recordings and photographic evidence are no longer proof that events actually happened as depicted, creating plausible deniability. This is a huge problem for the media, courts, and so on.
I believe this will have major implications for society in the upcoming years and there won’t be much anybody can do to stop it. Neither political regulation, nor watermarks can solve this.
Alfman,
The standard watermarks we use would not work, since this kind of content would be “re-recorded off screen”. They could play it on a monitor and record it again with a phone, clearing any video or audio signatures, especially when done with low-quality codecs.
And have you never had a friend or a relative you could not convince that something was fake? Even when the obvious tells of a terrible Photoshop job were visible, they would still want to believe in the clear forgery. What could you do about it back then? And what can you do now, if spotting the newer versions requires deep technical knowledge of signal processing algorithms?
sukru,
Indeed. The goal of watermarking (i.e. adding information through imperceptible changes) is in direct conflict with the goal of lossy codecs (i.e. only storing perceptible information). A watermark can still survive conversion through high-quality codecs at high bitrates, but if the watermarking algorithm is well known, it becomes very easy to detect and remove.
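A toy demonstration of that conflict, using a naive least-significant-bit watermark (nothing like what a vendor would actually ship, and the JPEG quality setting is arbitrary): a lossless round trip preserves every hidden bit, while a single lossy re-encode reduces recovery to roughly chance.

```python
# Toy LSB watermark vs. JPEG (illustration only; real schemes are more robust).
import numpy as np
from io import BytesIO
from PIL import Image

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)   # stand-in image
bits = rng.integers(0, 2, size=(256, 256), dtype=np.uint8)         # watermark payload

marked = photo.copy()
marked[..., 2] = (marked[..., 2] & 0xFE) | bits   # hide one bit per pixel in the blue LSB

def roundtrip(arr, fmt, **kwargs):
    buf = BytesIO()
    Image.fromarray(arr).save(buf, format=fmt, **kwargs)
    buf.seek(0)
    return np.asarray(Image.open(buf))

png = roundtrip(marked, "PNG")               # lossless codec
jpg = roundtrip(marked, "JPEG", quality=85)  # lossy codec discards "imperceptible" detail

print("recovered after PNG :", np.mean((png[..., 2] & 1) == bits))   # 1.0
print("recovered after JPEG:", np.mean((jpg[..., 2] & 1) == bits))   # ~0.5, i.e. chance
```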
I don’t recall this coming up exactly, but I know the type. They’ll defend personal beliefs religiously and don’t care about facts. Somehow social media and the internet have either amplified this, or at least made it more visible.
Alfman,
Okay, that example was a bit too specific, but facts not changing beliefs is a well-known issue: https://research.com/education/why-facts-dont-change-our-mind
In my experience, I was alienating close friends and family when trying to “rescue” them from false beliefs. They were really offended, almost as if it was a physical attack.
I think it made different conspiracies spread more. Some people were denying the moon landings, some were against vaccination, some believed there was a deep-state cabal running everything (possibly with some additional anti-semitic sauce), and some saw lizard people.
Now all those susceptible to misinformation more or less believe in all the misinformation at the same time.
sukru,
Yeah, it is an interesting topic. I often see people forming ideas from non-scientific and non-credible sources. I’m not sure why people are so easily fooled, but YouTube is rife with this, and in many cases I think misinformation channels are popular because they carry misinformation rather than because they are scientifically valid. People seek them out to satisfy their cognitive biases. In some cases it would be convenient to blame a lack of education, but then there are people who are educated and still reject the importance of evidence and data in their understanding.