Seven companies (OpenAI, Microsoft, Google, Meta, Amazon, Anthropic, and Inflection) have committed to developing technology that clearly watermarks AI-generated content. The Biden administration hopes this will make it safer to share AI-generated text, video, audio, and images without misleading anyone about the content's authenticity. It is not yet clear how the watermarking will work, but the signal will likely be embedded in the content itself so that users can trace it back to the AI tool that generated it.
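The commitments don't specify a scheme, but one common family of techniques hides an identifier invisibly in the content itself. Here is a toy sketch assuming a naive least-significant-bit (LSB) scheme over raw pixel values; this is purely illustrative and not any of these companies' actual method:

```python
# Toy LSB watermark: hide an ID string in the low bits of pixel values.
# Illustrative only -- not any provider's real scheme.

def embed(pixels: list[int], tag: str) -> list[int]:
    """Write each bit of `tag` into the least significant bit of a pixel."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, set it to the tag bit
    return out

def extract(pixels: list[int], length: int) -> str:
    """Read `length` bytes of the tag back out of the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode()

pixels = list(range(200))            # stand-in for raw image data
marked = embed(pixels, "gen:ai-v1")  # hypothetical generator tag
print(extract(marked, 9))            # -> gen:ai-v1
```

The marked pixels differ from the originals by at most one brightness level each, which is imperceptible to a viewer but machine-readable.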
And how easy will it be for bad actors to simply remove the watermark? If these tools can already create new content by scraping everyone else's, what stops anyone from building a tool that strips the watermarks back out?
This feels more like lip service than a real solution.