OpenAI CEO Sam Altman.
Photo: Mike Coppola / Staff (Getty Images)

OpenAI announced Tuesday that it’s adding watermarks to images generated by its AI tools, an effort to combat growing fears over the coming deepfake tsunami. Images spun up with DALL-E and other OpenAI services will include a visual watermark plus details about their origin in the metadata, the information encoded in the generated file. Here’s the problem: all you have to do to remove the metadata watermark is take a screenshot. That means OpenAI’s “solution” could leave you more confused, not less, once it goes into effect.

Imagine looking at a suspicious image. If you check and discover the AI watermark, case closed. But if you’re looking at an AI-generated image that’s had its watermark removed, checking the metadata could give you a false sense of security. In other words, looking for the watermark could actually mean you have less information than when you started.
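To see why the protection is so fragile, consider what a screenshot does: it re-encodes the pixels and nothing else. Here’s a minimal Python sketch of the same effect, assuming the Pillow library is installed and using hypothetical filenames. Pillow’s info dict only surfaces some embedded metadata, and C2PA manifests actually live in dedicated file segments, but the principle is identical: copy the pixels, and the provenance data is gone.

```python
# Minimal sketch: re-encoding only the pixels (what a screenshot does)
# leaves any embedded metadata behind. Requires Pillow (pip install pillow);
# filenames are hypothetical placeholders.
from PIL import Image

original = Image.open("dalle_output.png")
print("Original metadata keys:", list(original.info.keys()))

# Build a new image from the pixel values alone; nothing from the
# original file's metadata comes along for the ride.
pixels_only = Image.new(original.mode, original.size)
pixels_only.putdata(list(original.getdata()))
pixels_only.save("screenshot_equivalent.png")

reloaded = Image.open("screenshot_equivalent.png")
print("Re-encoded metadata keys:", list(reloaded.info.keys()))  # typically empty
```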

OpenAI itself explains that people might even remove the watermark by accident. When you upload an image to social media, most platforms automatically strip metadata from the file because it can sometimes reveal a user’s personal information. So when you post one of your AI creations on Instagram, you might unwittingly cause a fake image fiasco.

A screenshot of OpenAI’s watermark system, which embeds a visual watermark and source details in generated files.
Graphic: OpenAI

The company maintains this is still a good idea. “We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information,” OpenAI wrote in a blog post. However, the company admits that the watermark “is not a silver bullet.” OpenAI did not immediately respond to a request for comment.

OpenAI can’t take all the blame here. The company is adopting a new standard developed by the Coalition for Content Provenance and Authenticity (C2PA), an initiative spearheaded by Adobe in partnership with a variety of companies including Arm, the BBC, Intel, Microsoft, The New York Times, and X/Twitter. Meta announced it will add its own tags to AI-generated images, though it’s not clear exactly how the company plans to integrate the C2PA standard.

You can already run an image through Content Credentials Verify, a checking tool built on the C2PA standard. Just don’t assume you’re safe if your image comes back clean.
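If you’re curious what a verifier is even looking for: in JPEGs, C2PA manifests ride along in JUMBF boxes, which carry recognizable byte labels. Here’s a crude Python heuristic, with a hypothetical filename, that checks only whether those bytes are present at all. It’s no substitute for a real validator like Content Credentials Verify, and it mirrors the article’s whole point: a negative result proves nothing, because a screenshot or a platform re-encode strips these bytes entirely.

```python
# Crude heuristic, not a real C2PA validator: scan a file's raw bytes
# for the JUMBF/C2PA labels that carry Content Credentials in JPEGs.
# The filename is a hypothetical placeholder.
def might_carry_c2pa(path: str) -> bool:
    """Return True if C2PA/JUMBF byte labels appear in the file.
    Absence proves nothing: re-encoding strips them entirely."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data or b"jumb" in data

print(might_carry_c2pa("suspicious_image.jpg"))
```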
