Adobe’s photo-editing flagship Photoshop is so successful that the brand is a synonym for digital fakery. Later this year it will become a standard-bearer for a proposed antidote: technology that tags images with data about their origins to help news publishers, social networks, and consumers avoid getting duped.
Adobe started working on its Content Authenticity Initiative last year with partners including Twitter and The New York Times. Last week, it released a white paper laying out an open standard for tagging images, video, and other media with cryptographically signed data such as locations, time stamps, and who captured or edited it.
Adobe says it will build the technology into a preview release of Photoshop later this year. That will be the first real test of an ambitious—or perhaps quixotic—response to concerns about the democracy-corroding effects of online misinformation and fake imagery.
“We imagine a future where if something in the news arrives without CAI data attached to it, you might look at it with extra skepticism and not want to trust that piece of media,” says Andy Parsons, who leads Adobe’s work on the standard.
Under CAI’s system, Photoshop and other software would add metadata to images or other content to log key properties and events, such as which camera or person took a photo and when the file was edited or published to a news site or social network. Cryptography would be used to digitally sign the metadata and bind new tags to the old ones, creating a record of an image’s life.
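As a rough illustration of how such a chain of signed claims might work, the Python sketch below appends metadata "claims" that each commit to a hash of the previous entry and are signed before being added to the record. The field names, the choice of Ed25519 keys, and the library used are assumptions for illustration, not the actual CAI format laid out in the white paper.

```python
# Illustrative sketch only: a simplified stand-in for the kind of signed,
# chained provenance metadata the CAI white paper describes. Field names
# and the Ed25519 signing scheme are assumptions, not the CAI specification.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def add_claim(history, claim, signing_key):
    """Append a signed claim that also commits to a hash of the previous entry."""
    prev_hash = None
    if history:
        prev_hash = hashlib.sha256(
            json.dumps(history[-1], sort_keys=True).encode()
        ).hexdigest()
    payload = {"claim": claim, "prev": prev_hash}
    signature = signing_key.sign(json.dumps(payload, sort_keys=True).encode())
    history.append({"payload": payload, "signature": signature.hex()})
    return history


# Hypothetical lifecycle: capture, then an edit, each signed by a different key.
camera_key = Ed25519PrivateKey.generate()
editor_key = Ed25519PrivateKey.generate()
record = []
add_claim(record, {"event": "captured", "device": "ExampleCam",
                   "time": "2020-08-01T12:00Z"}, camera_key)
add_claim(record, {"event": "edited", "tool": "Photoshop (preview)",
                   "time": "2020-08-02T09:30Z"}, editor_key)
```

Because each entry commits to the one before it, quietly removing or reordering steps in an image's history would break the chain, which is the tamper-evident property the standard is after.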
If that system gains traction, consumers could one day be prodded to mull the origins of images and video they see on social networking sites.
The simplest way would be for services like Twitter to allow users to inspect the tags on an image or video. The standard could also enhance the automated systems that social sites have deployed to add warnings to posts spreading untruths, like those Twitter and Facebook place on Covid-19 misinformation. Posts about an unfolding tragedy such as a shooting might earn a warning label if they use images that tags indicate come from a different location, for example.
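To make that concrete, here is a hedged sketch of the kind of automated check a platform might run: comparing the capture location recorded in an image's provenance tags with the location of the event a post claims to show. The tag field, the distance threshold, and the function names are invented for illustration and are not part of the CAI standard.

```python
# Illustrative sketch only: flag a post when the tagged capture location is
# far from the event it claims to depict. Thresholds and fields are assumptions.
from math import asin, cos, radians, sin, sqrt


def distance_km(a, b):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def needs_location_warning(provenance_tags, event_location, threshold_km=50):
    """Suggest a warning label if the capture location is far from the claimed event."""
    captured_at = provenance_tags.get("capture_location")  # (lat, lon) or None
    if captured_at is None:
        return False  # no tag: rely on other signals rather than warn outright
    return distance_km(captured_at, event_location) > threshold_km
```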
It’s unclear whether tech companies will find the tags useful or reliable enough to push at users. Twitter declined to say when it might test the technology, but a spokesperson said in a statement that it will continue to work on the project. “This white paper attempts to provide clear insights into the unique potential of the Content Authenticity Initiative across all media and online platforms,” the statement said. Facebook did not respond to a request for comment.
To make authenticity tagging work, makers of cameras, creators of editing software like Photoshop, and publishers and social platforms will need to support the standard. Trusted authorities selected by the project would control access to the digital certificates needed to cryptographically sign the metadata.
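To give a rough sense of what that trust model could mean for a verifier, the sketch below checks a chained record like the one sketched earlier, accepting a claim only if its signature validates against a key on a trusted list and the hash chain is intact. The real design would rest on certificates issued by the project's chosen authorities rather than a hard-coded list of public keys, so this is an assumption-laden simplification.

```python
# Illustrative sketch only: verify a chained provenance record, trusting only
# signatures from keys vouched for by some authority. The allow-list trust
# model here is a simplification of the certificate scheme in the white paper.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def _verifies(key: Ed25519PublicKey, signature: bytes, data: bytes) -> bool:
    try:
        key.verify(signature, data)
        return True
    except InvalidSignature:
        return False


def verify_record(record, trusted_keys):
    """Return True if every claim is signed by a trusted key and the chain is unbroken."""
    prev_hash = None
    for entry in record:
        payload = entry["payload"]
        signature = bytes.fromhex(entry["signature"])
        if payload["prev"] != prev_hash:  # chain broken or entries reordered
            return False
        data = json.dumps(payload, sort_keys=True).encode()
        if not any(_verifies(key, signature, data) for key in trusted_keys):
            return False  # signer not vouched for by a trusted authority
        prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
    return True
```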
The world should get a chance to try out this vision for a more transparent internet before the end of this year. Adobe plans to build the standard into a prerelease version of Photoshop, as well as its Behance social network where creatives showcase their work.
Truepic, a startup whose photo-verification software is used in apps from insurers and other clients, plans to release beta software that builds CAI tagging into an Android smartphone’s camera and cryptographic hardware. Sherif Hanna, a vice president at the company, says embracing the open standard offers a chance to see wider usage of ideas that Truepic was already working on. Google declined to comment on whether it was taking an interest in CAI; Apple did not respond to a request for comment.
The first live test of CAI in the news business will most likely come from The New York Times. The newspaper’s head of R&D, Marc Lavallee, had dreams of testing the technology at a major media event this year, perhaps a political convention. Due to the pandemic, he is now looking to events after the presidential election.