On March 14, roughly three weeks after Russian troops stormed across Ukraine’s borders to begin a years-long, bloody war, Ukrainian president Volodymyr Zelensky appeared on a Ukrainian television station and announced his unconditional surrender. The president, wearing his now-iconic military green long-sleeve shirt, appeared to stare into the camera and claim the Ukrainian military was “capitulating” and would “give up arms.”

Deepfake video of Volodymyr Zelensky surrendering surfaces on social media

Another brief video surfaced on social media around the same time, appearing to show Zelensky’s foil, Russian president Vladimir Putin, similarly announcing a peace deal.

“We’ve managed to reach peace with Ukraine,” Putin appears to say while sitting behind a wooden desk.

Both the Zelensky and Putin videos were shared tens of thousands of times on various social media sites. They were also both complete fabrications, materialized into the world using deepfake technology powered by deep learning artificial intelligence models. They are among the clearest examples to date of a once-theoretical reality: Deepfake videos weaponized during war.

Anyone with even a modicum of training in spotting digitally altered media could probably sense something off about the two surrender videos. Both clips featured odd facial gestures and unnatural lighting and shadows, all tell-tale signs of AI manipulation.

But even though the two videos were quickly debunked, their rapid proliferation online led to a flurry of commentary and news articles warning of the real danger of altered videos being used to confuse and divide the public during a time of war. New research suggests this uptick in deepfakes, and the anxiety around their distribution, could be contributing to an even more difficult problem to solve: people quickly disregarding legitimate media as deepfakes. That, in turn, leads to further erosion of trust in what’s real online.

Those were some of the findings researchers from University College Cork observed as part of a recent study published in the journal PLOS ONE. The researchers picked out nearly 5,000 tweets posted during the first seven months of 2022 in an effort to analyze the role deepfakes may play in wartime misinformation and propaganda. It’s the first study of its kind to empirically analyze the effect of deepfakes during a time of war.

Though the study does reveal plenty of AI-manipulated images and videos, a shocking portion of the tweets supposedly discussing deepfakes actually involved users falsely characterizing real, legitimate images and videos as digitally altered. The first-of-its-kind findings add new evidence bolstering past researchers’ fears that the rising quality and proliferation of deepfake videos online could lead to an insidious scenario where bad actors can simply claim a video was a “deepfake” in order to dismiss it.

“What we found was that people were using it [the term deepfake] as a buzzword to attack people online,” UCC School of Applied Psychology researcher and study co-author John Twomey told Gizmodo. The term deepfake, like “bot” or “fake news” before it, is being weaponized against media or information users simply disagree with.

“It [the study] is highlighting how people are using the idea of deepfakes and becoming hyper skeptical, in many ways, to real media, especially when deepfakes aren’t widely prevalent as it is,” he added. “More people are aware of the deepfakes as opposed to them being actually prevalent.”

Much of that mismatch stems from the news media’s over-coverage of the issue. Ironically, well-meaning coverage from journalists warning about the dangers of deepfakes may unintentionally erode trust in the media more generally.

“We need to consider if the news focus on deepfakes is disproportionate to the threat we are currently facing and whether this response is creating more distrust and contributing to an epistemic crisis,” the researchers wrote.

What did the study find?

The study, aptly titled “Do Deepfake Videos Undermine our Epistemic Trust?”, sought to analyze the ways discussions of deepfakes during a time of war could degrade public knowledge and shared truth. Using Twitter’s API, the researchers pulled 4,869 relevant tweets discussing deepfakes between January 1, 2022, and August 1 of the same year. Twomey says the researchers decided to focus on Twitter because it skews more toward journalism and political activism than other social media platforms where deepfakes, or discussions of them, may proliferate.
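
The paper’s collection pipeline isn’t reproduced here, but the step described above, pulling keyword-matched tweets from a fixed date window through Twitter’s API, can be sketched roughly as follows. This is an illustrative sketch only, assuming academic-tier access to the v2 full-archive search endpoint via the tweepy library; the bearer token, query string, and tweet cap are hypothetical placeholders, not the study’s actual parameters.

    # Illustrative sketch (not the study's code): pulling deepfake-related tweets
    # from a fixed date window with Twitter's v2 full-archive search via tweepy.
    # The bearer token, query, and limits below are hypothetical placeholders.
    import tweepy

    client = tweepy.Client(bearer_token="YOUR_ACADEMIC_BEARER_TOKEN")

    # Match English-language tweets mentioning deepfakes, excluding retweets.
    query = '(deepfake OR deepfakes OR "deep fake") lang:en -is:retweet'

    collected = []
    for tweet in tweepy.Paginator(
        client.search_all_tweets,
        query=query,
        start_time="2022-01-01T00:00:00Z",  # study window opens Jan 1, 2022
        end_time="2022-08-01T00:00:00Z",    # and closes Aug 1, 2022
        tweet_fields=["created_at", "public_metrics"],
        max_results=500,                    # per-request ceiling for this endpoint
    ).flatten(limit=5000):                  # stop near the ~4,869 tweets the study analyzed
        collected.append((tweet.id, tweet.created_at, tweet.text))

    print(f"Collected {len(collected)} tweets for qualitative analysis.")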

The researchers saw an uptick in deepfake-related content on Twitter in the weeks leading up to Russia’s invasion, with news sites and commentators speculating about whether or not Putin would use altered media as part of a ruse to justify military action. The largest number of deepfake-related tweets appeared between March 14 and 16, right around the time the Zelensky and Putin videos started gaining attention. Though many of the Twitter users responding to the Zelensky tweet criticized the quality of the deepfake, others fearfully described it as a potential new weapon of war.

“You’d think deepfakes are harmless if you’ve only seen silly videos of deepfaked Keanu Reeves,” one of the tweets read. “Unfortunately deepfakes can be a new and vicious type of propaganda,” one user tweeted. “We’ve seen it now with deepfakes of the Russian and Ukrainian leaders.”

The Zelensky deepfake was particularly concerning because it originated from an otherwise reputable news source, Ukraine 24. The news station claims the deepfake, which appeared on one of its webcasts, was the result of a malicious hack it attributed to the Russian government. Russia never claimed credit for the supposed hack. Zelensky himself quickly posted a follow-up video debunking the deepfake, but only after it had gathered attention on social media. Had the fake been more convincing, something advances in rapidly evolving generative AI models could soon make possible, it could have done far more damage.

“The usual indicator of truthfulness and trustworthiness of online information, the source of the video, was undermined by the hack,” the researchers wrote. “If the video had been more realistic and more widely believed, it may have had a more harmful impact.”

The ‘deepfake defense’ is picking up steam

Bad actors and liars are already using so-called “deepfake defenses” to try to weasel out of accountability. Lawyers representing a rioter involved in the January 6 attack on the Capitol previously attempted to convince a jury that video footage presented at trial, clearly showing their client jumping a barricade with a holstered weapon, was actually a deepfake. He was ultimately convicted and sentenced to seven years in prison.

In another recent case, a lawyer representing Elon Musk tried to invoke the deepfake defense to cast doubt on a legitimate and widely covered 2016 interview in which the billionaire claimed his vehicles can drive autonomously “with greater safety than a person.” The judge in that case rebuked the lawyer’s maneuvers, calling them a “deeply troubling” obfuscation that could do lasting damage to the legal system. Gizmodo recently recounted the saga of the “deepfake cheer mom,” who found herself the victim of a global media onslaught after a teen falsely accused her of manipulating a video supposedly showing her vaping. The video was real.

All of those cases, successful or not, attempt to use public concern over the pervasiveness of deepfakes to cast doubt on reality. That phenomenon, which academics dub “the liar’s dividend,” could have disastrous implications during times of war.

Wartime versions of the liar’s dividend are currently playing out in real time in Gaza, where rapid-fire stories supposedly debunking images and videos as deepfakes aren’t holding up to scrutiny. In one of the more widely publicized examples, commentators claimed an image supposedly depicting a burned Israeli baby was the product of AI-generated propaganda after it was labeled as inauthentic by one generative AI image detector. Further analyses of the image, however, showed it was almost certainly authentic. Pro-Israel commenters have also tried to discredit legitimate media posted by pro-Palestinian activists by claiming it was deepfaked.

Twomey would not speculate on what his findings from the Russia-Ukraine conflict could suggest about the current information firestorm, but he said his research shows that deepfakes, and denials of them, can be used to sow confusion during times of war.

“The evidence in this study shows that efforts to raise awareness around deepfakes may undermine our trust in legitimate videos,” Twomey said. “With the prevalence of deepfakes online, this will cause increasing challenges for news media companies who should be careful in how they label suspected deepfakes in case they cause suspicion around real media.”
