Earlier this month, a German court upheld the classification of the country’s nationalist far-right party, Alternative for Germany (AfD), as a suspected “extremist” organization, clearing the way for surveillance by the country’s domestic intelligence agency.
Campaign ads placed by AfD have been allowed to appear on Facebook and Instagram anyway, according to a new report from the nonprofit advocacy organization Ekō, shared exclusively with WIRED. Researchers found 23 ads from the party that accrued 472,000 views on Facebook and Instagram and appear to violate Meta’s own policies around hate speech.
The ads push the narrative that immigrants are dangerous and a burden on the German state, ahead of the European Union’s elections in June.
One ad placed by AfD politician Gereon Bollman asserts that Germany has seen “an explosion of sexual violence” since 2015, specifically blaming immigrants from Turkey, Syria, Afghanistan, and Iraq. The ad was seen by between 10,000 and 15,000 people in just four days, between March 16 and 20. Another ad, which had more than 60,000 views, features a man of color lying in a hammock. Overlaid text reads, “AfD reveals: 686,000 illegal foreigners live at our expense!”
Ekō was also able to identify at least three ads that appear to have used generative AI to manipulate images, though only one was run after Meta put its manipulated media policy into place. One shows a white woman with visible injuries, with accompanying text saying “the connection between migration and crime has been denied for years.”
“Meta, and indeed other companies, have very limited ability to detect third-party tools that generate AI imagery,” says Vicky Wyatt, senior campaign director at Ekō. “When extremist parties use those tools with their ads, they can create incredibly emotive imagery that can really move people. So it’s incredibly worrying.”
In its submission to the European Commission’s consultation on election guidelines, obtained by a freedom of information request made by Ekō, Meta says “it is not yet possible for providers to identify all AI-generated content, particularly when actors take steps to seek to avoid detection, including by removing invisible markers.”
Meta’s own policies prohibit ads that “claim people are threats to the safety, health, or survival of others based on their personal characteristics” and ads that “include generalizations that state inferiority, other statements of inferiority, expressions of contempt, expressions of dismissal, expressions of disgust, or cursing based on immigration status.”
“We do not allow hate speech on our platforms and have Community Standards that apply to all content—including ads,” says Meta spokesperson Daniel Roberts. “Our ads review process has several layers of analysis and detection, both before and after an ad goes live, and this system is one of many we have in place to protect European elections.” Roberts told WIRED the company plans to review the ads flagged by Ekō but didn’t respond to questions about whether the German court’s designation of the AfD as potentially extremist would invite further scrutiny from Meta.
Targeted ads, says Wyatt, can be powerful because extremist groups can more effectively target people who might sympathize with their views and “use Meta’s ads library to reach them.” Wyatt also says this allows such groups to test which messages are most likely to resonate with voters.