As the US prepares for next year’s midterm elections, and for the slew of foreign and domestic online disinformation and propaganda likely to accompany them, it is crucial to develop sensible social and legal protections for the groups most likely to be targeted by digital spin campaigns. The timing is right to create a renewed blueprint for democratic internet governance, one that protects the diverse array of people affected by ongoing problems in this space.
For the last two years the Propaganda Research Lab at the Center for Media Engagement at UT Austin has been studying how various global producers of social media propaganda focus their strategies. One of the lab’s key findings in the US has been that these individuals—working for an array of political parties, domestic and foreign governments, political consulting firms, and PR groups—often use a combination of private platforms like WhatsApp and Telegram and more open ones like Facebook and YouTube in bids to manipulate minority voting blocs in specific regions or cities. For instance, we’ve found that they pay particular attention to spreading political disinformation among immigrant and diaspora communities in Florida, North Carolina, and other swing states.
Samuel Woolley (@samuelwoolley) is an assistant professor in the School of Journalism and program director of propaganda research at the Center for Media Engagement, both at UT Austin. His book, The Reality Game: How the Next Wave of Technology Will Break the Truth, discusses how we can prevent emergent technology from being used for manipulation. Miroslava Sawiris (@MiraSawiris) is a senior research fellow at GLOBSEC. She has led research projects analyzing the impact of disinformation campaigns on electoral processes in Europe and societal vulnerabilities to information manipulation. She is a review board member of the konspiratori.sk project, which advocates defunding disinformation sites, and she leads GLOBSEC’s Alliance for Healthy Infosphere, which brings together organizations from seven EU member states to push for meaningful regulation of the digital space.
While some of this content comes from US groups hoping to sway the vote for one candidate, much of it has murky origins and less than clear intentions. It’s not uncommon, for instance, to encounter content either purporting or seeming to come from users in China, Venezuela, Russia, or India, and some of it has hallmarks of organized governmental manipulation campaigns in those countries.
This is perhaps unsurprising considering what we now know about authoritarian-leaning foreign entities’ bids to influence political affairs in the United States and a variety of other countries around the globe. Both China and Russia continue to work to control Big Tech and, correspondingly, their populations’ experiences of the internet. And, indeed, our lab has gathered evidence of campaigns in which Americans of Chinese heritage—first- or second-generation immigrants in particular—are targeted with sophisticated digital propaganda bearing the features of similar efforts out of Beijing. We’ve seen suspicious social media profiles (thousands of which Twitter later deleted) seize upon anti-US and anti-democracy narratives—and effusively pro-Beijing ones—in the wake of the murder of George Floyd, the Capitol insurrection, the Hong Kong protests, and other pivotal events. In our interviews and digital field research around the 2020 US presidential election we encountered people of Arab, Colombian, Brazilian, and Indian descent being targeted by similar efforts. We also spoke to propagandists who were open about their efforts to manipulate broader immigrant, diaspora, and minority groups into, say, falsely believing Joe Biden was a socialist and that they therefore should not support him.
While the impact of China, Russia, and other authoritarian regimes’ control of their own “in-country” internets has been widely reported, these regimes’ propaganda campaigns obviously reverberate beyond any one nation-state’s borders. They affect communities with ties to those countries living elsewhere—including here in the US—and they serve as a model for countries looking to these undemocratic superpowers for indications of how to manage (or dominate) their own digital information ecosystems.
Russia, China, and other authoritarian states are a step ahead with their segmented versions of the internet, which are based on autocratic principles, surveillance, and suppression of freedom of speech and individual rights. These control campaigns bleed into other information spaces worldwide. For instance, research from the Slovak think tank GLOBSEC found Kremlin influence in the digital ecosystems of several EU member states. Its researchers argue that both passive and active Russian informational machinations shape public perceptions of governance and, ultimately, undermine European democracy.
However, democratic countries have also failed to rein in efforts to co-opt and control the internet. After years of naïve belief that the tech sector can and should regulate itself, which culminated in the social media-fueled Capitol insurrection, global policymakers and other stakeholders are now asking what a more democratic, more human rights-oriented internet should look like.
If the Biden administration wants to make good on its renewed commitment to transatlantic collaboration, management of the digital sphere should take center stage. As autocratic states develop and cement their influence, democracies need to catch up, and fast. While the EU has led the efforts to protect individual privacy rights and to combat disinformation and hate speech online, the task is far from complete. Even as legislative efforts such as the Digital Services Act and rules on artificial intelligence take shape, neither the EU nor the US can afford to go it alone. Democracies flourish in strong alliances, and risk crumbling without them.
We need a renewed blueprint for democratic internet governance. This is an unprecedented undertaking because our societies have no comparable legal or policy experience that can serve as a template for governing the digital sphere. Phenomena created by the digital revolution challenge our understanding of individual rights and force us to redefine them for the 21st century. Does freedom of speech mean automatic access to audiences spanning hundreds of thousands of users? What about users who might be particularly susceptible to manipulation or harassment? Are we sufficiently safeguarding the right to privacy online, a space in which a variety of dubious organizations continue to freely track our every move? Defining answers to these and other pressing questions will not be easy, especially as finding them requires collaboration among a number of often conflicting stakeholders: citizens and users, public servants, civil society groups, academics, and, crucially, the tech sector.