Law enforcement has used surveillance technology to monitor participants in the ongoing Black Lives Matter protests, as it has with many other protests in US history. License plate readers, facial recognition, and wireless text message interception are just some of the tools at its disposal. None of this is new, but the attention domestic surveillance is receiving in this moment further exposes a great fallacy among policymakers.
All too often, there is a tendency among the policy community, particularly those whose work involves national security, to discuss democratic tech regulation purely in terms of geopolitical competition. There are arguments that regulating big tech is vital to national security. There are counterarguments pushing the exact opposite: that promoting big US tech “champions” with minimal regulation is vital to US geopolitical interest, especially vis-à-vis “competing with China.” Permutations of these arguments abound.
To claim these arguments don’t hold water in Washington would suggest a certain naivete; that’s not what I’m saying. The very fact that major tech firms invoke these narratives to argue for lax regulatory oversight is evidence of their influence. But amid these framings, policymakers and commentators shouldn’t miss that democratically regulating technology is inherently vital to democracy.
Those who claim the United States does not have a history of oppressive surveillance need to read books like Simone Browne’s Dark Matters: On the Surveillance of Blackness or articles like Alvaro M. Bedoya’s “The Color of Surveillance.” Surveillance in the US dates back to the transatlantic slave trade, and its use has consistently targeted, and fallen hardest on, marginalized and systemically oppressed communities.
Post-9/11 surveillance of Muslim communities—including through CIA-NYPD cooperation—and the FBI’s COINTELPRO from 1956 to 1971, which targeted, among others, Black civil rights activists and supporters of Puerto Rican independence (though also the KKK), are notable state surveillance programs that may come to mind. But the history of surveillance in the US is much richer, from custodial detention lists of Japanese Americans to intense surveillance of labor movements to stop-and-frisk programs that routinely target people of color.
Thus, “rather than seeing surveillance as something inaugurated by new technologies, such as automated facial recognition or unmanned autonomous vehicles (or drones),” Browne writes, “to see it as ongoing is to insist that we factor in how racism and antiblackness undergird and sustain the intersecting surveillances of our present order.” Browne, along with numerous other scholars, lays bare the origins of digital surveillance and harm, whose effects remain oppressive and disparate today.
Virginia Eubanks’ Automating Inequality details the use of improperly regulated algorithms in state benefit programs, often with errors and unfairness that reinforce a “digital poorhouse.” These algorithms monitor, profile, and ultimately punish the poor across the US, as in Indiana, where an automated system treated mistakes on public benefit applications as a “failure to cooperate” and rejected the claims. Ruha Benjamin’s Race After Technology explores how automation can deepen discrimination while appearing neutral: the sinister myth of algorithmic objectivity. The obvious example might be facial recognition, but it goes much further: sexist résumé-reviewing algorithms, skin cancer predictors trained mostly on lighter-toned skin, gender and ethnic stereotypes literally quantified in the word embeddings used in machine learning.
Safiya Umoja Noble is another scholar who has revealed these deep-seated issues. In Algorithms of Oppression, she writes that search engine queries for “‘Black women’ offer sites on ‘angry Black women’ and articles on ‘why Black women are less attractive,’” digitally perpetuating “narratives of the exotic or pathetic black woman, rooted in psychologically damaging stereotypes.” Algorithmic unfairness goes well beyond technical design; it also reflects a US digital culture that forgoes discussion of how tech is interwoven with structural inequalities. Noble writes, “When I teach engineering students at UCLA about the histories of racial stereotyping in the US and how these are encoded in computer programming projects, my students leave the class stunned that no one has ever spoken of these things in their courses.”