Mashable’s series Algorithms explores the mysterious lines of code that increasingly control our lives — and our futures.


Algorithms shape our lives, some more gently than others. 

From dating apps to news feeds to streaming and purchase recommendations, we have become accustomed to a subtle prodding by unseen instruction sets, themselves generated by unnamed humans or opaque machines. But there is another, not-so-gentle side to the way algorithms affect us. A side where the prodding is more forceful, and the consequences more lasting, than a song not to your liking, a product you probably shouldn’t have bought, or even a date that fell flat. 

Facial recognition leading to wrongful arrests. Automated license plate readers resulting in children held at gunpoint. Opaque software doubling the length of prison sentences for minor crimes. 

Algorithms have the power to drive pain and oppression, at scale, and unless there is an intentional, systematic effort to push back, our ever-increasing reliance on algorithmic decision-making will only lead us further down a dark path. 

Thankfully, there are people doing just that, and they’re ready for you to join in the fight. 


How do you argue with an algorithm?

For many entwined in the U.S. criminal justice system, this is not a theoretical question. As society becomes increasingly comfortable outsourcing its decision-making on relatively minor matters like dinner recommendations or driving directions, courts and police departments across the country have already gone all in. 

The fields of predictive policing and criminal risk assessment both rely on massive datasets to make ostensibly informed estimates about whether individuals will commit crimes in the future. These estimates factor into criminal sentencing by judges, and can increase the length of prison sentences for even relatively low-stakes crimes. In one such case, reported by ProPublica in 2016, a Wisconsin man convicted of stealing a push lawnmower had the length of his prison sentence doubled — from one to two years — after risk assessment software assigned him a high recidivism score. 

In an investigation published this week, the Tampa Bay Times revealed that police in Pasco County, Florida, rely on an algorithm to attempt to predict who is likely to commit crimes. Law enforcement then reportedly harasses those individuals, relentlessly. 

“Potential prolific offenders are first identified using an algorithm the department invented that gives people scores based on their criminal records,” reports the Times. “People get points each time they’re arrested, even when the charges are dropped. They get points for merely being a suspect.”
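
Stripped to its essentials, what the Times describes is a running point tally over a person's record. The sketch below illustrates that kind of logic; Pasco County has not published its actual code, so every category name and point value here is an assumed placeholder, not the department's real formula.

```python
# A hypothetical point-tally, loosely modeled on the Times' description.
# Pasco County's actual algorithm is not public; the categories and weights
# below are illustrative assumptions, not the department's real values.
from dataclasses import dataclass


@dataclass
class RecordEntry:
    kind: str  # e.g. "conviction", "arrest", "arrest_charges_dropped", "suspect"


# Assumed weights, for illustration only.
POINTS = {
    "conviction": 3,
    "arrest": 2,
    "arrest_charges_dropped": 2,  # per the Times, dropped charges still count
    "suspect": 1,                 # merely being named a suspect still counts
}


def prolific_offender_score(history: list[RecordEntry]) -> int:
    """Sum points over a person's record; a higher score means more scrutiny."""
    return sum(POINTS.get(entry.kind, 0) for entry in history)
```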

Unfortunately, criminal justice and artificial intelligence experts insist the data fed into the algorithmic models powering these risk assessment tools is riddled with bias, producing biased results that can ruin real lives. And even when an algorithm isn't provably biased, it can be put to inappropriate use: In the case of the Pasco police, the reported goal was to harass people so persistently that they moved away.

“There is no technical solution that can create an unbiased risk assessment tool.”

AI Now is a research and policy organization, founded in 2017 and based at New York University, which explores the “social implications of artificial intelligence.” In a 2019 paper examining how civil rights violations shape predictive policing, AI Now highlighted 13 jurisdictions in the U.S. that developed predictive policing models “while under government commission investigations or federal court monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices.”

In other words, the data being fed into the systems was itself the product of racial bias. 

“Given that these algorithms are trained on inherently biased policing data, and deployed within contexts that are in many ways racist at a systemic level — there is no technical solution that can create an unbiased risk assessment tool,” explained AI Now Technology Fellow Varoon Mathur over email. “In essence, the very design and conception of these assessment tools seek to strengthen such systems further, which leads them to be irrevocably biased towards already marginalized populations.”

The bias inherent in risk assessment tools has become especially dire during the coronavirus pandemic. As officials decide which prisoners to release early in an effort to prevent prison outbreaks, algorithms predicting inmates’ recidivism rates have the potential to decide who lives and who dies. 

In March of this year, Attorney General William Barr instructed the Bureau of Prisons to factor inmates’ scores from PATTERN (Prisoner Assessment Tool Targeting Estimated Risk and Needs), an algorithm-powered risk assessment tool, into decisions about whether to transfer inmates to home confinement as the coronavirus bore down on the country. 

Too high a PATTERN score, and stewing in jail it is for you. 

Federal Bureau of Prisons Director Michael Carvajal, left, is sworn in before a June Senate Judiciary Committee hearing on incarceration during COVID-19.


Image: Erin Scott / Getty

The Partnership on AI, a San Francisco-based organization founded in late 2016, works to (among other goals) “advance public understanding of AI.” Alice Xiang, the Partnership on AI’s head of fairness, transparency, and accountability research, explained over email that there is reason to be concerned about bias in risk assessment tools — especially now. 

“As highlighted in the criminal justice report we released last year and an issue brief on the use of the PATTERN risk assessment tool in the federal COVID response, many of our staff and Partners are concerned about the bias and accuracy issues associated with the use of risk assessment tools in the criminal justice system,” wrote Xiang.

Going further, Xiang explained that the “consensus view in our report was that these tools should not be used in the pretrial context to automate release decisions.”

Both AI Now’s Mathur and the Partnership on AI’s Xiang see an informed and vocal population as a key element in the fight for a more just and equitable algorithmic future.

“As consumers of AI systems, we should ask the developers behind AI products how they collect their data, and how their algorithms are developed,” explained Xiang. “It is easy to think of AI systems as being purely objective, but the reality is that they are the amalgam of many human decisions.”

Mathur took this one step further, noting that an algorithm’s supposed “fairness” is really beside the point. 

“We push back against biased risk assessment tools, not simply by asking if its computational output is ‘fair,’ but by asking how such tools either bolster or reverse specific power dynamics in place,” he wrote. “Doing so then provides a means for communities and real people to have the necessary and critical conversations that can lead to tangible actions moving forward.” 

A wrench in the machine

Back in 2018, if you decided to go to some popular malls in Orange County, California, just using their parking lots could’ve put you on local police’s radar. Unbeknownst to visitors, the malls’ owner, the Irvine Company, at the time worked with Vigilant Solutions — a private surveillance company that sells data to law enforcement.

As you drove into the parking lot, an automated license plate reader (ALPR) would log the arrival of your car. It stored your arrival and departure times, and added that information to an ever-growing database that was in turn shared with Vigilant Solutions, and made available to police. The Irvine Company collected ALPR data until July 2018, when the Electronic Frontier Foundation (EFF), a nonprofit digital rights group, called it out over the practice.

An ALPR mounted to a police car.


Image: Suzanne Kreiter / Getty

Even as the Irvine Company stopped collecting license plate data to give to police, ALPRs are still scattered throughout various California cities. If you drive around Huntington Beach, for example, you pass more ALPRs — some of which are also owned by Vigilant Solutions according to the EFF’s Atlas of Surveillance — which may be sharing your location data with ICE. The majority of California law enforcement agencies use ALPRs — often fixed to light poles or on vehicles — and several do so without following state law meant to protect individuals’ privacy, according to a February report by the state’s auditor. 

An innocent trip around town may contribute to a form of algorithm-supported mass surveillance that is taking over the United States, claiming real-life victims in the process. 

This is not theoretical. In early August, a Black family — including four children, all under the age of 17 — was held at gunpoint by police in Aurora, Colorado. Face down in a hot parking lot, some of them handcuffed, the children cried for help. 

No one, except the police, had done anything wrong. 

The issue, Aurora police later claimed, was an error with their ALPR system. It confused the family’s SUV with a stolen motorcycle from out of state. The police ignored the obvious inconsistency and drew their guns anyway. The Black family’s frustrating police encounter came at a time of increased focus on systemic racism in American policing. 

ALPR systems, which are designed to rapidly read every single license plate that passes through their field of vision, depend on algorithms and machine learning to translate the captured images into machine-readable characters. And, according to Kate Rose, a security researcher and founder of the pro-privacy fashion line Adversarial Fashion, they make tons of mistakes. 

“The specificity on these systems is low because they’re meant to ingest thousands of plates a minute at high speeds, so they can read in things like billboards or even picket fences by accident,” she wrote over email. 
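
To see why those junk reads happen, consider a minimal sketch of a generic plate-reading pipeline, written here with the open-source OpenCV and pytesseract libraries. Commercial ALPRs use trained detectors and plate-specific recognizers, so treat this as an illustration of the general approach rather than any vendor's actual system: it hunts for plate-shaped rectangles and runs OCR on whatever it finds, which is exactly how billboards, fences, and printed shirts end up in the database.

```python
# A minimal, generic plate-reading sketch using OpenCV and pytesseract.
# Not any vendor's real pipeline; it shows the basic shape of the task.
import cv2
import pytesseract


def read_plate_candidates(image_path: str) -> list[str]:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    candidates = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        # Crude plate heuristic: a wide, short rectangle. Plenty of non-plate
        # objects satisfy this, which is one source of false reads.
        if w > 100 and 2.0 < w / float(h) < 6.0:
            region = gray[y:y + h, x:x + w]
            text = pytesseract.image_to_string(region, config="--psm 7").strip()
            if text:
                candidates.append(text)
    return candidates
```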

The threats posed by ALPRs, according to Rose, are multifaceted. 

“In addition to using this data to stalk and terrorize members of our community,” she wrote, “this data is detailed and sensitive for every person whose car is logged, creating a highly detailed map of everywhere your car has been seen, with locations and date and timestamps.” 

So Rose decided to do something about it. She designed and released a line of clothing that, via the patterns printed on it, tricks ALPRs into reading shirts and dresses as license plates. This, in effect, injects “junk” data into the system. 

In other words, simply wearing one of her designs is part anti-surveillance protest, and part privacy activism. 

Polluting the surveillance stream.


Image: Adversarial Fashion

“I hope that by seeing how easily ALPRs can be fooled with just a t-shirt, that people can gain a greater understanding of how these systems work and why oversight and regulation are needed to protect the public,” Rose explained. “ALPRs are one of the systems that we consider ‘safety dependent’ systems like for enforcing certain traffic safety laws and collecting tolls. So it’s our duty to point out where they can and likely already are subject to errors and exploitation.”

“People will just sort of go along with surveillance culture until others push back.”

As more and more companies begin to sell inexpensive software that can turn anyone’s camera into an ALPR for $5 a month, the need for ALPR regulation and oversight has only grown.

Thankfully, there are many ways to fight back — and you don’t need to launch your own fashion line to do so. Rose recommended finding out what surveillance tech is being used in your community. You can also reach out to your local ACLU chapter to find out what privacy efforts it is currently involved in, and don’t be afraid to contact your legislators.

In general, Rose said that if an algorithm-powered surveillance state isn’t your thing, you shouldn’t be afraid to speak up, and continue speaking up. 

“Take a stand if your neighborhood or HOA tries to implement license plate readers to track residents and their guests, and single out others as undesirable or outsiders,” Rose insisted. “People will just sort of go along with surveillance culture until others push back and remind them that not only is it not normal and very invasive, it’s not as effective as building a culture of trust and support between neighbors.” 

What’s in a face

Your own face is being used against you. 

Facial recognition is a biased technology that fuels the oppression of ethnic minorities, and directly contributes to the arrest of innocent people in the U.S. And, without their knowledge, millions of people have played an unwitting role in making that happen. 

At issue are the datasets that are the lifeblood of facial-recognition algorithms. To train their systems, researchers and corporations need millions of photos of people’s faces from which their programs can learn. So those same researchers and corporations look to where we all look these days: the internet. Much like Clearview AI notoriously scraped Facebook for user photos to power its proprietary facial-recognition software, researchers across the globe have scraped photo-sharing sites and live-video streams to provide the raw material needed for the development of their algorithms. 
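
To make that concrete, here is a minimal sketch, assuming the open-source face_recognition library, of how a single public photo becomes biometric data. The function name is illustrative, not drawn from any particular research project.

```python
# A minimal sketch using the open-source face_recognition library: every
# detected face in a photo is reduced to a 128-dimensional embedding that
# can later be matched against faces captured anywhere else.
import face_recognition


def photo_to_face_embeddings(image_path: str):
    image = face_recognition.load_image_file(image_path)  # load the photo
    boxes = face_recognition.face_locations(image)         # find every face
    return face_recognition.face_encodings(image, boxes)   # one vector per face
```

Run something like that over millions of scraped photos and you have the raw material for a dataset like Megaface, whether or not anyone pictured ever consented.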

Adam Harvey, a Berlin-based privacy and computer vision researcher and artist, put it succinctly. 

“[If] you limit Artificial Intelligence information supply chains,” he explained over email, “you limit the growth of surveillance technologies.”

Harvey, along with his collaborator Jules LaPlace, created and maintain MegaPixels — “an art and research project that investigates the origins and endpoints of biometric datasets created ‘in the wild.'”

The datasets featured on MegaPixels demonstrate the distinctly opaque manner in which your face might end up a key element in oppressive facial-recognition algorithms. 

In late 2014, the now-shuttered San Francisco laundromat and open-mic venue Brainwash Cafe (this author used to wash his clothes there) streamed video of its patrons to the web. Stanford facial-recognition researchers saw this as an opportunity, and used the livestream video to both “train and validate their algorithm’s effectiveness.”


The resulting dataset, dubbed the Brainwash dataset, contains 11,917 images with “91146 labeled people.” As Harvey and LaPlace note in MegaPixels, that dataset has been used by researchers at the Chinese National University of Defense Technology, and “also appears in a 2018 research paper affiliated with Megvii (Face++)… who has provided surveillance technology to monitor Uighur Muslims in Xinjiang.” Megvii was blacklisted in the United States in October 2019 due to human rights violations.

“It’s possible that you’re already contributing to surveillance technology right now.”

While you may not have visited that particular laundromat in 2014, you may have at some point used the photo-sharing site Flickr. In 2016, researchers at the University of Washington published a dataset of 4,753,320 faces taken from 3,311,471 photos uploaded to Flickr with a Creative Commons license. Dubbed Megaface, the dataset has been used by hundreds of companies and organizations around the globe to train facial-recognition algorithms.

There are scores of datasets like these two, made up of non-consensually obtained images of unwitting people going about their daily lives, that feed the ever-growing field of facial-recognition technology. 

“[Since] there are too few rules regulating data collection, it’s possible that you’re already contributing to surveillance technology right now for both domestic and foreign commercial and governmental organizations,” explained Harvey. “Everyone should realize that unless better restrictions are put in place, their biometric data will continue to be exploited for commercial and military purposes.”

But people are fighting back and, in some cases like Harvey and LaPlace’s, even winning — albeit incrementally.  

Fight for the Future is one group doing just that. A self-described collection of artists, activists, technologists, and engineers, Fight for the Future is actively working to ban facial-recognition tech, guarantee net neutrality, and disrupt Amazon’s surveillance relationship with police. 

“Facial recognition is a uniquely dangerous form of surveillance,” Evan Greer, Fight for the Future’s Deputy Director, explained over email. “In a world where more and more of our daily movements and activities are caught on camera, facial recognition enables that vast trove of footage to be weaponized for surveillance –– not for public safety but for public control.”

Greer cited two specific examples — a successful campaign to ban facial recognition from U.S. concert venues and live music festivals, and convincing 60 colleges and universities to commit to not using the technology on their campuses — to show that the battle against facial-recognition tech is a winnable one. 

“Like nuclear or biological weapons, facial recognition poses such a profound threat to the future of human society that any potential benefits are far outweighed by the inevitable harms,” emphasized Greer. 

Fight for the Future isn’t alone in its battle against facial-recognition-enabled oppression. Likely thanks in part to Harvey and LaPlace’s work exposing the Brainwash dataset, it is no longer being distributed. Another dataset of surveillance footage, dubbed the Duke MTMC dataset and collected without consent on the campus of Duke University, was removed after a 2019 Financial Times article highlighted that people around the world were using it to train algorithms to track and identify pedestrians from surveillance footage. 

“After the publication of the Financial Times article [the author of the Duke MTMC dataset] not only removed the dataset’s website, but also facilitated removal on GitHub repositories where it is more difficult to control,” explained Harvey. “Duke University then made a public statement to the student body and the author made a formal apology. I think he was honestly unaware of the problem and acted swiftly upon realizing what happened to his dataset.”  

“An algorithm without data is useless.”

Thankfully, you needn’t be a full-time privacy activist to help push back against the growth of this dangerous tech. As Greer explained, making your stance on the matter loud and clear — directly to your elected officials — is fundamental to the fight against facial-recognition technology. 

“[The] reality is that there will always be shady firms willing to do whatever the worst thing you can do with technology is and sell it to whoever will buy it –– unless there’s a law that says you can’t,” she wrote. “There’s legislation that’s been introduced in the House and Senate to ban law enforcement use of facial recognition. Everyone should tell their elected officials to support it.”

On a more fundamental level, fighting against facial-recognition tech also requires us to recognize our part in its creation. 

Harvey highlighted a comment made by the president and CEO of In-Q-Tel, a private investment firm that works to provide technology insights to the CIA, in an episode of the Intelligence Matters podcast: “an algorithm without data is useless.”

The data we create, like the photos we post online, is being used against us. It’s going to take our collective action to change that. 


CORRECTION: Sept. 4, 2020, 5:13 p.m. PDT: This post has been updated to reflect that the Irvine Company stopped collecting ALPR data in July 2018, after being criticized for the practice by the Electronic Frontier Foundation.