Dallas Police Used Face Recognition Software Without Authorization, Installed on Personal Phones

Photo: Justin Sullivan (Getty Images)

Dallas police officers used unauthorized facial recognition software to conduct between 500 and 1,000 searches in attempts to identify people from photographs. A Dallas Police spokesperson says the department never authorized the searches and that, in some cases, officers had installed the software on their personal phones.

The spokesperson, Senior Cpl. Melinda Gutierrez, said the department first learned of the matter after being contacted by investigative reporters at BuzzFeed News. The face recognition app, known as Clearview AI, was never approved, she said, “for use by any member of the department.”

Department leaders have since ordered the software deleted from all city-issued devices.

Officers are not entirely banned from possessing the software, however. No order has been given to delete copies of the app installed on personal phones. “They were only instructed not to use the app as a part of their job functions,” Gutierrez said.

Clearview AI did not respond Wednesday when asked whether it had revoked access for officers whose departments say their use was unauthorized.

The Dallas Police Department says it has never entered into a contract with Clearview AI. Yet officers were still able to download the app by visiting the company’s website. According to BuzzFeed, officers who signed up for a free trial at the time were not required to prove they were authorized to use the software.

What’s more, emails obtained by the news outlet show that Clearview AI’s CEO, Hoan Ton-That, has not been opposed to letting officers sign up using personal email accounts.

During an internal review, Dallas officers told superiors they had learned about Clearview through word of mouth from other officers.

BuzzFeed News first revealed on Tuesday, following a yearlong investigation into the company, that Clearview AI was being used in Dallas. The Dallas Police Department is one of 34 agencies to acknowledge that employees had used the software without approval.

Using data supplied by a confidential source, reporters found that nearly 2,000 public agencies have used Clearview AI’s facial recognition tool. The source was granted anonymity, BuzzFeed said, due to their fear of retribution.

Nearly 280 agencies told the reporters that employees had never used the software; sixty-nine of those later recanted. Nearly a hundred declined to confirm Clearview AI was used, and more than 1,160 organizations didn’t respond at all.

The BuzzFeed data, which begins in 2018 and ends in February 2020, also shows the Dallas Security Division, which oversees security at City Hall, conducted somewhere between 11 and 50 searches. A spokesperson said the division has no record of Clearview AI being used.

Dallas Mayor Eric Johnson did not immediately respond to an email. A city council member said they needed time to review the matter before speaking on the record.

Misuse of confidential police databases is a well-documented phenomenon. In 2016, the Associated Press unearthed reports of police regularly accessing law enforcement databases to glean information on “romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work.”

Between 2013 and 2015, the AP found at least 325 incidents of officers being fired, suspended, or forced to resign for abusing access to law enforcement databases. In another 250 cases, officers received reprimands or counseling, or faced lesser forms of discipline.

Today, facial recognition is considered one of the most controversial technologies used by police. The American Civil Liberties Union has pressed federal lawmakers to impose a nationwide moratorium on its use, citing multiple studies showing the software is error-prone, particularly in cases involving people with dark skin.

A 2019 study of 189 facial recognition systems, conducted by a branch of the U.S. Commerce Department, for example, found that people of African and Asian descent were misidentified by the software at rates up to 100 times higher than white individuals were. Women and older people were at greater risk of being misidentified, the tests showed.

One system used in Detroit was estimated by the city’s own police chief to be inaccurate “96 percent of the time.”

Clearview AI, which is known to have scraped billions of images of people off social media without their consent or the consent of platforms, has consistently claimed its software is bias-free and, in fact, helps to “prevent the wrongful identification of people of color.”

Ton-That, the CEO, told BuzzFeed that “independent testing” has shown his product is unbiased; however, he ignored repeated requests for more information about those alleged tests. The news outlet also sent 30 images of people, including several photos of computer-generated faces, to a source with access to the system. Clearview AI falsely matched two of the fake faces, one of a woman of color and another of a young girl of color, to images of real people.

In 2019, more than 30 organizations with a combined membership of 15 million people called on U.S. lawmakers to permanently ban the technology, saying that no amount of regulation would ever adequately shield Americans from persistent civil liberties violations.

Correction: A previous version of this article mistakenly said that Clearview AI had “scraped billions of images of people off social media with their consent.” The images were scraped without their consent. We regret the error.
