Michael Williams’ every move was being tracked without his knowledge—even before the fire. In August, Williams, an associate of R&B star and alleged rapist R. Kelly, allegedly used explosives to destroy a potential witness’s car. When police arrested Williams, the evidence cited in a Justice Department affidavit was drawn largely from his smartphone and online behavior: text messages to the victim, cell phone records, and his search history.
The investigators served Google with a “keyword warrant,” asking the company to provide information on any user who had searched for the victim’s address around the time of the arson. Police narrowed the search, identified Williams, then filed another search warrant for two Google accounts linked to him. They found other searches: the “detonation properties” of diesel fuel, a list of countries that do not have extradition agreements with the US, and YouTube videos of R. Kelly’s alleged victims speaking to the press. Williams has pleaded not guilty.
Data collected for one purpose can always be used for another. Search history data, for example, is collected to refine recommendation algorithms or build online profiles, not to catch criminals. Usually. Smart devices like speakers, TVs, and wearables keep such precise details of our lives that they’ve been used as both incriminating and exonerating evidence in murder cases. Speakers don’t have to overhear crimes or confessions to be useful to investigators. They keep time-stamped logs of all requests, alongside details of their location and identity. Investigators can access these logs and use them to verify a suspect’s whereabouts or even catch them in a lie.
It isn’t just speakers or wearables. In a year when some in Big Tech pledged support for the activists demanding police reform, they still sold devices and furnished apps that allow government access to far more intimate data from far more people than traditional warrants and police methods would allow.
A November report in Vice found that users of the popular Muslim Pro app may have had data on their whereabouts sold to government agencies. Any number of apps ask for location data to provide, say, local weather or to track your exercise habits. The Vice report found that X-Mode, a data broker, collected Muslim Pro users’ data for the purpose of prayer reminders, then sold it to others, including federal agencies. Both Apple and Google banned developers from transferring data to X-Mode, but it had already collected data from millions of users.
The problem isn’t just any individual app, but an over-complicated, under-scrutinized system of data collection. In December, Apple began requiring developers to disclose key details about privacy policies in a “nutritional label” for apps. Users “consent” to most forms of data collection when they click “Agree” after downloading an app, but privacy policies are notoriously incomprehensible, and people often don’t know what they’re agreeing to.
An easy-to-read summary like Apple’s nutritional label is useful, but not even developers know where the data their apps collect will eventually end up. (Many developers contacted by Vice admitted they didn’t even know X-Mode accessed user data.)
The pipeline between commercial and state surveillance is widening as we adopt more always-on devices and serious privacy concerns are dismissed with a click of “I Agree.” The nationwide debate on policing and racial equity this summer brought that quiet cooperation into stark relief. Despite lagging diversity numbers, indifference to white nationalism, and mistreatment of nonwhite employees, several tech companies raced to offer public support for Black Lives Matter and reconsider their ties to law enforcement.
Amazon, which committed millions to racial equity groups this summer, promised to pause (but not stop) sales of facial-recognition technology to police after defending the practice for years. But the company also noted an increase in police requests for user data, including the internal logs kept by its smart speakers.