Achieving all of that required the researchers to overcome two challenges. The first is that most assistant traffic is encrypted, which prevents LeakyPick from inspecting packet payloads to detect audio codecs or other signs of audio data. The second is that, with new and previously unseen voice assistants coming out all the time, LeakyPick has to detect audio streams from devices without prior training on each one. Previous approaches, including one called HomeSnitch, required advance training for each device model.

To clear those hurdles, LeakyPick periodically transmits audio into a room and monitors the resulting network traffic from connected devices. By temporally correlating the audio probes with characteristics of the network traffic that follows, LeakyPick identifies connected devices that are likely to be transmitting audio. One way the device spots likely audio transmissions is by looking for sudden bursts of outgoing traffic. Voice-activated devices typically send limited amounts of data when inactive. A sudden surge usually indicates a device has been activated and is sending audio over the Internet.
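
Here is a minimal sketch of that burst-detection idea, assuming per-device byte counts sampled once per second at the access point; the BurstDetector class, window size, and surge threshold are illustrative assumptions, not the researchers' actual parameters:

```python
from collections import deque

WINDOW_SECONDS = 30   # idle-baseline window, in one-second samples (illustrative)
BURST_FACTOR = 5.0    # surge threshold relative to the idle average (illustrative)

class BurstDetector:
    """Flags a device whose outgoing byte rate suddenly exceeds its idle baseline."""

    def __init__(self):
        self.baseline = deque(maxlen=WINDOW_SECONDS)

    def observe(self, bytes_per_sec: float) -> bool:
        """Feed one traffic sample; return True if it looks like an audio-upload burst."""
        if len(self.baseline) == self.baseline.maxlen:
            idle_avg = sum(self.baseline) / len(self.baseline)
            if bytes_per_sec > BURST_FACTOR * max(idle_avg, 1.0):
                return True  # sudden surge: the device may be streaming audio
        self.baseline.append(bytes_per_sec)
        return False
```

In practice, one detector would run per device, fed by per-second byte counts parsed out of the captured traffic.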

Relying on bursts alone is prone to false positives. To weed them out, LeakyPick employs a statistical approach based on an independent two-sample t-test, comparing features of a device's network traffic when it is idle with features of the traffic that follows an audio probe. This approach has the added benefit of working on devices the researchers have never analyzed, and it lets LeakyPick handle not only voice assistants that use wake words but also security cameras and other Internet-of-things devices that transmit audio without them.
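
A sketch of that test using SciPy follows; the feature (bytes per second), the function name, and the significance level are illustrative choices, not values taken from the paper:

```python
from scipy.stats import ttest_ind

def probe_triggered_upload(idle_rates, probe_rates, alpha=0.01):
    """Independent two-sample t-test on a simple traffic feature.

    idle_rates:  bytes-per-second samples while the device sits idle.
    probe_rates: samples from the window right after an audio probe.
    alpha:       significance level (illustrative choice).
    """
    _, p_value = ttest_ind(idle_rates, probe_rates)
    idle_mean = sum(idle_rates) / len(idle_rates)
    probe_mean = sum(probe_rates) / len(probe_rates)
    # A significant difference with heavier post-probe traffic suggests the
    # device reacted to the probe, i.e. it is likely uploading audio.
    return p_value < alpha and probe_mean > idle_mean
```

Because the test only compares a device's own idle traffic against its post-probe traffic, no per-model training data is needed.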

Guarding Against Accidental and Malicious Leaks

So far, LeakyPick, which gets its name from its mission to pick up the audio leakage of network-connected devices, has uncovered 89 non-wake words that can trigger Alexa into sending audio to Amazon. With more use, LeakyPick is likely to find additional words in Alexa and other voice assistants; the researchers have already found several false positives in Google Home. The 89 words appear on page 13 of the above-linked paper.

Besides detecting inadvertent audio transmissions, the device will spot virtually any activation of a voice assistant, including malicious ones. In an attack demonstrated last year, researchers shined lasers at Alexa, Google Home, and Apple Siri devices to issue silent commands that unlocked doors and started cars connected to a smart home. Sadeghi said LeakyPick would easily detect such a hack.

The prototype hardware consists of a Raspberry Pi 3B connected by Ethernet to the local network. It's also connected by a headphone jack to a PAM8403 amplifier board, which in turn drives a single generic 3W speaker. The device captures network traffic using a TP-LINK TL-WN722N USB Wi-Fi dongle that creates a wireless access point, with hostapd providing the access point and dnsmasq acting as the DHCP server. Wireless IoT devices in the vicinity then connect to that access point, putting their traffic where LeakyPick can capture it.
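
For reference, an access point of that kind can be stood up with ordinary hostapd and dnsmasq configuration files along these lines; the SSID, passphrase, and address range are placeholder values, not details from the paper:

```
# /etc/hostapd/hostapd.conf -- Wi-Fi access point on the USB dongle
interface=wlan0
ssid=LeakyPick-AP        # placeholder SSID
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme  # placeholder passphrase
rsn_pairwise=CCMP

# /etc/dnsmasq.conf -- DHCP server for the monitored devices
interface=wlan0
dhcp-range=192.168.4.2,192.168.4.100,255.255.255.0,24h
```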

To give LeakyPick Internet access, the researchers enabled packet forwarding between the Ethernet interface (connected to the network gateway) and the wireless interface. They wrote LeakyPick in Python, using tcpdump to record packets and Google's text-to-speech engine to generate the audio played by the probing device.
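
Put together, the probe-and-capture loop might look something like this Python sketch; the gTTS package, the mpg123 player, the interface name, probe words, and timing values are illustrative assumptions rather than details from the paper:

```python
import subprocess
import time

from gtts import gTTS  # Google's text-to-speech engine via the gTTS package

PROBE_WORDS = ["alexa", "echo", "computer"]  # illustrative probe words
IFACE = "wlan0"                              # access-point interface (assumed)

def play_probe(word: str) -> None:
    """Synthesize a probe word and play it through the speaker."""
    gTTS(word).save("probe.mp3")
    subprocess.run(["mpg123", "-q", "probe.mp3"], check=True)  # assumed player

def capture_window(outfile: str, seconds: int = 10) -> None:
    """Record the traffic that follows a probe with tcpdump."""
    proc = subprocess.Popen(["tcpdump", "-i", IFACE, "-w", outfile])
    time.sleep(seconds)
    proc.terminate()

for word in PROBE_WORDS:
    play_probe(word)
    capture_window(f"after_{word}.pcap")
    time.sleep(30)  # let devices return to idle between probes (assumed gap)
```

The captured idle and post-probe windows are what the statistical test above would compare.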

With the increasing use of devices that stream nearby audio, and the growing list of ways they can fail or be hacked, it's good to see research that proposes a simple, low-cost way to detect such leaks. Until devices like LeakyPick are available, and even after that, people should carefully question whether the benefits of voice assistants are worth the risks. When assistants are present, users should keep them turned off or unplugged except when they're in active use.

This story originally appeared on Ars Technica.

