A new study from a team of British researchers describes a hypothetical cyberattack in which a hacker could use recorded audio of a person typing to steal their personal data. The attack relies on a deep-learning model, trained by the researchers themselves, that acoustically analyzes keystroke sounds and automatically decodes what the person is typing. In their tests, the researchers were able to decode typing this way with 95 percent accuracy.
The researchers say such a recording could easily be captured by a nearby cell phone microphone, or even over a Zoom call. From there, the audio is fed into an algorithm, built from off-the-shelf tools, that analyzes the sounds and translates them into readable text.
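To make that pipeline concrete, here is a minimal sketch in Python of how an attacker might go from raw audio to guessed keys. This is not the researchers' code: the energy-based keystroke segmentation, the mel-spectrogram settings, the tiny convolutional classifier, and the file name typing_recording.wav are all illustrative assumptions.

```python
# Conceptual sketch of an acoustic keystroke attack pipeline.
# NOT the study's actual code: the segmentation threshold, spectrogram
# settings, and the small CNN below are illustrative assumptions only.
import numpy as np
import librosa
import torch
import torch.nn as nn

def segment_keystrokes(audio, sr, frame_ms=10, threshold=0.02, window_ms=300):
    """Crudely locate keystrokes by finding short-time energy peaks."""
    frame = int(sr * frame_ms / 1000)
    energy = np.array([np.sum(audio[i:i + frame] ** 2)
                       for i in range(0, len(audio) - frame, frame)])
    peaks = np.where(energy > threshold)[0] * frame   # peak positions in samples
    half = int(sr * window_ms / 2000)
    clips, last = [], -10 * half
    for p in peaks:
        if p - last > 2 * half:            # skip frames belonging to the same press
            clips.append(audio[max(0, p - half):p + half])
            last = p
    return clips

def to_spectrogram(clip, sr):
    """Turn one keystroke clip into a fixed-size mel spectrogram 'image'."""
    mel = librosa.feature.melspectrogram(y=clip, sr=sr, n_mels=64)
    mel = librosa.power_to_db(mel, ref=np.max)
    mel = librosa.util.fix_length(mel, size=64, axis=1)     # pad/trim to 64 frames
    return torch.tensor(mel, dtype=torch.float32).unsqueeze(0)  # shape (1, 64, 64)

class KeystrokeClassifier(nn.Module):
    """Small CNN mapping a keystroke spectrogram to one of 36 keys (a-z, 0-9)."""
    def __init__(self, n_keys=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, n_keys),
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical usage: run a recording through segmentation and classification.
audio, sr = librosa.load("typing_recording.wav", sr=16000)
model = KeystrokeClassifier()              # would be a trained model in practice
keys = "abcdefghijklmnopqrstuvwxyz0123456789"
for clip in segment_keystrokes(audio, sr):
    logits = model(to_spectrogram(clip, sr).unsqueeze(0))    # add batch dimension
    print(keys[int(logits.argmax())], end="")
```

In practice, an attacker would first need labeled recordings of keystrokes from the same (or a very similar) keyboard to train such a classifier before any of this decoding could work.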
This is an interesting variation on what is technically known as an “acoustic side channel attack.” Acoustic attacks (which use sonic surveillance to capture sensitive information) are not a new phenomenon, but the integration of AI promises to make them that much more effective at pilfering data. The big threat, from the researchers’ point of view, is a hacker using this form of eavesdropping to grab a user’s passwords and other online credentials. According to the researchers, this is fairly easy to do if the cybercriminal deploys the attack under the right conditions. They write:
“Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms…The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output.”
You can imagine a number of scenarios in which a bad actor could feasibly pull this off and nab a hapless computer or phone user’s data. Since the attack relies on an audio recording of the victim’s typing, an attacker could hypothetically wait until you were out in public (at a coffee shop, for instance) and clandestinely snoop from a safe distance. With a high-quality parabolic microphone or other sophisticated listening equipment, they might even be able to eavesdrop through the walls of your apartment.
How do you protect against an acoustic keyboard attack?
Just how do you protect yourself against such a bizarre cyberattack? To be honest, it’s not entirely clear. In their paper, researchers suggest a number of defensive tactics that—I’m sorry to say—don’t sound super feasible for the average web user. These include:
- Using “randomised passwords featuring multiple cases,” which the researchers say are harder for the model to reconstruct; credentials made up of full words are easier to decipher.
- In scenarios where a recording might be made during a voice call, the researchers suggest that “adding randomly generated fake keystrokes to the transmitted audio appears to have the best performance and least annoyance to the user” (see the sketch after this list).
- Researchers also suggest that “simple typing style changes could be sufficient to avoid attack.”
- Finally, the researchers suggest leaning on biometric logins instead of typed passwords where possible, since that sidesteps the whole issue of a hacker recording the sound of you typing your password.
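To picture how the fake-keystroke idea could work, here is another minimal sketch, again an illustration rather than the paper’s implementation: it overlays prerecorded key-press sounds onto outgoing call audio at random moments. The file names, injection rate, and gain below are assumptions, and it assumes mono audio throughout.

```python
# Illustrative sketch of the "fake keystrokes" defense: mix prerecorded
# keystroke sounds into outgoing call audio at random times, so an
# eavesdropper's model hears extra, meaningless key presses.
# The decoy files, rate, and gain are assumptions, not the paper's values.
import numpy as np
import soundfile as sf

def inject_fake_keystrokes(call_audio, sr, decoy_clips, rate_per_sec=2.0, gain=0.5):
    """Overlay random decoy key presses onto a mono audio signal."""
    out = call_audio.copy()
    n_fakes = int(len(call_audio) / sr * rate_per_sec)
    rng = np.random.default_rng()
    for _ in range(n_fakes):
        clip = decoy_clips[rng.integers(len(decoy_clips))] * gain
        start = rng.integers(0, len(out) - len(clip))
        out[start:start + len(clip)] += clip           # overlay the decoy press
    return np.clip(out, -1.0, 1.0)                     # avoid clipping distortion

# Hypothetical usage: "call.wav" is the outgoing audio; "press1.wav" and
# "press2.wav" are short recordings of single key presses.
call, sr = sf.read("call.wav")
decoys = [sf.read(f)[0] for f in ("press1.wav", "press2.wav")]
protected = inject_fake_keystrokes(call, sr, decoys)
sf.write("call_protected.wav", protected, sr)
```

A real implementation would have to do this in real time inside the call software rather than on a saved file, but the basic idea of drowning genuine keystrokes in decoys is the same.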
I think there’s very little likelihood that most people are going to deploy fake typing noises or overhaul their entire “typing style” on the off chance that it might throw off some acoustic spy lurking nearby. Sure, biometrics are a good idea in general, though they don’t cancel out the invasive potential of acoustic spying. I guess the best thing we can do is hope that this remains a mostly hypothetical threat and that there aren’t too many lunatics out there who would actually try something like this.