Voice-activated virtual assistants are nifty little devices, but they have a strong predisposition for evil. Business Reporter’s resident U.S. ‘blogger Keil Hubert suggests that a risk-versus-reward analysis might make you reconsider how dangerous these gizmos can be.
I’m not surprised that the US Department of Homeland Security chose October to be National Cyber Security Awareness Month. October is associated with Halloween, a time that emphasizes being scared, threatened, and generally menaced. It only makes sense to add keystroke loggers, zombie botnets, and mysterious faceless hacker groups to the usual roster of disembodied spirits, shambling undead, and otherworldly ick normally associated with the holiday. Fear’s fear, after all.
Speaking of, a recurring horror movie trope is to make one of the ‘monsters’ in a haunted house story be the house itself. Maybe the dwelling is infused with evil from the land it’s built atop, as in the 1982 classic Poltergeist. Or maybe the house is an innocent bystander, and the evil is an unfriendly ghost cohabitating with the protagonists, as in 1944’s The Uninvited. Sometimes the house itself is cursed, as in Poe’s The Fall of the House of Usher. In all of these cases, the house becomes a character in its own right (and not a benevolent one).
Nowadays, we can replicate the same creepy haunted house feeling with technology, thanks to the fact that darned near every piece of consumer electronics kit we own is connected to the bloody Internet. Hackers can steal your Google account credentials directly from your Samsung ‘smart’ refrigerator. Your ‘smart’ television set can be hacked over a TV signal without the attacker needing physical access to it. Merry pranksters can hack your ‘smart’ toilet from a phone as soon as they get into Bluetooth range. Everything you own that has an IP address can be subverted. The worst offender of all is the one household device that’s always monitoring you, even while you sleep. The one device that hears everything you say, and then whispers all of your darkest secrets to its shadowy cabal of faceless masters: the dreaded (but quite helpful) voice-activated personal assistant.
Essentially, this … but in an appliance the size of a gin bottle.
I have one of these. Mine’s a Google Home that I won in a raffle earlier this summer. I was thrilled to get it; I’ve been curious about these gizmos since they first hit the market. I was excited to tear open the package and start playing with it … until I got it home and realized that I couldn’t find a safe place in my house to test it. Anywhere. That’s why it’s still in the box. Every time my curiosity flares up, I reach for it … and then remember that I’m a professional security guy and put the gizmo back on the shelf until I can work out a reasonably safe test protocol.
The problem vexing me is that all of these gizmos work by voice activation. You say a certain key phrase, and the device responds. On mine, I’m supposed to say ‘Okay, Google …’ and then pose my question. On Apple’s HomePod, I’d start by saying ‘Hey Siri …’ On Amazon’s Echo, I’d simply call its virtual assistant by name, starting with ‘Alexa …’ In all three cases, a user can then ask their gizmo nearly anything. ‘What’s today’s weather forecast?’ or ‘Who won last night’s football match?’ These are nifty little devices to have sitting around the house for resolving arguments, if nothing else. Even better, they’re always ready to help, 24/7, because they’re always attentively listening for their cue to spring into action.
Therein lies the problem: they’re always listening … That’s a bit of a problem if you think back on all the different topics that come up in spoken conversation during an average day. Your gizmo is listening when you gripe about your neighbours. It’s listening when you rant about the government’s latest scandal. It’s listening when you authenticate your identity on phone calls with technical support. It’s listening when you and your significant other have a row over accusations of infidelity … which were dutifully recorded in stereo and are available for playback on-demand, because your helpful little gizmo was listening all throughout the ill-advised tryst, too. Ooo, er. Hmm. That seems … about as bad as having the disembodied spectre of a malevolent court stenographer living in your walls. Not recommended. Zero stars.
‘That’s no problem at all!’ the gizmos’ manufacturers claim. ‘Our virtual assistant may always be listening, but only for our activation key phrase! It’s not like we’re listening to everything said by everyone in your house all the time …’
‘Think of me as a friend, just hanging out. Except that I never go away, never stop hearing everything you say, and transcribe a record of your every utterance to be archived for all eternity in a remote data centre.’
That reassurance isn’t exactly helpful when you discover that it’s nowhere near accurate, as Google had to admit last week. On 10th October, AndroidPolice.com’s ‘blogger Artem Russakovskii reported that his Google Home Mini was recording everything around it … pretty much 24/7. That is, the device turned itself on (thanks to a hardware flaw) and recorded whatever was happening without its owner’s knowledge or permission, and without the ‘required’ key phrase being uttered first. This is … pretty much exactly what everyone in the security business is afraid of.
To be fair to Google, they did immediately release a fix to address the flaw. Unfortunately, this glitch isn’t likely to be an isolated incident. These tech companies have designed cool consumer products that require always-on microphones. These gizmos are optimized to reside in a person’s home. They have to have the voice activation feature, or else they’re no more practical than just using Siri on your iPhone. So, the always-on microphones are required, not an optional extra.
More to the point, a ‘voice activation’ feature doesn’t work unless it’s listening to you all the time. That’s inherent in the design. It can’t not be listening. So, if the microphone is always on, always straining to hear its key phrase, then it is for all intents and purposes a persistent surveillance device. Intelligence services, law enforcement agencies, private eyes, and cyber criminals will all want to technically exploit these devices to vacuum up everyone’s most incriminating conversations.
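To see why the ‘we only listen for the key phrase’ reassurance rings hollow, consider what a wake-word detector logically has to do. The sketch below is purely hypothetical pseudo-logic (no vendor’s actual code; the wake phrase and sample utterances are invented for illustration): to decide whether the key phrase was spoken, the device must examine every utterance it hears, activation or not.

```python
# Hypothetical sketch of an always-on wake-word loop. This is NOT any
# vendor's real implementation -- it just illustrates that the detector
# must process ALL audio in order to spot the activation phrase.

WAKE_PHRASE = "okay google"

def listen(transcript_stream):
    """Simulate an always-on microphone over a stream of utterances.
    Everything is examined; only what follows the wake phrase is acted on."""
    heard = []      # every utterance the device processed, wake word or not
    responses = []  # the requests the device actually answers
    awake = False
    for utterance in transcript_stream:
        heard.append(utterance)  # the mic hears this regardless of intent
        if awake:
            responses.append(f"Handling request: {utterance}")
            awake = False        # go back to passive listening
        elif WAKE_PHRASE in utterance.lower():
            awake = True         # the next utterance is treated as the query
    return heard, responses

# An (invented) afternoon of household chatter:
day = [
    "the neighbours' dog is at it again",
    "my banking password is hunter2",
    "Okay Google",
    "what's today's weather forecast?",
]
heard, responses = listen(day)
```

Note that `heard` ends up containing all four lines, including the gripe and the password, even though only one request was ever answered. Whether those overheard utterances are discarded on-device or shipped off to a data centre is entirely a matter of the vendor’s implementation and your trust in it.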
This isn’t new. Remember back in 2010, when the Lower Merion School District was caught spying on its students in their homes via the webcams in their school-issued laptops? A forensic investigation discovered that the school system had captured at least 30,564 webcam photos and 27,428 screenshots from 2,306 high school students’ computers. From a threat perspective, it doesn’t matter whether the people responsible for that egregious violation of students’ privacy were confused amateurs or felonious paedophiles … The fact is, the technology that these victims brought into their homes was turned against them without their knowledge or consent. It could just as easily have been government agents recording seditious statements or pervy creeps recording kids undressing. The golden rule of security is: any technology that can be abused inevitably will be abused.
Inevitably by the worst possible people in the worst possible fashion.
That’s why my new-in-box Google Home is still sitting on my desk, un-activated. I know that there are ways to safely set it up, patch it, lock down its configuration, and thereby play with it to see how well it works. The trouble is, no lockdown procedure is ever completely effective. Defects can undermine security settings even in a fully-patched device. Making matters worse, I can’t afford to drop five thousand quid on a business-grade firewall. I also don’t have a house large enough to have a ‘safe’ room in which to isolate the gizmo. Leave the door open – even just cracked, really – and a conversation in one corner of the house can be heard anywhere else. Finally, these gizmos aren’t built to operate outside in the hostile Texas heat, so backyard testing is right out.
That’s why my shiny little gizmo sits on my desk and mocks me, making it a fine analogue for the classic ‘evil sentient house’ trope. I’m torn between wanting to learn all that I can about it and appreciating the risks inherent in letting it take up residence in my home. It could be sweet, helpful, and innocent, or it could turn into a tireless, privacy-shredding automaton. The worst thing is – in classic horror movie fashion – I won’t know which it is until after it’s too late. In fact, given its lack of glowing red eyes and theme music, my gizmo could viciously turn on me and I’d never realize it.
If that doesn’t make for a good modern haunted house yarn, I don’t know what will. I suppose I could ask my Google Home …
 Strictly a theoretical use case, provided for amusing and/or disquieting example purposes only.
POC is Keil Hubert, firstname.lastname@example.org
Follow him on Twitter at @keilhubert.
Keil Hubert is a retired U.S. Air Force ‘Cyberspace Operations’ officer, with over ten years of military command experience. He currently consults on business, security and technology issues in Texas. He’s built dot-com start-ups for KPMG Consulting, created an in-house consulting practice for Yahoo!, and helped to launch four small businesses (including his own).
Keil’s experience creating and leading IT teams in the defense, healthcare, media, government and non-profit sectors has afforded him an eclectic perspective on the integration of business needs, technical services and creative employee development. This serves him well as Business Technology’s resident U.S. blogger.