
Digital Assistants Are Listening: Is Your Law Firm at Risk?

We tend to forget that our smart devices are "always on."

By Sam Bocetta

Most of us are well aware by now that remote working does not come easily for everyone. Its swift and dramatic increase during the global pandemic presents both challenges and opportunities. On the one hand, companies like Zoom, along with established giants such as Google and Microsoft, now offer some tools free of charge, hoping that people who start using them in a crisis will continue once normality returns.

On the other hand, a remote workforce brings new threats to the foreground, from network vulnerabilities to concerns over the smart devices we use in our homes. Voice-activated digital assistants, such as Apple’s Siri and Amazon’s Alexa, are designed to connect users to services through an easy-to-use voice interface. However, voice assistants can also make the work of cyber-attackers easier.

Data Volumes Are Skyrocketing

The Internet of Things (IoT) generates new volumes of data that can be collected, processed and examined by the individual device manufacturers. Even with the most effective security measures in place, including encrypted virtual private networks (VPNs) and strong passwords, the data you willingly provide via your device can be used in several different ways, from targeted advertising to customized experiences online. However, this is also a security threat, as personalized and sensitive user data have the potential to be exploited by malicious actors.

There is ongoing confusion about how these devices manage and convert data, and consumers seem to have little idea how their data is being used. A 2019 survey by YouGov found that a third of smartphone owners did not know their devices were collecting and storing their voice recordings.

A smart device’s performance requires access to and management of sensitive or personal data. (Otherwise, it would not be “smart.”) Take away the “always-on” features and the device could not perform its most basic functions, and optimizing the customer experience entails collecting certain sets of data.

Nonetheless, this does not exempt device manufacturers from data management, in particular from taking responsibility for sensitive data throughout the data lifecycle.

Digital Assistants in Our Homes

The world has fallen in love with smart devices. So much so that we are now installing IoT networks in our homes that need to be kept safe from a multitude of nefarious actors. The receptive nature of these smart devices ensures that they are “always on,” which has led to fears about when the device is “listening” and exactly what data it documents and saves.

Even though consumers are demanding more privacy and voicing concerns over how their behavior is tracked, they are installing integrated microphones and other sensors throughout their homes.

From lights to home surveillance cameras, we’re increasingly making use of “always-on” technology. Smart devices, especially speakers, are incredibly popular at home and work, but we tend to forget that they are continuously listening to our private discussions and queries. Although there is a dilemma as to who owns the recorded data, there is an even larger issue: Where is the data stored and how is it monitored through to the end of life?

The integration of digital assistants (also known as AI assistants or AIs) has become widespread in smartphones and other smart devices. At the moment, they are also making the leap to the business world. With Amazon’s announcement of the Alexa Business Platform, AIs may soon be able to help with everything from conference calls to office furniture orders. However, the functionality may come at the cost of security.

Digital Assistants in the Workplace

Over the last two years, digital assistants have exploded in popularity. We all knew that artificial intelligence and machine learning would affect cybersecurity in numerous ways, but could not plan for what we are now experiencing.

Amazon’s Echo devices were the No. 1 selling product on its website last year. Over 100 million Alexa devices have already been sold. Google and Apple are looking at increasing their market shares as the new developments envisioned for Google Home and the Apple HomeKit close the AI gap.

The Alexa Business Platform has brought additional functionality (called Alexa’s “skills”) to office spaces all over the world.

However, there are still obstacles to the technology. Persistent concerns about privacy leave some companies questioning if including a digital assistant will create additional vulnerabilities for an already overloaded work network — and their fears go further than run-of-the-mill security breaches.

Digital Assistants and Their Security Risks

Remote work is one of the most prominent cybersecurity risks. If staff must work from home, firms should provide cybersecurity tools that protect them and keep company data safe from hackers. Traditionally, firewalls have been useful for protecting against malware and viruses, but the shift to working from home has put demand for encrypted VPN services front and center. The pandemic caught everyone off guard, and anyone working from home should understand the risks associated with digital assistants to avoid being hacked.

1. The hidden commands in the audio

Attacks against machine learning and AI systems include a class of “adversarial” attacks that subtly alter an input (an image for vision systems, an audio clip for voice systems) so that the system recognizes it as something entirely different.

Researchers at the University of California at Berkeley uncovered a technique that can modify an audio clip of one sentence into a 99.9% similar clip that is transcribed as a completely different sentence. The technique can even hide instructions within music. At present, it works only in controlled environments, but it could be developed into a generalized attack.
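The core idea behind such adversarial attacks can be sketched in a few lines. The toy below uses a hypothetical linear classifier and made-up numbers (nothing here reflects the Berkeley team’s actual method, which targets full speech-to-text models): a perturbation no larger than a small epsilon per sample, pointed in the direction the model is most sensitive to, flips the classification.

```python
# Toy illustration of an adversarial perturbation against a linear
# "audio" classifier. Real attacks on speech-to-text systems are far more
# sophisticated; this only shows the core idea: a tiny, targeted change
# to the input flips the model's decision.

def score(weights, x):
    """Linear decision score: positive -> class A, negative -> class B."""
    return sum(w * xi for w, xi in zip(weights, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# Hypothetical model weights and an input "audio feature" vector.
weights = [0.5, -1.2, 0.8, 0.3, -0.4]
x       = [0.2,  0.1, -0.3, 0.4,  0.5]

original = score(weights, x)            # negative: classified as class B

# For a linear model, the gradient of the score with respect to the input
# is the weight vector itself, so the strongest small perturbation is
# epsilon * sign(w). No sample moves by more than epsilon.
epsilon = 0.5
x_adv = [xi + epsilon * sign(w) for w, xi in zip(weights, x)]

adversarial = score(weights, x_adv)     # positive: now classified as class A

print(f"original score:    {original:+.2f}")
print(f"adversarial score: {adversarial:+.2f}")
print(f"max change per sample: {max(abs(a - b) for a, b in zip(x, x_adv)):.2f}")
```

The input barely changes (each value shifts by at most 0.5), yet the classifier’s verdict reverses; against a speech model, an analogous perturbation changes the transcribed sentence while sounding nearly identical to a human listener.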

2. The machines can hear it (while you probably can’t)

The ability to hide commands within other audio types is only one known way hackers can manipulate voice assistants. In 2017, six scientists from Zhejiang University demonstrated that they could use audio inaudible to the human ear to command Siri to make phone calls or take other actions.

Known as the DolphinAttack, their hack revealed that security vulnerabilities and low levels of device protection could be used to control a digital assistant to visit malicious websites, spy on other users, embed fake data, or engage in a denial-of-service attack.
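The published DolphinAttack work relies on amplitude-modulating a voice command onto an ultrasonic carrier; the microphone hardware’s nonlinearity demodulates the command, while the human ear hears nothing. A minimal sketch of that modulation step, using illustrative frequencies and a pure tone standing in for speech (not the researchers’ actual parameters):

```python
import math

# Toy sketch of the amplitude modulation behind the DolphinAttack: a
# baseband "command" (here a 1 kHz tone standing in for speech) is
# modulated onto an ultrasonic carrier above the ~20 kHz human hearing
# limit. Frequencies and rates are illustrative only.

SAMPLE_RATE = 96_000          # Hz; high enough to represent ultrasound
CARRIER_HZ  = 25_000          # ultrasonic carrier, above human hearing
COMMAND_HZ  = 1_000           # stand-in for the baseband voice command
DURATION_S  = 0.01

def modulated_sample(n):
    """One sample of the AM signal: (1 + 0.5 * m(t)) * carrier(t)."""
    t = n / SAMPLE_RATE
    command = math.sin(2 * math.pi * COMMAND_HZ * t)   # baseband m(t)
    carrier = math.sin(2 * math.pi * CARRIER_HZ * t)
    return (1.0 + 0.5 * command) * carrier

signal = [modulated_sample(n) for n in range(int(SAMPLE_RATE * DURATION_S))]

# An AM signal's spectrum has energy at the carrier and at the carrier
# plus/minus the command frequency: 24, 25 and 26 kHz here, all of which
# sit entirely above the range of human hearing.
sidebands = (CARRIER_HZ - COMMAND_HZ, CARRIER_HZ, CARRIER_HZ + COMMAND_HZ)
print("spectral components (Hz):", sidebands)
print("all inaudible:", all(f > 20_000 for f in sidebands))
```

Every spectral component of the transmitted signal lies above 20 kHz, which is why a bystander hears silence even as the microphone’s nonlinear response recovers the command.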

3. Digital assistants are always on

Even if a voice assistant doesn’t act on your behalf, it’s still listening and waiting for commands. Like smartphones, your home voice assistants are instruments that manage a lot of sensitive data about you. This provides the corporations behind the devices a coveted spot right inside your home, making them a prime target for hackers.

Even absent targeted attacks, smart devices have been proven to unintentionally disclose private conversations. In a well-publicized 2018 incident, a couple’s conversation was recorded by an Amazon Echo and sent to one of their contacts. Digital assistants such as Alexa, Google Assistant and Siri use voice recognition technologies as their primary interface. This means they are always listening, even when not in use, which makes any digital assistant a potential listening device and a security flaw.

The Privacy Hurdle

Privacy is a big challenge that needs to be met head-on before the business world widely embraces automated assistants. If you are working from home, there are a range of safeguards that you can put in place for exactly that reason. Private data exchanges can make use of end-to-end encryption, which limits data access to only the sender and receiver.
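The point of end-to-end encryption is that the key exists only at the two endpoints, so everything in between (servers, networks, device vendors) sees only ciphertext. The toy one-time-pad sketch below illustrates that idea with Python’s standard library; it is a conceptual illustration only, and real systems should use a vetted library such as libsodium or the Signal protocol, never hand-rolled crypto.

```python
import secrets

# Toy sketch of the end-to-end idea: only the sender and receiver hold
# the key, so any intermediary sees unreadable ciphertext. This XOR
# one-time pad is for illustration only, not production use.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"Client file settles Friday."

# Key shared only between the two endpoints, exchanged out of band.
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)     # all a middleman can observe
recovered  = xor_bytes(ciphertext, key)  # only a key holder can do this

print("receiver recovers:", recovered.decode())
```

Note the design point: because decryption requires the key and the key never leaves the endpoints, a compromised server or eavesdropping device between them learns nothing about the plaintext.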

Unfortunately, end-to-end encryption is not always the standard, and many applications and devices do not use it; Google’s now-discontinued Allo messaging app, for example, used voice recognition technologies without it. Businesses are increasingly the primary target for these types of attacks. In a method called “whale phishing” (or “whaling”), hackers deliberately target high-value individuals within an organization for phishing scams and identity theft. Larger organizations are vulnerable because they give hackers better targets for these kinds of intrusions.

Protecting Your Practice

One in five businesses has been targeted in a cyberattack. Most of these attacks can indeed be traced back to compromised personal data, such as passwords or employee identities, raising questions about digital assistants and their possible use as listening devices.

Beyond the known figures, it is difficult to identify the recurring rate of attacks that can be directly linked to digital assistants, but their vulnerabilities are hard to overlook. All connected devices are potential entry points into secure systems. Smart devices should never be installed in spaces where compromising information may be overheard.

If you decide the advantages of AIs outweigh the possible risks, take action to mitigate the possibility of a security breach, both physically and digitally. Implementing a prevention and response plan offers the best possible security for your firm.

Looking Toward the Future

The quick growth of machine learning and voice-powered AI points to a rapidly expanding future. Chips developed at MIT point to digital assistants that no longer need an internet connection to perform AI-related tasks such as voice recognition, potentially closing many of these devices’ security vulnerabilities.

The rapid growth of digital assistants’ underlying technologies suggests big changes on the horizon for offices everywhere. Some fears of listening devices may even be exaggerated. The truth remains that almost all of us already have a mobile mic in our pocket, and we’re OK with that.

Categories: Artificial Intelligence, Legal Cybersecurity, Legal Technology, Virtual Assistants
Originally published June 11, 2020
Last updated July 24, 2020
Sam Bocetta

Sam Bocetta is a writer specializing in U.S. diplomacy and national security, with emphasis on technology trends in cyberwarfare, cyberdefense and cryptography. Much of his previous career was spent penetration testing ballistics computer systems for a variety of spacecraft, aircraft, and a few marine vessels. The cat-and-mouse game of finding security vulnerabilities and figuring out how to strengthen or eradicate them remains a fascination that he now explores through the written word. You can find his best articles and cyber guides online and follow him on Twitter @sambocetta.
