Is Siri/Alexa listening to your conversations?

Are your online search results mysteriously referencing things you’ve only talked about in person? Do you suspect your voice assistant is hearing more than it should? Well, you’re not alone. Many people have a deep-seated suspicion of voice assistants.

In this article, we examine whether these concerns are justified and whether Siri or Alexa is listening to your conversations.

What’s the problem?

As voice assistants have grown in popularity, so has the concern that they are eavesdropping on real-world conversations. Often, the suspected eavesdropping appears to be followed by suspiciously well-targeted online ads. You may have experienced this yourself or, at least, know someone who has.

For example, a USA Today reporter became suspicious of Siri after moving to a predominantly Spanish-speaking area of Oakland, California. Her partner had been speaking Spanish with building contractors in the new house, within earshot of her iPad. In the days that followed, her iPad started showing her ads in Spanish.

There are two explanations for this phenomenon. One is that the voice assistant was somehow activated and listened in when it shouldn’t have. The other, more likely scenario is that the people involved had already provided the data needed to target the ads through other means.

It can be unpleasant to consider the amount of data we hand over online and what it says about us. Think of all those times you blithely accepted cookies from a website or granted permissions in an app. Or how often you’ve used Amazon or searched for something on Google.

Many of us don’t realize how much personal data we disclose, nor how that data is used to target ads, so it seems plausible to accuse voice assistants of recording our every utterance. When consolidated, however, our seemingly innocuous online behaviors can paint a fairly accurate picture of who we are. From there, it’s only a small jump to predicting what we might like and when we might like it.

That’s not to say voice assistants are blameless – they are not. As we’ll see, Siri and Alexa are sometimes dangerously easy to activate.

Keyword spotting

Smart speakers are activated when a “wake” word or phrase is spoken. Of course, the devices must already be listening for this to work. 

Amazon uses technology it calls “keyword spotting” to listen for the user’s wake word. The company says that “Echo devices are designed by default to detect only the sound waves of your chosen wake word.” When the device thinks it has heard that word, the audio is sent to Amazon’s cloud servers for verification. Siri works the same way, as do the voice assistants from Google and Microsoft.

In effect, the voice assistant is listening on a local level until it registers the wake word, after which the conversation passes into the cloud. But what happens if the device mishears its wake word?
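To make that division of labor concrete, here is a minimal sketch of the local-listen/cloud-handoff flow. It is purely illustrative: the wake word, buffer length, and function names are our assumptions, audio frames are simulated as words, and real detectors operate on raw audio rather than text.

```python
# Illustrative sketch only -- not Apple's or Amazon's actual pipeline.
# Audio frames are simulated as words; real detectors work on raw audio.

WAKE_WORD = "alexa"    # assumed wake word for this toy example
BUFFER_FRAMES = 8      # hypothetical capture window after activation

def local_wake_word_detected(frame: str) -> bool:
    """Cheap on-device check: nothing leaves the device at this stage."""
    return frame.lower().strip(".,!?") == WAKE_WORD

def send_to_cloud(frames: list[str]) -> None:
    """Stand-in for the network call that uploads audio for verification."""
    print("uploading to cloud:", " ".join(frames))

def listen(stream: list[str]) -> None:
    for i, frame in enumerate(stream):
        if local_wake_word_detected(frame):
            # Only now does audio leave the device: the wake word plus
            # the following capture window go to the cloud for
            # verification and full speech recognition.
            send_to_cloud(stream[i : i + BUFFER_FRAMES])
            return
    print("no wake word heard; nothing uploaded")

listen("remind me alexa set a timer for ten minutes".split())
```

The key point of the design is the `local_wake_word_detected` gate: everything before it stays on the device, and a false positive at that gate is exactly what a “mis-activation” is.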

How do we know that Siri and Alexa aren’t always listening? Such a practice would require constantly uploading recorded audio to the cloud, which would balloon users’ data consumption. Traffic on that scale wouldn’t be practical to hide and would quickly be flagged by researchers and watchdogs who monitor network activity.
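A back-of-envelope calculation shows the scale involved. The audio format below is our assumption (16 kHz, 16-bit mono is a common format for speech recognition), not a figure from either company:

```python
# Back-of-envelope estimate of what "always listening" would cost in
# bandwidth. Assumed format: 16 kHz, 16-bit mono speech audio.

SAMPLE_RATE_HZ = 16_000        # samples per second
BYTES_PER_SAMPLE = 2           # 16-bit audio
SECONDS_PER_DAY = 24 * 60 * 60

bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY
print(f"{bytes_per_day / 1e9:.1f} GB/day uncompressed")  # ~2.8 GB/day
# Even with ~10x speech compression, that is still hundreds of
# megabytes per device per day -- easy to spot in network traffic.
```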

Accidental activations

In 2020, researchers played TV programs like Narcos and Grey’s Anatomy near smart speakers (some equipped with Siri and Alexa) to see how many times they were accidentally activated. They observed 0.95 mis-activations per hour, which equated to 1.43 mis-activations for every 10,000 words spoken. More worryingly, they found that, for some devices, 1 in 10 mis-activations lasted at least 10 seconds. 

If your voice assistant is within earshot of a TV, it’s likely that at least some of your conversations have been picked up and subsequently stored on remote servers. And the researchers note that, wherever the device is placed, there is some chance of it being inadvertently triggered and recording you.

Siri, for example, can be awakened by words rhyming with “Hey” or “Hi”, followed by a voiceless “s”/“f”/“th” sound and an “i”/“ee” vowel. Examples include “they … secretly”, “I’m sorry” or “hey … is here”.

For Alexa, sentences starting with “I” followed by a “K” or a voiceless “S” sound can activate devices – for example, “I care about,” “I messed up,” or “I got something.” Words and phrases like these are fairly likely to crop up in normal conversation, and other sound combinations further increase the likelihood that you will be inadvertently recorded – albeit only briefly.
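As a toy illustration of why near-homophones cause false triggers, the sketch below scores phrases against a wake word using simple string similarity. This is a crude stand-in: real wake-word detectors compare acoustic features of the audio, not spellings, and the 0.5 threshold is an invented parameter. Still, it shows how a permissive matcher can let “they secretly” slip past as “Hey Siri”.

```python
# Toy demo of false wake-word triggers. Real detectors compare acoustic
# features, not text; difflib's string similarity is a crude stand-in
# for that fuzziness, and the threshold is invented for illustration.
from difflib import SequenceMatcher

WAKE = "hey siri"
THRESHOLD = 0.5  # hypothetical trigger threshold

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; higher means the strings look more alike."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for phrase in ["hey siri", "they secretly", "good morning"]:
    score = similarity(WAKE, phrase)
    verdict = "ACTIVATES" if score >= THRESHOLD else "ignored"
    print(f"{phrase!r}: score={score:.2f} -> {verdict}")
```

Running it, “they secretly” scores above the threshold while “good morning” does not – the same trade-off a real detector faces between catching mumbled wake words and ignoring lookalike phrases.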

Consumer concern over potential mis-activations led to a class action lawsuit against Apple in 2019. The plaintiffs claimed to have been recorded without giving consent, i.e. without explicitly saying “Hey Siri”, and that these recordings were disclosed to third parties, such as advertisers.

The case was dismissed but revived in 2021, when a federal judge ruled that Apple would have to face nearly all of the lawsuit, and that plaintiffs could try to prove Siri routinely recorded their private conversations because of “accidental activations”.

In 2021, a class action lawsuit was filed against Amazon for similar reasons. The complaint alleged that Alexa-enabled devices frequently captured conversations by accident; examples of non-wake words that caused Alexa to activate included “unacceptable” and “election”.

That same year, four healthcare workers filed a lawsuit against Amazon alleging that Alexa devices may have recorded private conversations with patients after being accidentally activated – a clear breach of patient confidentiality.

The lawsuit pointed out that “despite Alexa’s built-in listening and recording functionalities, Amazon failed to disclose that it makes, stores, analyzes and uses recordings of these interactions” when the devices were purchased.

What’s wrong with voice assistants listening to conversations?

Digital voice assistants, apparently trapped within the confines of their devices, seem far removed from day-to-day human experience. Many people assume they are powered purely by a combination of algorithms and the internet. While this is partially true, humans are still intimately involved in making the service work.

In 2019, the Guardian reported that Apple contractors regularly heard “confidential medical information, drug deals, and recordings of couples having sex” as part of their job providing “quality control”. The Irish Examiner reported that contractors were required to listen to approximately 1,000 Siri recordings every day. This was despite Apple never explicitly disclosing the existence of these contractors to users.

That same year, Bloomberg reported that Amazon employees were routinely listening to Alexa recordings as part of quality control or for other services – for example, transcribing artists’ names and linking them to musicians in the company’s database.

Very few people who signed up for Siri or Alexa imagined their conversations would be analyzed by other people. In the case of Apple, the revelations resulted in the company temporarily suspending its contracted listeners. 

For its part, Amazon says that its employees listen to “a fraction of one percent of interactions” in order to keep improving the Alexa experience. This might not sound like a lot, until you consider how many conversations are being had at any one time on the “hundreds of millions of Alexa-enabled devices” that Amazon says have been sold. 
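A rough, deliberately conservative estimate shows why. All three inputs below are our assumptions for illustration, not figures Amazon has published:

```python
# Rough scale estimate. All inputs are assumptions for illustration --
# Amazon has not published these figures.
DEVICES = 100_000_000      # "hundreds of millions" sold; assume the low end
INTERACTIONS_PER_DAY = 2   # assumed average interactions per device
REVIEW_RATE = 0.001        # "a fraction of one percent" -> assume 0.1%

reviewed = DEVICES * INTERACTIONS_PER_DAY * REVIEW_RATE
print(f"~{reviewed:,.0f} recordings potentially reviewed per day")  # ~200,000
```

Even under these cautious assumptions, “a fraction of one percent” could amount to hundreds of thousands of recordings passing before human reviewers every day.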

It’s perhaps understandable that Amazon and other voice-assistant makers would want to use a percentage of voice recordings to “train” their speech recognition and natural language understanding systems. But can we trust that this is all they’re doing? After all, Amazon in particular is in the business of data. 

The more cynical will not be surprised to find out that Amazon appears to be using information gleaned from conversations to serve targeted ads. In a 2022 study, a team of researchers found that “Amazon processes smart speaker interaction data to infer user interests and uses those inferences to serve targeted ads to users”. This data is particularly profitable, resulting in “as much as 30X higher bids in ad auctions, from third party advertisers”. The researchers note that this use of consumer data is “often not clearly disclosed in their policy documents”. 

But what about Siri? The good news is that requests to Siri aren’t used to build a marketing profile or tied to a user’s Apple ID. That’s not to say Apple isn’t trading in data – it is. The difference is that Apple creates targeted ads using everything apart from voice data. 

How to protect your privacy while using Siri/Alexa

Apple and Amazon have, in recent years, made it easier for users to control their privacy while using their respective voice assistants.

Amazon

You can opt out of interest-based ads from Amazon on its Advertising Preferences page. Click the button next to “Do not show me interest-based ads provided by Amazon”. While you’re on this page, you can also choose to “Delete ad data” by clicking the relevant button under the “Delete personal information linked to this device from our ad systems” heading.

To view and delete voice recordings, visit the Review Voice History page. You can also request a copy of your data from this page: select “Alexa and Echo devices” from the “Select data category” drop-down menu, and then click “Submit requests”.

Alternatively, you can manage your privacy options using Alexa. Simply say, “Alexa, how do I review my privacy settings?” and Alexa will send you a direct link in the Alexa app to your Alexa Privacy Settings.

Apple

You can turn off personalized ads using the toggle switch on the Apple Advertising page: go to “Settings”, then “Privacy & Security”, and then “Apple Advertising”. You can download and view any other personal data Apple collects via its Data and Privacy portal.

There are also a few options for configuring Siri. Select “Settings”, and then “Siri & Search”. This page has toggle switches that let you stop Siri from listening for “Hey Siri”, prevent Siri from being activated by the side button, and block access to Siri when the device is locked.