
When Speakers Are All Ears

Understanding when smart speakers mistakenly record conversations.

Daniel J. Dubois (Northeastern University), Roman Kolcun (Imperial College London), Anna Maria Mandalari (Imperial College London), Muhammad Talha Paracha (Northeastern University), David Choffnes (Northeastern University), Hamed Haddadi (Imperial College London)

Last updated: 02/14/2020

Summary – Voice assistants such as Amazon’s Alexa, OK Google, Apple’s Siri, and Microsoft’s Cortana are becoming increasingly pervasive in our homes, offices, and public spaces. While convenient, these systems also raise important privacy concerns: what exactly are these systems recording from their surroundings, and does that include sensitive and personal conversations that were never meant to be shared with companies or their contractors?

These aren’t just hypothetical concerns from paranoid users. There has been a slew of recent reports about devices constantly recording audio, and about cloud providers outsourcing the transcription of recordings of private and intimate interactions to contractors.

Anyone who has used voice assistants knows that they accidentally wake up and record when the “wake word” isn’t spoken. For example, “Seriously” sounds like the wake word “Siri” and often causes Apple’s Siri-enabled devices to start listening. There are many other anecdotal reports of everyday words in normal conversation being mistaken for wake words.

For the past six months, our team has been conducting research to go beyond anecdotes through repeatable, controlled experiments that shed light on what causes voice assistants to mistakenly wake up and record.

Below, we provide a brief summary of our approach, our findings so far, and their implications. This is ongoing research, and we will update this page as we learn more.
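The post does not describe the measurement setup in detail, but as a rough illustration of what a repeatable, controlled misactivation experiment could look like, here is a minimal Python sketch: it plays candidate audio clips at a device and flags clips that coincide with a burst of network traffic from the speaker, one plausible proxy for a wake-up. The device address, interface, packet threshold, clip directory, and helper functions are all illustrative assumptions, not the authors’ actual testbed.

```python
#!/usr/bin/env python3
"""Hypothetical misactivation test harness (illustrative sketch only).

Assumes a Linux machine on the same LAN as the smart speaker, with
`aplay` for playback and `tcpdump` available with capture privileges.
DEVICE_IP, IFACE, CAPTURE_SECONDS, and PACKET_THRESHOLD are placeholder
assumptions, not values from the study.
"""

import csv
import subprocess
import time
from pathlib import Path

DEVICE_IP = "192.168.1.50"   # assumed LAN address of the speaker under test
IFACE = "eth0"               # assumed capture interface
CAPTURE_SECONDS = 10         # how long to watch for traffic after playback
PACKET_THRESHOLD = 20        # packets per window suggesting an audio upload


def packets_while_playing(clip: Path) -> int:
    """Play one clip and count packets to/from the device around it."""
    # Start capturing before playback so a wake-up burst is not missed.
    cap = subprocess.Popen(
        ["tcpdump", "-l", "-nn", "-q", "-i", IFACE, "host", DEVICE_IP],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, text=True)
    subprocess.run(["aplay", str(clip)], check=True)  # play through room speaker
    time.sleep(CAPTURE_SECONDS)                       # leave time for an upload
    cap.terminate()
    out, _ = cap.communicate()
    return len(out.splitlines())  # tcpdump prints one line per packet


def main() -> None:
    clips = sorted(Path("clips").glob("*.wav"))  # assumed clip directory
    with open("misactivations.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["clip", "packets", "suspected_wake"])
        for clip in clips:
            n = packets_while_playing(clip)
            # A traffic burst right after playback suggests the device
            # mistook something in the clip for its wake word.
            writer.writerow([clip.name, n, n > PACKET_THRESHOLD])


if __name__ == "__main__":
    main()
```

A real experiment would also have to account for the device’s background traffic (heartbeats, telemetry), which is why the sketch uses a per-window packet threshold rather than treating any traffic at all as a wake-up.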
