Future of Privacy Forum (FPF) released a new infographic: Microphones & the Internet of Things: Understanding Uses of Audio Sensors in Connected Devices (read the press release here). From Amazon Echos to smart TVs, we are seeing more home devices integrate microphones, often to provide a voice user interface powered by cloud-based speech recognition.
Last year, we wrote about the “voice first revolution” in a paper entitled “Always On: Privacy Implications of Microphone-Enabled Devices.” This paper created early distinctions between different types of consumer devices and provided initial best practices for companies to design their devices and policies in a way that builds trust and understanding. Since then, microphones in home devices — and increasingly, in city sensors and other out-of-home systems — have continued to generate privacy concerns. This has been particularly notable in the world of children’s toys, where the sensitivity of the underlying data invites heightened scrutiny (leading the Federal Trade Commission to update its guidance and clarify that the Children’s Online Privacy Protection Act applies to data collected from toys). Meanwhile, voice-first user interfaces are becoming more ubiquitous and may one day represent the “normal,” default method of interacting with many online services and connected devices, from our cars to our home security systems.
As policymakers consider the existing legal protections and future direction for the Internet of Things, it’s important to first understand the wide range of ways that these devices can operate. In this infographic, we propose that regulators and advocates thinking about microphone-enabled devices should be asking three questions: (1) how is the device activated; (2) what kind of data is transmitted; and, on the basis of those two questions, (3) what legal protections may already be in place (or not yet in place).
#1 – HOW IS THE DEVICE ACTIVATED?
In this section, we distinguish between Manual, Always Ready (i.e., speech-activated), and Always On devices. Always Ready devices often have familiar “wake phrases” (e.g., “Hey Siri”). Careful readers will notice that the term “Always Ready” applies broadly to devices that buffer and re-record locally (for the Amazon Echo, roughly every 1-3 seconds) and transmit data only when they detect a sound pattern. Sometimes that pattern is a specific phrase (“Alexa”), sometimes it is customizable (e.g., Moto Voice lets you record your own launch phrase), and sometimes it need not be a phrase at all: a home security camera, for example, might begin recording when it detects any noise. Overall, Always Ready devices have serious benefits and (if designed with the right safeguards) can be more privacy protective than devices designed to be on and running 100% of the time.
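To make the buffer-and-discard pattern concrete, here is a minimal sketch of an Always Ready device. Everything in it is illustrative (the class name, the one-second text “chunks” standing in for audio frames, and the substring wake-word check are all hypothetical simplifications, not any vendor’s actual implementation); the point is only that audio is continuously overwritten in a short local buffer and nothing is transmitted until a wake pattern is detected.

```python
from collections import deque

class AlwaysReadyMicrophone:
    """Sketch of an 'Always Ready' device: audio lives in a short local
    ring buffer and is transmitted only after a wake pattern is heard."""

    def __init__(self, wake_phrase="alexa", buffer_seconds=3):
        self.wake_phrase = wake_phrase
        # maxlen makes deque a ring buffer: old audio is overwritten, not kept
        self.buffer = deque(maxlen=buffer_seconds)
        self.transmitted = []  # stands in for uploads to a cloud service

    def hear(self, chunk):
        """Process one second of 'audio' (a string, for illustration)."""
        self.buffer.append(chunk)  # continuously re-records locally
        if self.wake_phrase in chunk.lower():
            # Only now does any audio leave the device
            self.transmitted.append(list(self.buffer))
            self.buffer.clear()

mic = AlwaysReadyMicrophone()
for chunk in ["dog barking", "traffic noise", "alexa, play music"]:
    mic.hear(chunk)
```

After this loop, `mic.transmitted` holds a single upload triggered by the wake phrase; the earlier background noise was retained only in the short rolling buffer until then.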
#2 – DATA TRANSMITTED
In this section, we illustrate the variety of data that can be transmitted via microphones. If a device is designed to enable speech-to-text transcription, for example, it will probably need to transmit data from within the normal range of human hearing — which, depending on the sensitivity, might include background noises like traffic or dogs barking. Other devices might be designed to detect sound in specialized ranges, and still others might not require audio to be transmitted at all. With the help of efficient local processing, we may begin to see more devices that operate 100% locally and only transmit data about what they detect. For example, a city sensor might alert law enforcement when a “gunshot” pattern is detected.
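A toy sketch of that metadata-only design follows. The feature names and thresholds here are invented for illustration (no real acoustic model classifies gunshots from two numbers); what matters is the shape of the output: the raw audio is analyzed on-device and only an event label, never the audio itself, is included in the transmitted payload.

```python
def classify_locally(features):
    """Hypothetical stand-in for an on-device acoustic classifier.

    Takes summary features of a sound and returns an event label, or
    None if no event of interest was detected. All thresholds are
    illustrative, not drawn from any real sensor.
    """
    loud = features.get("peak_db", 0) > 120      # very loud impulse
    short = features.get("duration_ms", 1000) < 500  # brief, percussive
    if loud and short:
        return "gunshot"
    return None

def build_payload(features):
    """Build the metadata-only message a sensor would transmit.

    Returns a small dict describing the detected event, or None when
    nothing should be sent. Note that no audio appears in the payload.
    """
    label = classify_locally(features)
    return {"event": label} if label else None
```

A loud, brief impulse yields a `{"event": "gunshot"}` payload, while ordinary street noise yields nothing at all: the most privacy-protective transmission is the one that never happens.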
#3 – WHAT ARE THE EXISTING LEGAL PROTECTIONS?
In this section, we identify the federal and state laws in the United States that may be leveraged to protect consumers from unexpected or unfair collection of data using microphones. Although not all laws will apply in all cases, it’s important to note that certain sectoral laws (e.g., HIPAA) are likely to apply regardless of whether the same kind of data is collected through writing or through voice. In other instances, the broad terms of state anti-surveillance statutes and privacy torts may apply. Finally, we outline a few considerations for companies seeking to innovate, noting that privacy safeguards must be two-fold: technical and policy-driven.
About the Author
Stacey Gray is a CIPP/US-certified attorney and policy counsel at the FPF, focusing on issues of data collection in online and mobile platforms, ad tech and the Internet of Things. At FPF, she has worked on FCC and FTC public filings and publishes extensive work related to cross-device tracking, smart home technologies and federal regulation and enforcement actions. Stacey graduated cum laude from Georgetown University Law Center in 2015, where she first worked in civil rights litigation as a law clerk for Victor M. Glasberg & Associates and as a member of the civil rights division of the Institute for Public Representation. With a background in biotech and coding, Stacey is interested in the ways in which technology can be harnessed to advance civic knowledge and civil rights while safeguarding consumer privacy. Recent publications include “Cross-Device: Understanding the State of State Management” and “Always On: Privacy Implications of Microphone-Enabled Devices.”