In a demonstration video, an SRLabs researcher asks a Google Home for a random number, which it produces and voices. However, even though the action seems complete, the program keeps listening, and a third-party computer receives a transcription of anything said afterward.
For Alexa, the analysts created a simple horoscope skill (below). The analyst asks Alexa for a "lucky" reading, and Alexa asks for her zodiac sign. After she answers, the device relays the related horoscope reading while still listening through the mic. Even when told to stop the skill, Alexa continues to monitor the sounds in the room and sends them to the receiving software.
SRLabs' methods in all cases relied on a flaw that allowed the apps to continuously feed the smart speakers a sequence of characters (U+D801, dot, space) that the devices cannot verbalize. This trick keeps the channel open for both speaking and listening, even though the device remains silent.
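As an illustration (not SRLabs' actual code), the padding trick can be sketched in a few lines: a skill's spoken response is extended with repeats of the unpronounceable sequence, so the text-to-speech engine produces silence while the session, and the microphone, stays open. The function name and response text here are hypothetical.

```python
# Illustrative sketch of the "silent padding" SRLabs described.
# U+D801 is a lone surrogate code point, which text-to-speech
# engines cannot render as sound.

UNSPEAKABLE = "\ud801. "  # U+D801, dot, space -- the reported sequence


def silent_padding(repeats: int) -> str:
    """Return a string the speech engine voices as silence."""
    return UNSPEAKABLE * repeats


# A hypothetical malicious response: a short audible phrase followed
# by a long stretch of "silence" during which the device keeps listening.
response_text = "An error occurred. " + silent_padding(100)
print(len(response_text))  # 19 audible chars + 300 silent chars = 319
```

The key point is that the device treats the padding as ordinary speech output, so it never signals that the session has ended.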
Google and Amazon carefully examine smart speaker software before allowing it on their platforms, but they are not as careful with updates. Malicious parties can quietly slip spyware into patches for already-approved apps, which is precisely what the researchers did for the US versions. For the German iterations of the same malware, SRLabs was able to get approval without even that subterfuge.
The analysts warned both companies well before making the security flaws public on Monday. The team also posted several videos to YouTube showing the software in action. There is no evidence that anyone other than the researchers has used these exploits.
In response to the findings, Amazon has implemented countermeasures to detect and prevent skills from being misused this way. Likewise, Google said that it has updated its review process to look for this type of behavior and will remove any Actions that violate its operating procedures.
“All Actions on Google are required to follow our developer policies, and we prohibit and remove any Action that violates these policies,” a Google spokesperson told Ars Technica. “We have review processes to detect the type of behavior described in this report, and we removed the Actions that we found from these researchers. We are putting additional mechanisms in place to prevent these issues from occurring in the future.”