The “lamphone” measures the tiny fluctuations in a bulb’s light output created as sound waves strike the bulb and cause it to vibrate ever so slightly. An electro-optical sensor captures those fluctuations, and the system then isolates the audio signal from the optical signal in four processing stages.
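To make the core idea concrete, here is a minimal, stdlib-only Python sketch of the general principle, not the researchers’ actual four-stage pipeline: the bulb’s steady brightness is a large DC offset, the audio lives in the tiny AC fluctuations riding on top of it, so subtracting the mean and rescaling the residual recovers an audio-band waveform. The signal values and function name are illustrative assumptions.

```python
import math

def recover_audio(light_samples):
    """Illustrative sketch: turn raw light-intensity samples into audio.

    Not the Lamphone pipeline itself -- just the underlying principle:
    remove the bulb's constant (DC) brightness, then normalize the
    remaining tiny fluctuations to the [-1, 1] audio range.
    """
    # 1. Remove the DC component (the bulb's steady brightness).
    mean = sum(light_samples) / len(light_samples)
    ac = [s - mean for s in light_samples]
    # 2. Normalize the residual fluctuations so they span [-1, 1].
    peak = max(abs(s) for s in ac) or 1.0
    return [s / peak for s in ac]

# Demo: a 440 Hz "audio" tone imprinted as a tiny modulation on bright light.
fs = 8000                                   # assumed sample rate (Hz)
audio = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
light = [1000.0 + 0.05 * a for a in audio]  # huge DC offset, tiny AC signal
recovered = recover_audio(light)
```

In practice the researchers’ version must also cope with noise and the bulb’s frequency response, which is why their pipeline needs multiple filtering and equalization stages rather than this single subtract-and-scale step.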
Unlike similar attacks that analyze the effects of sound waves on nearby objects, this version works passively, externally and, most critically, in real time.
To demonstrate the effectiveness of the attack, the team set up a test in an office building with a hanging E27 light bulb and positioned a telescope 25 meters away on a walkway. Two songs and an excerpt of a speech by Donald Trump were then played in the room, none of which could be heard from the outside location.
Impressively, the technique was able to capture and convert the audio at a high enough quality to be recognized by the music-identification app Shazam. The speech, meanwhile, was successfully transcribed by Google’s speech-to-text API.
Ben Nassi, the security researcher at Ben-Gurion University who developed the technique with fellow researchers Boris Zadov and Yaron Pirutin, said they want to raise awareness of this kind of attack vector so that both sides of the surveillance equation know what is possible.
As Wired notes, the technique does have its limitations. The use of a hanging light bulb suggests that a bulb in a fixed socket (the far more common configuration) might not vibrate enough for the attack to be effective. It was also noted that the voice and music recordings were played at volumes that were “louder than the average human conversation.”
Furthermore, the test was conducted at a range of 25 meters, although the distance could be increased with better equipment, such as a larger telescope or a more sensitive electro-optical sensor.