
Siri, Google Assistant, Alexa: researchers hack voice assistants with laser attack

2019-11-05T17:49:48.147Z


This attack even works through a windowpane: scientists have found a way to secretly give orders to voice assistants - by laser. The researchers have uncovered a weak spot.



Imagine you are using a voice assistant like Siri or the Google Assistant, and suddenly it is executing commands you never gave it. Commands that no one spoke in the room, because you are alone. But on the device that houses the voice assistant, you notice a small point of light.

The scenario sounds scary - and beyond research experiments, fortunately, no such attacks are known. But it is not unrealistic, as the findings of a team of scientists from the University of Michigan and the Tokyo University of Electro-Communications, which have now been published online, show. Under the project name "Light Commands", the five researchers demonstrate how voice assistants can be hacked with a laser.

With such an attack, the researchers explain, voice assistants could, for example, be ordered to open house or garage doors or to make online purchases - depending on how the devices are networked on site. The user would not hear the injected commands themselves, but - if they are present at all - only the device's acoustic response to them, sometimes accompanied by lit-up displays. Otherwise, the attack could only be noticed by the reflection of the laser beam on the target device.

Short video: the researchers demonstrate the attack

In principle, many devices are at risk

According to the researchers, many widespread devices running the Google Assistant, Apple's Siri, or Amazon's Alexa are vulnerable to the attack. This is due to the technology built into them. The attack targets so-called MEMS microphones, which are installed in smart speakers as well as in smartphones, and which the assistants use to pick up their users' voice commands. The microphones convert sound into electrical signals.

However, the researchers found that these electrical signals can also be elicited from the microphones by exposing them to laser light of varying intensity. In their attack, which has so far only been carried out in test environments, they aimed a laser at a microphone, in some cases through a window. The commands are thus transmitted inaudibly, in contrast to ordinary voice commands. It is "possible to make microphones respond to light as if it were sound", researcher Takeshi Sugawara told the tech magazine "Wired".
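To illustrate the principle: the spoken command is essentially amplitude-modulated onto the laser's intensity, so that the light hitting the microphone varies the way the sound pressure of a voice would. The following Python sketch shows that idea in minimal form; the bias and swing current values are illustrative assumptions, not parameters from the researchers' setup.

```python
import numpy as np

def amplitude_modulate(audio, bias_ma=200.0, swing_ma=150.0):
    """Map an audio command onto a laser diode drive current (illustrative only).

    The attack works because a MEMS microphone also produces an electrical
    signal when the intensity of light hitting it varies. Modulating the
    laser's drive current with the audio waveform therefore reproduces the
    command inside the microphone without any audible sound.

    audio    -- 1-D numpy array of audio samples in [-1.0, 1.0]
    bias_ma  -- constant diode current (mA) keeping the laser switched on
    swing_ma -- peak current deviation (mA) used to encode the waveform
    """
    audio = np.clip(audio, -1.0, 1.0)
    return bias_ma + swing_ma * audio  # current-vs-time signal for the driver


if __name__ == "__main__":
    # A one-second 440 Hz tone stands in for a recorded voice command.
    t = np.linspace(0, 1.0, 16000, endpoint=False)
    command = 0.8 * np.sin(2 * np.pi * 440 * t)
    drive = amplitude_modulate(command)
    print(drive.min(), drive.max())  # stays within the chosen current range
```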

Explanatory video by the researchers about Light Commands

In practice, however, such an attack has limits, as the scientists themselves point out. First of all, the laser must be aimed precisely at the microphone, which is not always easy. And even where that succeeds, in many scenarios simply moving the device away from the window would make the attack impossible.

Attackers need the right equipment

In addition, an attacker needs suitable equipment costing a few hundred euros, especially if they are not in close proximity to the microphone. The researchers report that they were able to transmit signals over a distance of 110 meters along a corridor - a longer distance was not tested there.

Another short video showing an attack by the researchers

On their website, the researchers present test results for various devices, showing the minimum laser power needed for the attack to work from a distance of 30 centimeters. For the Google Home, this is reportedly 0.5 milliwatts; for a first-generation Amazon Echo Plus, 2.4 milliwatts; for an Echo Spot, 29 milliwatts. A Samsung Galaxy S9 (with the integrated Google Assistant) requires at least 60 milliwatts.

With a laser of that power, an attack on the S9 is possible from at most about five meters away, according to a table published by the team. For most of the other tested devices, the attack worked from more than fifty meters away from the microphone.

Speaker recognition does not necessarily protect

Incidentally, activating speaker recognition on a gadget - so that it responds only to commands from a particular voice - does not help defend against laser attacks. The laser commands also worked on devices configured this way, the researchers say. In general, speaker recognition applies only to a device's wake words anyway: in case of doubt, an attacker could therefore wait for the real user to say the wake word and then inject their own commands via laser.

One useful protection, for example, would be for the device to ask the user a random question before executing a command and to wait for the correct answer. That would at least slow down attackers who are not within earshot of the device.
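What such a check could look like is sketched below in Python. The speak() and listen() callables are hypothetical stand-ins for an assistant's text-to-speech and speech-recognition functions, and the set of sensitive commands is an assumption made for illustration; this is not a real assistant API.

```python
import random

# Hypothetical list of commands worth guarding with a challenge.
SENSITIVE_COMMANDS = {"unlock front door", "open garage", "place order"}


def confirm_with_challenge(command, speak, listen):
    """Ask a random question before executing a sensitive command.

    An attacker injecting commands by laser from outside typically cannot
    hear the question, so they cannot supply the expected answer.
    speak(text) and listen(timeout_seconds) are assumed helpers.
    """
    if command not in SENSITIVE_COMMANDS:
        return True  # harmless commands run without a challenge

    a, b = random.randint(2, 9), random.randint(2, 9)
    speak(f"To confirm, what is {a} plus {b}?")
    answer = listen(timeout_seconds=5)
    return answer is not None and answer.strip() == str(a + b)
```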

Hardware manufacturers could also protect their products better against laser attacks, the researchers say. One approach they propose is to install additional microphones. If, as during a laser attack, only a single microphone picks up a signal, the device could treat this as an anomaly and refuse to execute the command.
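A minimal sketch of that idea follows, assuming each microphone delivers a block of samples as a numpy array; the threshold values are illustrative and not taken from the paper.

```python
import numpy as np

def looks_like_light_injection(mic_frames, active_rms=0.01, ratio=10.0):
    """Flag a command as suspicious if only one microphone 'hears' it.

    A spoken command reaches all microphones of a smart speaker at similar
    levels, while a laser usually hits just one. mic_frames is a list of
    1-D numpy arrays, one block of samples per microphone; active_rms and
    ratio are illustrative thresholds.
    """
    rms = np.array([np.sqrt(np.mean(frame ** 2)) for frame in mic_frames])
    loudest = rms.max()
    if loudest < active_rms:
        return False                      # no command signal at all
    active = np.sum(rms > loudest / ratio)
    return active <= 1                    # only one mic sees the signal


if __name__ == "__main__":
    strong = 0.3 * np.random.randn(16000)
    silence = 0.001 * np.random.randn(16000)
    print(looks_like_light_injection([strong, silence, silence]))  # True
    print(looks_like_light_injection([strong, strong, strong]))    # False
```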

In statements to "Wired", Google and Amazon said they would examine the research results. According to the magazine, Apple declined to comment on the topic.

Source: spiegel
