
Experiment at the Max Planck Institute: splashes of color disrupt self-driving cars

2019-10-29T20:37:49.737Z


Autonomous cars need to detect obstacles and hazards reliably. Researchers at the Max Planck Institute disrupted exactly this central ability in an experiment - with the simplest of means.



Autonomous cars are considered an important technology of the future. They are supposed to revolutionize individual mobility, with fleets of robotic vehicles soon whizzing through traffic. Bookable via app, a ride in one is expected to cost customers only a fraction of a taxi fare, because the biggest cost factor - the human driver - is eliminated. That is why not only traditional car manufacturers but also technology companies such as Google's parent Alphabet are working on this technology.

But the autopilot systems are far from being as robust as hoped. For the vehicles to become everyday technology, they must be able to travel safely on the road and reliably detect obstacles. Cameras continuously deliver data that the car must then interpret correctly. It is exactly this ability that researchers at the Max Planck Institute for Intelligent Systems in Tübingen paralyzed in an experiment - with a small, colorful patch.

(Image: MPI-IS) This pattern confused the optical flow algorithms

In advanced systems, the interpretation of the camera images is handled by optical flow algorithms. These are based on so-called neural networks, a form of artificial intelligence whose mechanisms loosely resemble those of the brain. Apparently, these algorithms are astonishingly easy to disturb - with a blue-and-red pattern. "It took us three, maybe four hours to create the pattern," explains Anurag Ranjan, one of the authors of the study.
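To illustrate what such an algorithm computes: optical flow assigns each pixel a motion vector between two consecutive camera frames. The minimal Python sketch below uses OpenCV's classical Farneback method rather than a neural network, purely to show the quantity under attack; the file names are placeholders.

import cv2

# Two consecutive frames from a driving scene (placeholder file names)
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# flow[y, x] = (dx, dy): the apparent motion of each pixel between frames
flow = cv2.calcOpticalFlowFarneback(
    frame1, frame2, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
print("mean motion per pixel:", float(magnitude.mean()), "pixels")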

Small spot leads to severe calculation errors

Even if the pattern itself does not move in the image captured by the camera - placed on a traffic sign, for instance - the researchers say it can throw the neural networks' calculations off so badly that they suddenly no longer detect motion at all. In one experiment, the algorithm was unable to detect the movements of a test subject as long as the pattern was visible in the image.

As soon as the person hid the pattern, the system recognized the subject's movements again. The effect also appeared when the camera itself moved past the pattern, which was positioned like a traffic sign in the laboratory test.

The patch with the pattern does not have to be particularly large: according to the researchers, covering just one percent of the entire image captured by the system is enough to attack it and cause serious calculation errors. The bigger the patch, the worse the effect. Based on these findings, the researchers informed automakers developing self-driving models before publishing their study. According to the Max Planck Institute, however, the automakers did not react.
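A quick back-of-the-envelope calculation shows how small that is. The resolution below is an assumed example, not a figure from the study:

# How big is a patch covering one percent of a camera image?
width, height = 1280, 960            # assumed example resolution
patch_pixels = 0.01 * width * height
side = patch_pixels ** 0.5           # side of an equivalent square patch
print(f"{patch_pixels:.0f} pixels, roughly a {side:.0f} x {side:.0f} square")
# -> 12288 pixels, roughly a 111 x 111 square in a 1280 x 960 frame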

Nevertheless, the danger that currently available cars are affected is very low, says Oliver Wasenmüller from the German Research Center for Artificial Intelligence: "Deep neural networks are still very rarely used in production vehicles; more traditional methods are applied there, which do not fall for this kind of deception."

Humans see a 30 km/h sign, the car reads 100 km/h

However, optical flow algorithms are considered a likely component of future self-driving cars. "Neural networks are important in object recognition, that is, in recognizing traffic signs and pedestrians," explains Oliver Wasenmüller. With such a pattern, for example, a sign that shows a 30 km/h speed limit to the human eye could be turned into a 100 km/h sign for the car.

"But this problem can be managed," says Wasenmüller. Either by training the neural network to detect and sort out such interference patterns. Or by redundancy, so additional systems that test the same shield with a different algorithm and therefore do not fall for the deception.

Source: Spiegel
