Challenge: Adversarial attacks pose a real threat to the safety of autonomous driving. They are deliberate manipulations of the visual input data that force a prediction system to a false conclusion. For example, a traffic sign classifier that works perfectly under normal conditions can be fooled by an adversarially manipulated input into mistaking a STOP sign for an 80 km/h speed limit sign.
Moreover, such attacks are hard to spot: the manipulation is either imperceptible to humans or camouflaged to look like ordinary vandalism or graffiti. Given the safety requirements, there is a need for automatic detection of such adversarial manipulations to prevent malicious behaviour of self-driving cars.
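To illustrate the kind of manipulation described above, the following is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known adversarial attack. The "classifier" here is a hypothetical logistic regression on a flattened image, not the traffic sign model from the project; real attacks target deep networks in the same way, using the gradient of the loss with respect to the input pixels.

```python
import numpy as np

# Toy FGSM illustration on a hypothetical logistic-regression "image classifier".
# All weights and inputs are random stand-ins, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=64)            # model weights (assumed)
b = 0.0
x = rng.uniform(0, 1, size=64)     # input "image", pixels in [0, 1]
y = 1.0                            # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss w.r.t. the input pixels:
# for logistic regression, dL/dx = (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: shift every pixel by epsilon in the direction that
# increases the loss, then clip back into the valid pixel range.
epsilon = 0.1
x_adv = np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

p_adv = sigmoid(w @ x_adv + b)
print(f"confidence in true class before: {p:.3f}, after: {p_adv:.3f}")
```

Even though each pixel moves by at most 0.1, the perturbation is aligned with the loss gradient, so the model's confidence in the true class drops noticeably; on deep networks the same mechanism can flip the predicted class entirely.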
Given an input image, so-called explanation or heatmapping methods aim to identify the pixels in the image responsible for the final classification decision.
The main goal of this project was to assess the effectiveness of these methods for detecting adversarial manipulations.
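As a concrete sketch of what a heatmapping method computes, consider gradient-times-input relevance on a toy linear classifier. For a linear model the gradient of the class score with respect to the input is simply the weight matrix, so each pixel's relevance is its weight times its value. This is only an illustration of the principle; the methods evaluated in practice (e.g. LRP or Grad-CAM) apply the same idea to deep networks, and all names below are illustrative.

```python
import numpy as np

# Gradient * input "heatmap" for a toy linear classifier.
# Weights and inputs are random stand-ins, for illustration only.
rng = np.random.default_rng(1)
w = rng.normal(size=(8, 8))          # hypothetical weights for one class
x = rng.uniform(0, 1, size=(8, 8))   # input "image"

score = float((w * x).sum())         # class score of the linear model

# For a linear model, d(score)/dx = w, so the
# gradient*input relevance map is element-wise w * x.
heatmap = w * x

# Pixels with the largest positive relevance contributed most to the score.
top = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"score = {score:.3f}, most relevant pixel at {top}")
```

A useful property of this toy case is that the relevance map sums exactly to the class score, a conservation property that methods such as LRP aim to preserve for deep networks as well.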
Results and Impact
The customer is now able to assess the feasibility of adversarial risk mitigation strategies in autonomous vehicles.
We provide our customers with innovative solutions built on cutting-edge technologies from artificial intelligence and machine learning.
13355 Berlin, Germany
+49 151 19569392
Since 1996, the P3 group has provided innovative engineering services in the automotive, telecommunications, aviation, energy, and public sectors. Its foundation is 3,900 highly qualified and motivated employees at over 40 locations in 17 countries.