Perception Deception: Physical Adversarial Attack Challenges and Tactics for DNN-Based Object Detection

Presented at Black Hat Europe 2018, Dec. 6, 2018, 1:30 p.m. (50 minutes).

DNNs have been highly successful at object detection, which is critical to the perception systems of autonomous driving, yet they have also been found vulnerable to adversarial examples. There has been an ongoing debate over whether perturbations to the sensor input, such as the video stream from the camera, are practically achievable. Instead of tampering with the input stream, we add perturbations to the target object itself, which is more practical. The goal of this talk is to shed light on the challenges of physical adversarial attacks against computer vision-based object detection systems, and the tactics we applied to achieve success. At the same time, we would like to raise security concerns about AI-powered perception systems and urge research efforts to harden DNN models.
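To make the underlying idea concrete, here is a minimal sketch of a digital adversarial perturbation in the FGSM style. The "detector" is a toy logistic score over a flattened patch, and `w`, `x`, and `eps` are illustrative stand-ins; this is not the speakers' model or algorithm, only the general technique the talk builds on.

```python
import numpy as np

# Toy "detector": a logistic confidence score over a flattened image patch.
rng = np.random.default_rng(0)
w = rng.normal(size=64)                 # fixed detector weights (stand-in)
x = rng.uniform(0.0, 1.0, size=64)      # benign input patch

def score(img):
    """Detection confidence in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-w @ img))

# FGSM-style step: move each pixel against the gradient of the score.
# For a sigmoid score s over w @ x, the input gradient is s * (1 - s) * w.
s = score(x)
grad = s * (1.0 - s) * w
eps = 0.1                               # perturbation budget per pixel
x_adv = np.clip(x - eps * np.sign(grad), 0.0, 1.0)

print(score(x), score(x_adv))           # the adversarial score is lower
```

A single signed-gradient step like this suffices to suppress the toy score; attacking a real detector such as YOLOv3 requires iterating the same idea against its objectness and class losses.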

The presentation starts with an overview of YOLOv3 to introduce the fundamentals of state-of-the-art object detection, which takes camera input and produces accurate detections. We then present the threat models we designed to achieve a physical attack by applying carefully crafted perturbations to actual physical objects. We further describe our attack algorithms and attack strategies. Throughout the presentation, we show examples of our initial digital attack and how we adapted it to a physical attack under environmental constraints, for example, an object being seen at various distances and from various angles.
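The adaptation from a digital to a physical attack is commonly done in the Expectation-over-Transformation (EOT) style: rather than attacking one fixed frame, the perturbation is optimized against the expected loss over random viewing conditions. The sketch below uses the same toy logistic detector as a stand-in, with random contrast and brightness as a crude proxy for varying distance and angle; all names and parameters are illustrative assumptions, not the speakers' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)                 # toy detector weights (stand-in)
x = rng.uniform(0.2, 0.8, size=64)      # the patch applied to the object

def score(img):
    """Toy detection confidence in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-w @ img))

def random_view(img):
    """Crude camera model: random contrast and brightness per sighting."""
    a = rng.uniform(0.7, 1.3)
    b = rng.uniform(-0.1, 0.1)
    return np.clip(a * img + b, 0.0, 1.0)

delta = np.zeros_like(x)                # the perturbation being optimized
step, eps = 0.02, 0.2
for _ in range(200):
    g = np.zeros_like(x)
    for _ in range(8):                  # Monte Carlo estimate of E[gradient]
        a = rng.uniform(0.7, 1.3)
        b = rng.uniform(-0.1, 0.1)
        v = np.clip(a * np.clip(x + delta, 0.0, 1.0) + b, 0.0, 1.0)
        s = score(v)
        g += a * s * (1.0 - s) * w      # chain rule through the view model
    # Signed-gradient descent on the expected score, within the budget.
    delta = np.clip(delta - step * np.sign(g), -eps, eps)

clean = np.mean([score(random_view(x)) for _ in range(100)])
adv = np.mean([score(random_view(np.clip(x + delta, 0.0, 1.0)))
               for _ in range(100)])
print(clean, adv)                       # adversarial mean score is lower
```

Because the gradient is averaged over many simulated sightings, the resulting perturbation suppresses detection across viewing conditions rather than for a single frame, which is the property a physical patch needs.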

Finally, we wrap up the presentation with a demo showing the audience that, with a careful setup, computer vision-based object detection can be deceived. A robust, adversarial-example-resistant model is required in safety-critical systems such as autonomous driving.


Presenters:

  • Zhenyu Zhong - Staff Security Scientist, X-Lab, Baidu USA
    Zhenyu Zhong's current research focuses on adversarial machine learning, particularly deep learning. He explores physical attack tactics against autonomous perception models, as well as defensive approaches to harden deep learning models. Previously, Dr. Zhong worked for Microsoft and McAfee, mainly applying large-scale machine learning solutions to security problems such as malware classification, intrusion detection, malicious URL detection, and spam filtering.
  • Yunhan Jia - Senior Security Scientist, Baidu X-Lab
    Yunhan Jia is a senior security scientist at Baidu X-Lab. He obtained his PhD from the University of Michigan with a research focus on smartphone, IoT, and autonomous vehicle security. His past research revealed the open-port vulnerabilities in apps that exposed millions of Android devices to remote exploits. He is currently working on memory safety and deep learning model security issues of autonomous vehicle platforms.
  • Tao Wei - Chief Security Scientist, X-Lab, Baidu USA
    Tao Wei is Chief Security Scientist at Baidu X-Lab.
  • Weilin Xu - PhD candidate, Department of Computer Science at the University of Virginia
    Weilin Xu is a PhD candidate in the Department of Computer Science at the University of Virginia, co-advised by Prof. David Evans and Prof. Yanjun Qi. He is interested in creating robust machine learning-based classifiers. His research has developed a generic method for generating adversarial examples using genetic programming, and a general technique named Feature Squeezing to harden deep learning models by eliminating unnecessary features.