Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service

Presented at DEF CON China 1.0 (2019), May 31, 2019, 11 a.m. (45 minutes)

In recent years, Deep Learning (DL) techniques have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to achieve or even surpass human performance. At the same time, many recent works have demonstrated that DL models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, and real-world cloud-based image classifier services are harder targets: the architecture and parameters of the DL models on cloud platforms cannot be obtained by the attacker, who can only access the APIs exposed by the platform. Thus, keeping models in the cloud can give a (false) sense of security. In this paper, we focus on the security of real-world cloud-based image classifier services. Specifically, (1) we propose a novel attack method, Fast Featuremap Loss PGD (FFL-PGD), based on a substitution model, which achieves a high bypass rate with a very limited number of queries. Instead of the millions of queries used in previous studies, our method finds adversarial examples using only two queries per image; and (2) we make the first attempt to conduct an extensive empirical study of black-box attacks against real-world cloud-based classifier services. Through evaluations on four popular cloud platforms, including Amazon, Google, Microsoft, and Clarifai, we demonstrate that the Spatial Transformation (ST) attack achieves a success rate of approximately 100% (except on Amazon, where it is approximately 50%), and that the FFL-PGD attack achieves a success rate of over 90% across the different classifier services.
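The abstract itself contains no code; as a rough illustration only, the PyTorch sketch below shows one plausible reading of a PGD attack augmented with a feature-map loss on a local substitute model. The names `substitute` and `feature_extractor`, the equal loss weighting, and the L-infinity budget are all assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch (assumed names/hyperparameters, not the authors' code):
# PGD on a local white-box substitute model, with an added feature-map loss
# intended to improve transferability to the black-box cloud classifier.
import torch
import torch.nn.functional as F

def ffl_pgd(substitute, feature_extractor, x, y,
            eps=8 / 255, alpha=2 / 255, steps=40):
    """Craft adversarial examples entirely on the local substitute.

    substitute        -- local classifier standing in for the cloud model
    feature_extractor -- callable returning an intermediate feature map
    x, y              -- clean image batch in [0, 1] and its labels
    """
    x_adv = x.clone().detach()
    feat_clean = feature_extractor(x).detach()  # reference feature map

    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Classification loss pushes the prediction away from the label;
        # the feature-map term pushes internal activations apart, which
        # (by assumption) is what helps the example transfer.
        cls_loss = F.cross_entropy(substitute(x_adv), y)
        feat_loss = F.mse_loss(feature_extractor(x_adv), feat_clean)
        grad = torch.autograd.grad(cls_loss + feat_loss, x_adv)[0]

        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # L-inf projection
            x_adv = x_adv.clamp(0.0, 1.0)                          # valid pixel range

    return x_adv.detach()
```

Under this reading, the cloud API itself is touched only twice per image: once with the clean image (e.g., to obtain a label for aligning the substitute) and once with the crafted example to verify the bypass, which would account for the "two queries per image" claim in the abstract.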


Presenters:

  • Wei Tao - Chief Security Scientist, Baidu
    Dr. Tao (Lenx) Wei is the head of Baidu X-Lab. Prior to joining Baidu, he was an associate professor at Peking University. His research interests include software analysis and system protection, web trust and privacy, programming languages, and mobile security.
  • Wang Yang - Security Researcher, Baidu X-Lab
    Wang Yang is a senior security researcher at Baidu X-Lab. His interests lie in face recognition, adversarial learning, and data mining. He maintains and actively contributes to Advbox, an open-source toolbox for AI safety.
  • Hao Xin - Security Researcher, Baidu X-Lab
    Hao Xin has been engaged in security product development at Baidu for many years. His main research directions include object detection and image classification.
  • Liu Yan (Dou Goodman) - Senior Security Researcher, Baidu X-Lab
    Liu Yan (Dou Goodman), head of the AI security team at Baidu X-Lab, is the author of the AI Safety Trilogy. His research interests include AI and network security. He started the open-source project Advbox.
