Presented at
Black Hat Asia 2020 Virtual,
Oct. 2, 2020, 1:30 p.m.
(30 minutes).
In recent years, machine learning (ML) techniques have been extensively deployed for computer vision tasks, particularly visual classification problems, where new algorithms are reported to match or even surpass human performance. The success of ML algorithms has led to an explosion in demand. To further broaden and simplify the use of ML, Amazon, Google, Microsoft, Clarifai, and other public cloud companies have developed ML-as-a-service tools, so users and companies can readily benefit from ML applications without having to train or host their own models. For example, Google introduced the Cloud Vision API for image analysis, along with a demonstration website: for any selected image, the API outputs labels, identifies and reads text contained in the image, and detects faces within it. It also determines how likely it is that the image contains inappropriate content, including adult, spoof, medical, or violent content.

Unlike common attacks against web applications, such as SQL injection and XSS, machine learning applications face their own distinct class of attacks. Yet neither public cloud companies nor traditional security companies pay much attention to these new attacks and defenses.

This presentation focuses on the Cloud Vision APIs of public cloud companies, explores attacks against machine learning applications, and describes effective defenses and mitigations. While the content is focused on the Cloud Vision API, some of the attack and defense topics apply to other machine learning applications, such as natural language processing (NLP) and speech processing applications.
Our research covers attacks, intrusion detection, security testing, and security reinforcement, which together can form a security development lifecycle (SDL) for machine learning applications. We will release a tool (https://github.com/advboxes/AdvBox) to support SDL for ML.

Key items covered:

- Transfer attack against image classification service
- FFL-PGD attack against image classification service
- Spatial attack against pornographic image detection service
- Adversarial attack detection
- Security testing for model robustness
- Securing machine learning applications against attacks
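To illustrate the kind of adversarial attack covered in the talk, the sketch below shows a minimal FGSM-style perturbation against a toy linear classifier. This is a hedged, self-contained example for intuition only: the model (weights `w`, bias `b`) and the 4-"pixel" input are made up for illustration, and this is not the AdvBox or FFL-PGD implementation.

```python
# Minimal FGSM-style adversarial perturbation sketch.
# Toy linear classifier and inputs are assumptions for illustration,
# not the AdvBox code or the FFL-PGD attack presented in the talk.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """One FGSM step: x' = clip(x + eps * sign(dLoss/dx)) for logistic loss."""
    p = sigmoid(w @ x + b)        # model's predicted probability of class 1
    grad_x = (p - y) * w          # gradient of cross-entropy loss w.r.t. input
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

w = np.array([2.0, -1.0, 1.5, -0.5])  # hypothetical model weights
b = -0.5
x = np.array([0.6, 0.2, 0.7, 0.1])    # clean "image", classified as class 1
y = 1.0                               # true label

x_adv = fgsm(x, w, b, y, eps=0.4)
print(sigmoid(w @ x + b) > 0.5)       # clean prediction: True (class 1)
print(sigmoid(w @ x_adv + b) > 0.5)   # adversarial prediction: False (flipped)
```

A small, bounded perturbation (`eps=0.4` per pixel) flips the prediction while the input stays in the valid [0, 1] range; the attacks against real image classification services in the talk exploit the same underlying sensitivity, but without white-box gradient access.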
Presenters:
- Hao Xin, Staff Scientist, Baidu X-Lab
Hao Xin is a Staff Scientist at Baidu X-Lab.
- Dou Goodman, Lead Researcher, AI Security Team, Baidu
Dou Goodman is the lead researcher of the AI security team at Baidu X-Lab.
- Yang Wang, Senior Security Researcher, Baidu
Yang Wang is a senior development engineer at Baidu Security. His interests lie in forgery detection, face recognition, and adversarial learning. He maintains and actively contributes to the AdvBox project, a toolbox for AI safety.