AIModel-Mutator: Finding Vulnerabilities in TensorFlow

Presented at Black Hat Europe 2021, Nov. 10, 2021, 2:30 p.m. (30 minutes).

Like other software, machine learning frameworks can contain vulnerabilities. Popular frameworks such as TensorFlow, PyTorch, and PaddlePaddle are under active development to keep pace with growing demand, and this rapid development brings security risks along with its benefits. For example, from 2019 to 2021, the number of CVEs for TensorFlow increased 15-fold.

API fuzzing is a common approach to vulnerability detection, but it is not sufficient for machine learning frameworks. In this work, we found that API fuzzing cannot find "deep" vulnerabilities hidden in complicated code logic. These vulnerabilities must be triggered under a certain semantic context, which is hard for API fuzzing to construct from scratch.

In this session, we will demonstrate a new fuzzing approach for machine learning frameworks that finds deep vulnerabilities without manual context construction. We evaluated our tool, AIModel-Mutator, on the TensorFlow framework and found at least four deep vulnerabilities that API fuzzing cannot find. Two of them were found at the model inference stage: an attacker can easily craft an input to defeat a model and crash the system. We also constructed one attack based on our findings, which successfully crashes the TensorFlow framework. Our findings have been confirmed by the Google TensorFlow security team.
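To make the contrast with API fuzzing concrete, the sketch below illustrates the general idea of model-level mutation fuzzing: start from a valid seed model (which already carries the semantic context needed to reach deep code paths) and apply small mutations before feeding it to the framework's loader. This is a minimal, hypothetical illustration only; the toy model format, the `mutate` strategy, and the `toy_loader` stand-in are assumptions for demonstration, not the AIModel-Mutator implementation or the TensorFlow API.

```python
import copy
import random

# Toy "model": a list of layer descriptors a framework might parse.
# (Hypothetical format, not a real TensorFlow graph.)
SEED_MODEL = [
    {"op": "Conv2D", "kernel": [3, 3], "strides": [1, 1]},
    {"op": "MaxPool", "pool_size": [2, 2]},
    {"op": "Dense", "units": 10},
]

def mutate(model, rng):
    """Return a mutated copy: tweak one shape field of one layer."""
    m = copy.deepcopy(model)
    layer = rng.choice(m)
    shape_keys = [k for k, v in layer.items() if isinstance(v, list)]
    if shape_keys:
        key = rng.choice(shape_keys)
        idx = rng.randrange(len(layer[key]))
        # Boundary values that a shape-validation path might mishandle.
        layer[key][idx] = rng.choice([-1, 0, 2**31 - 1])
    return m

def toy_loader(model):
    """Stand-in for a framework's model loader; rejects bad shapes."""
    for layer in model:
        for v in layer.values():
            if isinstance(v, list) and any(x <= 0 for x in v):
                raise ValueError(f"invalid shape in {layer['op']}")

def fuzz(seed_model, iterations=200, seed=0):
    """Mutate the seed model repeatedly, collecting loader failures."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_model, rng)
        try:
            toy_loader(candidate)
        except ValueError as exc:
            crashes.append((candidate, str(exc)))
    return crashes

if __name__ == "__main__":
    found = fuzz(SEED_MODEL)
    print(f"{len(found)} crashing mutants found")
```

Because every fuzz input is a small perturbation of an already-valid model, the mutants retain the structure needed to pass early parsing and reach deeper validation and inference logic, which pure from-scratch API fuzzing rarely constructs.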

Presenters:

  • Qian Feng - Senior Security Researcher, Baidu Security
Qian Feng is currently working at Baidu USA as a security researcher. She obtained her PhD degree in Computer Engineering from Syracuse University in 2017. Her research focuses on program analysis, memory forensics, machine learning, and reverse engineering. Her work has been published at mainstream security conferences such as CCS, ACSAC, and ASIACCS. She was a best paper candidate at the ACM Asia Conference on Computer and Communications Security in 2016, and received a student travel grant for the Annual Computer Security Applications Conference in 2014. Currently, her research topics include practical formal verification, side-channel detection, and vulnerability detection.
  • Jie Hu - PhD Candidate, UC Riverside
    Jie Hu is a PhD student at the University of California, Riverside, where she is advised by Prof. Heng Yin. Prior to UCR, Jie obtained a B.E. degree from Huazhong University of Science and Technology in 2017, where she was advised by Prof. Deqing Zou. Jie's research is focused on program analysis and vulnerability detection.
  • Zhaofeng Chen - Staff Security Researcher, Baidu USA
    Zhaofeng Chen is a security researcher from Baidu Security. He is experienced in both offensive and defensive security on confidential computing, system security, and mobile security. He has designed multiple data/mobile security products and is a PPMC member of the Apache Teaclave (Incubating) project. Over the years, he has also discovered various TEE and iOS framework vulnerabilities, with 20+ CVEs credited by Google, Microsoft, and Apple.
  • Zhenyu Zhong - Staff Security Researcher, Baidu USA
    Dr. Zhenyu Zhong is a Staff Security Researcher whose current research focuses on adversarial machine learning, particularly deep learning. He explores physical attack tactics against autonomous perception models, as well as defensive approaches to harden deep learning models. Previously, Dr. Zhong worked for Microsoft and McAfee, mainly applying large-scale machine learning solutions to security problems such as malware classification, intrusion detection, malicious URL detection, and spam filtering.
  • Heng Yin - Professor, UC Riverside
    Dr. Heng Yin is a Professor in the Department of Computer Science and Engineering at the University of California, Riverside. He is the director of CRESP (Center for Research and Education in Cyber Security and Privacy) at UCR. He obtained his PhD degree from the College of William and Mary in 2009. His research interests lie in computer security, with an emphasis on binary code analysis. His publications appear in top-notch technical conferences and journals, such as ACM CCS, USENIX Security, NDSS, TSE, and TDSC. His research is sponsored by the National Science Foundation (NSF), Defense Advanced Research Projects Agency (DARPA), Air Force Office of Scientific Research (AFOSR), and Office of Naval Research (ONR). In 2011, he received the prestigious NSF CAREER award. In 2019, he received a Google Security and Privacy Research Award.
  • Kang Li - Director of Baidu Security Research, Baidu USA
    Dr. Kang Li is the director of Baidu security research and has spoken at Black Hat multiple times in the past. He is the founder and mentor of multiple CTF security teams, including SecDawg and Blue-Lotus. He is also a founder and player of Team Disekt, one of the finalist teams in the 2016 DARPA Cyber Grand Challenge.
