Superman Powered by Kryptonite: Turn the Adversarial Attack into Your Defense Weapon

Presented at Black Hat USA 2020 Virtual, Aug. 5, 2020, 2:30 p.m. (40 minutes)

Artificial Intelligence (AI) is having a profound impact on global economic and social progress, as well as on ordinary citizens' daily lives. However, as AI technology advances, next-generation attackers have built deep learning models that can break previously robust security mechanisms more easily and efficiently than ever (e.g., achieving a 99% recognition rate on even the most complex CAPTCHAs).

It is similar to the scene in 'Avengers 3' in which 'Thanos' (hackers) assembles the "Infinity Gauntlet" (an AI-powered exploit toolkit) from six gems and erases half of all life in the universe with a snap of his fingers. In reality, as Avengers (security defenders), we propose to leverage the weaknesses of the seemingly omnipotent 'Infinity Gauntlet' (AI) to fight evil (hackers). The irony is that the weapon used to expose those weaknesses, 'Adversarial Machine Learning (ML)', was developed by attackers themselves.

Adversarial ML exploits vulnerabilities in AI models by crafting inputs intentionally designed to cause the model to make mistakes (i.e., optical illusions for machines). The rationale behind our idea is that we deliberately add an "adversarial perturbation" to our "target asset" that does not affect human use but entirely misleads a hacker's AI tools. Using a "CAPTCHA" service as an example, we demonstrate how to apply multiple levels of adversarial attack methods to fool hackers' AI tools and to detect hackers when they use AI toolkits. Another contribution of this work is to "reprogram" the hacker's AI toolkit, stealing the hacker's computing resources to perform tasks for us.
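As a rough illustration of the kind of perturbation described above, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression classifier standing in for an attacker's recognition model. All weights, inputs, and the epsilon value here are illustrative, not from the talk; real CAPTCHA defenses would perturb image pixels against a deep network, but the mechanics are the same: nudge the input along the sign of the loss gradient so the model's decision flips while a human would barely notice.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for the attacker's AI tool
# (hypothetical weights, for illustration only).
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that the model assigns the positive label."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, epsilon):
    """Fast Gradient Sign Method: move x in the direction that
    increases the model's loss for the true label (here, 1)."""
    p = predict(x)
    # Gradient of binary cross-entropy loss w.r.t. the input x
    grad_x = (p - 1.0) * w
    return x + epsilon * np.sign(grad_x)

x = np.array([1.0, 0.0, 1.0])         # clean input: classified positive
x_adv = fgsm_perturb(x, epsilon=1.0)  # perturbed "target asset"

print(predict(x) > 0.5)      # True  -- model accepts the clean input
print(predict(x_adv) > 0.5)  # False -- same input, perturbed, is misclassified
```

For a deep model the gradient would come from automatic differentiation rather than a closed form, but the defensive idea is identical: the perturbed asset stays usable for humans while the attacker's model misreads it.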


Presenters:

  • Xinyu Xing - Assistant Professor, The Pennsylvania State University
    Xinyu Xing is an Assistant Professor at Pennsylvania State University. His research interests include exploring, designing, and developing new program analyses to facilitate vulnerability identification, diagnosis, and exploitation assessment. In addition, he explores solutions for safeguarding various AI systems. His past research has been featured by many mainstream media outlets and has received best paper awards from ACM CCS and ACSAC.
  • Jimmy Su - Senior Director, JD Security Research Center
    Dr. Jimmy Su leads the JD security research center in Silicon Valley. He joined JD in January 2017. Before joining JD, he was the Director of Advanced Threat Research at FireEye Labs, where he led the research and development of multiple world-leading security products, including network security, email security, mobile security, fraud detection, and endpoint security. He led a global team with members from the United States, Pakistan, and Singapore from research to product release on FireEye's first machine learning-based malware similarity analysis cloud platform. This key technology advance was released across all core FireEye products, including network security, email security, and mobile security. He won the Q2 2016 FireEye innovation award for his seminal work on similarity analysis. He earned his PhD in Computer Science at the University of California, Berkeley in 2010. After graduation, he joined Professor Dawn Song's team as a postdoc focusing on similarity analysis of x86 and Android applications. In 2011, he joined Professor Song at the mobile security startup Ensighta, leading the research and development of its automatic malware analysis platform. Ensighta was acquired by FireEye in December 2012, and he joined FireEye through the acquisition. The JD security research center in Silicon Valley focuses on seven areas: account security, APT detection, bot detection, data security, AI applications in security, Big Data applications in security, and IoT security.
  • Tongbo Luo - Security Researcher, Robinhood Markets, Inc.
    Tongbo Luo is Chief AI Security Scientist at JD.com and was most recently Senior Principal Security Researcher at Palo Alto Networks. He obtained his MS and PhD in Computer Science from Syracuse University in 2014. He is active in Docker security, cybersecurity, IoT security, and applied machine learning for security problems.
  • Kailiang Ying - Security Researcher, Syracuse University
    Kailiang Ying is a Security Software Engineer at Google. He earned his PhD in Computer Science at Syracuse University in 2019. His research focuses on insider risk control, AI security, mobile security, and Trusted Execution Environments.