Bot vs. Bot for Evading Machine Learning Malware Detection

Presented at Black Hat USA 2017, July 27, 2017, 9 a.m. (25 minutes)

Machine learning offers opportunities to improve malware detection because of its ability to generalize to never-before-seen malware families and polymorphic strains. As a result, AV vendors now use it both in primary detection engines and in supplementary heuristic detections. However, machine learning is also especially susceptible to evasion attacks by, ironically but unsurprisingly, other machine learning methods. We demonstrate how to evade machine learning malware detection by setting up an AI agent that competes against the malware detector, proactively probing it for blind spots that can be exploited. We focus on static Windows PE malware evasion, but the framework is generic and could be extended to other domains.

Reinforcement learning has produced models that top human performance in a myriad of games. Using similar techniques, we frame PE malware evasion as a competitive game between our agent and the machine learning detector. Our agent inspects a PE file and selects a sequence of functionality-preserving mutations that best evades the detection model. Through the experience of thousands of "games" against the detector, the agent learns which sequences of actions are most likely to result in an evasive variant. Then, given new PE malware it has never seen before, the agent deploys a policy that produces a functionally equivalent malware variant with a good chance of evading the opposing machine learning detector.
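To make the game loop concrete, the sketch below is a minimal, hypothetical rendering of the idea in Python: a bandit-style agent picks functionality-preserving mutations and is rewarded when a stand-in detector's score drops below a threshold. The mutation functions, detector_score(), and the epsilon-greedy value updates are illustrative assumptions of ours, not the talk's actual action set or learning algorithm.

```python
# Hypothetical sketch of the evasion "game" (detector_score, MUTATIONS, and the
# bandit-style updates are illustrative assumptions, not the talk's implementation).
import random

# Stand-ins for functionality-preserving PE mutations, e.g. appending overlay
# bytes or adding an unused section; real transforms must keep the file runnable.
def append_overlay(pe_bytes):
    return pe_bytes + b"\x00" * 64

def add_section(pe_bytes):
    return pe_bytes + b".newsec" + b"\x00" * 32

def no_op(pe_bytes):
    return pe_bytes

MUTATIONS = [append_overlay, add_section, no_op]

def detector_score(pe_bytes):
    """Toy stand-in for the opposing ML detector: returns P(malicious)."""
    return max(0.0, 0.9 - 0.01 * (len(pe_bytes) % 97))

def play_episode(pe_bytes, q_values, epsilon=0.1, max_turns=10, threshold=0.5):
    """One 'game': mutate until the detector is evaded or the turn budget runs out."""
    for _ in range(max_turns):
        # Epsilon-greedy choice over the mutation set.
        if random.random() < epsilon:
            action = random.randrange(len(MUTATIONS))
        else:
            action = max(range(len(MUTATIONS)), key=lambda a: q_values[a])
        pe_bytes = MUTATIONS[action](pe_bytes)
        evaded = detector_score(pe_bytes) < threshold
        # Reward 1 for an evasive variant; update that action's value estimate.
        reward = 1.0 if evaded else 0.0
        q_values[action] += 0.1 * (reward - q_values[action])
        if evaded:
            return True
    return False

q_values = [0.0] * len(MUTATIONS)
wins = sum(play_episode(b"MZ" + bytes(200), q_values) for _ in range(1000))
print(f"evaded in {wins}/1000 games; learned action values: {q_values}")
```

A real agent would act on static PE features against a trained model rather than a toy scoring function, but the shape of the loop is the same: act, observe the detector, collect reward, update the policy.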

We conclude with key defender takeaways. Teaching the machine learning detector about its blind spots is a simple and powerful idea; however, implementing it correctly is as much art as science. Finally, we caution attendees that without an adversarially minded approach, machine learning offers early successes but can quickly become a porous defense in the face of sophisticated adversaries.
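As a hedged sketch of the first takeaway, the snippet below folds agent-discovered evasive variants back into the training set, labeled malicious, and retrains the detector. The byte-histogram featurizer, toy corpus, and choice of GradientBoostingClassifier are stand-ins of our choosing, not the detector discussed in the talk.

```python
# Hedged sketch of adversarial hardening: retrain the detector on evasive
# variants found by the agent. Featurizer, corpus, and model are toy stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def featurize(pe_bytes):
    # Toy static feature: normalized byte histogram (real detectors use far
    # richer PE features such as header fields, imports, and section entropy).
    hist = np.bincount(np.frombuffer(pe_bytes, dtype=np.uint8), minlength=256)
    return hist / max(len(pe_bytes), 1)

# Toy labeled corpus: even-indexed samples benign (0), odd-indexed malicious (1).
X = np.array([featurize(bytes([i]) * 300) for i in range(40)])
y = np.array([i % 2 for i in range(40)])
detector = GradientBoostingClassifier().fit(X, y)

# Evasive variants produced by the agent are labeled malicious and folded back
# into the training set, so the retrained model covers the discovered blind spots.
evasive_variants = [b"MZ" + bytes(64) + b"\xcc" * 10, b"MZ" + b"\x90" * 128]
X_aug = np.vstack([X] + [featurize(v).reshape(1, -1) for v in evasive_variants])
y_aug = np.concatenate([y, np.ones(len(evasive_variants), dtype=int)])
detector = GradientBoostingClassifier().fit(X_aug, y_aug)
```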


Presenter:

  • Hyrum Anderson - Technical Director of Data Science, Endgame
    Hyrum Anderson is the technical director for data science at Endgame, where he leads research on detecting adversaries and their tools using machine learning. Prior to joining Endgame, he conducted information security and situational awareness research at FireEye, Mandiant, Sandia National Laboratories, and MIT Lincoln Laboratory. He received his PhD in Electrical Engineering (signal and image processing + machine learning) from the University of Washington and BS/MS degrees from Brigham Young University. His research interests include adversarial machine learning, deep learning, large-scale malware classification, and early time-series classification.
