A survey on practical adversarial examples for malware classifiers

Presented at DeepSec 2020 „The Masquerade“

Machine learning-based solutions have been very helpful in solving problems that deal with immense amounts of data, such as malware detection and classification. However, deep neural networks have been found to be vulnerable to adversarial examples: inputs that have been purposefully perturbed to produce an incorrect label. Researchers have shown that this vulnerability can be exploited to create evasive malware samples. Yet many proposed attacks do not generate an executable, only a feature vector. To fully understand the impact of adversarial examples on malware detection, we review practical attacks against malware classifiers that generate executable adversarial malware examples. We also discuss current challenges in this area of research, as well as suggestions for improvement and future research directions.
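The gap the abstract points at, between a feature-vector attack and an executable one, can be illustrated with a minimal sketch. Everything here is hypothetical (the toy linear detector, its weights, and the feature names are invented for illustration): a gradient-sign-style step in feature space flips the classifier's decision, but the flipped vector does not correspond to a modified binary.

```python
import math

# Hypothetical binary presence features a static malware detector might use.
FEATURES = ["calls_CreateRemoteThread", "has_packed_section", "imports_ws2_32"]
WEIGHTS = [2.0, 1.5, 1.0]   # hypothetical: positive weight pushes toward "malicious"
BIAS = -1.0

def score(x):
    """Sigmoid score of a toy linear detector; > 0.5 means 'malware'."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def feature_space_attack(x):
    """Flip each binary feature against the sign of its weight
    (the gradient-sign direction for a linear model).
    The result is only a feature VECTOR: setting
    'calls_CreateRemoteThread' to 0 in the vector does not remove
    the call from the actual binary, so the evasive sample may not
    be realizable as a working executable."""
    return [0 if w > 0 else 1 for w, _ in zip(WEIGHTS, x)]

x = [1, 1, 1]                  # a malware sample's extracted features
adv = feature_space_attack(x)  # evades in feature space only
print(score(x) > 0.5, score(adv) > 0.5)  # True False
```

Practical attacks, the subject of this survey, instead perturb the binary itself (e.g. by appending or injecting content that changes the extracted features) so that the result still runs.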


Presenters:

  • Daniel Park - Rensselaer Polytechnic Institute (RPI)
    Daniel Park is a Ph.D. candidate in the Computer Science Department at Rensselaer Polytechnic Institute. His research focuses on the intersection of computer security and machine learning, most recently on the security of deep learning models. He is also interested in binary analysis techniques and participates in CTFs with RPISEC.
