Presented at Black Hat Europe 2021, Nov. 11, 2021, 1:30 p.m. (40 minutes).
Machine learning has so far been relatively unchecked on its way to world domination. As the high pace of ML research continues, ML is being integrated into all manner of business processes: chatbots, sales lead generation, maintenance decisions, policing, medicine, recommendations... However, several security concerns have gone unaccounted for, which has led to some less-than-desirable outcomes. Researchers have extracted PII from language models, red teamers have stolen (and then bypassed) spam and malware classification models, citizens have been incorrectly identified as criminals, and otherwise qualified home buyers have been denied mortgages. This is just scratching the surface. While attacks on AI systems are talked about as futuristic, the consequences of not securing them are already being experienced.

This talk will discuss the current state of ML security, the symmetry found in adversarial ML, and how offensive security professionals can approach the topic. We will provide a compendium of attacks and cover the fundamentals of attacking ML, such as:

- Where can you find models to attack, and what should you be looking for?
- Given all available options, what should you do?
- What is needed for a successful attack?
- Will the attack take months or minutes, and is it worth it?

Offensive teams might not have as many papers published or as many PhDs among their ranks, but they have data, domain knowledge, and the right mindset to challenge AI systems in real-world environments. This talk aims to be a defining resource for offensive security professionals looking to expand their skill sets.
Presenters:
- Will Pearce, Red Team Lead, Azure Trustworthy ML, Microsoft
Will Pearce is the Red Team Lead for Azure Trustworthy ML at Microsoft. In his current role, he is responsible for running and supporting offensive engagements against AI systems at Microsoft and with partners. This includes building assessment methodologies, developing tools, and conducting research. Previously, he was a Senior Security Consultant at Silent Break Security, where he performed network operations and security research and was an instructor for the popular Darkside Ops courses given at industry conferences and to private/public sector groups. His work on the use of machine learning for offensive security has appeared at industry conferences including Arsenal, DerbyCon, BSidesLV/SLC, and Defcon AI Village, as well as in an academic appearance at the SAI Conference on Computing.
- Giorgio Severi, PhD Student, Northeastern University
Giorgio Severi is a PhD student in the NDS2 Lab at Northeastern University, advised by professor Alina Oprea. Prior to that, he obtained a Master's degree in computer science at Sapienza University of Rome, Italy. His research focuses on adversarial machine learning, especially targeting cyber security applications. Giorgio is particularly interested in developing methods to subvert the results of machine learning training processes.