Presented at VB2015, Sept. 30, 2015, 4 p.m. (30 minutes).
Most anti-malware tests count each 'miss' equally. If one sample out of 100 is missed, the score for that set is 99 percent, regardless of which sample was missed. But should all samples be treated equally? Should vendors receive a lower test score when they miss samples that have victimized more people? Should vendors receive the same score for missing low-prevalence samples as they would for missing the same number of high-prevalence ones? Even if you agree with the principle that not all misses are the same, how would you factor in polymorphism, where a particular sample may impact only one victim but the malware family impacts millions? And how should a sample be scored if there is no record of the sample or its family in the wild at all?
In this paper, one of the leading comparative testers, together with other anti-malware industry leaders, will take you through several prevalence-weighted models built on real-world data from hundreds of millions of computers. We will show how the prevalence-weighted models compare to the standard method of scoring sample detection. In the session, we'll discuss each model's benefits and drawbacks, and the lessons learned along the way.
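The core idea can be illustrated with a small sketch. The Python snippet below is not one of the models presented in the paper; the sample set, victim counts, and weighting scheme are invented purely to show how two products with the same number of misses score identically under equal weighting yet diverge sharply once misses are weighted by prevalence.

# Illustrative sketch only: compares an equal-weight detection score with a
# simple prevalence-weighted score. Sample IDs, victim counts and the
# weighting scheme are hypothetical, not the models from the paper.

samples = {
    # sample_id: observed victim count (prevalence)
    "A": 1_200_000,
    "B": 350,
    "C": 5,
    "D": 40_000,
    "E": 900_000,
}

def equal_weight_score(detected):
    """Standard scoring: every sample counts the same."""
    return len(detected) / len(samples)

def prevalence_weighted_score(detected):
    """Weight each sample by how many victims it affected."""
    total = sum(samples.values())
    covered = sum(samples[s] for s in detected)
    return covered / total

# Two hypothetical products, each missing exactly one sample.
misses_low_prevalence  = {"A", "B", "D", "E"}   # misses C (5 victims)
misses_high_prevalence = {"A", "B", "C", "D"}   # misses E (900,000 victims)

for name, detected in [("misses low-prevalence C", misses_low_prevalence),
                       ("misses high-prevalence E", misses_high_prevalence)]:
    print(f"{name}: equal-weight {equal_weight_score(detected):.1%}, "
          f"prevalence-weighted {prevalence_weighted_score(detected):.1%}")

Under the standard method both hypothetical products score 80 percent, while the prevalence-weighted variant separates them sharply (roughly 100 percent versus 58 percent here); the models discussed in the paper go further, addressing complications such as polymorphic families and samples with no recorded prevalence at all.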
Presenters:
- Holly Stewart (Microsoft)
Holly has 15 years of experience in the security industry, seven years of experience in product management, and has spent the past eight years focused on security incident response, security content development, data analysis and communications. In her product management roles, she had responsibilities in project management, operations management, marketing and product lifecycle management. She has been at Microsoft for the past five years. During that time, she has written and contributed to Microsoft publications such as the Microsoft Security Intelligence Report and the MMPC blog, managed a response and data analytics team that helped turn the malware research and response process into a data-driven model, and ran a team focused on sharing data and supporting Microsoft partners through programs like MVI (Microsoft Virus Initiative), VIA (Virus Information Alliance) and CME (Coordinated Malware Eradication). Currently, she is going deep into data science on the Microsoft Defender team, building analysis models that use big data to detect more malware faster and to measure the impact of doing so. @ollijoi
- Peter Stelzhammer (AV-Comparatives)
Peter Stelzhammer MBA started working in IT in 1989. After five years working as the IT system administrator of the Alois Wild Group (Champion, Mexx, Benetton, Etienne Aigner), he became COO at Telesystem Tirol (an ISP and TV broadcasting company). He later set up Kompetenzzentrum.IT (IT security consulting), which has customers all over the world. Whilst running this organization he met Andreas Clementi, with whom he founded AV-Comparatives. Peter is on the board of directors of the Tyrolean Cluster IT (the Tyrolean government's IT strategy group) and on the board of AMTSO (Anti-Malware Testing Standards Organization). He is a frequent speaker at major security conferences. He studied at the University of Innsbruck and the Management Center Innsbruck.
- Philippe Rödlach (AV-Comparatives)
- Andreas Clementi (AV-Comparatives)