Security metrics are often about the performance of information security professionals - traditional ones center on vulnerability close rates, remediation timelines, or criticality ratings. But how does one measure whether those metrics are the right ones? How does one measure risk reduction, or how successful a metrics program is at operationalizing what is actually needed to prevent a breach? The data we'll explore is the same data that defined the Vulnerabilities section of the 2016 Verizon DBIR.
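For concreteness, here is a minimal sketch of what such "traditional" metrics look like in practice - close rate and mean time to remediate, broken out by criticality. The records and criticality levels below are hypothetical, invented for illustration; they are not DBIR data.

```python
# A minimal sketch of traditional vulnerability-management metrics:
# close rate and mean days to close, per criticality rating.
# All records below are made-up examples, not real data.
from datetime import date

# Each record: (criticality, opened, closed_or_None)
vulns = [
    ("critical", date(2016, 1, 4),  date(2016, 1, 20)),
    ("critical", date(2016, 2, 1),  None),               # still open
    ("high",     date(2016, 1, 10), date(2016, 3, 1)),
    ("high",     date(2016, 2, 15), date(2016, 2, 25)),
]

for level in ("critical", "high"):
    subset = [v for v in vulns if v[0] == level]
    days_to_close = [(c - o).days for _, o, c in subset if c is not None]
    close_rate = len(days_to_close) / len(subset)
    mttr = sum(days_to_close) / len(days_to_close) if days_to_close else float("nan")
    print(f"{level}: close rate {close_rate:.0%}, mean days to close {mttr:.1f}")
```

These numbers are easy to produce, which is exactly why they dominate - and exactly why the question of whether they measure the right thing matters.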
This talk will borrow concepts from epidemiology, repeated game theory, and classical and causal probability theory to demonstrate some inventive metrics for evaluating vulnerability management strategies. Not all vulnerabilities are at risk of being exploited in a breach, just as not all people are at risk of catching the flu. By analogy, we are trying to be effective at catching the "disease": the subset of vulnerabilities that are actually susceptible to use in a breach. How do we determine what is truly critical? How do we determine whether we are effective at remediating what is truly critical? Because the incidence of the disease is unknown, absolute risk cannot be calculated directly - the same problem epidemiologists face in case-control studies. This talk will introduce concepts from these other fields for dealing with that kind of infosec uncertainty.
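To illustrate the epidemiological workaround: when incidence is unknown, a case-control design still lets you compute an odds ratio, which approximates relative risk when the outcome is rare. The sketch below applies that idea to vulnerabilities; the counts and the "public exploit code" exposure are hypothetical examples, not figures from the DBIR.

```python
# A minimal sketch of the case-control trick borrowed from epidemiology.
# Counts below are hypothetical, chosen for illustration only.
#
# "Cases" = vulnerabilities observed in breaches; "controls" = those not.
# "Exposure" = an attribute we suspect matters, e.g. public exploit code.

# 2x2 contingency table (hypothetical counts):
cases    = {"exposed": 60, "unexposed": 40}   # vulns seen in breaches
controls = {"exposed": 20, "unexposed": 80}   # vulns never seen in breaches

# Absolute risk would require the incidence of breach per vulnerability,
# which we don't know. The odds ratio needs only the table above:
odds_ratio = (cases["exposed"] * controls["unexposed"]) / (
    cases["unexposed"] * controls["exposed"]
)
print(f"odds ratio = {odds_ratio:.1f}")  # -> 6.0

# When breaches are rare relative to the whole vulnerability population,
# the odds ratio approximates relative risk: in this made-up table,
# exposed vulnerabilities are ~6x more likely to appear in a breach.
```

The point is not the specific number but the method: even without knowing how often breaches occur, we can still rank which attributes make a vulnerability more likely to be involved in one.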
Attackers are human too - and currently available data allows us to make some predictions about how they'll behave. To predict is to prevent.