Reducing Inactionable Alerts via Policy Layer

Presented at BSidesLV 2019, Aug. 6, 2019, 6 p.m. (25 minutes).

A SOC requires trust in the alert process, especially for machine learning models; alerts that cannot be mapped back to system logs, or that appear to an analyst to be obvious false positives, endanger that trust. Rule-based whitelists are one attempt to circumvent these issues, but they can cause detection teams to miss legitimate attacks. For example, suppressing alerts whenever both the source and destination of a network flow are inside the network boundary will obscure attackers proxying through internal hosts.
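The whitelist rule described above can be sketched as follows. This is a minimal illustration, not the talk's actual ruleset; the `10.0.0.0/8` boundary is an assumed example network.

```python
import ipaddress

# Assumed internal network boundary for illustration only.
INTERNAL = ipaddress.ip_network("10.0.0.0/8")

def whitelist_suppresses(src: str, dst: str) -> bool:
    """Rule-based whitelist: suppress any alert whose network flow is
    entirely internal. This is exactly the kind of rule that hides an
    attacker proxying through compromised internal hosts."""
    return (ipaddress.ip_address(src) in INTERNAL
            and ipaddress.ip_address(dst) in INTERNAL)
```

An internal-to-internal flow such as `whitelist_suppresses("10.1.2.3", "10.4.5.6")` is silently dropped, regardless of how suspicious the traffic actually is, which is the blind spot the talk's policy layer is designed to avoid.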

We demonstrate how modeling improves upon whitelists to solve the problem of inactionable alerts. On top of our detection models, we stack a new model in a policy layer that answers the question, "How likely is this event to be characterized as inactionable?" This serves two purposes: it deprioritizes alerts that cannot be investigated because of detectable data correlation loss, and it acts as a blanket policy to suppress results that investigators would otherwise see and throw away as obvious false positives. We show how doing so drastically decreases our false positive rate while continuing to alert on truly suspicious events.
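The stacked policy layer might be sketched as below. This is a hypothetical illustration under stated assumptions: the feature names (`log_correlated`, `known_benign_pattern`), hand-set weights, and thresholds are stand-ins for a trained inactionability model, not the talk's actual implementation.

```python
def inactionability_score(alert: dict) -> float:
    """Estimate how likely an analyst is to discard this alert.

    A real policy layer would use a trained model; these hand-set
    weights and features are illustrative assumptions only.
    """
    score = 0.0
    if not alert.get("log_correlated", True):
        # Detectable data correlation loss: the alert cannot be
        # mapped back to system logs.
        score += 0.6
    if alert.get("known_benign_pattern", False):
        # Pattern an analyst would recognize as an obvious false positive.
        score += 0.5
    return min(score, 1.0)

def apply_policy(alerts, suppress_above=0.9, deprioritize_above=0.5):
    """Stack the inactionability model on top of detection output:
    suppress near-certain throwaways, deprioritize the rest."""
    triaged = []
    for alert in alerts:
        s = inactionability_score(alert)
        if s >= suppress_above:
            continue  # blanket suppression: analyst would discard it anyway
        priority = "low" if s >= deprioritize_above else "high"
        triaged.append(dict(alert, priority=priority))
    return triaged

alerts = [
    {"id": 1, "log_correlated": True},
    {"id": 2, "log_correlated": False},
    {"id": 3, "log_correlated": False, "known_benign_pattern": True},
]
print(apply_policy(alerts))
```

Unlike a whitelist, this layer never drops alerts by a hard rule on flow attributes; it scores every event, so a genuinely suspicious internal-to-internal flow can still surface at high priority.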


Presenters:

  • John Seymour (Delta Zero)
    John works on the Detection and Response team at Salesforce, which aggregates the company's security logs, applies rules and models to obtain high-fidelity alerts, and shares those results with other pertinent security teams. In particular, John applies machine learning to security logs to alert on new attacks, to improve existing alerts and rules, and to find or build new contextual data to help in investigations. He previously worked on data science at a startup focused on social media security. He has presented at several security cons, including BSidesLV, Black Hat, DEF CON, and SecTor.
