A SOC requires trust in the alert process, especially for machine learning models; alerts that cannot be mapped back to system logs, or that appear to an analyst to be obvious false positives, endanger that trust. Rule-based whitelists are one attempt at circumventing these issues, but they can cause detection teams to miss legitimate attacks. For example, preventing alerts from being generated when both the source and destination of a network flow are inside the network boundary will obscure attackers proxying through internal hosts.
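The internal-to-internal whitelist rule above can be made concrete with a minimal sketch. The address ranges and function names here are illustrative assumptions (RFC 1918 private space standing in for a real network-boundary inventory), not part of the described system:

```python
import ipaddress

# Assumed internal ranges; a real deployment would use its own CIDR inventory.
INTERNAL_NETS = [ipaddress.ip_network(n) for n in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(addr: str) -> bool:
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in INTERNAL_NETS)

def whitelisted(src: str, dst: str) -> bool:
    """Rule-based whitelist: suppress the alert when both endpoints sit
    inside the network boundary.  This is exactly the rule that hides
    attackers proxying through internal hosts."""
    return is_internal(src) and is_internal(dst)

print(whitelisted("10.1.2.3", "192.168.0.7"))  # True: suppressed even if malicious
print(whitelisted("10.1.2.3", "8.8.8.8"))      # False: external flow still alerts
```

The rule is attractive because it is simple and interpretable, but it applies unconditionally, which is the blind spot the next paragraph addresses with modeling.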
We demonstrate how modeling improves upon whitelists to solve the problem of inactionable alerts. On top of our detection models, we stack a new model answering the question, "How likely is this event to be characterized as inactionable?" in a policy layer. This serves two purposes: it deprioritizes alerts that cannot be investigated because of detectable data correlation loss, and it acts as a blanket policy to suppress results that investigators would otherwise see and discard as obvious false positives. We show that doing so drastically decreases our false positive rates while continuing to alert on truly suspicious events.
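The policy layer described above can be sketched in a few lines. This is a hypothetical interface, not the authors' implementation: `detector` stands in for the base detection model's suspicion score, `inactionability_model` for the stacked model's estimate, and the thresholds are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    alert: bool
    priority: float  # higher means investigate sooner

def policy_layer(event, detector, inactionability_model,
                 alert_threshold=0.9, suppress_threshold=0.95):
    """Combine a base detection score with a stacked estimate of
    P(analyst deems the alert inactionable)."""
    suspicion = detector(event)  # base model score, assumed in [0, 1]
    if suspicion < alert_threshold:
        return Verdict(alert=False, priority=0.0)
    p_inactionable = inactionability_model(event)  # stacked model, in [0, 1]
    if p_inactionable >= suppress_threshold:
        # Blanket suppression: an analyst would discard this as an obvious FP.
        return Verdict(alert=False, priority=0.0)
    # Otherwise alert, but deprioritize events likely to be uninvestigable
    # (e.g. due to detectable data correlation loss).
    return Verdict(alert=True, priority=suspicion * (1.0 - p_inactionable))
```

For example, a highly suspicious event that the stacked model scores as almost certainly inactionable is suppressed, while one with a low inactionability estimate surfaces at nearly full priority; anything below the base detection threshold never reaches the stacked model at all.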