Hidden Hot Battle Lessons of Cold War: All Learning Models Have Flaws, Some Have Casualties

Presented at BSidesLV 2017, July 25, 2017, 11:30 a.m. (30 minutes).

In pursuit of realistic expectations for learning models, can we better prepare for adversarial environments by examining failures in the field? All models have flaws, given the usual menu of problems with learning; it is the rapidly increasing risk of catastrophic-level failure that makes data robustness a far more immediate concern. This talk pulls forward surprising and obscured learning errors from the Cold War to give context to modern machine learning successes, and to show how quickly things may fall apart in the evolving domains of cyber conflict.


Presenters:

  • Davi Ottenheimer - product security - MongoDB
    flyingpenguins, Cyberwar History, Threat Intel, Hunt, Active Defense, Cyber Letters of Marque, Cloudy Virtualization Container Security, Adversarial Machine Learning, Data Integrity and Ethics in Machine Learning (Formerly Known as Realities of Securing Big Data).
