Presented at BSidesLV 2015, Aug. 4, 2015, 6 p.m. (55 minutes).
Machine learning techniques used in network intrusion detection are susceptible to 'model poisoning' by attackers. We dissect this attack, analyze some proposed defenses against it, and then consider specific use cases of how machine learning and anomaly detection can be applied in the web security context.
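As an illustration of the poisoning problem (not taken from the talk), the sketch below shows a "boiling frog" attack against a naive anomaly detector that is periodically retrained on recent traffic: the attacker repeatedly injects points just inside the current acceptance boundary until the learned model drifts far enough to accept the real attack traffic. The feature, thresholds, window size, and number of rounds are all illustrative assumptions.

```python
# Minimal sketch (assumptions only) of "boiling frog" model poisoning against a
# naive anomaly detector that is periodically retrained on a sliding window of traffic.
import numpy as np

rng = np.random.default_rng(0)

def fit(samples):
    """Learn a 1-D Gaussian model of 'normal' feature values (e.g. bytes per request)."""
    return samples.mean(), samples.std()

def is_anomalous(x, mean, std, z=3.0):
    """Flag values more than z standard deviations from the learned mean."""
    return abs(x - mean) > z * std

# Baseline: legitimate traffic clusters around 500 bytes per request.
traffic = rng.normal(500, 50, size=1000)
mean, std = fit(traffic)

attack_value = 5000  # the request size the attacker ultimately wants accepted
print(is_anomalous(attack_value, mean, std))  # True: initially detected

# Poisoning: each retraining round, the attacker injects points just inside the
# current acceptance boundary (never past the target), dragging the model toward it.
for _ in range(50):
    boundary = min(mean + 2.9 * std, attack_value)       # just under the z=3 threshold
    poison = rng.normal(boundary, 10, size=200)
    traffic = np.concatenate([traffic[-800:], poison])   # detector retrains on a sliding window
    mean, std = fit(traffic)

print(is_anomalous(attack_value, mean, std))  # False: the poisoned model now accepts the attack
```

In this toy setup the defender's mistake is retraining on unvetted recent data; the defenses discussed in the talk target exactly this kind of drift.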
Presenters:
- Clarence Chio, Software Engineer, Shape Security
Clarence recently graduated with a B.S. and M.S. in Computer Science from Stanford University, specializing in data mining and artificial intelligence. He currently works at Shape Security, a Silicon Valley startup building a product that protects its customers from malicious bots. At Shape, he works on the system that tackles this problem through big data analysis. Clarence is a community speaker with Intel, traveling around the USA to speak on topics related to the Internet of Things and hardware hacking. He is also the organizer of the "Data Mining for Cyber Security" meetup group in the SF Bay Area. Clarence is based in Mountain View, California. LinkedIn profile: www.linkedin.com/pub/clarence-chio/3a/713/521/en