Over the past decade, security professionals have adopted a range of automatic and semi-automatic bug-finding approaches, including dataflow analysis, blackbox web application scanning, and fuzzing. Although many of these techniques are now in wide use, there is little clear discussion of their fundamental underpinnings and inherent tradeoffs.
In this talk, we'll provide an overview of automated bug-finding techniques ranging from the well-known, such as fuzzing, dataflow analysis, and blackbox scanning, to less common techniques that are gaining traction, such as symbolic execution, model checking, and abstract interpretation.
For each technique, we'll give a high-level introduction that builds an intuition for how and why it works. Then we'll discuss its strengths and weaknesses, including the types of bugs and classes of problems it finds well and those it doesn't. We'll also describe the inherent implementation tradeoffs any tool creator must make, and we'll link to open source tools you can experiment with, as well as conference talks and academic papers you can review to learn more.
You'll leave this talk with an understanding of the wide variety of automated bug-finding approaches available. You'll know which techniques might apply to your own work, and you'll have the background knowledge to make more informed decisions when evaluating open source or commercial tools.