Detecting and managing bot activity more efficiently

Presented at AppSec USA 2015, Sept. 25, 2015, 10:30 a.m. (55 minutes).

Bots, also commonly referred to as scrapers or spiders, are omnipresent on the Internet. Studies show that bot activity accounts for a large share of overall Internet traffic. Bots are built for different purposes, from simple health checks that verify a site is up, to spidering a site to index its content or to collect specific information en masse.

Not all bots are bad:

  • Bots operated by search engines, audience analytics, SEO companies, web site performance monitoring services, or partners drive users to the site and are vital to its success and the business it supports. Still, as with any automated activity, even bots with the best of intentions can have a negative load impact on the web site infrastructure.
  • Other bot activity, sometimes more difficult to detect, can have more questionable benefits, hurt the image of the company that owns the targeted site, or even cut into the company's revenue in the case of content theft or competitive scraping.

The amount of bot activity seen on a given web site is generally proportional to the value of the content hosted on the site, where value is defined by the dollar amount that can be gained by exploiting the collected data. Bots are usually part of botnets and come in all shapes and sizes: some are very simplistic scripts that run on a single machine and can only support a single task, while others are highly distributed, have the same abilities as a web browser, and support a wide variety of tasks. Efficiently detecting as much bot activity as possible therefore requires combining several techniques to match the different types of bots. In this talk, we'll discuss detection methods including evaluating the HTTP header signature, testing a client's capabilities, and evaluating the client's behavior.

Detection, however, is only half the battle. Just because a bot has a certain header signature or behavior doesn't mean it has bad intentions or would have a negative impact on the business. Clearly identifying and categorizing bots is key, and this talk will provide some guidelines on how to do both. Once detected and categorized, bots that are considered good for the business should be allowed access to the content, while those that do not appear to bring any benefit should be handled appropriately. Denying the traffic has the immediate effect of signaling to the bot operator that the activity was detected; although such action may provide immediate relief, the operator may adapt, redeploy, and resume the activity undetected. To conclude the session, we'll go over some guidelines on how best to respond to "bad bot" activity.
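
As a rough illustration of the header-signature technique mentioned above, the minimal Python sketch below labels a request from its headers alone. The header names, bot tokens, and category labels here are illustrative assumptions (and the lookup assumes canonical header casing), not material from the talk itself.

    # Minimal sketch of header-signature classification.
    # Header list, bot tokens, and labels are illustrative assumptions.
    COMMON_BROWSER_HEADERS = {"accept", "accept-language", "accept-encoding"}
    KNOWN_BOT_TOKENS = ("googlebot", "bingbot", "baiduspider")

    def classify_by_headers(headers):
        """Return a rough label for a request based only on its headers."""
        names = {name.lower() for name in headers}
        user_agent = headers.get("User-Agent", "").lower()

        if any(token in user_agent for token in KNOWN_BOT_TOKENS):
            # Self-declared search-engine bot; the claim should still be
            # verified (e.g. via reverse DNS) before it is trusted.
            return "declared-bot"
        if not user_agent:
            # Real browsers always send a User-Agent header.
            return "suspect"
        if COMMON_BROWSER_HEADERS - names:
            # Browsers send these headers on normal page loads; simple
            # scripts frequently do not.
            return "suspect"
        return "likely-browser"

    print(classify_by_headers({"User-Agent": "curl/7.68.0"}))   # suspect
    print(classify_by_headers({
        "User-Agent": "Mozilla/5.0",
        "Accept": "text/html",
        "Accept-Language": "en-US",
        "Accept-Encoding": "gzip",
    }))                                                         # likely-browser

A signature check like this only matches the simplest bots; more capable clients require the capability and behavior tests the talk also covers.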
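
On the response side, the abstract notes that an outright deny tips off the bot operator. The sketch below, again an illustration rather than the talk's recommendation, shows two quieter alternatives, tarpitting and decoy content, reusing the category labels from the previous sketch.

    # Sketch of response handling once a request has been categorized.
    # The tarpit and decoy tactics are assumed examples of "handling
    # appropriately", not necessarily the talk's guidance.
    import random
    import time

    def serve_real_content():
        return 200, "<html>real page</html>"

    def serve_decoy_content():
        # Plausible-looking but low-value response, so the scraper keeps
        # running without learning it was detected.
        return 200, "<html>decoy page</html>"

    def respond(category):
        if category in ("declared-bot", "likely-browser"):
            return serve_real_content()
        # Quieter than an outright 403 for suspected bad bots:
        time.sleep(random.uniform(1.0, 3.0))   # tarpit: waste the bot's time
        return serve_decoy_content()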

Presenters:

  • David Senecal - Product Architect - Akamai Technologies
    I am a Security Product Architect at Akamai Technologies based in the San Francisco Bay Area (California). I have over 15 years of hands-on experience and expertise in computer networking and security. I started my career in France as a network administrator, followed by a few years in England as a pre-sales/post-sales engineer focusing on network and security products for the enterprise. I then moved to the US and joined Akamai Technologies over 8 years ago as a Technical Support Engineer, where I specialized in troubleshooting web performance issues and helping customers defend their sites against web attacks. I later joined the Professional Services team as an Enterprise Architect, where I quickly specialized in helping customers protect their web sites using Akamai's cloud security product line. I am now a Product Architect in the Akamai Cloud Security Solutions division, where I focus on enhancing the existing cloud security product offerings with protections against the latest attack vectors.
